On this page you can see an overview of all chapters and their corresponding abstracts. Under each chapter you can also find a link to the chapter page, where additional material is available.
Direct links to the chapter pages are also available in the menu “Chapter resources”.
Iain Chalmers and Doug Altman edited the first edition of this book, which was published in 1995 and called simply Systematic Reviews. Their foreword focused on the “poor scientific quality of [traditional] reviews of clinical research” and the “disregard of scientific principles,” which may harm decision-making and patient outcomes [1]. Systematic reviews allow a more objective appraisal by systematically identifying, scrutinizing, and synthesizing the relevant studies. Today, more than a quarter of a century and two editions later [1, 2], systematic reviews and meta-analyses have become widely established, with thousands of such studies published every year. Also, their function has broadened, from the scientific synthesis of evidence to inform clinical practice, the main concern in 1995, to becoming part of the methods toolkit to address a wide range of questions, including evaluation of interventions, diagnosis and prognosis, prevalence and burden of disease, and discovery research.
This broader role is reflected in the current third edition. Of the 27 chapters, 12 are new, covering systematic reviews of prediction models, genetic association studies, and prevalence studies. The focus on methods has become stronger too, with new chapters dealing with missing data, network meta-analysis, and dose–response meta-analysis. These changes explain the book’s new title (Systematic Reviews in Health Research, rather than in Health Care) and revised structure. The first chapter discusses the rationale, history, and strengths and limitations of systematic reviews and meta-analysis. This introductory chapter is followed by a section on principles and procedures (six chapters), a section on meta-analysis (seven chapters), and one on systematic reviews and meta-analysis of specific study designs (six chapters). Two chapters cover Cochrane (formerly the Cochrane Collaboration) and systematic reviews and meta-analyses in guideline development and the GRADE approach (Grading of Recommendations Assessment, Development and Evaluation). The book ends with an outlook on innovations in and the future of systematic reviews and meta-analysis (two chapters) and a section on software (three chapters). The 15 chapters that made the journey from the second to the third edition have all been thoroughly updated. Nevertheless, all chapters will age, some more quickly than others. The companion website (www.systematic-reviews3.org) will be our antidote to premature aging, by offering updates of some chapters (for example, those on software packages) and highlighting references to recent critical articles and instruments, new software, and other developments.
The intended audience will be similar to that for the popular second edition [2]: methodologically inclined clinicians, epidemiologists, health services researchers, and public health specialists interested in conducting high-quality systematic reviews and meta-analyses. The book will also be of interest to doctoral and other students in the health sciences. Indeed, we see this book as an excellent resource for teaching and will provide exercises and computer practicals on the book’s website.
On the long journey from the second to the third edition, we lost our dear friend, collaborator, and co-editor, Douglas Graham Altman. Doug sadly died, aged 69, on June 3, 2018. He made essential contributions to this third edition, helping define focus and content and co-authoring several chapters. On the following pages, we reprint Iain Chalmers’ wonderful tribute to Doug’s visionary role in developing systematic reviews and clarifying the role of meta-analysis [3]. We miss you very much, Doug, and dedicate this book to you.
Last but not least, huge thanks to all contributors to this book. They made it possible by patiently and stoically updating their chapters over several years. Many thanks to our project managers and editors in Bern, Carole Dupont, Chris Ritter, and Geraldine Wong, who supported us so well on this long journey. A big thank-you also to the editors at Wiley, Jennifer Seward, Samras Johnson V, and Ella Elliot, for their brilliant help and their patience.
The first edition of this book was applauded for its “simple and often humorous” discussion and its “intuitively appealing explanations” [4]. We hope you will agree that the third edition continues in this tradition.
Bern and Bristol, March 2022
Matthias Egger, Julian P.T. Higgins, George Davey Smith
- Chalmers, I. and Altman, D. (1995). Foreword. In: Systematic Reviews (ed. I. Chalmers and D. Altman). London: BMJ Publishing Group.
- Egger, M., Davey Smith, G., Altman, D.G. et al. (ed.) (2001). Systematic Reviews in Health Care: Meta-Analysis in Context, 2e. London: BMJ Books.
- Chalmers, I. (2020). Doug Altman’s prescience in recognising the need to reduce biases before tackling imprecision in systematic reviews. J. R. Soc. Med. 113: 119–122. https://doi.org/10.1177/0141076820908496.
- DerSimonian, R. (1997). Book Review: Systematic Reviews. I. Chalmers and D. G. Altman (eds), BMJ Publishing Group, London, 1995. No. of pages: 117. ISBN 0-7279-0904-5. Stat. Med. 16: 2930.
Matthias Egger, Julian P.T. Higgins, George Davey Smith
We begin with an introduction to systematic reviews and meta-analysis, discuss terminology and scope, provide historical background, and examine the potential, promise, and limitations of systematic reviews and meta-analysis. The statistical basis of meta-analysis begins with Laplace in the late eighteenth century, while efforts to summarize research for medical practitioners reach back to James Lind’s 1753 “A Treatise of the Scurvy.” Today, reviews have become essential tools for health care workers, researchers, consumers, and policymakers who want to keep up with the evidence they need. Systematic reviews and meta-analyses allow a more objective appraisal of the evidence than traditional reviews and may resolve uncertainty when original research, reviews, and editorials disagree. A meta-analysis of randomized trials may enhance the precision of estimates of treatment effects and lead to the timely introduction of effective interventions. In contrast, meta-analyses of heterogeneous observational studies will often not be appropriate. Exploratory analyses, for example of subgroups of patients who are likely to respond particularly well to treatment (or the reverse), may generate promising new research questions to be addressed in future studies. Finally, systematic reviews may demonstrate the lack of adequate evidence and identify areas where further studies are needed.
Julian P.T. Higgins, George Davey Smith, Douglas G. Altman, Matthias Egger
The principles and steps of systematic reviews are similar to any other research undertaking: formulation of the problem to be addressed, collection and analysis of the data, and interpretation of the results. A study protocol should be written, which states objectives and eligibility criteria, describes how studies will be identified and selected, explains how any assessment of methodological quality or risk of bias will be undertaken, and provides details of any planned synthesis methods. The results from eligible studies are expressed in a standardized format and are often displayed graphically in forest plots with confidence intervals. Heterogeneity between study results and possible biases may be explored graphically, in forest, funnel, and other plots, and in statistical analyses. If a meta-analysis is deemed appropriate, an overall effect is estimated by combining the data. Most meta-analysis methods follow either fixed-effect(s) or random-effects approaches, which differ in the way they treat between-study heterogeneity. Meta-analyses generally use relative effect measures, while absolute measures are used when applying the findings to clinical or public health situations.
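To make these steps concrete, here is a minimal sketch, assuming R with the metafor package and five hypothetical trials (all numbers invented): it computes per-trial log odds ratios, pools them with a random-effects model, and draws a forest plot.

```r
# Minimal sketch, assuming R with the 'metafor' package and hypothetical
# 2x2 data from five trials (events and non-events per arm).
library(metafor)

trials <- data.frame(
  study = paste("Trial", 1:5),
  ai = c(12, 8, 30, 15, 22),    # events, treatment arm
  bi = c(88, 92, 170, 85, 178), # non-events, treatment arm
  ci = c(20, 14, 44, 22, 30),   # events, control arm
  di = c(80, 86, 156, 78, 170)  # non-events, control arm
)

# Per-trial log odds ratios and sampling variances
dat <- escalc(measure = "OR", ai = ai, bi = bi, ci = ci, di = di, data = trials)

# Random-effects meta-analysis (REML estimate of between-trial variance)
res <- rma(yi, vi, data = dat, slab = study, method = "REML")
summary(res)

# Forest plot of per-trial results and the summary estimate (odds ratio scale)
forest(res, atransf = exp)
```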
Julie Glanville, Carol Lefebvre
In 1993, before the establishment of Cochrane (formerly the Cochrane Collaboration), only 19 000 reports of randomized controlled trials were readily identifiable (i.e. indexed as such) in MEDLINE, though there were many more within the database. In 1996, the Cochrane Central Register of Controlled Trials (CENTRAL) was launched in the Cochrane Library; it now contains nearly 2 million records of reports of trials from MEDLINE, Embase, ClinicalTrials.gov, and other sources. In addition to CENTRAL, other sources should be searched to try to minimize publication bias, such as national, international, and regional databases; subject-specific databases; trials registers, regulatory agency sources, and sources of clinical study reports; and sources specific to systematic reviews. A number of recent initiatives have contributed to greater transparency in trial registration and reporting, but much remains to be done. Highly sensitive, but adequately precise, searches of appropriate sources are fundamental to the process of conducting reliable, trustworthy systematic reviews.
Matthew J. Page, Douglas G. Altman, Matthias Egger
Studies at high risk of bias may distort the results of systematic reviews and meta-analyses. Based on empirical evidence and theoretical considerations, the following sources of bias should be assessed when including randomized trials in a review: bias arising from the randomization process, bias due to deviations from the intended interventions, bias due to missing outcome data, bias in measurement of the outcome, and bias due to selective reporting. The use of summary scores from quality scales is problematic. Results depend on the choice of scale, and the interpretation of the results is difficult. Therefore, judging risk of bias within separate specified bias domains and recording the information on which each judgment is based – the domain-based approach – are preferred. Assessments of risk of bias of included studies should routinely be incorporated in systematic reviews and meta-analyses. Currently, this is best done using sensitivity analyses.
Matthew J. Page, Jonathan A.C. Sterne, Julian P.T. Higgins, Matthias Egger
A p value, or the magnitude or direction of results, can influence decisions about whether, when, and how research findings are disseminated. Whether an entire study or only a particular study result is unavailable because investigators considered the findings unfavorable, reporting bias arises in a meta-analysis when the available results differ systematically from the missing results. This reinforces the need for review authors to search or consult multiple sources, including bibliographic databases, trials registers, manufacturers, regulators, and study authors or sponsors, through which study reports and results may be located. Unless prospective approaches to meta-analysis can eliminate the potential for bias due to missing results, review authors should formally assess the risk of bias in their review. Several approaches can facilitate such assessment: tools to record selective reporting of results, ascertaining qualitative signals that suggest not all studies were identified, and the use of funnel plots to identify small-study effects, one cause of which is reporting bias.
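As a minimal illustration of the last of these approaches, the sketch below (again assuming R with metafor, and reusing the hypothetical res model from the earlier sketch) draws a funnel plot and applies an Egger-type regression test for funnel plot asymmetry.

```r
# Minimal sketch, reusing the hypothetical 'res' model fitted earlier.
# A funnel plot displays study effects against their standard errors;
# asymmetry may signal small-study effects, one cause of which is reporting bias.
funnel(res)

# Egger-type regression test for funnel plot asymmetry
regtest(res, model = "lm")
```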
Eliane Rohner, Julia Bohlius, Bruno R. da Costa, Sven Trelle
Conducting a systematic review is a team effort. Open and clear communication is therefore critical. The management of people is as important as the management of data for the success of a systematic review. The data assembled for a systematic review are generally complex, and a dedicated data management system that covers all relevant steps, from literature screening to data extraction, should be in place. Reviewers may manage their data on paper or electronically, using dedicated systematic review software, a generic database, or spreadsheet software. Electronic solutions have several advantages, but lack the flexibility of a paper-based system.
Larissa Shamseer, Beverley Shea, Brian Hutton, David Moher
Reports of systematic reviews should transparently, completely, and accurately reflect what was done and what was found. This will facilitate optimal use, replication, and evaluation of systematic reviews by others. Several tools are available to guide the reporting of systematic reviews, including the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement and the Methodological Expectations of Cochrane Intervention Reviews (MECIR). The completeness and quality of reporting of systematic reviews have improved over time, but there is still room for improvement against established standards. This chapter summarizes available tools to optimize the reporting of systematic reviews and to appraise their methods.
Julian P.T. Higgins, Jonathan J. Deeks, Douglas G. Altman
Here we describe the different effect measures that are typically used in meta-analyses of randomized trials, many of which are also used in observational studies. For dichotomous data (e.g. dead or alive), the three main options are the odds ratio (OR), the risk ratio (RR), and the risk difference (RD). We describe how these are computed from results of individual trials, along with measures of uncertainty. For continuous outcomes (e.g. body mass index), the three main options are the difference in means, a standardized difference in means, and the ratio of means (RoM). Again, we provide computational formulae for these, and we also discuss effect measures suitable for time-to-event outcomes, rates, and ordinal outcomes. We then explore how one might decide among different effect measures when several are available. The key considerations are (i) the likelihood that effects will be consistent across studies, (ii) mathematical properties, and (iii) ease of interpretation. We close with two case studies to illustrate some of these ideas.
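As a worked illustration in base R, with a single hypothetical trial (all counts invented), the sketch below computes the three dichotomous effect measures and their confidence intervals, handling the ratio measures on the log scale.

```r
# Minimal sketch in base R: OR, RR, and RD from one hypothetical 2x2 table.
a <- 15; b <- 85    # treatment arm: events, non-events
c0 <- 25; d <- 75   # control arm: events, non-events ('c0' avoids masking c())
n1 <- a + b; n2 <- c0 + d
p1 <- a / n1; p2 <- c0 / n2

or     <- (a * d) / (b * c0)              # odds ratio
se_lor <- sqrt(1/a + 1/b + 1/c0 + 1/d)    # SE of log OR

rr     <- p1 / p2                         # risk ratio
se_lrr <- sqrt(1/a - 1/n1 + 1/c0 - 1/n2)  # SE of log RR

rd    <- p1 - p2                          # risk difference
se_rd <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# 95% confidence intervals (ratio measures back-transformed from the log scale)
z <- qnorm(0.975)
exp(log(or) + c(-1, 1) * z * se_lor)
exp(log(rr) + c(-1, 1) * z * se_lrr)
rd + c(-1, 1) * z * se_rd
```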
Jonathan J. Deeks, Richard D. Riley, Julian P.T. Higgins
Meta-analysis of controlled trials is usually a two-stage process involving the calculation of an appropriate summary statistic of the intervention effect for each trial, followed by the combination of these statistics into a weighted average. Two models for meta-analysis are in widespread use. A fixed-effect meta-analysis (also known as common-effect meta-analysis) assumes that the intervention effect is the same in every trial. A random-effects meta-analysis additionally incorporates an estimate of between-trial variation of the intervention effect (heterogeneity) into the calculation of the weighted average, providing a summary effect that estimates the average of a distribution of possible intervention effects. Selection of a meta-analysis method for a particular analysis should reflect the data type, the choice of summary statistic (considering the consistency of the effect and ease of interpretation of the statistic), the expected heterogeneity, and the known limitations of the computational methods.
In this chapter we consider the general principles of meta-analysis, and introduce the most commonly used methods for performing meta-analysis. We shall focus on meta-analysis of randomized trials evaluating the effects of an intervention, but much the same principles apply to other comparative studies, notably case–control and cohort studies evaluating risk factors. An important first step in a systematic review of controlled trials is the thoughtful consideration of whether it is appropriate to combine all (or perhaps some) of the trials in a meta-analysis, to yield an overall statistic (together with its confidence interval) that summarizes the effect of the intervention of interest. Decisions regarding the “combinability” of results should largely be driven by consideration of the similarity of the trials (in terms of participants, experimental and comparator interventions, and outcomes), but statistical investigation of the degree of variation between individual trial results, which is known as heterogeneity, can also contribute.
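To make the two-stage computation concrete, here is a minimal sketch in base R with hypothetical per-trial log odds ratios and variances (all numbers invented): stage one is taken as done, and stage two computes both the fixed-effect inverse-variance average and the DerSimonian and Laird random-effects average.

```r
# Minimal sketch in base R: hypothetical per-trial log odds ratios (yi)
# and sampling variances (vi) from stage one of a two-stage meta-analysis.
yi <- c(-0.51, -0.22, -0.35, 0.08, -0.41)
vi <- c(0.21, 0.14, 0.06, 0.11, 0.18)
k  <- length(yi)

# Fixed-effect (common-effect): inverse-variance weighted average
w     <- 1 / vi
mu_fe <- sum(w * yi) / sum(w)
se_fe <- sqrt(1 / sum(w))

# Heterogeneity: Cochran's Q and the DerSimonian and Laird estimate of tau^2
Q    <- sum(w * (yi - mu_fe)^2)
tau2 <- max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))

# Random-effects: tau^2 added to each trial's variance before weighting
w_re  <- 1 / (vi + tau2)
mu_re <- sum(w_re * yi) / sum(w_re)
se_re <- sqrt(1 / sum(w_re))

# Summary odds ratios with 95% confidence intervals
z <- qnorm(0.975)
exp(c(mu_fe, mu_fe - z * se_fe, mu_fe + z * se_fe))  # fixed-effect
exp(c(mu_re, mu_re - z * se_re, mu_re + z * se_re))  # random-effects
```

Note how a positive tau^2 flattens the weights: large trials dominate the summary less once between-trial variation is acknowledged, which is exactly the difference between the two models described above.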
Julian P.T. Higgins, Tianjing Li
Studies brought together in a meta-analysis are likely to vary in terms of where, when, why, and how they were undertaken. This diversity across the studies often gives rise to heterogeneity: the statistical variability of results beyond what would be expected by chance alone. As a general principle, heterogeneity in meta-analysis should be investigated. In this chapter, we discuss the two main approaches to investigating heterogeneity, which are (i) to group the studies (or participants) into subsets and compare the overall effects across these subsets, an analysis known as subgroup analysis; and (ii) to explore the gradient of effects across studies according to one or more numeric study features, an analysis known as meta-regression. However, subgroup analyses and meta-regression can be problematic for several reasons. Associations between study characteristics and study results are observational and are subject to confounding. Aggregation biases can obscure true associations. Small numbers of studies and numerous potential causes of heterogeneity create a high risk of finding spurious associations. Some analyses that seem appealing, including investigations of small-study effects and baseline risk, can give rise to spurious findings due to statistical artefacts.
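Both approaches are easy to sketch, assuming R with the metafor package, reusing the hypothetical yi and vi vectors from the previous sketch, and adding invented study-level covariates.

```r
# Minimal sketch, assuming 'metafor' and the hypothetical yi/vi defined above,
# plus invented study-level covariates.
library(metafor)
year  <- c(1998, 2003, 2007, 2012, 2016)    # hypothetical numeric study feature
small <- c(TRUE, TRUE, FALSE, TRUE, FALSE)  # hypothetical subgroup indicator

# (i) Subgroup analysis: pool within subsets, then compare the overall effects
rma(yi[small], vi[small])
rma(yi[!small], vi[!small])

# (ii) Meta-regression: gradient of effects across the numeric covariate
rma(yi, vi, mods = ~ year, method = "REML")
```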
Ian R. White, Dimitris Mavridis
Missing data can cause bias and uncertainty in single studies. This bias and uncertainty naturally affect meta-analysis of studies that contain incomplete outcome data. Conventional analysis using only individuals with available data is adequate when the meta-analyst can be confident that data are missing at random in every study – that is, that the observed data are representative of all the outcome data for each arm. Such confidence is usually unjustified. Two methods, each based on a plausible assumption about data that are missing, may be used to compensate for missing data. Either the distribution of reasons for missing data may be used to impute the missing values, or the analyst may specify the magnitude and uncertainty of possible departures from the missing at random assumption, and these may be used to correct bias and re-weight the studies. The methods are illustrated in two examples with binary and continuous outcomes.
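The second method can be sketched generically. The sketch below (base R, all numbers invented, and a deliberately simplified stand-in for such bias-adjustment models, not the chapter's exact method) shifts each study's estimate by an assumed departure-from-missing-at-random bias in proportion to its missingness, inflates its variance by the assumed uncertainty about that bias, and re-pools.

```r
# Minimal, deliberately simplified sketch in base R of a sensitivity analysis
# for departures from the missing-at-random assumption (invented numbers).
yi   <- c(-0.51, -0.22, -0.35, 0.08, -0.41)  # observed log odds ratios
vi   <- c(0.21, 0.14, 0.06, 0.11, 0.18)      # sampling variances
miss <- c(0.10, 0.25, 0.05, 0.30, 0.15)      # proportion of outcome data missing

delta     <- 0.20  # assumed mean bias (log OR scale) if all data were missing
var_delta <- 0.10  # assumed uncertainty about that bias

# Correct each estimate in proportion to its missingness; inflate its variance
yi_adj <- yi - miss * delta
vi_adj <- vi + miss^2 * var_delta

# Re-pool: studies with more missing data carry more uncertainty, less weight
w <- 1 / vi_adj
c(pooled = sum(w * yi_adj) / sum(w), se = sqrt(1 / sum(w)))
```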
Mark C. Simmonds, Lesley A. Stewart
Most systematic reviews identify reports of eligible studies and base their analyses on the data presented in these reports, typically journal publications. This can lead to bias and imprecision if studies do not report their results, or report them in inconsistent or unhelpful ways. Collection of individual participant data (IPD) from every participant in every study may reduce these problems. Using IPD offers many advantages, including collaboration with study authors, allowing for more consistent and complete analyses of the data, and investigating the impact of individual-level covariates. However, collecting and managing IPD generally require considerable time and expertise. While many aspects of an IPD systematic review are similar to those of a standard systematic review, important differences include establishing the collaborations or access necessary to obtain IPD, managing and checking the data, and the statistical methods used for meta-analysis of the data.
Georgia Salanti, Julian P.T. Higgins
Health care practitioners, decision-makers, and consumers want to know which treatment is preferable among many competing options. Network meta-analysis is an extension of meta-analysis that allows the simultaneous comparison of multiple interventions. It combines the results of studies that compare two or more interventions for the same condition; it is not necessary for all of the treatments to have been compared within any of the studies. The name network meta-analysis derives from a representation of the different within-study comparisons as a network of evidence linking each intervention with every other in the network. The validity of network meta-analysis rests on an assumption of consistency or transitivity, requiring that the studies comparing different interventions be similar in the distribution of variables that modify the effects. The standard output of a network meta-analysis is a set of estimates of relative effects for all pairs of interventions. Network meta-analysis can also provide a hierarchy of all the interventions.
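As a minimal illustration, the sketch below assumes R with the netmeta package and invented pairwise contrasts (log odds ratios) among three hypothetical treatments A, B, and C.

```r
# Minimal sketch, assuming R with the 'netmeta' package and invented
# pairwise contrasts among three hypothetical treatments.
library(netmeta)

contrasts <- data.frame(
  studlab = c("S1", "S2", "S3", "S4"),
  treat1  = c("A", "A", "B", "A"),
  treat2  = c("B", "C", "C", "B"),
  TE      = c(-0.30, -0.10, 0.25, -0.42),  # log OR of treat1 versus treat2
  seTE    = c(0.15, 0.20, 0.18, 0.25)
)

# Network meta-analysis: direct and indirect evidence are combined to give
# relative effect estimates for every pair of treatments in the network
net <- netmeta(TE, seTE, treat1, treat2, studlab, data = contrasts, sm = "OR")
net
```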
Nicola Orsini, Susanna C. Larsson, Georgia Salanti
Characterizing the change in the risk of a health-related outcome according to the levels of an exposure is increasingly popular in systematic reviews. We provide an introduction to the statistical methods currently used to perform linear and nonlinear dose–response analysis based on aggregated data from multiple studies. We explain how dose–response associations are estimated within a study and summarized across studies. A re-analysis of observational studies about coffee consumption and mortality is used to illustrate how to move beyond a linear dose–response trend, and the complexity of making inference on risk ratios when using restricted cubic splines.
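The two-stage logic can be sketched as follows; this is a deliberately simplified base-R illustration with invented data, which, unlike proper methods (e.g. Greenland and Longnecker), ignores the within-study correlation induced by the shared reference category.

```r
# Minimal, simplified sketch in base R: linear dose-response meta-analysis.
# Stage 1: within each study, regress log relative risk on dose through the
# origin (the reference dose has log RR = 0). Stage 2: pool the slopes.
# NB: invented data; the correlation from the shared reference group is
# ignored here but must be modeled in real analyses.
studies <- list(
  s1 = data.frame(dose = c(1, 3, 5), logrr = c(-0.05, -0.12, -0.22),
                  se = c(0.08, 0.09, 0.11)),
  s2 = data.frame(dose = c(2, 4), logrr = c(-0.10, -0.15),
                  se = c(0.07, 0.10))
)

slopes <- sapply(studies, function(d) {
  fit <- lm(logrr ~ 0 + dose, data = d, weights = 1 / se^2)
  c(beta = coef(fit)[[1]], var = vcov(fit)[1, 1])
})

# Stage 2: inverse-variance pooled slope (change in log RR per unit of dose)
w <- 1 / slopes["var", ]
c(pooled = sum(w * slopes["beta", ]) / sum(w), se = sqrt(1 / sum(w)))
```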
Jelena Savović, Penny F. Whiting, Olaf M. Dekkers
Systematic reviews of randomized controlled trials (RCTs) are considered the gold standard evidence for health care interventions. Nevertheless, nonrandomized studies of interventions (NRSIs) are often necessary to answer research questions about long-term or rare (adverse) outcomes or to evaluate interventions for which RCTs are unavailable. Review methods for NRSIs are broadly similar to those for reviews of RCTs. Assessing risk of bias is a particularly important step for NRSIs, because NRSIs have an inherently greater risk of bias than RCTs. NRSIs may be available in a variety of different designs, each with its own strengths and weaknesses. The decision on whether to combine estimates from NRSIs statistically should be guided by the similarity of populations, interventions, and outcomes in included studies, and informed by the risk of bias assessment. If such a synthesis is conducted, careful consideration and exploration of differences in study characteristics and statistical heterogeneity are essential.
Yemisi Takwoingi, Jonathan J. Deeks
Information from various sources, including medical history, clinical and physical examination, imaging, and laboratory tests, may be diagnostic. Systematic reviews of diagnostic accuracy can aid in test selection and interpretation of test results. They may assess the accuracy of a single test or compare two or more tests. The lack of design-related indexing terms hampers the identification of diagnostic accuracy studies. Critical appraisal of studies considers the biases specific to diagnostic accuracy studies. The most commonly used measures of test accuracy are sensitivity and specificity. Statistical methods for combining study results must allow for the correlation between sensitivity and specificity induced by the threshold for defining test positivity. The choice of a statistical method depends on variation in thresholds across studies and the focus of inference. When thresholds vary between studies, study results are best summarized using a summary curve rather than a summary point. Reporting test accuracy using natural frequencies and visual aids can facilitate understanding, making the reviews more accessible to their target audience.
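To fix ideas, the base-R sketch below (invented 2x2 accuracy counts) computes per-study sensitivity and specificity; as noted above, pooling them requires a model that allows for their correlation, for example a bivariate model such as the one provided by the mada package.

```r
# Minimal sketch in base R with invented diagnostic 2x2 counts per study:
# TP/FN from diseased participants, FP/TN from non-diseased participants.
acc <- data.frame(
  study = c("S1", "S2", "S3"),
  TP = c(45, 80, 30), FN = c(5, 20, 10),
  FP = c(10, 30, 8),  TN = c(90, 170, 72)
)

acc$sens <- acc$TP / (acc$TP + acc$FN)  # per-study sensitivity
acc$spec <- acc$TN / (acc$TN + acc$FP)  # per-study specificity
acc

# Sensitivity and specificity are correlated across studies through the
# positivity threshold, so they should be pooled jointly, e.g. with a
# bivariate model (assuming the 'mada' package):
# library(mada); reitsma(acc)
```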
Richard D. Riley, Karel G.M. Moons, Douglas G. Altman, Gary S. Collins, Thomas P.A. Debray
Primary studies to identify prognostic factors are abundant, but often have conflicting findings and variable quality. This motivates systematic reviews to identify, evaluate, and summarize prognostic factor studies. Broad search strings are required to identify relevant studies, and the CHARMS-PF checklist guides subsequent data extraction. The QUIPS tool examines each study’s risk of bias; unfortunately, many studies will have high risk of bias due to poor design and analysis. Meta-analysis can be used to combine and summarize prognostic effect estimates (such as hazard ratios or odds ratios) across studies, but may not always be sensible. Between-study heterogeneity is expected. Ideally, separate meta-analyses are performed; for example, for each method of prognostic factor measurement, for each cut point (for categorized continuous prognostic factors), and for unadjusted and adjusted prognostic factor estimates. The adjusted prognostic factor estimate is usually more relevant, because prognosis in clinical practice is commonly based on multiple prognostic factors, and so the prognostic information from a particular factor needs to add value over others. Publication bias is also a major threat in reviews of prognosis studies. Availability of individual participant data alleviates many, but not all, of the challenges.
Gary S. Collins, Karel G.M. Moons, Thomas P.A. Debray, Douglas G. Altman, Richard D. Riley
Prediction models combine values of multiple predictors to estimate an individual’s risk of having a certain outcome or disease (diagnostic models) or developing a future outcome (prognostic models). Systematic reviews are needed to identify existing prediction models for a certain target population or outcome and to summarize their predictive performance and heterogeneity in their performance. Appraising the quality and reporting of a prediction model study is essential. Studies describing the development or validation of a prediction model often do not conform to prevailing methodological standards and key details are often not reported. Meta-analysis of the predictive performance of a specific prediction model from multiple external validation studies of that model is possible, focusing on calibration and discrimination. In this chapter, we describe the types of systematic reviews that can be conducted on prediction model studies and discuss the challenges faced in identifying, appraising, and qualitatively and quantitatively synthesizing these studies.
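As one concrete ingredient of such a synthesis, discrimination estimates (c-statistics) from several external validation studies are often pooled after a logit transformation; below is a minimal base-R sketch with invented values.

```r
# Minimal sketch in base R: pooling invented c-statistics from external
# validation studies on the logit scale (c-statistics are bounded, so
# pooling on the raw scale is usually discouraged).
cstat <- c(0.72, 0.68, 0.75, 0.70)
se_c  <- c(0.02, 0.03, 0.02, 0.04)

yi <- qlogis(cstat)                      # logit-transformed c-statistics
vi <- (se_c / (cstat * (1 - cstat)))^2   # delta-method variance, logit scale

w  <- 1 / vi
mu <- sum(w * yi) / sum(w)  # fixed-effect pool shown for brevity; random
                            # effects are usually preferred given the
                            # heterogeneity expected across validations
plogis(mu)                  # back-transformed summary c-statistic
```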
Matthias Egger, Diana Buitrago-Garcia, George Davey Smith
Systematic reviews and meta-analyses of observational, epidemiological studies are common. In this chapter, we focus on epidemiological studies of etiology and prevalence. We discuss the rationale for systematic reviews of such studies, highlighting fundamental differences between observational studies and randomized controlled trials (RCTs). We address the steps from shaping the research question, to defining the Population, Exposures, Comparators, and Outcomes (PECO) or Population and Condition (PC) in reviews of etiology or prevalence, to exploring heterogeneity and interpreting results. In contrast to high-quality RCTs, confounding and bias often distort the findings of epidemiological studies. Bigger is not necessarily better: smaller studies may devote more attention to characterizing populations, exposures, or conditions than larger studies. Indeed, there is a danger that meta-analyses of observational data produce precise but spurious results. A set of criteria should be developed, guided by general principles, to assess the risk of bias in different observational study designs. In the analysis and interpretation of observational studies, more is often gained by examining possible sources of heterogeneity between these studies’ results than by calculating overall estimates of relative risks or prevalences.
Gibran Hemani
The past 30 years of genetic analysis have helped to confirm theories, conceived at the beginning of the twentieth century, that complex diseases are heritable and likely to be influenced by many genetic factors, each exhibiting small effects. Detecting these genetic factors could be valuable for developing new drugs or predicting adverse outcomes before onset, but detection is generally only possible when the sample sizes are very large. We describe the failures in genetic analyses that have brought us to this understanding, and led to the development of the genome-wide association study (GWAS) framework. Meta-analysis has played a central role in GWAS and has helped to yield tens of thousands of genetic associations for a wide range of complex traits and diseases. We discuss how meta-analysis is implemented in the GWAS context, noting the technical challenges that arise and the various contexts in which it can be applied.
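To show how this is typically implemented, the base-R sketch below combines invented per-SNP summary statistics (beta, standard error) from two cohorts by fixed-effect inverse-variance weighting and compares the p values with the conventional genome-wide significance threshold.

```r
# Minimal sketch in base R: fixed-effect inverse-variance meta-analysis of
# invented per-SNP effect estimates from two GWAS cohorts.
beta1 <- c(0.042, -0.013, 0.095); se1 <- c(0.008, 0.009, 0.016)
beta2 <- c(0.038, -0.002, 0.081); se2 <- c(0.007, 0.008, 0.014)

w1 <- 1 / se1^2; w2 <- 1 / se2^2
beta_meta <- (w1 * beta1 + w2 * beta2) / (w1 + w2)
se_meta   <- sqrt(1 / (w1 + w2))

# Two-sided p values against the conventional genome-wide threshold of 5e-8
p <- 2 * pnorm(-abs(beta_meta / se_meta))
p < 5e-8
```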
Gerd Antes, David Tovey, Nancy Owens
The Cochrane Collaboration was founded in 1993 as an international organization of health care professionals, practicing physicians, researchers, and consumers. Cochrane aims to help people make well-informed decisions about health care by preparing, maintaining, and promoting the accessibility of systematic reviews. The task of overseeing the preparation of Cochrane reviews is undertaken by about 50 Cochrane Review Groups working closely with review author teams. Reviews proceed through a process that begins with title registration followed by the production and publication of a peer-reviewed protocol. The protocol and subsequent Cochrane Reviews may be updated periodically depending on need, and are both published as part of the Cochrane Database of Systematic Reviews (CDSR). Cochrane has played an important role in the development and improvement of methods used in systematic reviews and the establishment of registers of controlled trials. The CDSR sits within the Cochrane Library, which is available online at www.cochranelibrary.com.
Holger J. Schünemann
Systematic reviews are essential to produce trustworthy guidelines. To assess the body of evidence included in a systematic review, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group has developed an approach that is used by over 100 organizations, including the World Health Organization and Cochrane. GRADE provides operational definitions and instructions to rate the certainty of the evidence for each outcome in a review as high, moderate, low, or very low for interventions, prognostic questions, values and preferences, test accuracy, and resource utilization. GRADE includes assessing risk of bias, imprecision, inconsistency, indirectness, and publication bias, the magnitude of effects, dose–response relations, and the impact of residual confounding and bias. Assessments are presented in GRADE tables, which may be produced using the GRADEpro software tool. Guideline panels can use these tables to produce recommendations based on the GRADE Evidence to Decision frameworks.
Julian Elliott, Tari Turner
The methods of systematic review are well developed but resource intensive, which limits the feasibility, timeliness, and currency of high-quality evidence synthesis. Several new technologies and processes are emerging that address these challenges by improving the efficiency of systematic review production. Specialized software platforms continue to develop functionality that improves workflow and review team collaboration. Text mining and machine learning algorithms are being developed to semi-automate the most time-consuming tasks in systematic review, for example by reducing the number of citations to be screened. Many review tasks are also amenable to crowdsourced contributions from citizen scientists. Underpinning these innovations is the development of metadata systems that can enhance the discoverability and reuse of research and systematic review outputs. Together, semi-automation and metadata systems are creating the possibility of a more integrated evidence ecosystem in which frequently updated “living” systematic reviews and guideline recommendations can efficiently and dynamically connect research and other data outputs to point of care and policymaking. With the increasing availability of large datasets relevant to health, these technologies and processes are required to make sense of the data deluge in ways that are efficient, trustworthy, and minimize risks of bias. Ultimately, open and structured datasets and sophisticated artificial intelligence will lead to health decision support systems with little or no direct human input. The evidence-based practice community has important roles to play in the development, implementation, and governance of these systems.
Shah Ebrahim, Mark D. Huffman
The demand for and supply of systematic reviews and meta-analyses are booming, yet this boom threatens the sustainability of both. The volume and complexity of science continue to expand exponentially, competition and duplication are rampant, and standards of reporting have led to greater detail without necessarily greater clarity of systematic review reports. As beneficiaries of systematic reviews, private, governmental, nonprofit, and professional organizations should invest more in them. The future of systematic reviews will include access to an even greater amount of information for synthesis, including unpublished and regulatory data, which will require technology to manage. For example, top-down informatics approaches will be complemented by bottom-up crowdsourcing from the scientific community. Priorities will need to balance the competing demands of systematic review teams and end users. Systematic reviews may even extend into basic and translational sciences, with a long-term goal of improving the quality and efficiency of primary research.
David J. Fisher, Marcel Zwahlen, Matthias Egger, Julian P.T. Higgins
Stata is a commercial general-purpose, programmable statistical package. A comprehensive set of commands is available for meta-analysis of different types of studies and data. Meta-analysis of studies comparing two treatments can be performed for binary outcomes (relative risk, odds ratio, risk difference) or continuous outcomes (difference in means, standardized difference in means). Meta-analysis of proportions or of generic effect estimates can also be performed. All widely used fixed-effect (inverse-variance method, Mantel–Haenszel method, and Peto’s method) and random-effects (DerSimonian and Laird, restricted maximum likelihood [REML], and many more) models are available. Forest plots, funnel plots, and other types of plots can be obtained, and statistical tests for funnel plot asymmetry computed. Additional commands are available for meta-analysis of specific applications such as diagnostic test accuracy and dose–response, as well as generalizations such as meta-regression, multivariate meta-analysis, and network meta-analysis. These additional commands also include tailor-made plotting options. Further details on meta-analysis in Stata are available from a dedicated book published by Stata Press (2016), although the information presented in this chapter is more up to date.
Guido Schwarzer
The general statistical package R can be used to conduct basic meta-analysis tasks. This chapter starts with a short introduction to R and an explanation of how to install R packages. R functions for meta-analysis with binary outcomes are introduced and applied to two classic meta-analysis examples. Meta-regression with R is described along with methods to evaluate small-study effects. The chapter concludes with an overview of R packages and functions for more advanced statistical methods such as network meta-analysis.
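For orientation, here is a minimal sketch of the kind of analysis the chapter introduces, assuming the meta package and invented binary-outcome data from five trials.

```r
# Minimal sketch, assuming R with the 'meta' package and invented
# binary-outcome data (events and sample sizes per arm in five trials).
library(meta)

m <- metabin(event.e = c(12, 8, 30, 15, 22),  n.e = c(100, 100, 200, 100, 200),
             event.c = c(20, 14, 44, 22, 30), n.c = c(100, 100, 200, 100, 200),
             studlab = paste("Trial", 1:5), sm = "OR")
summary(m)

forest(m)  # forest plot of study and summary results
funnel(m)  # funnel plot to inspect small-study effects
```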
Michael Borenstein
Comprehensive Meta-Analysis (CMA) is a computer program for meta-analysis that was developed with funding from the National Institutes of Health in the United States. CMA features a spreadsheet view and a menu-driven interface. As such, it allows a researcher to enter data and perform a simple analysis in a matter of minutes. At the same time, it offers a wide array of advanced features, including the ability to plot the distribution of true effects, to compare the effect size in subgroups of studies, to run meta-regression, to estimate the potential impact of publication bias, and to produce high-resolution plots. The program’s website, http://www.Meta-Analysis.com, features PDFs and videos that explain how to perform, interpret, and report a meta-analysis.