STATISTICS

Year: 2021 | Volume: 12 | Issue: 1 | Page: 53-57
Study designs: Part 9 – Meta-analysis (II)
Priya Ranganathan1, Rakesh Aggarwal2
1 Department of Anaesthesiology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
2 Director, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
Date of Submission: 01-Dec-2020
Date of Acceptance: 02-Dec-2020
Date of Web Publication: 19-Jan-2021
Correspondence Address: Dr. Rakesh Aggarwal, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/picr.PICR_369_20
Abstract
The previous two articles in this series gave an overview of the methodology of systematic reviews and meta-analyses. In this third and concluding article, we look at the different types of biases that can confound the results of a meta-analysis and briefly describe some special types of meta-analysis.

Keywords: Meta-analysis, research design, review, systematic
How to cite this article: Ranganathan P, Aggarwal R. Study designs: Part 9 – Meta-analysis (II). Perspect Clin Res 2021;12:53-7.
In two previous articles in this series, we discussed the methodology of systematic reviews and meta-analyses.[1],[2] In this article, we discuss various biases that can occur in a meta-analysis and offer an overview of some special types of meta-analyses.
Biases in Meta-Analysis
Meta-analyses are considered the highest level of evidence. The validity of their results rests on a high-quality systematic review that ensures unbiased inclusion of all the relevant studies. However, despite meticulous care during this step, several “reporting” biases can creep in and affect the validity of a meta-analysis; publication bias is the most important of these.
Publication bias
Publication bias is defined as “a bias in the published literature, where the publication of research depends on the nature and direction of study results.”[3] The objective of a meta-analysis is to synthesize all the available evidence on a particular intervention (or diagnostic method); ideally, therefore, it should include all the studies ever undertaken on the question. However, if some studies on the subject are selectively left unpublished, and hence excluded from the meta-analysis, its results can be expected to be biased.
Publication bias is a phenomenon whereby studies with positive results or large effect sizes are more likely to be published than those with negative results or smaller effect sizes. This can be related to journal policies that favor acceptance of studies showing significant results, as well as to the reluctance of researchers to write up and submit studies with “negative” results. Because the “negative” studies are underrepresented in the meta-analysis, the overall estimate of treatment effect can be skewed.
Other reporting biases
Bias in the selection of literature for a meta-analysis can arise at several other levels. For instance, trials with positive findings are more likely to be published in English-language journals; hence, restricting a systematic review to articles published in English may preferentially exclude smaller, less significant, and negative studies (language bias).[4] Studies with positive results tend to be published faster than those with insignificant results; thus, limiting the search by date of publication, or reviewing newly described treatments, may lead to an overestimation of treatment effect (time lag bias). Large or positive trials are more likely to result in multiple publications, e.g., papers with incremental cases; if each of these is included in a meta-analysis as a separate study, the effect of the intervention may be overestimated (duplication or multiple publication bias). In such cases, care should be taken to include only the latest results from studies with multiple publications. Furthermore, studies with positive results, having been cited more often and hence being easier to identify (citation bias), and those published in journals that are easier to access (e.g., free online), are more likely to be included in systematic reviews. Another phenomenon that may bias the results is selective reporting of study outcomes (selective outcome reporting bias): in studies that assess several outcomes, investigators tend to preferentially report data for outcomes with positive (or statistically significant) findings, while leaving out those with negative results.
Detection of Reporting Bias
Several graphical and mathematical methods have been used to detect reporting biases.
Funnel plot
In this method, the magnitude of effect of the intervention (a treatment or a diagnostic test) being studied is plotted against the sample size or estimated precision. The magnitude of effect (plotted on the X axis) is usually the mean difference or standardized difference between the two groups being compared (for continuous data), or the relative risk or odds ratio (for dichotomous or event-rate data); these measures were detailed in a previous article.[2] If the effect measure is a difference, a linear scale is used; if it is a ratio, a logarithmic scale is used. The estimated precision (plotted on the Y axis) is represented by the standard error of the observed estimate of effect or by the sample size. The scale of this axis is reversed so that studies with less precision (larger standard error or smaller sample size) are plotted at the bottom, whereas those with greater precision (smaller standard error or larger sample size) are plotted at the top. A vertical line is drawn perpendicular to the X axis at the point representing the pooled estimate from all the studies, i.e., the result of the meta-analysis.
Ideally, if there is no publication or reporting bias, one would expect the dots representing the included studies to be distributed roughly symmetrically around the vertical line and to assume an inverted funnel shape. This is because the effect sizes of studies plotted toward the top (with higher precision) would be expected to show less variation from each other, whereas those of studies plotted at the bottom (with less precision) would be expected to have a greater spread. In the presence of reporting bias, i.e., preferential publication of small positive studies over small negative studies, the symmetry and funnel shape are likely to be lost. As an example, let us look at a meta-analysis by Qiao et al. of the incidence of esophageal adenocarcinoma among persons with Barrett's esophagus undergoing surveillance endoscopy.[5] Their analysis showed that the results of the 40 available studies failed to form a symmetric funnel [Figure 1], indicating the existence of a strong publication bias. Such asymmetry can be assessed either visually or using supposedly more objective techniques, such as Egger's test and the “trim-and-fill” method. In the study by Qiao et al., Egger's test also showed the presence of publication bias.[5]

Figure 1: An example of a funnel plot showing asymmetry, which indicates the existence of a publication bias. The diamond at the bottom and the vertical line above it represent the pooled effect of the 40 studies (each represented by a circle) included in the analysis. In the absence of publication bias, the dots would be expected to be located symmetrically within an inverted funnel formed by the two diagonals. At the top, where studies with greater precision (or larger sample size) are located, the dots are symmetric; however, in the lower part (studies with less precision or smaller sample size), the dots are all located on one side. The absence of dots in the left lower part of the funnel area indicates bias against the publication of smaller studies with negative results, while smaller positive studies and larger studies in either direction get published. Egger's test also showed the presence of a publication bias, with P < 0.001[5]
It should be noted that asymmetry in a funnel plot can also reflect true heterogeneity between study results, poor methodological quality of the original studies, or a chance finding.[6] Similarly, the absence of asymmetry does not rule out a reporting bias. In particular, the funnel-plot method is sensitive to the number of available studies; when only a few studies are available, it may fail to detect a bias even if one exists.
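To make the mechanics concrete, the following is a minimal sketch, not from the original article, of how one might draw a funnel plot and run an Egger-style regression test in Python. All study results are simulated, and the choice of numpy, matplotlib, and statsmodels is an assumption for illustration:

```python
# A minimal funnel-plot and Egger-test sketch; all study data are simulated.
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 40
se = rng.uniform(0.05, 0.5, k)        # standard errors: precision varies
log_or = rng.normal(0.3, se)          # simulated log odds ratios

# Fixed-effect pooled estimate using inverse-variance weights
w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)

# Funnel plot: effect size on X, standard error on a reversed Y axis,
# so that the most precise studies sit at the top
plt.scatter(log_or, se)
plt.axvline(pooled, linestyle="--")
plt.gca().invert_yaxis()
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.show()

# Egger-style test: regress the standardized effect on precision;
# an intercept that differs clearly from zero suggests asymmetry
precision = 1 / se
z_scores = log_or / se
fit = sm.OLS(z_scores, sm.add_constant(precision)).fit()
print("Egger intercept p-value:", fit.pvalues[0])
```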
Fail-safe N
The fail-safe N refers to the minimum number of undetected negative studies that would be needed to change the conclusions of a meta-analysis. A small fail-safe N suggests that the conclusion of the meta-analysis may be susceptible to publication bias and therefore less reliable. However, the reliability of this technique is debatable and its use is not recommended.
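For illustration, here is a minimal sketch of Rosenthal's classic formulation, one common way the fail-safe N has been computed (the per-study z-scores are hypothetical, and, as noted above, the method itself is not recommended):

```python
# Rosenthal's fail-safe N: how many unseen "null" studies (z = 0) would
# be needed to drag the combined Stouffer z-score below significance.
import numpy as np
from scipy.stats import norm

z = np.array([2.1, 1.8, 2.5, 1.6, 2.9])  # hypothetical per-study z-scores
k = len(z)
z_alpha = norm.ppf(0.95)                 # one-tailed alpha = 0.05 (about 1.645)

# Stouffer's combined z with n extra null studies is sum(z)/sqrt(k + n);
# setting this equal to z_alpha and solving for n gives:
n_fs = z.sum() ** 2 / z_alpha ** 2 - k
print(f"fail-safe N: {n_fs:.0f}")
```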
Avoiding Reporting Bias
A comprehensive search strategy is the key to avoiding reporting bias. Researchers undertaking a systematic review should expand the scope of their search to include non-English journals and gray literature. Gray literature refers to information available outside regular publication channels (i.e., not controlled by journal publishers), e.g., conference abstracts, dissertations, and government reports. However, such “unpublished” data are often of lower methodological quality than published findings, and this can itself introduce a bias. The mandatory requirement to prospectively register all clinical trials in a registry has greatly reduced the risk of reporting bias, since registries allow researchers to identify all studies on a topic (overcoming publication bias) and provide a complete list of the outcomes measured (preventing selective outcome reporting bias). Outcome reporting bias can also be prevented by carrying out an individual participant data (IPD) meta-analysis; this is discussed in the next section.
Special Types of Meta-Analysis
Meta-analysis of individual participant data
This is a specific type of meta-analysis in which, instead of combining aggregate data from the published literature, the authors of the original studies are contacted and requested to provide data on individual participants. These data are then re-analyzed and, if suitable, combined in a meta-analysis. This method is more accurate and reliable and allows additional analyses to be performed; for instance, it lets the reviewers standardize the unit of analysis across studies. As compared to a traditional meta-analysis, an IPD meta-analysis allows inclusion of a larger number of studies, including those in which specific outcomes were not reported in the original paper but can be calculated from the raw data. However, IPD meta-analyses are time-consuming and need greater resources. In addition, some countries have regulatory requirements that restrict researchers from sharing participant data.
Statistical analysis for IPD meta-analysis can be performed in two ways – two-stage and one-stage methods. In the two-stage method, data from each included study are re-analyzed and a summary estimate is calculated for each study; the results of the different studies are then combined into a meta-analysis using either a fixed- or a random-effects model, as described in the previous article.[2] In the one-stage analysis, data from all the studies are pooled and a single summary estimate is calculated (while allowing for clustering within studies). Each of these techniques has its own merits and limitations.[7] Jolliffe et al. studied the effect of Vitamin D supplementation on reducing the exacerbations of bronchial asthma in an IPD meta-analysis of seven trials. Both one-stage and two-stage analyses were done, and each of these showed the intervention to be beneficial.[8]
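As an illustration of the two-stage approach, here is a minimal Python sketch using simulated participant-level data; the trial sizes, the true effect, and the use of fixed-effect pooling are assumptions for demonstration only:

```python
# A minimal two-stage IPD sketch with simulated participant-level data.
import numpy as np

rng = np.random.default_rng(1)
studies = []
for n in (60, 120, 90):                           # three hypothetical trials
    treat = rng.integers(0, 2, n)                 # 1 = intervention, 0 = control
    outcome = 0.5 * treat + rng.normal(0, 1, n)   # continuous outcome
    studies.append((treat, outcome))

# Stage 1: re-analyze each study to get a mean difference and its SE
effects, ses = [], []
for treat, outcome in studies:
    a, b = outcome[treat == 1], outcome[treat == 0]
    effects.append(a.mean() - b.mean())
    ses.append(np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)))

# Stage 2: combine the per-study estimates with inverse-variance
# (fixed-effect) weights, as in a conventional meta-analysis
effects, ses = np.array(effects), np.array(ses)
w = 1 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled mean difference: {pooled:.2f} (SE {pooled_se:.2f})")
```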
Network meta-analysis (also known as multiple treatment meta-analysis)
Network meta-analysis allows one to compare multiple interventions simultaneously by including studies on different comparisons in the same meta-analysis. For example, a conventional systematic review looking at the efficacy of treatments for hypertension would include all studies that compare drug A versus drug B. A network meta-analysis on this subject, by contrast, can simultaneously look at multiple comparisons, e.g., drug A versus drug B and drug A versus drug C. For instance, Shi et al. performed a network meta-analysis of the effect of different support surfaces on the prevention of pressure ulcers, comparing 14 different interventions in a single meta-analysis.[9]
Such an analysis has several advantages. First, it allows one to indirectly compute the likely result of a comparison of drug B with drug C using data from “A versus B” and “A versus C” studies, even though no study on a head-to-head comparison of drug B and drug C is available. In the study by Shi et al., of the 91 possible comparisons, studies were available for 24 comparisons and the results of the remaining 67 comparisons were calculated indirectly. Further, if studies on all the possible comparisons for three or more interventions (e.g., A versus B, B versus C, and C versus A) are available, the technique can strengthen the results of a particular comparison (e.g., A versus B) using the data from other comparisons (e.g., A versus C and B versus C).
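One standard way to compute such an indirect estimate is the Bucher adjusted indirect comparison: the “B versus A” log odds ratio is subtracted from the “C versus A” log odds ratio, and the variances of the two estimates add. A minimal sketch with hypothetical summary values:

```python
# Bucher adjusted indirect comparison with hypothetical summary results:
# estimate "C versus B" from "B versus A" and "C versus A" studies.
import numpy as np
from scipy.stats import norm

d_ab, se_ab = 0.40, 0.15   # log OR of drug B vs drug A, and its SE
d_ac, se_ac = 0.10, 0.20   # log OR of drug C vs drug A, and its SE

d_bc = d_ac - d_ab                     # indirect log OR of C vs B
se_bc = np.sqrt(se_ab**2 + se_ac**2)   # variances of the two estimates add
p = 2 * (1 - norm.cdf(abs(d_bc / se_bc)))
print(f"indirect log OR (C vs B): {d_bc:.2f} +/- {1.96 * se_bc:.2f}, p = {p:.3f}")

# If a direct "C vs B" estimate exists, a large gap between it and d_bc
# (relative to their combined SE) signals inconsistency in the network.
```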
However, network meta-analysis uses complex statistical modeling techniques and several underlying assumptions. The first of these is transitivity, i.e., if drug A is better than drug B, and drug B is better than drug C, then it is assumed that drug A is better than drug C. The second assumption for both direct and indirect comparisons is that the settings, populations, and methods of all the included studies are similar. Thus, an inconsistency in the results of direct comparison (actually observed) and indirect comparison (computed value) of a pair of interventions in a network meta-analysis should alert one to the possibility of heterogeneity across studies; this would suggest that the results of such a network meta-analysis may not be robust.
Cumulative meta-analysis
This term refers to a series of meta-analyses, performed each time the result of a new study on a topic becomes available. It enables the reviewer to study how each additional study affects the pooled estimate of treatment effect and allows early detection of the efficacy of a new treatment. A classic example is a review of intravenous streptokinase as therapy for acute myocardial infarction.[10] In this, Lau et al. identified 33 trials of this therapy carried out between 1959 and 1988. By performing a cumulative meta-analysis, they showed that statistically significant evidence supporting the benefit of streptokinase had been available once the results of the first eight trials had been published (by 1973). Further, they found that the remaining 25 trials, carried out between 1973 and 1988, had not altered the summary estimate (odds ratio) for the effect of streptokinase and had served only to increase the precision of this estimate. Thus, they argued that, had a meta-analysis been carried out at the end of 1973, the efficacy of streptokinase could have been established much sooner and the subsequent clinical trials could have been obviated; the patients enrolled in the placebo arms of those trials could instead have benefited from streptokinase treatment.
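A minimal sketch of the underlying computation, re-pooling a set of hypothetical studies one at a time in chronological order (fixed-effect, inverse-variance weighting is assumed for simplicity):

```python
# A cumulative meta-analysis sketch: re-pool after each successive study
# (hypothetical log odds ratios, listed in chronological order).
import numpy as np

effects = np.array([-0.90, -0.40, -0.60, -0.50, -0.55, -0.50])
ses     = np.array([0.60, 0.50, 0.40, 0.30, 0.25, 0.20])

w = 1 / ses**2
for k in range(1, len(effects) + 1):
    pooled = np.sum(w[:k] * effects[:k]) / np.sum(w[:k])
    half_ci = 1.96 * np.sqrt(1 / np.sum(w[:k]))
    print(f"after study {k}: pooled log OR {pooled:+.2f} "
          f"(95% CI {pooled - half_ci:+.2f} to {pooled + half_ci:+.2f})")
```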
Meta-analysis of noninterventional studies
Meta-analysis may also be used to synthesize evidence from noninterventional studies such as observational studies on epidemiology (MOOSE guidelines), observational studies on etiology (COSMOS-E guidelines), or diagnostic tests.[11],[12]
Meta-regression
Meta-regression is carried out as an extension of a meta-analysis. If a meta-analysis shows significant heterogeneity between the results of the included studies, meta-regression may help identify whether one or more characteristics of the studies, or of their populations, are associated with the observed effect size. This is similar to the subgroup analyses carried out at the end of an individual clinical trial.
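As an illustration, here is a minimal sketch of a simple fixed-effect meta-regression fitted by weighted least squares with statsmodels; real meta-regressions usually also model between-study variance (a random-effects term), which is omitted here, and all values, including the covariate, are hypothetical:

```python
# A meta-regression sketch: weighted least-squares regression of per-study
# effect sizes on a study-level covariate (all values hypothetical).
import numpy as np
import statsmodels.api as sm

effects = np.array([0.10, 0.25, 0.40, 0.55, 0.70])  # per-study effect sizes
ses     = np.array([0.12, 0.10, 0.15, 0.11, 0.14])  # their standard errors
mean_age = np.array([45, 52, 58, 63, 70])           # study-level covariate

X = sm.add_constant(mean_age)
fit = sm.WLS(effects, X, weights=1 / ses**2).fit()  # inverse-variance weights
print(fit.params)    # [intercept, slope]: effect-size change per unit covariate
print(fit.pvalues)   # a small slope p-value suggests the covariate explains
                     # part of the between-study heterogeneity
```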
Meta-Analysis Versus Randomized Trials
While meta-analysis is considered the highest level of evidence on a particular topic, the technique is not infallible. In particular, the validity of a meta-analysis depends on the quality of the included studies and on the divergence between the characteristics of the various studies (heterogeneity). There are several examples of meta-analyses whose results were contradicted by subsequent adequately powered randomized trials.[13]
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
1. Ranganathan P, Aggarwal R. Study designs: Part 7 – Systematic reviews. Perspect Clin Res 2020;11:97-100.
2. Ranganathan P, Aggarwal R. Study designs: Part 8 – Meta-analysis (I). Perspect Clin Res 2020;11:178-81.
3. Royle P, Waugh N. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence Appraisal System. Health Technol Assess 2003;7:iii, ix-x, 1-51.
4. Egger M, Zellweger-Zähner T, Schneider M, Junker C, Lengeler C, Antes G, et al. Language bias in randomised controlled trials published in English and German. Lancet 1997;350:326-9.
5. Qiao Y, Hyder A, Bae SJ, Zarin W, O'Neill TJ, Marcon NE, et al. Surveillance in patients with Barrett's esophagus for early detection of esophageal adenocarcinoma: A systematic review and meta-analysis. Clin Transl Gastroenterol 2015;6:e131.
6. Page MJ, Higgins JP, Sterne JA. Chapter 13: Assessing risk of bias due to missing results in a synthesis. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. Ver. 6.1. Cochrane; 2020. Available from: http://www.training.cochrane.org/handbook. [Last updated 2020 Nov 30].
7. Burke DL, Ensor J, Riley RD. Meta-analysis using individual participant data: One-stage and two-stage approaches, and why they may differ. Stat Med 2017;36:855-75.
8. Jolliffe DA, Greenberg L, Hooper RL, Griffiths CJ, Camargo CA Jr., Kerley CP, et al. Vitamin D supplementation to prevent asthma exacerbations: A systematic review and meta-analysis of individual participant data. Lancet Respir Med 2017;5:881-90.
9. Shi C, Dumville JC, Cullum N. Support surfaces for pressure ulcer prevention: A network meta-analysis. PLoS One 2018;13:e0192707.
10. Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC, et al. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med 1992;327:248-54.
11. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008-12.
12. Dekkers OM, Vandenbroucke JP, Cevallos M, Renehan AG, Altman DG, Egger M, et al. COSMOS-E: Guidance on conducting systematic reviews and meta-analyses of observational studies of etiology. PLoS Med 2019;16:e1002742.
13. LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F. Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med 1997;337:536-42.