No, a systematic review may stand alone; a meta-analysis is added only when studies can be combined.
A reader often wonders whether every evidence review ends with a pooled number. The short answer is no. A review that follows a predefined protocol can map, appraise, and synthesize studies in words only. When the data across studies line up, authors might also compute a single summary estimate. That second step is the meta-analytic layer, and it is optional.
What Each Evidence Synthesis Method Actually Means
Before we sort out when numbers are pooled, it helps to define the pieces. A systematic review uses a protocol, a comprehensive search, clear inclusion rules, and critical appraisal. The aim is to reduce bias in how evidence is found and read. A meta-analysis is a statistical technique that merges effect estimates from multiple studies to give a more precise summary, with confidence intervals and model choices laid out. Many reviews include that step, and many do not.
| Approach | What It Does | Does It Pool Numbers? |
|---|---|---|
| Narrative Systematic Review | Finds, screens, and critically appraises studies; synthesizes in text with tables and figures. | No pooling; describes patterns and strength of evidence. |
| Quantitative Meta-Analysis | Calculates a combined effect across studies with weights, CIs, and model choice. | Yes; produces a single estimate for each outcome. |
| Network Meta-Analysis | Extends pooling to compare multiple options at once using direct and indirect evidence. | Yes; ranks or estimates effects across many options. |
Do All Systematic Reviews Include A Meta-Analysis? Criteria That Decide
Not all reviews can or should merge results. Pooling makes sense only when studies are sufficiently alike in participants, interventions, comparators, outcomes, and methods. If designs or measures vary too widely, a single number would mislead. In that case, authors keep the review qualitative and explain the body of evidence in a structured way.
Core Conditions For Pooling
- Compatible outcomes: Studies report the same or comparable measures, such as a risk ratio for the same event.
- Comparable populations and interventions: The clinical question matches across trials, with like settings and dosing.
- Methodological fit: Study designs and risk of bias sit within a range where combining estimates is defensible.
- Enough studies: At least two independent estimates per outcome; more studies give greater precision.
- Heterogeneity checked: Variation across studies is assessed and handled with subgroup analysis, random effects, or by not pooling.
When these conditions hold, a pooled effect can sharpen the picture. When they do not, a careful narrative synthesis often serves readers better than a forced average.
Where Reporting Standards Fit In
The PRISMA 2020 checklist sets clear reporting items for reviews, with or without a pooled analysis. It asks authors to explain why the review was done, how studies were found and screened, how risk of bias was appraised, and how evidence was synthesized. If a meta-analysis is run, the checklist also asks for model details, effect measures, and data handling. These standards improve clarity for readers and make methods repeatable. You can read the open access PRISMA 2020 statement published in BMJ.
Why A Review Might Skip The Pooled Estimate
There are sound reasons to keep synthesis qualitative. Here are the common ones and what they mean in practice.
Common Barriers To Pooling
Evidence may be abundant yet too mixed to merge. Outcomes may be measured in incompatible ways. Key subgroups may vary. Study limitations may be too severe. Any of these can make a single combined number fragile or even misleading.
Examples Of Sensible Non-Pooling
- Trials use different outcome scales with no accepted method to standardize.
- Observational designs dominate while a few small trials report conflicting effects.
- Follow-up times vary wildly, changing event rates in ways that break comparability.
- Interventions cluster into distinct categories that warrant separate summaries.
In such cases, a structured summary that compares direction and strength of findings across groups can give a clearer picture than a single estimate would.
How A Meta-Analysis Works When It Is Appropriate
When conditions allow, reviewers extract effect sizes from each study and assign weights, often inverse variance weights. They pick a model, commonly fixed effect when one true effect is assumed, or random effects when true effects vary across settings. They check for heterogeneity with tools like forest plots and I², run sensitivity checks, and may probe subgroups or use meta-regression when justified. The result is a pooled estimate with a confidence interval and, at times, a prediction interval that shows the range a true effect may take in a new setting.
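To make those steps concrete, here is a minimal Python sketch of inverse variance pooling with a fixed-effect model, a DerSimonian-Laird random effects model, I², and a prediction interval. The five log risk ratios and standard errors are invented for illustration; real reviews typically rely on dedicated tools such as the R packages metafor or meta, and DerSimonian-Laird is only one of several estimators for the between-study variance.

```python
import numpy as np
from scipy import stats

# Hypothetical extracted data: log risk ratios and standard errors
# from five trials (invented values, for illustration only).
yi  = np.array([-0.35, -0.20, -0.48, -0.05, -0.30])
sei = np.array([ 0.15,  0.12,  0.20,  0.18,  0.10])
k = len(yi)

# Fixed-effect pooling with inverse variance weights.
w_fe = 1.0 / sei**2
mu_fe = np.sum(w_fe * yi) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# Heterogeneity: Cochran's Q and I² (share of variation beyond chance).
Q = np.sum(w_fe * (yi - mu_fe) ** 2)
i2 = 100.0 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance (tau²),
# then random effects pooling with the updated weights.
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (k - 1)) / c)
w_re = 1.0 / (sei**2 + tau2)
mu_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

z = stats.norm.ppf(0.975)
print(f"Fixed effect:   RR {np.exp(mu_fe):.2f} "
      f"(95% CI {np.exp(mu_fe - z*se_fe):.2f} to {np.exp(mu_fe + z*se_fe):.2f})")
print(f"Random effects: RR {np.exp(mu_re):.2f} "
      f"(95% CI {np.exp(mu_re - z*se_re):.2f} to {np.exp(mu_re + z*se_re):.2f})")
print(f"Q = {Q:.2f}, I² = {i2:.0f}%, tau² = {tau2:.3f}")

# Approximate 95% prediction interval for the effect in a new setting
# (t quantile with k - 2 degrees of freedom; needs at least three studies).
t = stats.t.ppf(0.975, df=k - 2)
half = t * np.sqrt(tau2 + se_re**2)
print(f"95% prediction interval: RR {np.exp(mu_re - half):.2f} "
      f"to {np.exp(mu_re + half):.2f}")
```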
Choosing The Right Effect Measure
For binary outcomes, risk ratio, odds ratio, or risk difference might be used. For continuous outcomes, mean difference or standardized mean difference usually fits. For time-to-event outcomes, hazard ratio is standard. Selection depends on study reporting and interpretability for the target audience.
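As a small worked example of the binary case, the sketch below computes a risk ratio, an odds ratio, and a risk difference from a single hypothetical 2×2 table, with log-scale standard errors for the ratio measures; all counts are invented.

```python
import math

# Hypothetical 2x2 table (invented counts, for illustration only):
#              event   no event
# treatment      15        85
# control        30        70
a, b = 15, 85  # treatment arm: events, non-events
c, d = 30, 70  # control arm: events, non-events

risk_trt = a / (a + b)
risk_ctl = c / (c + d)

# Risk ratio, analyzed on the log scale.
rr = risk_trt / risk_ctl
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))

# Odds ratio, also analyzed on the log scale.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)

# Risk difference sits on a natural scale already.
rd = risk_trt - risk_ctl

z = 1.96  # normal quantile for a 95% interval
lo = math.exp(math.log(rr) - z * se_log_rr)
hi = math.exp(math.log(rr) + z * se_log_rr)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), "
      f"OR {odds_ratio:.2f}, RD {rd:.2f}")
```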
Dealing With Variation Across Studies
Variation is expected. Reviewers inspect clinical and statistical sources of spread. If the spread is small and defensible, they may still pool with a fixed-effect model. If spread is moderate or large, a random effects approach or subgroup models may fit better. When spread signals real differences that cannot be reconciled, not pooling is the safer path.
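One way to keep that judgment consistent is to map I² onto rough interpretation bands. The sketch below collapses the Cochrane Handbook's deliberately overlapping guide (0–40% might not be important, 30–60% moderate, 50–90% substantial, 75–100% considerable) into non-overlapping cutoffs for simplicity; the bands inform, but never replace, reasoning about why the studies differ.

```python
def heterogeneity_band(i2: float) -> str:
    """Rough I² interpretation, simplified from the Cochrane
    Handbook's overlapping guide into fixed cutoffs."""
    if i2 < 30:
        return "might not be important"
    if i2 < 50:
        return "may represent moderate heterogeneity"
    if i2 < 75:
        return "may represent substantial heterogeneity"
    return "considerable heterogeneity"

print(heterogeneity_band(62.0))  # -> may represent substantial heterogeneity
```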
Reading A Forest Plot Without Getting Lost
A forest plot lists study-level effects with confidence intervals and shows the pooled diamond. Look first at the line of no effect, then at where most study intervals lie. If the intervals scatter on both sides of that line with little overlap, the body of evidence is inconsistent. If they cluster on one side and the pooled diamond sits away from the line, the estimated effect is more convincing. Always read this picture alongside risk-of-bias judgments and the certainty of evidence.
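For readers who want to see the anatomy of the plot itself, here is a minimal matplotlib sketch with invented risk ratios; production forest plots usually come from RevMan or the forest() function in R's metafor, but the parts are the same: one interval per study, a diamond for the pooled estimate, and a dashed line of no effect on a log scale.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical risk ratios with 95% CIs (invented values).
labels = ["Trial A", "Trial B", "Trial C", "Trial D", "Pooled"]
rr    = np.array([0.70, 0.82, 0.62, 0.95, 0.76])
lower = np.array([0.52, 0.60, 0.41, 0.66, 0.65])
upper = np.array([0.94, 1.12, 0.94, 1.37, 0.89])

fig, ax = plt.subplots(figsize=(6, 3))
y = np.arange(len(labels))[::-1]  # first study at the top

# Square markers with horizontal CI whiskers for individual studies.
ax.errorbar(rr[:-1], y[:-1],
            xerr=[rr[:-1] - lower[:-1], upper[:-1] - rr[:-1]],
            fmt="s", color="black", capsize=3)

# Diamond marker for the pooled estimate on the bottom row.
ax.errorbar(rr[-1:], y[-1:],
            xerr=[[rr[-1] - lower[-1]], [upper[-1] - rr[-1]]],
            fmt="D", color="black", markersize=9)

ax.axvline(1.0, linestyle="--", linewidth=1)  # line of no effect for ratios
ax.set_xscale("log")                          # ratios are symmetric on logs
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.set_xlabel("Risk ratio (log scale)")
plt.tight_layout()
plt.show()
```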
Certainty Of Evidence And What It Tells You
Many reviews rate certainty by outcome using domains such as risk of bias, inconsistency, indirectness, imprecision, and publication bias. A strong pooled estimate with low certainty tells a different story than a modest estimate with high certainty. Both the number and the confidence grade matter for decisions.
Where Authoritative Guidance Draws The Line
Trusted sources state that a review does not always include a pooled analysis. Guidance from leading groups explains when pooling is sound, which models to use, and how to report methods and results. These resources help teams plan protocols and help readers judge quality.
Cochrane’s overview page also notes that not every review contains a pooled estimate; see About Cochrane reviews for that clarification.
When You Should Expect A Pooled Estimate
Pooling tends to appear when the question is focused, outcomes match, and designs are close. Drug trials that measure the same event at similar time points are classic candidates. Device trials with uniform protocols often qualify. Reviews of practice changes across many settings often do not. In policy or service reviews, context differences are wide, so synthesis may rely on structured text and visual evidence tables.
Red Flags That Warn Against Pooling
- Outcome definitions shift across studies in ways that change meaning.
- Clustered or crossover designs mix with parallel trials without proper adjustment.
- High risk of bias across the body of evidence that would skew a pooled result.
- Small study effects and missing data that inflate or deflate impact; a simple asymmetry check is sketched after this list.
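That last flag can be probed quantitatively. As a rough sketch with invented data, Egger's regression test fits a line to each study's standardized effect (effect divided by its standard error) against its precision (one over the standard error); an intercept far from zero signals funnel plot asymmetry consistent with small study effects. The test has little power with few studies, so treat it as one signal among several rather than a verdict.

```python
import numpy as np
from scipy import stats

# Hypothetical log risk ratios and standard errors (invented values);
# note the pattern: the smaller the study, the larger the effect.
yi  = np.array([-0.45, -0.38, -0.30, -0.15, -0.10, -0.05])
sei = np.array([ 0.30,  0.25,  0.20,  0.12,  0.10,  0.08])

# Egger's test: regress standardized effect on precision and
# examine the intercept (zero under funnel plot symmetry).
# Requires SciPy >= 1.6 for the intercept_stderr attribute.
res = stats.linregress(1.0 / sei, yi / sei)
n = len(yi)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(f"Egger intercept {res.intercept:.2f}, two-sided p = {p_val:.3f}")
```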
Practical Steps For Authors Planning A Review
Plan the review with a protocol that states the question, outcomes, eligibility rules, and synthesis plan. Predefine when pooling will be attempted and which models are in scope. List decision rules for unit-of-analysis issues, cluster trials, and overlapping populations. Explain how you will assess risk of bias and how those judgments feed into synthesis. If meta-analysis is not planned, describe how narrative synthesis will be structured, with clear logic and visual summaries.
Transparent Reporting Helps Readers
Use a flow diagram to show how records moved through screening and selection. Provide a full search strategy in a supplement. Present risk-of-bias tables by outcome. Share the dataset and code when journal policy allows. These steps improve trust and help others reproduce or update the work. Post your protocol in a registry and share a data extraction template so others can build on your work; that habit also reduces duplication.
Typical Questions Readers Ask
“Why Didn’t The Authors Pool The Data?”
Often because outcomes or methods did not align, or because study quality made a pooled number unreliable. The authors should explain the choice and show evidence tables so readers can still compare studies.
“What Does A Pooled Estimate Add?”
It increases precision and can make patterns clearer, especially when single trials are small. At the same time, the combined figure does not fix design flaws. Always weigh the estimate alongside quality and context.
Quick Reference: Reasons To Pool Or Not To Pool
| Reason | What It Means | Typical Action |
|---|---|---|
| Aligned PICO elements | Participants, interventions, comparators, and outcomes match closely. | Pool with an appropriate model. |
| High inconsistency | Effects vary across studies with no coherent explanation. | Do not pool; explain patterns in text. |
| Sparse or biased data | Few studies or serious limitations across the body of evidence. | Summarize narratively; weigh certainty. |
| Multiple options | Several interventions need a connected comparison. | Consider a network approach if links permit. |
| Different metrics | Outcomes reported on unlike scales or time frames. | Standardize if valid; otherwise avoid pooling. |
Takeaway
A protocol-driven review can answer a question without merging numbers. When studies align, adding a pooled analysis gives a sharper estimate. When they do not, a clear narrative synthesis is the honest choice. Readers should look for transparent methods, sensible decisions about pooling, and links to trusted guidance.
