Can You Do A Systematic Review Without A Meta-Analysis? | Clear Methods Guide

Yes, you can complete a systematic review without pooling effects when data or methods don’t allow it.

Plenty of high-quality reviews don’t combine effect sizes. Reasons vary: incompatible outcomes, sparse or biased data, or wide study differences that make a pooled average unhelpful. In these cases, your job shifts from calculating a single summary to presenting a transparent, structured synthesis that lets readers see what the studies show and why. Cochrane’s Handbook and the SWiM reporting checklist explain how to do that with care.

What “No Meta” Actually Means

No pooled statistic isn’t the same as no synthesis. You still plan comparisons, group studies, and communicate findings in a way that decision-makers can use. Common options include narrative synthesis with structured text, vote counting by effect direction, and visual summaries like effect-direction plots or harvest plots. Each option has rules and limits, and you still assess risk of bias and certainty.

When Pooling Isn’t Sensible

There are sound reasons to skip a pooled estimate. These reasons don’t lower the review’s value; they show method discipline. The table below gives typical triggers and a safer next step.

| Trigger | Why Pooling Misleads | Better Move |
| --- | --- | --- |
| Outcomes on incomparable scales | Mixing metrics can distort the average | Map to a common metric or use structured text |
| Incomplete reporting | Missing variance or sample data blocks effect sizes | Contact authors; use effect direction summaries |
| Few small studies with high bias | Pooled number may look precise but isn’t trustworthy | Describe findings and downgrade certainty |
| Extreme clinical or method differences | Average hides real variation | Group by design/population and compare patterns |
| Different effect measures across studies | Odds ratios, risk ratios, and MDs don’t mix cleanly | Convert if valid; else use non-pooled synthesis |

Doing A Systematic Review Without Pooled Estimates: When It Works

This path works when you still apply the same rigor: pre-register, use a tested search strategy, screen with two reviewers, extract in duplicate, assess bias, and pre-specify how you’ll group and summarise studies. Clear reporting matters. The PRISMA 2020 update sets expectations for flow, tables, and transparency across methods, not just for pooled analyses.

Core Methods For Non-Pooled Synthesis

Structured Narrative Synthesis

Tell the evidence story with a template, not freeform prose. Define review questions, theory, and logic links. Explain study groupings, data handling, and how the text reflects the data. Use summary paragraphs that mirror tables and figures. This approach is well described in the classic guidance from Popay and colleagues.

Vote Counting By Direction Of Effect

Here you tally which studies favour the intervention, show no clear difference, or favour the comparator. Avoid counts based on p-values; direction is safer and aligns with newer guidance. Pair counts with sample sizes and risk of bias to avoid giving tiny studies the same weight as large ones. Cochrane sets out these cautions and suggests visual aids to reduce misreadings.
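The tallying and weighting logic above can be sketched in a few lines. This is a minimal illustration with hypothetical study records (the study names, sample sizes, and risk-of-bias labels are invented), not a standard tool:

```python
from collections import Counter

# Hypothetical extracted data: one record per study, with the observed
# direction of effect, total sample size, and an overall risk-of-bias rating.
studies = [
    {"id": "Avery 2019", "direction": "favours intervention", "n": 320, "rob": "low"},
    {"id": "Brand 2020", "direction": "favours intervention", "n": 45,  "rob": "high"},
    {"id": "Chen 2021",  "direction": "no clear difference",  "n": 210, "rob": "some concerns"},
    {"id": "Dietz 2018", "direction": "favours comparator",   "n": 38,  "rob": "high"},
    {"id": "Egan 2022",  "direction": "favours intervention", "n": 510, "rob": "low"},
]

# Tally by direction only -- no p-value counting.
counts = Counter(s["direction"] for s in studies)

# Pair each tally with the sample sizes and bias ratings behind it, so a
# "3 of 5 favour intervention" claim never hides that some of those
# studies are tiny, high-bias trials.
for direction, k in counts.most_common():
    members = [s for s in studies if s["direction"] == direction]
    total_n = sum(s["n"] for s in members)
    high_rob = sum(1 for s in members if s["rob"] == "high")
    print(f"{direction}: {k}/{len(studies)} studies, "
          f"total n={total_n}, high risk of bias in {high_rob}")
```

Reporting the counts alongside the sample sizes and bias ratings, as the loop does, is what keeps a direction tally honest.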

Effect Direction Plots And Harvest Plots

These visuals show, study by study, the direction of effect and signals of its strength alongside study features. They make patterns across groups easy to see without a pooled number. Tutorials and training packs expand on layouts that keep the link between data and inference tight.
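A text-only sketch conveys the idea before you commit to a figure. Here is a minimal, hypothetical effect-direction summary (the outcomes, studies, and size bands are invented for illustration): arrows mark direction, and a wider marker flags a larger study.

```python
# Hypothetical per-study records: (outcome group, study, direction, sample size).
# Direction codes: "+1" favours intervention, "-1" favours comparator,
# "0" no clear difference.
records = [
    ("Pain at 12 weeks", "Avery 2019", "+1", 320),
    ("Pain at 12 weeks", "Brand 2020", "+1", 45),
    ("Pain at 12 weeks", "Dietz 2018", "-1", 38),
    ("Function",         "Chen 2021",  "0",  210),
    ("Function",         "Egan 2022",  "+1", 510),
]

SYMBOL = {"+1": "▲", "-1": "▼", "0": "—"}

def size_band(n):
    """Crude size band: repeat the marker for larger samples so big studies stand out."""
    return 1 + (n >= 100) + (n >= 500)

current = None
for outcome, study, direction, n in records:
    if outcome != current:
        print(f"\n{outcome}")
        current = outcome
    mark = SYMBOL[direction] * size_band(n)
    print(f"  {study:<11} {mark:<4} (n={n})")
```

Grouping rows by outcome and scaling the marker by sample size mirrors what an effect-direction plot does graphically: the reader sees at a glance where the signal sits and which studies carry it.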

Make Reporting Bulletproof

Readers and editors want to know exactly how you got from data to statements. The SWiM checklist gives nine items that anchor that transparency: how you grouped studies, which common metric you used, which synthesis method you chose, how you handled study size and quality, and how you presented the data. Use those items as subheads while you draft.

For page-level reporting, PRISMA 2020 spells out what to show in the abstract, methods, results, and appendices. That includes your full search string, a flow diagram for study selection, risk-of-bias judgments, and any reasons for not pooling. These expectations apply whether you compute an average or not.

Quick Links To Authoritative Guidance

You can anchor your methods to trusted sources. See the SWiM reporting guideline and the PRISMA 2020 statement for checklists and examples that editors recognise.

Step-By-Step Plan That Works In Practice

1) Plan The Synthesis In Advance

State the decisions you’ll make if pooling isn’t possible. Pre-define grouping variables, preferred metrics, and fallback displays. Keep these rules in your protocol so choices don’t drift mid-review.

2) Group Studies Logically

Use outcome type, population, intervention format, or study design. The goal is to compare like with like before you summarise across groups. Document these choices and stick to them.

3) Choose A Common Way To Express Effects

Pick a standard sign or scale for direction (benefit vs no clear difference vs harm). If units clash, report percentage change, standardized mean difference, or plain direction only, and explain why.
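When outcomes are continuous but measured on different instruments, the standardized mean difference is the usual common metric. A minimal sketch with invented group summaries (means, SDs, and sample sizes are hypothetical):

```python
from math import sqrt

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def hedges_g(d, n1, n2):
    """Hedges' g: Cohen's d with the standard small-sample correction."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Hypothetical study: intervention mean 24.0 (SD 6.0, n=40)
# vs comparator mean 20.0 (SD 5.0, n=42).
d = standardized_mean_difference(24.0, 6.0, 40, 20.0, 5.0, 42)
g = hedges_g(d, 40, 42)
print(f"Cohen's d = {d:.2f}, Hedges' g = {g:.2f}")
```

If the data needed for this calculation (means, dispersions, group sizes) are missing, fall back to plain direction, as the step above says, and state that choice.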

4) Display Findings So Patterns Are Obvious

Use structured tables, effect-direction plots, and short summary paragraphs that match table columns. Link each claim to concrete rows, not general impressions.

5) Weigh Study Quality And Size

Signal when high risk-of-bias studies drive an apparent pattern. Run sensitivity checks that drop weak studies or limit to pre-registered outcomes only.
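A sensitivity check of this kind is just the same tally run twice. A minimal sketch with hypothetical records (study names and ratings invented): compare the pattern from all studies with the pattern after dropping those at high risk of bias.

```python
from collections import Counter

# Hypothetical extraction: direction of effect plus an overall risk-of-bias rating.
studies = [
    {"id": "Avery 2019", "direction": "favours intervention", "rob": "low"},
    {"id": "Brand 2020", "direction": "favours intervention", "rob": "high"},
    {"id": "Chen 2021",  "direction": "no clear difference",  "rob": "some concerns"},
    {"id": "Dietz 2018", "direction": "favours comparator",   "rob": "high"},
    {"id": "Egan 2022",  "direction": "favours intervention", "rob": "low"},
]

def tally(records):
    """Count studies by direction of effect."""
    return Counter(s["direction"] for s in records)

full = tally(studies)
restricted = tally([s for s in studies if s["rob"] != "high"])

print("All studies:       ", dict(full))
print("Excluding high RoB:", dict(restricted))
# If the apparent pattern weakens once high risk-of-bias studies are dropped,
# say so plainly and downgrade certainty accordingly.
```

The same filter works for restricting to pre-registered outcomes: swap the `rob` condition for a registration flag in your extraction sheet.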

6) Explain Why You Didn’t Pool

State the barrier: incompatible metrics, missing dispersion data, or extreme diversity. Quote numbers where helpful, such as I² or the range of baseline values, but don’t over-interpret them.
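For reference, I² is the standard descriptive statistic for heterogeneity: the proportion of variability across studies attributable to real differences rather than chance. It is derived from Cochran's Q over k studies:

```latex
I^2 = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%
```

For example, Q = 12 across k = 5 studies gives I² = (12 − 4)/12 ≈ 67%, a level of inconsistency that would itself support a decision not to pool.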

7) Be Cautious With Terms

Avoid language that sounds definitive when your method isn’t estimating a single effect. Use hedged phrasing tied to study quality and size. End each section with what a practitioner can do next, not grand claims.

Common Mistakes To Avoid

Unstructured writing that hides which data back each claim. Counting only p-values. Mixing apples and oranges in one paragraph. Ignoring risk of bias when summarising. Skipping certainty grading. These traps are common in published reviews and flagged by method groups. A clear plan and the SWiM items help you dodge them.

What Editors And Reviewers Expect

Editors want clarity on choices. They’ll scan for a prespecified plan, a flow diagram, risk-of-bias tables, and a section that says why a pooled number wasn’t suitable. They’ll also look for consistency between text and figures. PRISMA gives that spine, and the Handbook fills in the method detail.

Mini-Template For Your Results Section

Use this outline to keep results tight and traceable across outcomes.

Outcome Header

Define the outcome and time point. Say how many studies report it and the direction of effects by group. Point to the figure or table that shows the pattern.

Study Grouping

State the variables used for grouping and why they suit the question. Mention any cross-over in populations or settings that could blur signals.

Synthesis Method

Name the non-pooled approach, such as vote counting by direction or structured narrative. Explain any rules for handling multiple outcomes per study.

Presentation Choice

Call out the figure or table used: effect-direction plot, harvest plot, or a compact evidence table. Explain symbols and shading so readers don’t guess.

Summary Statement

Write one tight line that ties the pattern to the strength of the evidence. Avoid grade-inflated wording when evidence is thin or mixed.

Helpful Tools And Resources

The Cochrane Handbook chapters on analysis and non-pooled synthesis give step-by-step method notes, cautions, and worked examples. PRISMA provides templates and checklists. Bookmark both so your team stays aligned across drafts and revisions.

If you work in a topic area with varied designs or outcomes, keep a library of templates. One for narrative paragraphs, one for direction-of-effect tables, and one for risk-of-bias notes. Reusing templates speeds drafting and reduces errors. Version them in your repository.

At-A-Glance Reporting Items (SWiM And PRISMA)

The table below condenses frequent items that make editors happy and help readers verify claims.

| Item | What To Show | Where |
| --- | --- | --- |
| Study grouping | Rules and rationale | Methods; early in Results |
| Standardised metric | Direction, scale, or unit choices | Methods; figure notes |
| Synthesis method | Narrative, vote count by direction, or visual | Methods; subtitle in Results |
| Data display | Which tables/plots show which outcomes | Results; legends |
| Study weights | How size and bias affected judgments | Results; sensitivity text |
| Limitations | Data gaps and how they shape claims | Discussion |
| Reason for no pooling | Concrete barrier and numbers | Methods; Results |
| Flow and search | Full strings and selection counts | Appendix; figure |
| Risk of bias | Tool used and judgments | Tables; appendix |

Editorial Standards, Networks, And Certainty

Leading journals publish well-reported non-pooled syntheses when data don’t justify an average. What they check is clarity: a prespecified plan, a flow figure, risk-of-bias tables, and a crisp note on why pooling wasn’t feasible. Point editors to your figures that map patterns across outcomes, and cite SWiM in the cover letter to signal method care.

Networks need a connected set of studies and compatible outcomes. If those preconditions fail, keep the synthesis non-pooled rather than forcing an indirect model. When a network is coherent and outcomes align, the Handbook chapter on networks is the route to follow.

Certainty grading still applies. Judge the pattern against bias, imprecision, and inconsistency, and say how much weight a reader can place on it. Be plain about limits and what new studies would need to show to move that judgment.

Bottom Line

You can deliver a rigorous review without a pooled estimate. The method is to plan the synthesis, keep the link from data to claims tight, and report each step with the clarity that SWiM and PRISMA expect. That way, readers get insight they can use, even when the numbers won’t line up for a single average.