Can A Systematic Review Include Other Systematic Reviews?

Yes, a systematic review can include other systematic reviews when run as an overview (umbrella review) with safeguards for overlap and quality.

Researchers sometimes want a high-level answer across many related questions or populations. In those cases, a review of reviews—often called an overview or umbrella review—can be the right design. It gathers published reviews, checks their methods, maps where they agree, and flags conflicts or gaps. This guide shows when that route makes sense, how to run it well, and where it can go wrong.

What “Review Of Reviews” Means

An overview draws on completed systematic reviews as its main evidence units. You set a protocol, search for eligible reviews, appraise each one, and then synthesize their findings. You do not rerun every primary study from scratch; the goal is to summarize review-level results and explain patterns across them.

Evidence Synthesis Options At A Glance
| Approach | What It Synthesizes | Best Use Case |
| --- | --- | --- |
| Primary-Study Review | Trials or observational studies | Direct estimates when no strong reviews exist |
| Overview/Umbrella Review | Completed systematic reviews | Broad view across conditions, settings, or outcomes |
| Scoping Review | Any evidence type without effect pooling | Map a field and refine questions |

Including Published Reviews Inside A New Evidence Synthesis: When It Works

Choose an overview when the topic already has multiple solid reviews that answer nearby questions, or when decision-makers need a broad view across several interventions, subgroups, or time frames. It also helps when speed matters and re-extracting hundreds of trials would add little value.

When You Should Not Do It

Skip the overview path if only one or two thin reviews exist, if those reviews miss recent trials, or if their methods are weak. Also avoid it when your users need a single pooled effect built from raw study data; in that case a fresh primary-study review is the better plan.

Core Design Choices

Define A Crisp Question

Frame the question in terms of participants, interventions, comparators, outcomes, and settings. Decide upfront whether you will accept mixed populations, multi-component programs, or non-randomized evidence at the review level.

Set Explicit Eligibility For Reviews

State which review types are in scope (intervention, diagnostic, prognosis) and which are out (narratives without methods, rapid commentaries). Require a reproducible search, clear selection criteria, risk-of-bias assessment, and transparent synthesis. That keeps the evidence base clean.

Plan For Overlap

Different reviews often include the same trials. Overlap can double-count evidence and inflate confidence. Build an overlap matrix that lists primary studies against the included reviews, then choose a handling rule: pick the best review per outcome, or combine after adjusting the weights.
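As a concrete sketch, the snippet below builds such a matrix with pandas from each review's list of included trials; the review and trial IDs are made up for illustration.

```python
import pandas as pd

# Hypothetical data: each included review mapped to the primary
# studies (trial IDs) it contains. All IDs are illustrative.
inclusions = {
    "ReviewA_2021": ["NCT001", "NCT002", "NCT003"],
    "ReviewB_2022": ["NCT002", "NCT003", "NCT004"],
    "ReviewC_2023": ["NCT001", "NCT004", "NCT005"],
}

# Long format: one row per (review, study) pair.
pairs = [
    {"review": review, "study": study}
    for review, studies in inclusions.items()
    for study in studies
]

# Pivot to a study-by-review matrix: 1 where the study appears
# in the review, 0 otherwise.
matrix = (
    pd.DataFrame(pairs)
    .assign(included=1)
    .pivot_table(index="study", columns="review",
                 values="included", fill_value=0)
)
print(matrix)

# Rows that sum to more than one are the studies that need a
# handling rule before any pooling.
overlapping = matrix[matrix.sum(axis=1) > 1]
print(f"{len(overlapping)} of {len(matrix)} studies appear in 2+ reviews")
```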

Search Where Reviews Live

Search databases that index reviews (e.g., Epistemonikos, MEDLINE, Embase) and specialist libraries. Add citation chasing and reference checks. Record full strategies and dates to keep the process auditable.

Appraise Each Review With A Standard Tool

Use a validated checklist built for review-level appraisal. AMSTAR 2 is common in healthcare and covers protocol, search, selection, risk-of-bias methods, and synthesis choices. Rate overall confidence in each review rather than simply tallying checklist items.

Extract Review-Level Data

Capture who and what the review covered, search dates, inclusion criteria, number of primary studies, effect measures, risk-of-bias tools, certainty assessments, and main results by outcome. Note any subgroup or sensitivity results that matter for your users.
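To keep those fields consistent across extractors, some teams define a structured record up front. Here is a minimal sketch assuming a Python workflow; the field names are illustrative, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewExtraction:
    # Field names are illustrative; adapt them to your protocol.
    review_id: str
    population: str
    interventions: list[str]
    search_end_date: str           # ISO date, e.g. "2023-06-30"
    n_primary_studies: int
    effect_measure: str            # e.g. "RR", "OR", "SMD"
    risk_of_bias_tool: str         # e.g. "RoB 2", "ROBINS-I"
    certainty_method: str          # e.g. "GRADE"
    results_by_outcome: dict[str, str] = field(default_factory=dict)
    notes: str = ""
```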

Synthesize Across Reviews

Start by narrating where reviews align and where they split. If methods and outcomes match closely, you may pool review-level effect estimates, but only after checking overlap. Many teams keep a structured narrative with summary tables and GRADE certainty instead of a second-order meta-analysis.
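If you do pool, the arithmetic is standard inverse-variance weighting on the log scale. The sketch below, with made-up numbers, shows a fixed-effect version; it is only defensible after the overlap check, for example once you have kept one best review per outcome:

```python
import math

# Hypothetical review-level results for one outcome: risk ratio and
# its standard error on the log scale. Values are illustrative; in
# practice they come from your extraction tables.
reviews = [
    {"id": "ReviewA_2021", "log_rr": math.log(0.85), "se": 0.10},
    {"id": "ReviewC_2023", "log_rr": math.log(0.78), "se": 0.15},
]

# Fixed-effect inverse-variance pooling: weight each estimate by
# 1/SE^2, then back-transform the pooled log effect.
weights = [1 / r["se"] ** 2 for r in reviews]
pooled_log = sum(w * r["log_rr"] for w, r in zip(weights, reviews)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(f"Pooled RR {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```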

Report With A Recognized Checklist

Use PRISMA 2020 items adapted for overviews. Provide flow diagrams for review selection, show the overlap assessment, and present certainty ratings clearly. Readers should be able to retrace each decision.

Risks, Biases, And How To Limit Them

Double-Counting Primary Studies

Overlap is the top risk. Use a matrix or heat map to show where the same trial appears across reviews. Then apply a rule that avoids counting it twice in any pooled estimate.

Outdated Searches Inside Included Reviews

A review from five years ago may miss new trials that change the answer. Screen for published updates, or run targeted new searches for high-impact outcomes. If a priority review is dated, you can either exclude it or supplement it with newer primary studies in a linked analysis.

Mixed Quality Across Reviews

Some reviews apply strong methods; others cut corners. Downgrade the weight of weak reviews in your synthesis and be plain about confidence. A summary table that maps AMSTAR 2 ratings to each outcome helps readers see the strength of the foundation.

Different Outcome Definitions

Reviews may label the same endpoint in different ways. Build a harmonization plan: create outcome families with clear rules for time points and measures.

Practical Walkthrough

1) Draft The Protocol

Write aims, eligibility for reviews, databases, search strings, screening steps, overlap plan, appraisal tool, extraction fields, and synthesis rules. Register it where your field expects, then stick to it unless you amend with reasons.

2) Run The Search

Search at least two bibliographic databases plus a review repository. Export to a manager, deduplicate, and screen in pairs by title/abstract, then full text.

3) Build The Overlap Matrix

List all primary studies by identifier down the rows and the included reviews across the columns. Fill cells where a study appears. Compute an overlap metric if you plan any pooling.
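One commonly reported metric is the corrected covered area (CCA) described by Pieper and colleagues: the number of inclusion ticks beyond each study's first appearance, divided by the remaining cells of the matrix. A minimal sketch with an illustrative matrix:

```python
import pandas as pd

def corrected_covered_area(matrix: pd.DataFrame) -> float:
    """Corrected covered area (CCA), after Pieper et al. 2014.

    matrix: study-by-review table of 0/1 inclusion flags.
    Returns overlap as a fraction; 0 means no study is shared.
    """
    n = int(matrix.to_numpy().sum())  # total inclusion ticks
    r, c = matrix.shape               # unique studies, reviews
    return (n - r) / (r * c - r)

# Illustrative matrix: three reviews, five trials (IDs made up).
matrix = pd.DataFrame(
    {"ReviewA": [1, 1, 1, 0, 0],
     "ReviewB": [0, 1, 1, 1, 0],
     "ReviewC": [1, 0, 0, 1, 1]},
    index=["NCT001", "NCT002", "NCT003", "NCT004", "NCT005"],
)
print(f"CCA = {corrected_covered_area(matrix):.1%}")
```

Higher values signal heavier overlap and a stronger case for picking one best review per outcome.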

4) Appraise With AMSTAR 2

Judge each review across key domains. Mark critical weaknesses such as missing a protocol, poor search, or no risk-of-bias assessment. Use these ratings to set confidence in each outcome.

5) Extract And Harmonize

Use a piloted form. For each outcome, record effect estimates, comparators, time points, and certainty judgements. Align metrics so like is compared with like.

6) Synthesize And Grade

Combine results across reviews using your pre-set rules. If pooling, adjust for overlap or pick a best review per outcome. Rate certainty using GRADE or the method your field prefers, and explain downgrades plainly.

7) Present For Decisions

Lead with a one-screen summary that names the population, intervention, and bottom-line effect per outcome, with certainty. Add expandable sections for methods, tables, and caveats so readers can drill down.

Trusted Methods And Templates You Can Lean On

International handbooks outline this design. The Cochrane Handbook labels these projects “overviews of reviews” and gives step-by-step methods for timing, overlap handling, appraisal, and reporting. JBI uses the term “umbrella review” and offers templates for protocols, extraction, and summary tables. Link your workflow to these playbooks to keep the process reproducible and easy to audit. See Cochrane overviews guidance and the JBI chapter on umbrella reviews.

Common Scenarios And How To Tackle Them

Broad Policy Question With Many Interventions

When a payer asks which programs reduce hospital readmissions across adult populations, pull in multiple intervention-specific reviews. Structure the synthesis by intervention class and outcome, then compare across classes.

Rapid Evidence Need With A Large Literature

If a task force needs an answer in weeks, a well-run overview can surface where reviews already agree. Use strict eligibility and prune weak reviews to keep speed without losing clarity.

Field With Frequent Updates

Choose an overview when frequent trial updates make any single study-level meta-analysis age quickly. Your job becomes comparing the newest reviews and flagging any shifts in direction.

Data Management That Prevents Headaches

Build Clean, Reusable Tables

Create machine-readable tables for included reviews, outcomes, and effect metrics. Keep identifiers consistent across files so you can track updates. A simple spreadsheet with stable IDs saves hours later.
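In practice this can be as simple as two tidy tables joined on a stable review ID. A sketch with invented values:

```python
import pandas as pd

# Two tidy tables keyed by a stable review_id (values illustrative).
reviews = pd.DataFrame({
    "review_id": ["R001", "R002"],
    "title": ["Exercise for knee OA", "Education for knee OA"],
    "search_end": ["2022-11-30", "2023-04-15"],
})
outcomes = pd.DataFrame({
    "review_id": ["R001", "R001", "R002"],
    "outcome": ["pain_6mo", "function_6mo", "pain_6mo"],
    "effect": ["SMD -0.45", "SMD -0.30", "SMD -0.10"],
})

# Stable IDs make joins trivial and survive later file updates.
merged = outcomes.merge(reviews, on="review_id", how="left")
print(merged)
```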

Harmonize Outcome Labels

Pick naming rules for time windows, scales, and direction of benefit. Convert metrics to a common form where possible. Record any transformations in a log so others can check the math.
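A small label map plus a transformation log is often enough. The sketch below, with hypothetical outcome labels, maps raw names to one outcome family and time window and records each metric conversion:

```python
import math

# Illustrative label map: raw outcome names from each review mapped
# to one outcome family and time window. Extend as reviews are added.
LABEL_MAP = {
    "pain at 26 weeks": ("pain", "6mo"),
    "pain score (6 months)": ("pain", "6mo"),
    "WOMAC pain, month 6": ("pain", "6mo"),
}

transform_log = []  # record every conversion so others can check it

def harmonize(raw_label: str, odds_ratio: float) -> dict:
    family, window = LABEL_MAP[raw_label]
    log_or = math.log(odds_ratio)  # common scale for later pooling
    transform_log.append(f"{raw_label!r} -> {family}/{window}; OR -> log OR")
    return {"family": family, "window": window, "log_or": log_or}

print(harmonize("pain at 26 weeks", 0.80))
print(transform_log)
```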

Flag Conflicts Early

When two included reviews disagree on an outcome, list the likely reasons: different eligibility, different risk-of-bias rules, or different effect models. Explain which approach you trusted and why.

Reporting That Readers Can Trust

Use PRISMA 2020 items for transparent reporting: flow diagram, full search strings, selection reasons, appraisal results, and a clear statement of certainty for each outcome. If you adapt the checklist for an overview, say so and note any extra items such as overlap handling.

Review-Of-Reviews Inclusion Checklist
| Check | Why It Matters | How To Document |
| --- | --- | --- |
| Clear question and protocol | Prevents scope drift | Public registration and dated amendments |
| Explicit eligibility for reviews | Blocks weak designs | List in/out criteria in the protocol |
| Wide review search | Cuts retrieval bias | Full strategies and dates |
| Overlap assessment | Avoids double-counting | Matrix or heat map in appendix |
| Review-level appraisal | Weights by credibility | AMSTAR 2 ratings by outcome |
| Consistent outcome mapping | Makes results comparable | Outcome families and time windows |
| Transparent synthesis rules | Reproducible decisions | Pre-set tie-breaks and pooling choices |
| Certainty judgements | Signals confidence | GRADE table per outcome |
| Plain-language summary | Faster decisions | One-screen top summary |

Quick Tips That Save Time

Pick The Best Review Per Outcome

When reviews overlap a lot, select the one with the broadest search, clear methods, and the newest end date for each outcome. Cite the others as context.

Keep A Living Evidence Log

Track cut-off dates for each included review. If a priority outcome has a gap, plan a small update search that targets that gap instead of rebuilding the whole field.
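A few lines of code can flag stale searches automatically; in this sketch the two-year threshold is an arbitrary illustration, not a methodological rule:

```python
from datetime import date

# Illustrative search cut-off dates for each included review.
search_end = {
    "ReviewA_2021": date(2021, 3, 31),
    "ReviewB_2022": date(2022, 9, 30),
    "ReviewC_2023": date(2023, 6, 30),
}

STALE_AFTER_DAYS = 2 * 365  # threshold is a judgment call

today = date.today()
for review, end in sorted(search_end.items()):
    age = (today - end).days
    flag = "UPDATE SEARCH?" if age > STALE_AFTER_DAYS else "ok"
    print(f"{review}: search ended {end} ({age} days ago) - {flag}")
```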

Show Your Work

Put the overlap matrix, AMSTAR 2 forms, and extraction tables in appendices. Readers gain trust when they can see the steps.

Ethics, Conflicts, And Transparency

State funding and any ties to products or interventions. Describe the role of sponsors in topic choice, protocol, and manuscript review. Keep data files and forms available on a public repository when your organization allows it.

Terminology You Will See

“Overview of reviews” and “umbrella review” refer to the same general idea: a synthesis that treats completed reviews as the main unit of evidence. Some fields also use “meta-review.” The workflow still needs a protocol, search, appraisal, overlap handling, and careful reporting.

Bottom Line

Yes—pulling in completed reviews inside a new synthesis is a valid method when you need a broad, structured view. The design has rules: prevent overlap bias, appraise review quality, and report with a checklist. Follow those steps and you can answer wide questions without re-extracting every trial.