A systematic review can include anywhere from zero to hundreds of studies; the right count depends on the question and scope. A meta-analysis, by contrast, needs at least two studies.
Writers and editors ask this a lot: how many studies should a systematic review include? There is no fixed quota. The number hinges on the question, the search plan, and the eligibility rules you set in advance. Some reviews include a single study; others pull in dozens. A few end up with none and still add value by mapping gaps.
How Many Studies In A Systematic Review: Real-World Ranges
Across audit papers, the study count varies by time, topic, and method. The snapshots below give a sense of scale across common datasets.
| Source | Dataset/Year | Included Studies (typical) |
| --- | --- | --- |
| Mallett & Clarke | 1,000 Cochrane reviews, 2001 | Typical review included 6 trials |
| medRxiv longitudinal cohort | Updates to 2018 | Median rose to 14 per review |
| Cochrane Annual Review | 2018 | Mean of about 17 per review |
| Cochrane prevalence reviews | 2018 set | Median of 24 per review |
These numbers are reference points, not targets. A tight PICO can yield a lean set. Broad scope, common outcomes, and mixed designs can raise the count fast. Rather than chasing a number, align the review with the question and report the flow with care.
What Drives The Study Count
Scope And PICO
The more precise your population, intervention, comparator, and outcomes, the fewer eligible studies you will include. Narrow PICO lowers noise and speeds appraisal. Broad PICO can surface more studies, but it also brings wide heterogeneity that you must handle in synthesis and narrative.
Search Depth And Sources
The databases you pick shape yield. Major indexes, trial registers, and grey sources can each add unique records. Language limits and date windows push totals up or down. Document every source and strategy so readers can track how you arrived at the final count.
Study Designs And Outcomes
Intervention reviews often center on randomized trials. Prevalence or risk factor reviews may include cross-sectional or cohort studies. Diagnostic accuracy reviews collect paired sensitivity and specificity and can include many small samples. Different designs change both volume and appraisal work.
Field Norms And Event Rarity
Rare outcomes and narrow subpopulations compress the pool. Common outcomes expand it. Multisite trials can count as one study yet add many participants, which helps with precision without inflating the study tally.
When One Study Or None Turn Up
Empty reviews exist. If no eligible study meets your criteria, you can still publish an “empty” review that documents gaps and points to ongoing trials. Use a clear flow diagram and state the inclusion rules that led to zero. PRISMA 2020 sets the reporting baseline and expects transparent methods and a full account of selection and results.
Meta-Analysis Thresholds And Power
Meta-analysis statistically pools results from two or more independent studies. That line matters. With one study you report its effect and its precision; with two or more, you can combine estimates. Many Cochrane datasets show that meta-analyses often include just a few studies, so plan methods that cope with small k, wide intervals, and heterogeneity.
When only a handful of studies qualify, pick models carefully, check influence, and present prediction intervals. If effect estimates are not compatible or data are sparse, use structured narrative or methods such as vote counting by direction of effect. The key is fit between data and method, not chasing a big k. The Cochrane Handbook, Chapter 10 sets out pooling choices and cautions for small sets.
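For readers who want the small-k arithmetic made concrete, here is a minimal sketch of random-effects pooling with a prediction interval, using the DerSimonian-Laird estimator. The effect values and variances below are invented for illustration, and the 1.96 multiplier is a normal approximation; the Cochrane Handbook discusses when this model and its alternatives are appropriate.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) for k >= 2 studies.

    effects: per-study effect estimates (e.g. log odds ratios)
    variances: their within-study variances
    Returns (pooled, ci, prediction_interval, tau2).
    """
    k = len(effects)
    assert k >= 2, "meta-analysis needs at least two studies"
    # Fixed-effect weights and Cochran's Q
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance
    # Random-effects weights incorporate tau2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Prediction interval: the range where a new study's effect may fall.
    # 1.96 is a normal approximation; with small k, a t quantile on
    # k - 2 degrees of freedom is usually recommended instead.
    half = 1.96 * math.sqrt(tau2 + se ** 2)
    pi = (pooled - half, pooled + half)
    return pooled, ci, pi, tau2

# Hypothetical log odds ratios and variances from five small trials
pooled, ci, pi, tau2 = dersimonian_laird(
    [-0.8, -0.2, 0.3, -0.6, 0.4], [0.04, 0.09, 0.12, 0.06, 0.05])
```

Note how the prediction interval is at least as wide as the confidence interval: it describes where a future study might land, not just the uncertainty in the pooled mean.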
Typical Number Of Studies In A Systematic Review By Field
Health intervention reviews often include between one and a few dozen trials, with Cochrane averages climbing over time. Prevalence reviews often include many more studies because they draw on observational designs and wide populations. Education or public policy topics can swing from a single pilot to long lists of small studies. Methods stay the same: pre-set rules, full search, and a clean audit trail.
Planning Your Likely Study Yield
Aim for a sample that fits the decision the review must inform. Use a pilot search to gauge volume, tweak filters, and test screening speed. Log the ratio of records screened to studies included. That ratio helps you budget time and personnel and gives readers context for the final k.
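The pilot-search step above reduces to a simple projection. The sketch below uses invented pilot numbers to estimate the likely final k and the screening burden per included study.

```python
# Rough yield projection from a pilot search (all numbers hypothetical).
pilot_records = 300      # records screened in the pilot
pilot_included = 2       # studies that passed screening
expected_records = 2400  # projected record count for the full search

inclusion_rate = pilot_included / pilot_records
projected_k = expected_records * inclusion_rate        # 16.0
screen_ratio = pilot_records / pilot_included          # 150 records/study

print(f"projected included studies: {projected_k:.0f}")
print(f"~{screen_ratio:.0f} records screened per included study")
```

The ratio doubles as a staffing estimate: at a known screening speed, it converts expected record volume into reviewer hours.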
| Scenario | What Usually Happens | Practical Tip |
| --- | --- | --- |
| Narrow, Specialist PICO | Small k; may be 0–5 | Pre-register and plan for an empty or single-study review |
| Common Drug Vs Placebo | Moderate k; often 5–20 | Expect multiple outcomes and subgroup work |
| Prevalence Estimate | Large k; dozens possible | Standardize case definitions and extract sample frames |
| Diagnostic Accuracy | Variable k; many small studies | Use paired measures and consider HSROC models |
| Rare Events | Tiny k; effects unstable | Plan exact methods and report prediction intervals |
Reporting The Numbers With Clarity
Use A Transparent Flow
Show the count of records at each step: found, deduplicated, screened, full texts checked, excluded with reasons, and included. Readers should be able to trace every record.
Give The Denominator
Always pair the study tally with the search coverage and dates. A small k can be fine when the search was wide and current. A big k can still mislead if the search missed key sources or if eligibility drifted after the protocol.
Pair k With Precision
Tell the story with both k and sample size. Ten tiny studies do not equal one large trial in weight. When you pool, report model choice, heterogeneity measures, and a prediction interval for context.
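A quick sketch makes the weighting point concrete. Under inverse-variance pooling, a study's weight is 1/variance, so one large trial can dominate many small ones; the standard errors below are hypothetical.

```python
# Inverse-variance weights: a study's pull on the pooled estimate
# scales with 1/variance, i.e. roughly with its sample size.
# Hypothetical standard errors: ten small trials vs one large trial.
small_se = [0.50] * 10  # ten trials of roughly 30 participants each
large_se = [0.07]       # one trial of roughly 1,500 participants

def weight(se):
    return 1.0 / se ** 2

w_small = sum(weight(se) for se in small_se)  # 10 * 4 = 40
w_large = sum(weight(se) for se in large_se)  # ~204

share_large = w_large / (w_small + w_large)
print(f"large trial carries {share_large:.0%} of the pooled weight")
```

Here the single large trial carries roughly five times the weight of the ten small trials combined, which is why reporting k without sample size can mislead.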
Quality And Risk Of Bias Still Rule
More studies do not always mean stronger evidence. Poor design, selective reporting, and small-study effects can pull estimates away from the truth. Rate risk of bias with a vetted tool for the designs you included. Plan sensitivity checks that drop high-risk studies and show how the results move.
Common Mistakes That Skew The Count
Drifting Eligibility
Changing inclusion rules after screening starts inflates or shrinks k in ways readers cannot follow. Lock your protocol. If you must revise it, document the reason and the date, then show the effect on screening.
Shallow Searches
One or two databases rarely capture the full record set. Add trial registers and check reference lists from the included studies. Report every source so others can repeat the path.
Double-Counting Data
Multiple reports from one trial can sneak into the pool as separate studies. Link companion papers and count the underlying study once. Extract the right data version and note which report supplied it.
Mixing Outcomes Without A Plan
Pooling across mismatched outcomes raises noise and hides signals. Pre-specify primary outcomes and time points. If you add outcomes later, label them as post hoc and keep them out of the main claim.
Examples Of Study Flow Math
Say your search finds 3,200 records. After deduplication you have 2,400. Title and abstract screening excludes 2,330, leaving 70 papers for full-text review; 55 of those fail eligibility for reasons such as wrong design or wrong outcome, so you include 15 studies. Report those numbers in the flow diagram, list the main exclusion reasons, and tag ongoing trials. The count then tells a clear story and gives readers a handle on effort and scope.
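Flow math like this can be checked mechanically before the diagram is drawn. The sketch below reuses the worked example's counts (hypothetical numbers) and confirms the included tally and the screening ratio.

```python
# Reconciling the flow counts from the worked example above.
found = 3200            # records identified by the search
after_dedup = 2400      # records left after deduplication
full_texts = 70         # papers assessed at full text
excluded_full = 55      # full texts failing eligibility

included = full_texts - excluded_full  # 15 studies
ratio = after_dedup / included         # 160 records per included study

# Every record should be accounted for at each step of the diagram.
print(f"{after_dedup} screened -> {full_texts} full texts "
      f"-> {included} included ({ratio:.0f} records per study)")
```

If the sums do not reconcile at any step, that is a sign of drifting eligibility or unlogged exclusions, which is exactly what the flow diagram exists to catch.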
Checklist Before You Stop Screening
- Protocol registered and linked.
- Search strings run across all planned sources.
- Deduplication completed and documented.
- Dual screening calibrated with agreement checks.
- Full-text decisions logged with clear reasons.
- Included studies mapped to outcomes and time points.
- Ongoing or awaiting-classification items listed.
Final Takeaway For Authors
There is no magic number. A systematic review can include zero, one, or many studies and still be valid when the method is tight and the reporting is complete. Meta-analysis needs at least two independent studies, but value rests on fit, quality, and clarity. Aim for a review that answers the question, shows its work, and makes decisions easier for readers.
Further reading: PRISMA 2020 gives the reporting checklist, and the Cochrane Handbook explains study selection and meta-analysis choices in depth. Link both in your protocol and cite them in the methods section of your manuscript.