There is no fixed count for health-science reviews; set a range based on scope, methods, and the evidence you can find and appraise.
Readers ask for a number. Method leaders in health research don’t give one. They ask for a clear plan, a documented search, and an honest account of what you found. This guide shows you how to set a smart target that fits your topic and meets review standards.
What “Enough Sources” Really Means
In health research, “enough” means your search was broad, documented, and unbiased; your screening was fair; and your final set supports the question you posed. Review checklists focus on transparency, not a magic tally. You’ll see that theme across gold-standard manuals and checklists.
Typical Ranges By Review Type (Fast Lookup)
Use these ballpark bands to scope time and effort. They are ranges, not rules. Your topic, time frame, and inclusion rules can push numbers lower or higher.
| Review Type | Usual Range Of Included Studies | Common Sources To Search |
|---|---|---|
| Systematic Intervention Review | 20–150+ included studies; thousands of records screened | MEDLINE/PubMed, Embase, CENTRAL, trial registers, regulatory and industry files |
| Scoping Review | 50–300+ included items; broad mapping aim | Multiple databases, trial registers, grey literature, policy sites |
| Diagnostic Accuracy Review | 10–100+ included studies; indexing varies by test | MEDLINE, Embase, specialized indexes, trial registers |
| Qualitative Evidence Synthesis | 15–60+ included studies; depth over volume | MEDLINE, CINAHL, PsycINFO, subject repositories |
| Narrative/Traditional Review | 20–80+ cited works; depends on scope and depth | Core databases plus hand-searching and cited-reference chasing |
How To Set A Target Range Before You Search
Pick a working range that fits your method and ask, “Can I justify this range with the question I set and the coverage I plan?” Here’s a quick way to do that.
1) Define A Focused Question
Use PICO or a close variant for interventions; use a fitting frame for qualitative or diagnostic work. A tight question trims duplicates and off-topic hits and keeps screening load sane.
2) Map The Field With A Pilot Search
Run a 30–60 minute scoping search in two core databases. Log:
- Hits for broad strings vs. precise strings
- How many unique records remain after de-duping
- How many look eligible on quick title/abstract scans
From that snapshot, estimate screening volume (e.g., 3,000 records → ~2,400 after de-dupes → maybe 150 full texts → ~40–80 included). That gives you a first-pass range to plan time and staffing.
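That back-of-envelope projection can be sketched in a few lines. Every rate below is an illustrative assumption pulled from the example numbers above, not a benchmark for your topic:

```python
# Illustrative pilot-search projection; all rates are assumed examples.
records = 3000                # raw hits across the pilot databases
dedupe_survival = 0.80        # share of records left after de-duplication
fulltext_rate = 0.0625        # title/abstract screens that go to full text
inclusion_rate = 0.40         # full texts that meet every criterion

unique = records * dedupe_survival             # 2,400 unique records
full_texts = round(unique * fulltext_rate)     # 150 full-text reviews
included = round(full_texts * inclusion_rate)  # ~60 included studies

print(f"{unique:.0f} unique -> {full_texts} full texts -> ~{included} included")
```

Swap in your own pilot rates; the point is to write the chain down once so the projected range in your protocol has visible arithmetic behind it.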
3) Set Inclusion And Exclusion Rules
Clear rules shrink noise and shape the final count. Be specific on population, setting, design, outcomes, time window, and language policy. Make sure the window matches the pace of the field: vaccines or AI imaging need a fresh cutoff; anatomy atlases do not.
4) Choose Databases And Other Sources
Health topics draw from different wells. Pair a clinical database with a second that covers your field’s edges (e.g., nursing, mental health, policy). Add trial registers and regulatory files when outcomes or safety matter. Document every source and its last-searched date.
5) Plan Screening And Appraisal
Pilot your screening form on 50–100 records. If the yield is thin, widen terms or add a database. If you’re drowning, tighten scope or split screening across more reviewers. Use dual screening for key stages when bias risk is high.
How Many References Fit A Health Review? Practical Ranges
Here are workable, method-aware bands you can defend in a protocol and in peer review. Again, these are guides, not quotas.
Systematic Intervention Reviews
Plan for thousands of records screened and dozens of included studies. For common drugs or procedures, the upper band grows fast. For niche devices, the band may stay small. The key is thorough coverage of trial registers, core databases, and any regulatory dossiers that bear on outcomes and harms.
Scoping Reviews
Expect a wide net and a large final set. You’re mapping concepts and gaps, not judging effects. That means more record types—primary studies, policy papers, and grey literature. The included count often lands in the low hundreds on broad topics.
Diagnostic Test Accuracy Reviews
Indexing of tests can be messy. Plan a database mix that catches both clinical and technical journals. Included counts vary widely; a hot imaging topic can yield dozens, while rare tests may yield only a handful. The search plan and flow chart matter more than the final tally.
Qualitative Evidence Syntheses
Depth beats breadth. Many teams land between 15 and 60 included studies so they can code and theme with care. Broader questions or multi-country aims can push that higher. Scope the workload for coding before you lock your window.
Narrative Reviews
These pieces aim to brief readers on a topic with context and critique. They usually cite dozens of works rather than hundreds. Readers expect up-to-date clinical trials, landmark studies, and any major guidelines, all clearly signposted.
Why Method Standards Don’t Name A Number
Method standards in health research center on process quality: a documented search, fair selection, and full reporting of what you did and found. That approach fits both sparse topics and crowded ones, and it explains why no checklist pins a universal count.
What Checklists Ask You To Show
- Every source you searched and the coverage window
- Full strategies for at least one database
- Dates last searched
- A flow diagram with records identified, screened, excluded, and included
If you can show all that and your included set answers the question with fair certainty, you have “enough.”
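The flow-diagram counts must also reconcile stage by stage. A quick arithmetic check catches logging gaps before you draft the figure; the counts below are hypothetical examples:

```python
# Hypothetical counts; substitute your own before drafting the flow figure.
identified = 3200                             # records from all sources
duplicates_removed = 700
screened = identified - duplicates_removed    # titles/abstracts screened
excluded_screening = 2330
full_text = screened - excluded_screening     # reports sought in full text
excluded_full_text = 118
included = full_text - excluded_full_text     # studies in the synthesis

# Every record must end in exactly one bucket; a mismatch means a logging gap.
assert identified == (duplicates_removed + excluded_screening
                      + excluded_full_text + included)
print(screened, full_text, included)
```

If the assertion fails, the search log and the flow chart disagree, and that is worth fixing before a reviewer finds it.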
Set A Defensible Number: A Simple Formula
Use a small planning formula to size effort and pre-register a range in your protocol.
The “Pilot-Yield” Method
- Run a pilot search in two core databases.
- De-dupe and count unique records.
- Screen 200 titles/abstracts to estimate eligibility rate.
- Project to the full search (records × eligibility rate × expected full-text retrieval rate).
Write that math in your protocol. Editors value a clear, pre-set plan and an audit trail that explains the final number.
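One way to write that math down is as a small projection function. The figures in the example call are hypothetical pilot numbers, assumed only for illustration:

```python
def pilot_yield(unique_records: int, eligibility_rate: float,
                retrieval_rate: float) -> int:
    """Project full-text workload from pilot screening rates.

    eligibility_rate: eligible hits / pilot titles screened (e.g., 6/200).
    retrieval_rate: share of eligible records you expect to obtain in full.
    """
    return round(unique_records * eligibility_rate * retrieval_rate)

# Hypothetical pilot: 5,000 unique records, 6 of 200 pilot titles eligible,
# and 90% of eligible reports obtainable in full text.
full_texts = pilot_yield(5000, 6 / 200, 0.90)
print(full_texts)  # 135 full texts to plan for
```

Pre-registering the function inputs alongside the result gives editors the audit trail the section above describes.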
Database Mix That Fits Your Topic
Pick sources that match the question, then add coverage for trials and regulatory files where relevant. The mix below keeps you honest across clinical and policy angles.
| Topic Shape | Core Pairing | Add-Ons When Needed |
|---|---|---|
| Drugs, Procedures, Outcomes | MEDLINE/PubMed + Embase | CENTRAL, trial registers, regulator portals, industry reports |
| Nursing, Allied Health | MEDLINE + CINAHL | PsycINFO, Scopus, grey literature |
| Qualitative Or Mixed Methods | MEDLINE + subject database | Theses, repositories, hand-searching key journals |
Grey Literature And Non-Journal Sources
Trial registers, theses, and regulator files can surface unpublished data or harms that don’t show in journals. Include them when the question touches safety, real-world use, or emerging tech. Track where each item came from in your spreadsheet so you can report pathways cleanly.
How To Keep Bias Low While You Grow The Set
Use Clear Strings And Peer Review Of Searches
Build strings with both subject headings and text words, then have a librarian or other search expert review them. Small tweaks to adjacency operators, truncation, and synonyms can change yield a lot.
Record Every Step
Keep a search log with dates, platforms, filters, and limits. Save full strategies and export counts at each stage. That log feeds your flow chart and lets others repeat your work.
Screen In Pairs When Stakes Are High
Dual screening for eligibility reduces random misses. Do the same for risk-of-bias judgements when the synthesis guides care, policy, or funding.
What Editors And Reviewers Expect To See
They look for a protocol or plan, a search that fits the question, and complete reporting. Many journals expect a flow diagram and an appendix with at least one full database strategy. They also look for a clear reason when you exclude a stream of evidence, such as out-of-scope designs or non-clinical populations.
Two Real-World Scenarios
Common Condition, Many Trials
You’re studying a standard drug class. Hits from MEDLINE and Embase are huge. Add CENTRAL and trial registers to cover the randomised space and unpublished trials. Expect a long screening phase and a large included set. Your method write-up and flow chart matter far more than chasing a round number.
Rare Condition, Sparse Evidence
You’re tackling a rare neonatal disorder. Two databases produce a few dozen records; trial registers add a handful more. The final set may be small. That’s fine if your question and reporting fit the field and your search reached every likely source.
Where To Place Your Final Count In The Paper
State total records identified, screened, and included in the abstract and the flow diagram caption. In the results, give the included count by design or outcome group so readers can see spread and weight. In appendices, show at least one full database strategy and the list of sources with dates.
Linkable Standards You Can Cite
Two links cover what reviewers usually check: a reporting checklist and a methods manual. Link both in your manuscript and keep copies of the checklists with your notes. See the PRISMA 2020 update for reporting and the Cochrane Handbook chapter on searching for day-to-day steps. These are the anchors many editors use.
Bottom Line For Planning Time And Sources
Skip the hunt for a magic tally. Size your range with a pilot search, lock methods in a protocol, use a database mix that fits the topic, and report every step. If your flow chart is complete and your included set answers the question with fair certainty, you’ve reached “enough” with a record you can defend.
