Ready to learn how to do a systematic review search in medicine the right way? This guide lays out a clear path you can follow today, from a tight question to a transparent, reproducible search record that stands up to scrutiny.
Start with a sharp question
Good searches begin with a clear scope. Translate your clinical or public health prompt into a structured question such as PICO or a variant that fits your study type. Pin down the population, the exposure or intervention, the comparator if relevant, the outcomes, and any study design limits. Define what sits inside and outside the review before you write a single search line.
Turn the question into concepts
List the core concepts that drive retrieval. For each concept, jot down synonyms, spelling variants, acronyms, and brand or generic names. Add subject headings used by databases, such as MeSH terms in PubMed. Flag outcome or setting filters only when they reflect necessity, not convenience.
Plan the workflow and recordkeeping
Search projects run more smoothly when roles and timelines are clear. Assign who drafts strategies, who peer reviews them, and who exports, deduplicates, and logs results. Create a living document that captures every setting: databases, platforms, date ranges, limits, and the exact strategies used. Store exports and logs in versioned folders.
| Step | Purpose | Useful resource |
|---|---|---|
| Define scope & criteria | Fix inclusion, exclusion, and outcomes | Cochrane Handbook |
| Register protocol | Prevent duplication; improve transparency | PROSPERO registry |
| Build strategies | Translate concepts into syntax across databases | PRISMA-S checklist |
| Peer review search | Catch errors and missing terms | PRESS checklist |
| Run searches | Retrieve records without biasing limits | PubMed, Embase, CENTRAL, others |
| Deduplicate | Remove repeats before screening | Reference manager or SR tool |
| Document & archive | Ensure full reproducibility | Search log and exported files |
Doing a systematic review search in medicine: step-by-step
1) Register a protocol
Create a protocol that states your question, eligibility criteria, outcomes, and analysis plan. Register on a public platform such as the PROSPERO registry when the topic fits its scope. Registration reduces accidental duplication and signals intent to follow a predefined plan.
2) Pick the right databases and platforms
No single database covers the whole biomedical literature. Combine at least two large bibliographic sources and add trial and preprint servers when relevant to your topic and study design. A common minimum for clinical questions includes PubMed or MEDLINE, Embase, and CENTRAL for trials. Broaden with CINAHL for nursing, PsycINFO for mental health, Web of Science or Scopus for citation chasing, and regional indexes when the topic calls for it.
3) Map subject headings and keywords
For each concept, add controlled vocabulary terms where available, then stack free text synonyms to catch new or not-yet-indexed records. In PubMed, map terms to MeSH and examine the entry terms and tree to decide on explosion. Pair that with fielded text words in title and abstract.
4) Write precise, transparent strategies
Draft the strategy in one database first. Group synonyms with OR; join different concepts with AND. Use phrase searching, truncation with care to avoid noise, and proximity operators where supported. Keep filters to a minimum; use validated study design filters when they are fit for purpose. Translate the strategy line-by-line for each database, adapting field codes and operators without changing intent.
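To make that logic concrete, here is a minimal Python sketch that assembles a PubMed-style strategy from concept groups. The terms and field tags are illustrative placeholders, not a vetted strategy; swap in your own concept lists and verify the syntax against the target database.

```python
# Minimal sketch: assemble a PubMed-style Boolean strategy from concept groups.
# Terms and field tags are illustrative placeholders, not a vetted strategy.
concepts = {
    "condition": [
        '"myocardial infarction"[MeSH Terms]',
        '"myocardial infarction"[tiab]',
        '"heart attack"[tiab]',
    ],
    "intervention": [
        'aspirin[MeSH Terms]',
        'aspirin[tiab]',
        '"acetylsalicylic acid"[tiab]',
    ],
}

# Synonyms within a concept are joined with OR; concepts are joined with AND.
groups = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
print(" AND ".join(groups))
```

Keeping concepts in one structure like this also makes the line-by-line translation to other databases easier to audit.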
5) Peer review the strategy
Ask an information specialist to review the draft. A PRESS-style review checks the translation of the question, keyword and subject heading choices, logic and operators, spelling, and line numbers. This step catches omissions and saves time downstream.
6) Run the searches and export without loss
Search each database on the same day or in a short window. Record the date, platform, database name, and coverage years displayed. Export all records with full bibliographic data, abstracts, and unique identifiers. Use RIS, XML, or another rich format supported by your reference manager or screening tool.
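One way to capture the date, count, and strategy in a single step is to query the database's public interface directly. The sketch below uses NCBI's documented E-utilities esearch endpoint for PubMed; the strategy string is a placeholder, and heavy use calls for an NCBI API key and rate limiting.

```python
import datetime
import json
import urllib.parse
import urllib.request

# Sketch: fetch a PubMed hit count via the NCBI E-utilities esearch endpoint
# and log it with the run date. The strategy string is a placeholder.
strategy = '("heart failure"[MeSH Terms] OR "heart failure"[tiab]) AND aspirin[tiab]'
params = urllib.parse.urlencode({"db": "pubmed", "term": strategy, "retmode": "json"})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as response:
    count = json.load(response)["esearchresult"]["count"]

# One tab-separated log line: date, source, count, exact strategy as run.
print(f"{datetime.date.today().isoformat()}\tPubMed (NLM)\t{count}\t{strategy}")
```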
7) Deduplicate and manage records
Import all exports into one library. Deduplicate using a tested sequence that compares identifiers, titles, authors, and years. Keep a copy of every raw export. Log counts per source before and after deduplication so screening totals make sense later.
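Reference managers and SR tools do this for you, but a scripted pass makes the sequence explicit and auditable. A minimal sketch, assuming records were exported with DOIs, titles, and years; it matches on DOI first, then falls back to a normalized title-plus-year key. The records shown are invented examples.

```python
import re

# Sketch: deduplicate by DOI first, then by a normalized title + year key.
# Records are invented examples; real ones come from your exports.
records = [
    {"doi": "10.1000/abc", "title": "Aspirin After MI",  "year": "2021"},
    {"doi": "10.1000/abc", "title": "Aspirin after MI.", "year": "2021"},
    {"doi": "",            "title": "Aspirin after MI",  "year": "2021"},
]

def title_key(record):
    # Lowercase and strip punctuation so near-identical titles match.
    return (re.sub(r"[^a-z0-9]", "", record["title"].lower()), record["year"])

seen, unique = set(), []
for record in records:
    keys = [("title",) + title_key(record)]
    if record["doi"]:
        keys.append(("doi", record["doi"].lower()))
    if any(key in seen for key in keys):
        continue  # duplicate of a record we already kept
    seen.update(keys)
    unique.append(record)

print(f"{len(records)} retrieved, {len(unique)} after deduplication")
```

Logging the before and after counts at this step feeds directly into your PRISMA flow totals.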
8) Document decisions as you go
Keep a search log that shows every detail needed for reproduction: database, platform, coverage, search dates, complete strategies, and exact result counts. Save screenshots of main settings when helpful. Store the log with the protocol and final strategies.
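A log can live in a spreadsheet, but appending rows programmatically keeps the fields consistent across the team. A minimal sketch, assuming one shared CSV per project; the column names mirror the details listed above, and the Ovid-style strategy line is just an example.

```python
import csv
import datetime

# Sketch: append one row per executed search to a shared CSV log.
# Field names mirror the reproduction details above; values are examples.
LOG_FIELDS = ["date", "database", "platform", "coverage", "strategy", "hits"]

def log_search(path, **row):
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow(row)

log_search(
    "search_log.csv",
    date=datetime.date.today().isoformat(),
    database="MEDLINE",
    platform="Ovid",
    coverage="1946 to present",
    strategy="exp Myocardial Infarction/ AND aspirin.ti,ab.",
    hits=1234,
)
```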
Build a high-recall strategy without drowning in noise
Balance breadth and precision. Expand recall with synonyms, variant spellings, and brand names; tighten precision with phrase searching, adjacency, and field limits to title and abstract. Examine highly ranked false positives to refine terms. Pilot the strategy on a set of sentinel papers and confirm that it retrieves every one of them.
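That pilot step is easy to script as a set difference over identifiers. The sketch below assumes you have the PMIDs of your sentinel papers and of the retrieved records; the values shown are placeholders.

```python
# Sketch: confirm every sentinel (known-relevant) paper is captured.
# PMIDs are placeholders; use identifiers from your own sentinel set.
sentinel_pmids = {"10000001", "10000002", "10000003"}
retrieved_pmids = {"10000001", "10000003", "20000004", "20000005"}

missing = sentinel_pmids - retrieved_pmids
if missing:
    print("Strategy misses sentinel records:", sorted(missing))
else:
    print("All sentinel records retrieved.")
```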
Use controlled vocabulary well
Controlled vocabulary anchors the strategy. In PubMed, MeSH terms group articles under stable headings. In Embase, Emtree terms serve a similar role. Check scope notes and the hierarchy to decide which terms to explode. Combine headings with free text because indexing lags and not every record carries a heading.
Tune free text thoughtfully
Free text captures new jargon and brand names. Use truncation only where it does not invite large blocks of noise. Prefer tested wildcards to cover spelling shifts. Watch for phrase-variant traps, such as “meta analysis” versus “meta-analysis”.
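A small helper can enumerate spacing and hyphen variants so none slip through. This is an illustrative sketch rather than a standard tool; review its output before pasting anything into a strategy.

```python
# Sketch: expand a multi-word phrase into space, hyphen, and fused variants.
# Illustrative helper only; review the output before using it in a strategy.
def phrase_variants(words):
    return {
        " ".join(words),  # "meta analysis"
        "-".join(words),  # "meta-analysis"
        "".join(words),   # "metaanalysis"
    }

for variant in sorted(phrase_variants(["meta", "analysis"])):
    print(f'"{variant}"')
```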
Report your search with PRISMA-S
Readers and editors expect a clear account of what you did. The PRISMA-S checklist lays out items to report, including databases and platforms searched, full strategies for each, limits used, deduplication method, and the date last searched. A complete record supports trust and makes updates simple.
What to include in your record
- Full strategy for every database and platform, copied verbatim
- Search dates for each source and the date you last reran the search
- Any language, date, or design limits used with a reason
- Counts retrieved per source before and after deduplication
- Links or attachments to exported files and the log
Systematic review search in medical research: common pitfalls
Relying on one database
Single-source searching misses large parts of the literature. Combine at least two core databases and add others based on topic and design.
Dropping full strategies from the report
Missing strategies block reproducibility. Always include the complete strategy for every database and platform, not just the one you used to draft.
Over-filtering early
Filters that slice too hard drop relevant records. Use tested study filters only when they match your need. Avoid language or date limits unless your protocol justifies them.
Weak deduplication
Messy duplicates inflate screening work and distort counts. Use a planned sequence and record exact numbers removed at each step.
Search beyond bibliographic databases
Searching beyond journal databases reduces publication bias. Cover trial registries, regulatory sources, preprint servers, dissertations, and major conference proceedings when they align with your topic. Add backward and forward citation chasing for cornerstone papers to catch studies that indexing missed.
Trial registries and protocols
Trial registries reveal completed and ongoing work. Scan ClinicalTrials.gov, the WHO ICTRP portal, and specialty registries. Protocols and trial records signal outcomes that never reached journals, which helps with bias assessment.
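Registry searches can also be scripted. The sketch below queries the public ClinicalTrials.gov v2 API; the endpoint and field names reflect that API as documented, but check the current documentation before relying on them, and treat the query term as a placeholder.

```python
import json
import urllib.parse
import urllib.request

# Sketch: query the public ClinicalTrials.gov v2 API for registered trials.
# Endpoint and field names reflect the v2 API; verify against current docs.
params = urllib.parse.urlencode(
    {"query.term": "aspirin AND myocardial infarction", "pageSize": 5}
)
url = "https://clinicaltrials.gov/api/v2/studies?" + params

with urllib.request.urlopen(url) as response:
    studies = json.load(response).get("studies", [])

for study in studies:
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], ident.get("briefTitle", ""))
```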
Handsearching and citation chasing
Screen the tables of contents of the field's top journals over the date range of interest. Use citation indexes to follow studies that cite your sentinel set, and to find earlier work those sentinels referenced.
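Forward chasing is scriptable against open citation indexes. A minimal sketch using the OpenAlex API, with a placeholder work ID standing in for one of your sentinel papers; Web of Science and Scopus offer the same lookup through their own interfaces.

```python
import json
import urllib.request

# Sketch: forward citation chasing via the OpenAlex API.
# W0000000000 is a placeholder OpenAlex work ID for a sentinel paper.
url = "https://api.openalex.org/works?filter=cites:W0000000000&per-page=10"

with urllib.request.urlopen(url) as response:
    works = json.load(response).get("results", [])

for work in works:
    print(work["id"], work.get("display_name", ""))
```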
Document with care: an example layout
Set up a simple template that any teammate can follow. Keep one row per source and include the fields below. Store the file with read-only backups after each major step.
| Field | What to record | Tip |
|---|---|---|
| Database & platform | Exact names and vendors | Record version or coverage years |
| Search date | Day and time run | Use one time zone across the team |
| Strategy | All lines with field codes and operators | Paste as run; avoid retyping |
| Limits | Language, years, study filters | State the reason |
| Results | Raw hits and post-dedup count | Keep snapshots of result pages |
| Export | Format and file name | Keep a checksum or version number |
| Notes | Oddities, errors, reruns | Attach screenshots if needed |
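For the checksum column, a few lines of standard-library Python suffice; the file name below is an example.

```python
import hashlib

# Sketch: record a SHA-256 checksum for an exported file so later reruns
# can prove the archived export is unchanged. File name is an example.
def file_checksum(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(file_checksum("medline_export_2024-05-01.ris"))
```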
Screening setup that saves time
Two reviewers reduce missed studies and selection bias. Calibrate on a small sample first to tune inclusion rules and to improve agreement. Use a screening tool with dual screening, conflict resolution, and audit trails. Track reasons for exclusion at full text to feed your PRISMA flow.
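During calibration, many teams quantify agreement with Cohen's kappa before moving to full screening. Here is a self-contained sketch for binary include/exclude decisions; the decision vectors are invented for illustration.

```python
# Sketch: Cohen's kappa for two reviewers' binary include/exclude decisions.
# Decisions below are invented; 1 = include, 0 = exclude.
reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 1, 1]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Expected chance agreement, from each reviewer's marginal include rate.
p_a, p_b = sum(reviewer_a) / n, sum(reviewer_b) / n
expected = p_a * p_b + (1 - p_a) * (1 - p_b)

kappa = (observed - expected) / (1 - expected)
print(f"Observed {observed:.2f}, expected {expected:.2f}, kappa {kappa:.2f}")
```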
Data management for sanity and speed
Decide on storage locations for search exports, logs, and screening decisions. Use consistent file names and folder structures. Back up to secure cloud storage with access controls.
Keep searches current
Plan at least one update before submission, and another before final acceptance when the field moves fast. Rerun the full strategies on the same platforms, note the date, add the new records, and document the delta. For living reviews, set a schedule and automate alerts where platforms support them.
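The delta reduces to a set difference over record identifiers once each run's IDs are saved, as in this sketch; it assumes plain-text files with one identifier per line, and the file names are examples.

```python
# Sketch: find which records an update search added, assuming each file
# holds one identifier (e.g., PMID or DOI) per line. File names are examples.
def load_ids(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

original = load_ids("ids_2023-06-01.txt")
update = load_ids("ids_2024-06-01.txt")

new_records = update - original
print(f"{len(new_records)} new records to screen since the original search")
```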
Write up methods that readers can reuse
In the methods section, specify who built and who peer reviewed the strategies, every database and platform searched, the exact dates, limits, the deduplication method, and any grey literature sources. Include full strategies in an appendix or a supplement, and link to the search log in a repository when possible.
Quick reference: term crafting tips
Wording and spelling
Add US and UK spellings, singular and plural forms, and hyphen variants. For drugs and devices, include generic, brand, and code names.
Proximity and phrases
Use adjacency operators in databases that support them to tie concepts that belong together, such as disease name near a primary symptom or test. Use quotation marks for phrases that should not split.
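Because proximity syntax differs by platform, it helps to keep one concept's translations side by side. The sketch below records a single adjacency requirement in three common syntaxes; treat the exact operators as examples to verify against each platform's current documentation.

```python
# Sketch: the same adjacency requirement ("heart" within 3 words of "failure")
# written in three platform syntaxes. Verify operators against current docs.
adjacency_translations = {
    "PubMed":       '"heart failure"[tiab:~3]',
    "Ovid MEDLINE": "(heart adj3 failure).ti,ab.",
    "Embase.com":   "('heart' NEAR/3 'failure'):ti,ab",
}

for platform, line in adjacency_translations.items():
    print(f"{platform}: {line}")
```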
Avoid common traps
- Truncation that balloons noise
- Overreliance on one heading when multiple map to the concept
- Careless full text searching on platforms that mix content types
- Dropping trial records because they lack abstracts
Where those resources help most
The Cochrane Handbook gives deep methods for planning and searching. The PRISMA-S checklist tells you how to report searches with full transparency. The PROSPERO registry lets others see your plan and helps avoid duplication across teams.
Final checklist before you hit submit
- Protocol registered and linked in your manuscript
- All databases and platforms listed with dates searched
- Full strategies copied exactly as run, for every source
- PRESS peer review completed and archived
- Counts reconciled across sources and after deduplication
- Grey literature and trials searched where relevant
- Update search rerun close to submission
- Search log, exports, and flow diagram saved
With a clear question, a protocol, a peer-reviewed strategy, and a complete record, your systematic review search will be reproducible and ready for scrutiny.
Tailor searches to study types
Intervention reviews
When you target randomized trials, combine disease and intervention terms with a tested trials filter suited to the database. Add terms for dosage forms and delivery routes when they matter to safety or effect. Keep outcome terms out of the main strategy unless they are part of the concept itself.
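As a concrete example, a widely used option for PubMed is the Cochrane Highly Sensitive Search Strategy for randomized trials. The sketch below combines placeholder concept groups with a filter in that style; verify the exact filter lines against the current Cochrane Handbook before use.

```python
# Sketch: combining concept groups with a published trials filter for PubMed.
# Concepts are placeholders; the filter echoes the Cochrane Highly Sensitive
# Search Strategy (sensitivity-maximizing version) -- verify before use.
condition = '("myocardial infarction"[MeSH Terms] OR "heart attack"[tiab])'
intervention = '(aspirin[MeSH Terms] OR aspirin[tiab])'
trials_filter = (
    "(randomized controlled trial[pt] OR controlled clinical trial[pt] "
    "OR randomized[tiab] OR placebo[tiab] OR drug therapy[sh] "
    "OR randomly[tiab] OR trial[tiab] OR groups[tiab]) "
    "NOT (animals[mh] NOT humans[mh])"
)

print(f"{condition} AND {intervention} AND ({trials_filter})")
```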
Observational research
For cohort or case-control questions, add design terms only when testing shows the extra yield does not drown the signal. Many records describe design in the abstract, not in headings, so fielded text words carry weight. Add exposure synonyms that capture real-world phrasing from clinical notes and registries.
Diagnostic accuracy
Pair the index test with the target condition and terms for sensitivity or specificity only when the platform supports them well. Add names for reference standards and sample types. Examine indexing for common lab panels and device families to broaden reach without losing clarity.
Qualitative evidence
Use terms for interviews, focus groups, and thematic analysis. Database support varies, so expect fewer standardized headings. Track conferences and theses closely, as many qualitative studies first appear there.
Share full strategies and search logs in a repository and invite reuse across related updates.