Build a protocol, search multiple sources with tested terms, save full strategies, and report using PRISMA with clear numbers and dates.
Working on a medical review and need a rock-solid search plan? This guide walks you through each step, from question to full report. You’ll learn how to frame the topic, pick the right sources, write search strings that actually find the studies you need, and record every action so anyone can repeat it. The goal is simple: a search that is transparent, thorough, and ready for peer review.
Keep the standard set close at hand: the PRISMA 2020 checklist, the search chapter in the Cochrane Handbook, and the PRISMA-S extension for reporting searches. You’ll draw on these while you plan, run, and write up the methods.
Steps To Run A Medical Systematic Review Literature Search
Define An Answerable Question
Turn the topic into a structured question using PICO, PICOS, or a variant that fits your study type. Write out key concepts, main outcomes, and any limits that matter to care decisions. Note any study designs that match your review scope. Save this as part of the protocol.
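For instance, a hypothetical question about remote monitoring in heart failure could be framed as below; every detail is illustrative, not a template for your topic.

```
P (population):    adults with chronic heart failure
I (intervention):  home telemonitoring of weight, symptoms, and vital signs
C (comparator):    usual outpatient care
O (outcomes):      all-cause mortality, heart failure hospitalization, quality of life
S (study design):  randomized controlled trials
```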
Write A Protocol And Lock The Plan
Record the review question, all planned sources, draft strategies, screening rules, data fields, and the plan for risk-of-bias assessment. Include who will run the search and who will screen. Version the protocol and store it where your team can view changes. When the search starts, avoid unplanned changes unless you log the reason.
Pick Sources That Fit Medicine
Use at least one core biomedical database, plus others that cover the setting or design. Pair database searching with trial registers and grey sources. The matrix below helps you map choices to coverage.
| Source | Coverage And Strength | Notes |
|---|---|---|
| MEDLINE/PubMed | Clinical and biomedical studies with MeSH | Free access; MeSH adds strong subject mapping |
| Embase | Drug and device scope with Emtree | Good for adverse events and European journals |
| Cochrane CENTRAL | Trials register from multiple inputs | Useful for randomized studies |
| CINAHL | Nursing and allied health | Care delivery and practice topics |
| PsycINFO | Mental health, behavior, and outcomes | Broad coverage for psych topics |
| ClinicalTrials.gov / WHO ICTRP | Ongoing and completed trials | Find unpublished or recent trials |
| Grey sources | Guidelines, theses, conference material | Reduce publication bias |
Draft Concepts And Terms
List controlled vocabulary for each concept in each database (MeSH in MEDLINE, Emtree in Embase). Add free-text synonyms, acronyms, variant spellings, and brand or generic drug names. Capture outcome and setting terms that matter to selection. Keep a column for notes on why each term belongs.
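One worksheet entry might look like the sketch below; the terms are illustrative and need checking against the live thesauri for your own topic.

```
Concept: heart failure
  MeSH (MEDLINE):   Heart Failure (exploded)
  Emtree (Embase):  heart failure (exploded)
  Free text:        heart failure, cardiac failure, congestive heart failure,
                    cardiac decompensation, HF, HFrEF, HFpEF
  Note:             the HF abbreviation is noisy; keep it only if testing shows
                    it brings in relevant records
```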
Craft PubMed, Ovid, And Embase Variants
Write a base string for one concept at a time. In PubMed, blend MeSH and [tiab] terms and use quotes for phrases. In Ovid, use .ti,ab. and adj operators to control proximity. In Embase on Elsevier, mix Emtree terms with title-abstract fields and NEAR/x for proximity. Run tiny probes to confirm how each operator behaves before you scale up.
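As a sketch, one concept block could look like this on each platform. The lines are illustrative, so verify them against the live interfaces before use, since field tags and operator behavior change over time.

```
PubMed:
  "Heart Failure"[Mesh] OR "heart failure"[tiab] OR "cardiac failure"[tiab]

Ovid MEDLINE:
  exp Heart Failure/ or (heart adj2 failure).ti,ab. or (cardiac adj2 failure).ti,ab.

Embase (Elsevier):
  'heart failure'/exp OR (heart NEAR/2 failure):ti,ab OR (cardiac NEAR/2 failure):ti,ab
```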
Build Search Strings That Balance Recall And Precision
Combine synonyms with OR. Join concepts with AND. Exclude only when needed using NOT and test the effect. Use phrase marks, truncation, wildcards, field tags, and proximity operators that match the platform. Start broad, then tighten with design filters or date limits that you can justify. Save each version and its test results.
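A PubMed search history that combines two concept blocks might look like the sketch below; the terms, line numbers, and date limit are placeholders shown only to illustrate the structure.

```
#1  "Heart Failure"[Mesh] OR "heart failure"[tiab] OR "cardiac failure"[tiab]
#2  "Telemedicine"[Mesh] OR telemonitoring[tiab] OR telehealth[tiab] OR "remote monitoring"[tiab]
#3  #1 AND #2
#4  #3 AND ("2015/01/01"[dp] : "3000/12/31"[dp])    <- date limit, only if the protocol justifies it
```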
Test Against Known Studies
Pick a short set of sentinel papers that must appear if the search works. Run the draft strategies and confirm that set is found in each database. Inspect the first few pages of results to spot gaps, new synonyms, or noise terms that push useful records down the list. Adjust, rerun, and retest.
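One quick way to run this check in PubMed is to build a line of sentinel PMIDs and subtract the draft search from it; anything left over was missed. The PMIDs are placeholders, and the line numbers assume the history from the earlier sketch.

```
#5  11111111[pmid] OR 22222222[pmid] OR 33333333[pmid]    <- sentinel papers (placeholder PMIDs)
#6  #5 NOT #3                                             <- any record here was missed by the draft
```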
Run The Full Searches
Execute the final strings in every chosen source on the same day when possible. Record the platform (e.g., Ovid, EBSCO, Web of Science), the database name and coverage dates, the exact query text, the fields used, limits, and the run date and time. Export results with full fields, including abstracts, indexing, and unique IDs.
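A single run entry in your search log might look like this; every value is a placeholder.

```
Database:    MEDLINE (Ovid), 1946 to May Week 2 2024
Platform:    Ovid
Searcher:    A. Example, information specialist
Date/time:   2024-05-15, 14:32 local
Limits:      none
Strategy:    medline_ovid_20240515_v3.txt (full 14-line history)
Results:     1,482 records exported as RIS with abstracts, indexing, and IDs
```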
Manage Records And Remove Duplicates
Import all records into a reference manager or a screening tool that keeps the audit trail. Use a deduplication method that compares multiple fields, not just titles. Keep both the raw set and the clean set, with counts for each step. Note any manual merges or title fixes.
Screen With Clear Rules
Pilot the eligibility rules on a sample. Calibrate two reviewers before full screening. Run title/abstract screening, then full text. Track reasons for exclusion in structured categories. Resolve conflicts with a third person or a set rule. Keep timestamps for each stage.
Update Before You Finalize
Run an update search near the end of data extraction. Use the same sources and strategies, with dates set to start the day after the first run. Screen new hits and add any new studies that fit. Record the date of the update run and any changes that came from it.
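In PubMed, one way to scope the update is to reuse your final line (shown here as #4 from the earlier sketch) and add an Entrez-date range starting the day after the original run. The dates are placeholders, and other platforms use their own date fields, so check each one.

```
#7  #4 AND ("2024/05/16"[edat] : "3000/12/31"[edat])    <- records added to PubMed after the first run
```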
Best Practices For Literature Searching In Medicine For Systematic Reviews
Partner With A Trained Searcher
A medical librarian or information specialist brings deep knowledge of platforms, indexing, and filters. Involve this person from protocol to final report. Ask for a peer review of the search using PRESS or a similar checklist when available.
Set Up A Reusable Log
Create a living log that captures dates, people, sources, and file names. Add columns for version notes, sentinel papers found, and hit counts by source. Keep the log in a shared folder so the whole team can view updates. This one file will save hours during write-up.
Common Pitfalls And Fixes
- Over-tight strings: Drop an unnecessary NOT, loosen proximity, or add a missing synonym.
- Too much noise: Add a field tag, a proximity limit, or a validated study design filter with a citation.
- Missed newer work: Add forward citation checks and trial register searches.
- Version drift: Freeze a copy of each final string and store it with the date in the name.
- Weak records: Export with abstracts and indexing fields to help screeners make quick calls.
Document Every Detail So Others Can Repeat It
Post full strategies, not screenshots or summaries. Include the full query text for each database, the platform, limits, fields, and the exact run date. Store search histories and exported files in a shared folder with version names that include dates. Cite who ran each search.
Use Study Design Filters With Care
Validated RCT or cohort filters can cut noise, yet poor filters drop key evidence. If you use a filter, cite its source and test recall with your sentinel set. Avoid language limits unless required by the question or resources; report any such rule.
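As an illustration only, a sensitivity-leaning RCT block in PubMed syntax might start from lines like these. Do not treat this as a validated filter; in practice, copy a published filter (for example the Cochrane highly sensitive strategy) verbatim from its source and cite it.

```
(randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab]
  OR randomised[tiab] OR placebo[tiab] OR randomly[tiab] OR "clinical trial"[tiab])
  NOT (animals[mh] NOT humans[mh])
```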
Add Citation Chaining And Handsearching
Screen reference lists of included studies and related reviews. Use forward citation tools to find newer work that cites your key papers. Scan key journals or conference sets when the topic is niche or the indexing is weak. Record dates and counts for these steps.
Plan For Grey Literature
Include trial registries, theses, dissertations, guideline portals, and agency reports. Grey sources can surface negative or null results and help you judge reporting bias. Record where you searched, how, and what terms or filters you used.
Control Version Drift
Lock strategies before the main run. If you change a term or add a concept later, log the change and the reason, then rerun across all active sources. Keep the original strings as an archive. Mention changes in the report.
Write With PRISMA And PRISMA-S In Mind
Report counts in a flow diagram and provide the full search appendix. Note dates, sources, and who ran the searches, as called for by PRISMA and PRISMA-S. Link to any repository that hosts your strategies or data files.
Handle Complex Clinical Topics
Break multi-part topics into separate concept blocks. Map each block to controlled vocabulary and text words, then join the blocks with AND. When treatments or devices have many names, add brand, model, and class terms. For outcomes, include both clinical end points and validated scales.
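For example, a drug block and an outcome block for a hypothetical metformin question might look like this; the brand names, scales, and terms are illustrative and far from complete.

```
#1  "Metformin"[Mesh] OR metformin[tiab] OR Glucophage[tiab] OR dimethylbiguanide[tiab]
#2  "Quality of Life"[Mesh] OR "quality of life"[tiab] OR "patient reported outcomes"[tiab]
    OR "SF-36"[tiab] OR "EQ-5D"[tiab]
#3  #1 AND #2
```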
Calibrate Recall And Noise
Track two simple signals while you iterate: how many sentinel papers the search finds, and the share of clearly off-topic hits in the first result pages. If recall drops, add synonyms or loosen proximity. If noise grows, add a field tag, a proximity limit, or a concept that narrows the set.
Use Platform Features Wisely
Each interface handles operators in its own way. Ovid has adj operators and multi-purpose fields; PubMed lets you use phrase marks and field tags like [tiab]; Embase on Elsevier includes NEAR/x. Read the help page for each platform and test behavior with tiny sandboxes before you scale up.
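Proximity is a good example of how interfaces differ. The three lines below aim at roughly the same thing; the PubMed form relies on the newer proximity syntax, so confirm it is supported in your interface before depending on it.

```
Ovid:     (heart adj3 failure).ti,ab.       <- within 3 words, either order
Embase:   (heart NEAR/3 failure):ti,ab      <- within 3 words, either order
PubMed:   "heart failure"[tiab:~3]          <- proximity search on title/abstract
```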
Be Ready For Non-standard Study Types
When the review covers diagnostic accuracy, prognosis, screening, or prediction, adjust terms and filters to match. Add index terms and text words for sensitivity, specificity, ROC, validation, or cohort terms as needed. For complex interventions, add delivery setting terms and personnel roles.
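For a diagnostic accuracy question, an outcome-oriented block in PubMed syntax might start from terms like these; it is illustrative only, not a validated filter.

```
"Sensitivity and Specificity"[Mesh] OR sensitivity[tiab] OR specificity[tiab]
  OR "predictive value"[tiab] OR "likelihood ratio"[tiab] OR "ROC curve"[tiab]
  OR "diagnostic accuracy"[tiab]
```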
Search Documentation Checklist
Use this short checklist during the run and while writing the appendix. It aligns with common reporting items and keeps your audit trail clean.
| Element | What To Capture | Tip |
|---|---|---|
| Database & Platform | Full name, provider, and coverage dates | Example: MEDLINE via Ovid 1946-present |
| Search Date & Time | Local date and 24-hour time | Stamp in the filename |
| Full Strategy Text | Exact query with field tags and operators | Export the history to a text file |
| Limits | Date, language, design filters | Justify each limit in the methods |
| Results | Raw hits by source and after deduplication | Record counts at each step |
| Files | RIS/CSV exports and PDFs | Store in a shared folder with read access |
| People | Searcher, peer reviewer, screeners | Add ORCID where you can |
Time-Saving Habits That Pay Off
- Name files smartly: Use yyyymmdd, source, and version in every file name (see the sketch after this list).
- Save queries inside platforms: Create alerts only after you lock the main run.
- Standardize notes: Keep a brief code list for reasons to exclude to speed screening.
- Mark sentinel papers: Tag them in your manager so you can spot gaps fast.
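For example, a dated, versioned pattern keeps exports sorted and self-describing; the names below are placeholders.

```
20240515_medline_ovid_v3_strategy.txt
20240515_medline_ovid_v3_export.ris
20240515_embase_elsevier_v3_strategy.txt
20240515_embase_elsevier_v3_export.ris
```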
When And How To Use Automation
Some tools can learn from early screening choices and re-rank later records. Use them only when two people still review decisions and when the tool keeps a clear trail. Report the tool name, version, and settings. If the tool hides records or drops terms, switch it off.
What To Include In The Methods Section
State every database and platform, the full date of each run, the years covered, all limits, who ran the search, and whether you used citation chaining, trial registers, or grey sources. Add a short note on the pilot and any peer review of the search. Point to the appendix for the strings.
Build A Clear Flow Diagram
Map records from each source, show how many were removed as duplicates, and list counts for title/abstract screening, full-text assessment, and final includes. Add labeled boxes for records found through citation checks or registers. State dates near the top. Keep the same labels in your log, your appendix, and the diagram so readers can match numbers with ease.
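With made-up numbers, the counts you carry from log to appendix to diagram might look like this; every figure below is hypothetical.

```
Identified:    MEDLINE 1,482 + Embase 1,910 + CENTRAL 356 + registers 89 = 3,837
Duplicates removed:                                                        1,037
Screened (title/abstract):                                                 2,800
Excluded at title/abstract:                                                2,610
Full texts assessed:                                                         190
Full texts excluded (reasons logged):                                        162
Studies included:                                                             28
```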
Peer Review Of The Search
Ask a second searcher to review one full strategy line by line. A PRESS-style check can catch missing terms, weak fields, or limits that trim recall. Share the draft strings, the sentinel set, and a note on any past lines that failed. Record the review date, the reviewer’s name, and what you changed after the check. Repeat when you add a new database, and again before the update search.
From Question To Reproducible Search: Your Action Plan
Set the question, plan the sources, write and test strong strings, and record every step. Run the searches, manage records with care, and report with clear counts and dates. Follow PRISMA and PRISMA-S, point to your files, and invite peer review. With this approach, your medical review will stand on a clear, documented search that others can trust and repeat.