Build a focused question, search core databases with MeSH and Boolean logic, log every step with PRISMA, and export results to a reference manager.
What A Medical Literature Review Search Needs
A good search does three things. It starts with a crisp question, uses planned sources, and leaves a trail others can follow. That trail proves care and saves time when you update the work later.
Frame the question with a simple template. PICO for interventions, PEO for exposures, and SPIDER for qualitative work. Write the parts in words first. Then list synonyms and near terms for each part. Keep a running note of terms you will map to subject headings.
Pick sources based on the question, not habit. For clinical trials, PubMed or MEDLINE, Embase, and the Cochrane Library are common picks. Nursing topics often need CINAHL. Broad scoping often uses Scopus or Web of Science. Add Google Scholar for cited by chains and grey items. Plan preprint and trial registry checks when speed matters. Set a cap on time spent per source so the plan stays on track.
| Source | Best Use | Access & Notes |
|---|---|---|
| PubMed/MEDLINE | Biomed core, MeSH terms, clinical filters | Free; see the PubMed Help for field tags and mapping |
| Embase | Drug and device depth, Emtree terms | Subscription via institutions |
| Cochrane Library | Trials and methods collections | Many regions have access |
| CINAHL | Nursing and allied health | Subscription via libraries |
| Scopus / Web of Science | Broad reach, citation tracking | Subscription databases |
| Google Scholar | Cited by chains, grey finds | Free; results can be noisy |
| ClinicalTrials.gov | Registered trials | Free; good for ongoing work |
| medRxiv / bioRxiv | Preprints | Screen with care |
Doing A Medical Literature Review Search Step-By-Step
1) Write A Clear Question
State the population, the topic, and the outcome or phenomenon. Keep one main idea per line. If the scope drifts, split it into separate questions. That will later reduce false hits and speed screening. Share the draft plan with a teammate for a sanity check right away.
2) Draft Inclusion And Exclusion Rules
Set the study types, language choices, time spans, and settings you accept. Keep rules tight yet fair to the question. Write them before you run searches so that decisions stay consistent and repeatable.
3) Map Terms To Subject Headings
In PubMed, search each concept in the MeSH browser, open the tree, and tick broader and narrower terms when needed. Record the heading and scope note. Pair it with free text for terms not yet indexed or for new labels people use.
4) Build Your Boolean Blocks
Group synonyms with OR inside parentheses. Join your main ideas with AND. Use “quoted phrases” for multi word terms, and the asterisk for truncation where the database allows. Keep a master block for each concept so you can reuse it across sources.
Sample Boolean Pattern
Example core pattern: (asthma OR wheeze OR “reactive airway”) AND (inhaled corticosteroid* OR budesonide OR fluticasone) AND (adherence OR compliance OR persistence).
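If you keep your synonyms in plain lists, assembling a pattern like the one above becomes mechanical. A minimal sketch in Python (the function names are illustrative, not a standard library):

```python
def or_block(terms):
    """Join synonyms for one concept with OR, quoting multi-word terms."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def and_query(blocks):
    """Join concept blocks with AND."""
    return " AND ".join(or_block(b) for b in blocks)

query = and_query([
    ["asthma", "wheeze", "reactive airway"],
    ["inhaled corticosteroid*", "budesonide", "fluticasone"],
    ["adherence", "compliance", "persistence"],
])
```

Keeping the lists separate from the joining logic makes it easy to add a synonym once and regenerate every database's string.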
5) Tune For Each Database
Databases differ. PubMed uses MeSH and field tags. Embase uses Emtree and has proximity operators. CINAHL uses its own headings. Adjust syntax and features, but keep the logic. Save each native string in your log with the run date and any limits used.
6) Run PubMed The Smart Way
Use the Search Builder to watch how Automatic Term Mapping interprets your words. Add MeSH terms with [mh], titles and abstracts with [tiab], and filter out animal only records with the built in humans filter when that fits your plan. Save the search and set an alert if you will update later.
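For repeat runs or updates, PubMed can also be queried through NCBI's E-utilities. The sketch below only builds the request URL; the esearch endpoint is real, but check NCBI's usage policy and consider an API key before heavier use:

```python
from urllib.parse import urlencode

# Real NCBI E-utilities endpoint for PubMed searches.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, retmax=100):
    """Build an esearch URL that returns matching PMIDs as JSON."""
    return EUTILS + "?" + urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": retmax,
        "retmode": "json",
    })

url = esearch_url('asthma[mh] AND "inhaled corticosteroids"[tiab]')
```

Fetching the URL returns an ID list you can log with the run date, which pairs well with saved searches and alerts.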
7) Expand Beyond One Engine
Repeat the logic in Embase, CINAHL, and the Cochrane Library. In Google Scholar, run the core phrase and scan the first few pages for strong fits. Use the cited by link on core seed papers to surface newer work. Track every path you take in the log.
8) Capture Grey Literature
Search trial registries, preprints, and core society sites. Conference abstracts and theses can add leads, though data may be thin. Record where you looked and the strings used so the path is clear to a reader.
9) Export And De-Duplicate
Export citations in RIS or XML. Pull them into a manager such as Zotero or EndNote. Use built in de-duplication, then scan by eye for near duplicates that slipped through because of spelling or accents.
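The eye check matters because most tools key on exact strings. A simple sketch of the idea, assuming records carry a DOI and title field (names are illustrative): prefer the DOI, and fall back to a normalized title so case and punctuation variants collapse together.

```python
import re

def dedup_key(record):
    """Prefer DOI; fall back to a normalized title (lowercase, alphanumerics only)."""
    doi = (record.get("doi") or "").lower().strip()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]", "", (record.get("title") or "").lower())
    return ("title", title)

def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Inhaled Steroids in Asthma", "doi": "10.1000/x1"},
    {"title": "Inhaled steroids in asthma.", "doi": "10.1000/X1"},  # case variant of the first
    {"title": "Adherence to ICS", "doi": ""},
]
```

Accent and spelling variants still slip past normalization like this, which is why the manual scan stays in the workflow.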
10) Screen In Pairs When Possible
Title and abstract first, then full text. Use your rules as written. If you work solo, do a calibration pass on twenty to thirty records to test the rules, then proceed with the rest. Keep reasons for exclusion short and consistent.
11) Log Decisions With PRISMA
Count records from each source, note after de-duplication totals, and track full text retrieval. Keep a list of common exclusion reasons. Those numbers flow straight into the PRISMA flow diagram later.
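The arithmetic behind the flow diagram is simple enough to keep in one place. A minimal sketch, with made-up counts for illustration:

```python
from collections import Counter

def prisma_counts(source_hits, deduped_total, excluded_reasons, included_total):
    """Tally the numbers a PRISMA flow diagram needs from the search log."""
    identified = sum(source_hits.values())
    return {
        "identified": identified,
        "duplicates_removed": identified - deduped_total,
        "screened": deduped_total,
        "excluded": Counter(excluded_reasons),
        "included": included_total,
    }

flow = prisma_counts(
    {"PubMed": 412, "Embase": 388, "CINAHL": 97},  # hits per source
    deduped_total=702,
    excluded_reasons=["wrong population", "wrong design", "wrong design"],
    included_total=24,
)
```

Because the counts come straight from the log, the figure stays consistent with the appendix strings.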
Best Tools For A Medical Literature Search
MeSH And Field Tags
The MeSH browser helps you pick the right subject terms, which lifts recall. In PubMed, field tags let you aim at author, title, abstract, journal, and more, which trims noise and sharpens precision where it matters.
Proximity And Wildcards
Embase offers NEAR/x to catch words that sit close to each other. Use it for phrases that vary. Truncation with * or ? also helps you grab slight spelling shifts without bloating the set.
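Truncation is database syntax, but the same idea is handy locally when you filter exported titles. A small sketch translating the common wildcards into a regular expression (the mapping shown is an assumption that matches common platform behavior: * for any tail, ? for one character):

```python
import re

def truncation_to_regex(pattern):
    """Translate truncation wildcards (* = any tail, ? = one character) to a regex."""
    out = ""
    for ch in pattern:
        if ch == "*":
            out += r"\w*"
        elif ch == "?":
            out += r"\w"
        else:
            out += re.escape(ch)
    return re.compile(rf"\b{out}\b", re.IGNORECASE)

rx = truncation_to_regex("randomi?ed")  # catches randomized and randomised
```

This kind of local filter is useful for spot-checking exports, not a replacement for the database's own syntax.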
Filters That Help, Filters That Hurt
Be careful with quick filters. Language and date make sense when linked to the plan. Study design filters are handy, but they can hide good papers if applied too soon. Test them on a seed set first.
Backward And Forward Snowballing
Reference lists lead backward in time. The cited by feature in Scholar leads forward. Use both around your core set to catch work that indexing missed or that is too new to have full subject tags.
Medical Literature Review Search Mistakes To Avoid
- Starting without a written question and rules.
- Using one database only.
- Relying on one keyword per concept.
- Applying harsh filters before you test a seed set.
- Not saving native strings and dates.
- Skipping trial registries and preprints in fast moving fields.
- Screening without a log of reasons for exclusion.
Quality Checks Before You Write
Check Recall Against A Seed Set
Pick five to ten must include studies from expert input or a scoping pass. Make sure your strings pull them back. If one is missing, study the terms it used and refine blocks or headings.
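The check itself is set arithmetic, so it is worth automating once PMIDs or DOIs are in hand. A minimal sketch with placeholder identifiers:

```python
def recall_check(retrieved_ids, seed_ids):
    """Report which must-include seed records the search failed to retrieve."""
    retrieved = set(retrieved_ids)
    missing = sorted(s for s in seed_ids if s not in retrieved)
    recall = 1 - len(missing) / len(seed_ids) if seed_ids else 1.0
    return {"recall": recall, "missing": missing}

report = recall_check(
    retrieved_ids=["pmid:111", "pmid:222", "pmid:444"],
    seed_ids=["pmid:111", "pmid:222", "pmid:333"],
)
```

Any identifier in the missing list points straight at the record whose terms need study.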
Crosswalk Your Concepts
Map each concept to the subject heading used in each database. Note any gaps. Add free text lines to fill those gaps. Save the map with your strings so another reader can repeat the work.
Document Every Run
For each source, save the final string, date run, limits, and hit counts. Store exports with clear file names. Keep a readme that lists software and versions used for screening and de-duplication. Save export logs promptly.
| Task | What To Record | Why It Matters |
|---|---|---|
| Database run | Final string, date, limits, hits | Makes counts traceable |
| Export | Format, file name, records | Aids de-dup check |
| Screening | People, tool, rules, reasons | Backs fair decisions |
| Updates | Alert dates, new hits | Shows currency |
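A plain CSV covers the "Database run" row of the table above. A minimal sketch, writing to an in-memory buffer here so it stands alone; in practice you would open a file:

```python
import csv, io
from datetime import date

def log_run(writer, source, query, limits, hits):
    """Append one search run to a CSV log: source, date, string, limits, hits."""
    writer.writerow([source, date.today().isoformat(), query, limits, hits])

buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["source", "date", "query", "limits", "hits"])
log_run(w, "PubMed", "asthma[mh] AND budesonide[tiab]", "English; 2015-2025", 412)
```

One row per run keeps counts traceable and feeds the PRISMA tallies later without re-reading screenshots.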
Ready To Copy Boolean Blocks
Intervention Studies
(disease OR condition*) AND (drug OR therapy OR “randomized controlled trial” OR placebo) AND (outcome OR mortality OR “quality of life”). Add MeSH or Emtree lines per source.
Observational Studies
(exposure OR risk factor* OR cohort) AND (disease OR outcome) AND (incidence OR prevalence OR odds). Pair with subject headings and test a proximity line in Embase.
Diagnostics
(index test OR biomarker* OR assay) AND (target condition) AND (sensitivity OR specificity OR ROC). Use field tags to aim terms at title and abstract for tighter sets.
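Adding per-source field tags to these blocks is mostly mechanical string work. A sketch that appends one platform's title/abstract tag to each free-text term; the Embase-style tag shown is an assumption for illustration, so verify syntax against each platform's help pages before running:

```python
def tag_terms(terms, tag):
    """Append one platform's title/abstract field tag to each free-text term."""
    return "(" + " OR ".join(f"{t}{tag}" for t in terms) + ")"

terms = ["budesonide", "fluticasone"]
pubmed = tag_terms(terms, "[tiab]")   # PubMed field tag
embase = tag_terms(terms, ":ti,ab")   # Embase-style tag (assumed here; verify)
```

Keeping the untagged term lists as the master copy means a new synonym propagates to every database with one regeneration.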
Worked Example: From PICO To Search Strings
Suppose you ask whether inhaled corticosteroids improve asthma control in adults who struggle with symptoms. The P part is adults with asthma. The I part is inhaled steroids. The C part could be placebo or usual care. The O part is symptom control or flare reduction. Write these blocks as simple lists first.
Now translate them. For the population, pair the MeSH term Asthma with free text like asthma*, wheeze*, and “reactive airway”. For the intervention, pair the MeSH term Adrenal Cortex Hormones with drug names such as budesonide, beclomethasone, and fluticasone. For outcomes, pair the MeSH term Treatment Outcome with text words like control, exacerbation*, or rescue use. Link synonyms with OR and link blocks with AND.
In PubMed, one string might look like this: (Asthma[mh] OR asthma*[tiab] OR wheeze*[tiab] OR “reactive airway”[tiab]) AND (“Adrenal Cortex Hormones”[mh] OR budesonide[tiab] OR beclomethasone[tiab] OR fluticasone[tiab] OR “inhaled corticosteroid*”[tiab]) AND (“Treatment Outcome”[mh] OR control[tiab] OR exacerbation*[tiab] OR “rescue use”[tiab]). Save this exact string in your log with the run date.
Grey Sources And Registries
Trials that never reach journals still matter. Search ClinicalTrials.gov and the WHO ICTRP for registered work on your topic. Record the filters you used, such as status or phase. For fast fields, check medRxiv and bioRxiv for preprints. Read with care, since peer review has not yet run, but do record them in the log and mark their status so readers know what they are.
Common Boolean Patterns And Translation Tips
Use Blocks You Can Move
Write each concept in its own parenthesized block. That makes translation smoother. You can paste a block into PubMed, Embase, or CINAHL and only swap field tags and headings without rebuilding from scratch.
Know The Limits Of Filters
Hedges for trial design, cohort studies, and diagnostic accuracy are handy. Run them late in the process after you confirm recall on your seed set. Keep the full and filtered counts in the log to show what the hedge did.
Managing Records And Screening Efficiently
Pick a single manager for the whole team. Set the import style to keep full abstracts and keywords. Tag each record with its source so you can later list counts by database. Run de-duplication in the manager and then check the titles by eye for near matches the tool missed.
Before full screening, pilot test on a random slice. Two readers screen the same thirty to fifty titles and abstracts. Compare results, tidy the rules, and write two example decisions for tricky cases. Then split the rest. During full text checks, ask one team member to chase missing PDFs while others keep reading.
Exclusion Codes
Keep a code list for exclusion reasons such as wrong population, wrong design, wrong outcome, or not primary research. Use the same short phrases across all records. That consistency makes the PRISMA figure easy to build and helps readers see how choices were made.
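If screening decisions live in a spreadsheet, the reason tally for the PRISMA figure is one pass over the rows. A minimal sketch, assuming each row carries an id, a decision, and a reason (column names are illustrative):

```python
from collections import Counter

def exclusion_summary(decisions):
    """Count exclusion reasons from (record_id, decision, reason) rows."""
    return Counter(reason for _, decision, reason in decisions if decision == "exclude")

decisions = [
    ("r1", "include", ""),
    ("r2", "exclude", "wrong population"),
    ("r3", "exclude", "wrong design"),
    ("r4", "exclude", "wrong design"),
]
```

Consistent short phrases in the reason column are what make this tally, and the figure built from it, trustworthy.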
Reporting The Search So Others Can Repeat It
Put the full string for each database in an appendix. Include run dates, platforms, and any limits. List all sources, not just databases. Mention trial registries, preprints, and core websites you checked. Add the de-duplication steps, the tools used for screening, and the dates alerts were active. With that, another team can repeat your path and reach the same set.
Exporting, Reporting, And Next Steps
When screening ends, export the include set to your manager and start full data work. Build the PRISMA flow figure from your log. Keep the search strings in an appendix; the Cochrane Handbook shows clear models. If you plan an update cycle, keep alerts live and add an update note to the file names.
