Start with a precise question, build a concept grid, run database-specific queries, check with PRESS, and report with PRISMA-S.
Great reviews stand on the strength of their search and planning. A clear plan lets you find the right studies, avoid blind spots, and keep the work reproducible. The steps below show a simple way to design, run, and report a search that holds up under scrutiny.
What a search strategy is and isn’t
A search strategy is a written plan that turns your question into database-ready queries, documents every decision, and explains where and how you looked. It is not a single quick query or a vague list of keywords. Think of it as a map: concepts, terms, sources, limits, and the way results will be handled from start to finish.
Planning a medical literature review search strategy that works
Before typing anything into a search box, set up the basics. Write the review objective in one sentence. Define who and what you will include and exclude. Pick the main concepts that sit at the core of the question. List your target sources and why each belongs on the list. Decide the time span and language rules. Create a simple tracking sheet for dates, counts, exports, and notes. This groundwork keeps scope creep in check and makes every later step faster.
Build a clear question with PICO or a variant
PICO (Population, Intervention, Comparator, Outcome) is a handy starting point for clinical topics. For exposures or diagnostics, PECO or PIT (Population, Index test, Target condition) can fit better. Write each element in plain language first. Then mark which ones must appear in every record and which can be optional. Keep outcomes flexible at the search stage unless they are true must-haves; you can screen for them later.
Turn concepts into search terms that catch synonyms
Most medical databases use two kinds of terms: controlled vocabulary and free-text. Controlled vocabularies such as MeSH and Emtree group related words under one heading. Free-text terms scan titles and abstracts for the words people actually write. You usually need both. Match each concept to one or more headings, then add common synonyms, spelling variants, brand names, and word variants. Use phrase marks for multi-word terms where needed.
| Concept | Controlled vocabulary (MeSH/Emtree) | Free-text examples |
|---|---|---|
| Population | Humans; Age groups if relevant (Adult, Aged) | adult*; elder*; pediatr*; neonat* |
| Condition / Problem | Type 2 Diabetes Mellitus; Hypertension; Low Back Pain | “type 2 diabetes”; T2DM; “non insulin dependent” OR noninsulin-dependent; hypertension; hypertens* |
| Intervention / Exposure | Sodium-Glucose Transporter 2 Inhibitors; Anti-Inflammatory Agents, Non-Steroidal; Vaccination | SGLT2 inhibitor*; dapagliflozin; ibuprofen; naproxen; vaccin* |
| Comparator | Placebo; Standard of Care | placebo; usual care; standard therapy |
| Outcome (optional at search) | Glycated Hemoglobin A; Blood Pressure; Pain Measurement | HbA1c; A1c; “blood pressure”; systolic; diastolic; pain scor* |
| Study design | Randomized Controlled Trials as Topic; Cohort Studies | random*; trial; cohort; longitudinal |
| Setting / Context | Primary Health Care; Hospitals | primary care; outpatient; inpatient; emergency |
| Timeframe | — | apply as a date limit (e.g., 2015–present) rather than as text words |
Keep a living list of candidate terms as you read a few strong papers. Reference lists, trial registries, and guidelines often reveal alternate spellings and drug names that boost recall.
Write database-ready strings: Boolean, fields, and syntax
Combine synonyms with OR inside a concept set. Combine concept sets with AND. Use NOT with care only to remove true noise. Apply field tags when the platform allows it (title/abstract, subject heading, author, journal). Add truncation and wildcards where they work, and test phrase marks versus adjacency operators when available. Nest terms with parentheses to keep logic tidy. Save and label each version as you iterate so you can roll back if needed.
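The OR-within-a-concept, AND-between-concepts pattern is mechanical enough to sketch in code. This is an illustrative helper, not a vetted strategy: the term lists and `[tiab]`/`[mh]` field tags below are example placeholders in PubMed style.

```python
# Sketch: assemble a Boolean string from a concept grid.
# Terms and field tags are illustrative examples, not a tested strategy.

def or_block(terms):
    """Join the synonyms for one concept with OR, wrapped in parentheses."""
    return "(" + " OR ".join(terms) + ")"

def build_query(concepts):
    """Connect the concept sets with AND, one parenthesised block per concept."""
    return " AND ".join(or_block(terms) for terms in concepts)

concepts = [
    ['"type 2 diabetes"[tiab]', "T2DM[tiab]", '"Diabetes Mellitus, Type 2"[mh]'],
    ["SGLT2[tiab]", "dapagliflozin[tiab]"],
]
query = build_query(concepts)
```

Keeping the grid as data and generating the string makes it easy to save and label each version as you iterate, per the advice above.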
Field tags that matter
Title/Abstract fields give broad reach when you want recall. Subject-heading fields pull in indexing power once a heading fits your topic. Author and journal fields can help chase research groups or specialty outlets during scoping. Mix and match with care so you do not push the query into a corner. Label each fielded line so anyone can scan it and see your intent at a glance.
Adjacency and proximity
Proximity keeps linked ideas close. Where NEAR/n or adjN exists, try small distances first. Phrase quotes can miss hits when punctuation or filler words break the phrase. Record the distance that worked.
Search across the right sources
No single database includes everything. A solid core for clinical topics often includes MEDLINE via PubMed, Embase, and CENTRAL. Depending on the question, add CINAHL, Web of Science, Scopus, PsycINFO, or subject-specific indexes. Scan trial registries and preprint servers when timeliness matters. Add grey literature where policy, theses, or guidelines might hold answers. Briefly justify every source in your protocol so readers can see why it earned a place.
Run, log, and de-duplicate
Run the search in each database on the same day if you can. Capture the full search string, date, platform, and exact result count. Export with full citation fields and abstracts in a consistent format such as RIS or XML. Load everything into your reference manager, then de-duplicate in passes: exact match, then fuzzy match on author, year, and title stems. Keep a record of how many items you removed at each pass. Store raw exports and de-duplicated sets in a versioned folder so nothing gets lost.
De-duplication workflow
Start with the largest source and import others into it, not the other way around. Sort by title, then by author, then by year to catch obvious clones. Use software rules for close matches and scan near-matches by eye before you delete. When in doubt, keep one copy and tag it for later review. Create a short note describing the rules you applied; copy it into your methods and keep it with the dataset.
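The two-pass idea (exact clones dropped, near-matches kept but tagged for eyeball review) can be sketched with the standard library. This is a minimal illustration using `difflib` similarity on a normalized author-year-title key; the record fields and the 0.9 threshold are assumptions you would tune against your own data.

```python
import difflib
import re

def norm(record):
    """Build a lowercase, punctuation-free matching key from author, year, title."""
    text = f"{record['author']} {record['year']} {record['title']}".lower()
    return re.sub(r"[^a-z0-9 ]+", " ", text)

def dedupe(records, threshold=0.9):
    """Keep one copy per record; drop exact clones, tag near-matches for review."""
    kept, flagged = [], []
    for rec in records:
        key = norm(rec)
        duplicate = False
        for existing in kept:
            ratio = difflib.SequenceMatcher(None, key, norm(existing)).ratio()
            if ratio == 1.0:
                duplicate = True              # exact clone: safe to remove
                break
            if ratio >= threshold:
                duplicate = True
                flagged.append((existing, rec))  # near-match: check by eye
                break
        if not duplicate:
            kept.append(rec)
    return kept, flagged
```

The `flagged` list is your "when in doubt, keep one copy and tag it" pile; counting what each pass removed gives you the numbers for your methods note.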
Peer review and refine
Fresh eyes catch gaps. Ask an information specialist to review at least one full strategy using the PRESS checklist. Look for missing headings, thin synonym lists, misused truncation, sloppy nesting, or limits that cut out high-value records. Add a few known relevant studies to a “gold set” and make sure your final strings retrieve them. If they do not, adjust and retest.
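The gold-set check is a set-difference operation and is easy to automate. A minimal sketch follows; the IDs are made-up placeholders, not real PMIDs.

```python
# Sketch: verify that every known relevant record is in the retrieved set.
# IDs below are invented placeholders for illustration only.

def check_gold_set(retrieved_ids, gold_ids):
    """Return the gold-set IDs the search failed to retrieve, sorted."""
    return sorted(set(gold_ids) - set(retrieved_ids))

retrieved = {"11111111", "22222222", "33333333"}
gold = {"22222222", "44444444"}
missed = check_gold_set(retrieved, gold)
# a non-empty result means the strategy needs another adjust-and-retest cycle
```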
Make PRESS work for you
Share your question, concept grid, and draft strings ahead of the meeting so the reviewer sees the full picture. Ask for comments on each PRESS domain: translation of the question, Boolean logic, subject headings, text words, and limits. Walk through one tricky concept live and agree on changes you will try next. Log every change you accept or reject so your record tells a clear story.
Document with PRISMA-S and make it reproducible
Transparent reporting saves time for everyone who reads your work. Record the full strategies exactly as run, name every database and platform, list any filters, and include dates and counts. PRISMA-S lays out a clear set of items for search reporting and pairs nicely with your flow diagram. Package your strings, logs, and exports in an appendix or a public repository so others can check and reuse them.
What to include in your appendix
Copy the full strings for every database and platform, not just one. Add screenshots or text exports of search histories. Include the de-duplication note, your gold set list, and the PRESS comments you acted on. Add a table that matches each database to its run date, hit count, and file name. This bundle lets another team rerun the same work months later without guesswork.
Search strategy for medical literature reviews: step-by-step checklist
- Write the objective in one sentence and define inclusion and exclusion rules.
- Pick the concepts that must anchor the search.
- Map each concept to controlled headings (MeSH, Emtree) and list free-text terms.
- Draft concept sets with OR; connect sets with AND; keep NOT to a minimum.
- Apply field tags and phrase marks; test truncation and adjacency where valid.
- Select databases and registries, with a one-line reason for each choice.
- Set language and date limits only when they have a clear rationale.
- Pilot the strings, scan first pages of results, and tune term lists.
- Export full results with abstracts; save the exact strings and counts.
- De-duplicate methodically and log every removal pass.
- Cross-check against a small gold set of known relevant studies.
- Seek a PRESS review; fix issues the review surfaces.
- Write up the strategies and counts using PRISMA-S items.
- Archive the search package so it can be rerun or updated later.
Quick syntax guide by platform
| Database / platform | Core operators & fields | Notes |
|---|---|---|
| PubMed | MeSH terms [mh]; Title/Abstract [tiab]; phrase quotes | ATM maps words to MeSH; check Details to see mapping; truncation not in phrase |
| Embase (Ovid or Elsevier) | Emtree /exp; Title/Abstract; proximity (adj, NEAR/n) | Use explode to include narrow terms; syntax varies by provider |
| CINAHL (EBSCO) | MH headings; TI, AB fields; Nn proximity | Check subject headings tool; watch for auto-plurals with truncation |
| Web of Science | TS=Topic; NEAR/n; phrase quotes | No controlled vocabulary; rely on careful free-text and proximity |
| Cochrane CENTRAL | MeSH; Title/Abstract/Keyword; phrase quotes | Trials database; pairs well with trial registries |
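Because the same title/abstract concept must be re-expressed in each platform's field syntax, a small template table can reduce transcription slips. This is a rough sketch under the field tags shown in the table above; it handles only the simplest case and is no substitute for testing each translated line in the live interface.

```python
# Sketch: render one free-text term in several platforms' field syntax.
# Templates follow the table above; always verify against the live platform.

TEMPLATES = {
    "pubmed": "{term}[tiab]",            # PubMed Title/Abstract tag
    "ovid": "{term}.ti,ab.",             # Ovid title/abstract fields
    "ebsco": "TI {term} OR AB {term}",   # EBSCO (e.g., CINAHL) field labels
    "webofscience": "TS=({term})",       # Web of Science Topic search
}

def render(term, platform):
    """Fill the platform's field template with one term."""
    return TEMPLATES[platform].format(term=term)

line = " OR ".join(render(t, "pubmed") for t in ['"type 2 diabetes"', "T2DM"])
```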
Limits and filters: when and how to use them
Language and date limits can speed up screening but may drop useful material. If you set a date window, tie it to a real reason such as a drug launch or a new test method. Study design filters can be handy, yet some cut too hard. Prefer tested filters from trusted sources, and always check that your gold set still appears in the results after filters are applied.
Grey literature and beyond
Policy papers, guidelines, theses, conference abstracts, and trial registries often surface details that do not appear in journals. Decide which grey sources matter for your topic and list how you will surface them. Use site-specific search where possible and record the exact strings you run. Keep the bar for inclusion clear so you do not drown in noise.
Screens, logs, and version control
Save screenshots of search histories and platform settings. Keep a master log that lists date, database, platform, string version, and hit count. Version file names and store them in a clean folder structure. Small habits like these make audits painless and let others rerun your work without guesswork.
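A master log with those columns is just an append-only CSV. Here is a minimal sketch; the column names are our own convention, and the sample row is invented for illustration.

```python
import csv
import io

# Column convention assumed here; adapt to your own tracking sheet.
FIELDS = ["run_date", "database", "platform", "string_version", "hit_count"]

def append_run(handle, row):
    """Append one search run to an open CSV handle, writing the header once."""
    writer = csv.DictWriter(handle, fieldnames=FIELDS)
    if handle.tell() == 0:
        writer.writeheader()
    writer.writerow(row)

# Demo against an in-memory buffer; in practice open a versioned file instead.
buf = io.StringIO()
append_run(buf, {"run_date": "2024-05-01", "database": "MEDLINE",
                 "platform": "PubMed", "string_version": "v3",
                 "hit_count": 1287})
```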
Quality checks and common pitfalls
- Too few synonyms. Expand with spelling variants, acronyms, and brand names drawn from strong papers.
- Missing controlled headings. Browse the thesaurus and add broader or narrower terms as needed.
- Messy logic. Use parentheses liberally and keep one concept per set.
- Overuse of NOT. Drop NOT unless the noise is overwhelming and well defined.
- Hard filters too early. Test without limits first, then layer filters with care.
- Poor de-duplication. Run multiple passes and document the rules you used.
- Thin documentation. Store full strings, provider names, and run dates with the results.
Work with a medical librarian
Information specialists build and review searches every day. Partnering with one can raise recall and precision while saving time. Many libraries offer one-to-one appointments and PRESS reviews. Even a single pass over your main database string can spot missing branches and logic slips.
Template blocks you can reuse
Keep a small library of building blocks you trust: drug name patterns, common study design lines, geographic terms, and frequent risk-of-bias phrases. Pair each block with notes about where it works and where it fails. When you start a new review, copy these blocks into your concept grid and adjust to fit the topic.
Bring it all together
A strong strategy reads like a recipe: clear question, mapped concepts, balanced use of headings and free-text, platform-aware syntax, a sensible set of sources, tidy records, and honest reporting. Build it once, run it cleanly, and share the package so anyone can follow your steps without guesswork.
Further reading and tools: the Cochrane Handbook chapter on searching for and selecting studies, the PRISMA-S checklist for search reporting, and the PubMed help pages, which walk through the search builder and query history step by step.
