Pick terms with PICO, map to controlled vocabularies, add synonyms and syntax per database, then pilot, peer-review, and document the whole search.
What A Good Search Strategy Looks Like
A search plan for a healthcare review should be structured, transparent, and repeatable.
It starts with clear concepts, moves through controlled words and free text, and ends with records you can reproduce later.
You want sensitivity high enough to catch the right studies, while keeping noise low through smart scoping and tests.
| Element | Purpose | Example |
|---|---|---|
| Controlled vocabulary | Anchor the topic with database subject headings | MeSH: “Myocardial Infarction” |
| Free-text keywords | Catch new terms and naming variants | heart attack OR “acute coronary syndrome” |
| Phrases or proximity | Tie words that belong together | Ovid: heart adj3 failure |
| Concept blocks | Combine Population, Intervention, and Outcome as needed | (asthma) AND (inhaled corticosteroids) |
| Study design filters | Limit to RCTs or qualitative studies when justified | randomized OR trial* |
| Exclusions used sparingly | Reduce off-topic hits without cutting good evidence | PubMed: NOT (animals[mh] NOT humans[mh]) when humans only |
Plan Your Concepts With PICO Styles
Pick a structure that fits the question. Clinical trials often use PICO.
Etiology or prognosis questions may lean on PECO or PEO. Qualitative syntheses may use SPIDER or PICo.
Write each concept in plain words first, then you can translate those into search syntax.
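For example, a therapy question might read: in adults with asthma (P), do inhaled corticosteroids (I) compared with placebo (C) reduce exacerbations (O)? Each labelled concept then becomes a candidate search block, though the comparator and outcome often stay out of the final lines, as the next subsections explain.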
Define Population, Problem, Or Setting
State age group, condition, and any setting limits that matter to your users.
Note common synonyms and spelling variants, including lay terms.
Keep vague labels out unless they help recall in pilot runs.
Name The Intervention Or Exposure
List drug names, class names, device terms, and procedures.
Add brand names when relevant. Include abbreviations and dosage forms only if they change meaning in titles or abstracts.
State The Comparator, If Any
Comparators can be placebo, usual care, or another drug.
They often stay out of the main blocks to keep sensitivity up, unless the head-to-head is the core question.
Decide The Outcomes That Matter
Outcomes seldom belong in search lines for trials because authors do not title papers with every outcome.
Add them only when they are rare terms that help precision without large loss of recall.
Build Terms: Controlled Words First, Then Keywords
Start by mapping each concept to the database’s subject headings.
Controlled indexing ties synonyms together and helps you reach older records.
Then layer free-text to catch new language and records not yet indexed.
Map To Subject Headings
Use each platform’s thesaurus: MeSH in MEDLINE, Emtree in Embase, and CINAHL Headings in CINAHL.
Check scope notes, narrower terms, and related headings. Add subheadings only when they sharpen the question without cutting recall.
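A minimal sketch of this pairing in Ovid MEDLINE syntax, using the myocardial infarction example from the table above (terms illustrative, not exhaustive):
exp Myocardial Infarction/ OR (myocardial infarct* OR heart attack*).ti,ab.
The exploded heading (exp) pulls in narrower indexed terms, while the .ti,ab. free-text line catches records that indexing has not reached yet.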
Layer Synonyms And Spelling Variants
Pull terms from seed papers, guidelines, registries, and expert input.
Add US and UK spellings, hyphenation variants, and common acronyms.
Decide on truncation only after testing for noise and word stems that change meaning.
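In Ovid syntax, for example, the optional wildcard ? and the single-character symbol # capture variants without over-truncating (a hedged sketch; symbols differ on other platforms):
(p?ediatric* OR tumo?r* OR randomi#ed).ti,ab.
Here p?ediatric* matches pediatric and paediatric, tumo?r* matches tumor and tumour, and randomi#ed matches randomized and randomised.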
Account For Acronyms And Phrases
Pair acronyms with their long forms when needed.
Use phrase quotes where the database allows them.
Add proximity where adjacency controls improve precision for word pairs that do not form a fixed phrase.
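Both moves in Ovid syntax, as a hedged illustration:
(acute coronary syndrome* OR ACS).ti,ab.
(heart adj3 failure).ti,ab.
The first line pairs the long form with its acronym; the second uses adjacency so that heart failure and failure of the heart both match.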
Use Boolean, Phrase Marks, And Truncation Safely
AND, OR, NOT With Care
OR binds synonyms inside a concept block. AND connects the blocks.
Use NOT sparingly, because it can cut valid records. Parentheses keep the logic tidy and prevent slips in operator order.
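A small sketch of the pattern, reusing the asthma example from the table above:
(asthma OR wheez*).ti,ab. AND (inhaled corticosteroid* OR budesonide OR beclomethasone).ti,ab.
OR widens the net inside each set of parentheses; AND narrows it across the two blocks.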
Truncation Symbols Differ By Database
Test roots before adding the asterisk. Many platforms need at least a few letters before truncation.
Some will not accept truncation inside quoted phrases. Wildcards and single-character symbols vary by vendor, so read the help page before final runs.
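For example, a root that is too short pulls in unrelated stems:
rat* retrieves rats, but also rate, ratio, rating, and rather.
random* retrieves randomized, randomised, and randomly with little extra noise.
A quick count of each line's results before and after truncation shows whether the asterisk earns its place.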
Phrase Searching And Proximity Operators
Phrase quotes can narrow results fast. Where proximity is available, set a small window for tight word pairs and a wider one for loose pairs.
Ovid uses adjN, EBSCO uses N# or W#, Embase uses NEAR/#, and Scopus uses W/n or PRE/n. Always test recall with landmark papers.
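As a hedged translation of one pair across those platforms (field codes illustrative; confirm against each vendor's help page):
Ovid: (heart adj3 failure).ti,ab.
EBSCO: TI (heart N3 failure) OR AB (heart N3 failure)
Embase.com: ('heart' NEAR/3 'failure'):ti,ab
Scopus: TITLE-ABS-KEY(heart W/3 failure)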
Choosing Search Terms For Systematic Reviews In Healthcare: Step-By-Step
1. Write the question in one plain sentence. Mark the core concepts.
2. List subject headings for each concept. Add narrower terms if needed.
3. Draft free-text lines with synonyms, spelling variants, and acronyms.
4. Group synonyms with OR. Combine concept blocks with AND (see the sketch after this list).
5. Pilot the set on one database and check if landmark studies appear.
6. Adjust lines that add noise or miss known seed papers.
7. Translate the set to each platform, matching field tags and syntax.
8. Search more than one database. Add trials registers and grey sources.
9. Export all results with full source details and de-dup in a reference tool.
10. Record every line, limit, date, and platform for the final report.
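Put together, a pilot run in Ovid MEDLINE style might look like this minimal sketch (terms illustrative, not a finished strategy):
1. exp Asthma/ OR asthma*.ti,ab.
2. exp Adrenal Cortex Hormones/ OR (inhal* adj2 corticoster*).ti,ab.
3. 1 AND 2
The numbered lines follow the Ovid search-history convention, where a combined line references the lines above it.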
Pick Databases And Grey Literature Sources
Core Health Databases
Use at least two major databases so you are not tied to one index.
Common pairs are MEDLINE and Embase, or MEDLINE and CINAHL for nursing topics.
Subject-specific sets such as PsycINFO or SPORTDiscus may join when relevant to the question.
Trials, Guidelines, And Registries
Add CENTRAL or trial registers for intervention questions.
Scan guideline portals and regulatory sources for terms and linked studies.
Conference abstracts and theses can surface early signals for new drugs or devices.
For method details on planning the search, the Cochrane Handbook sets out good practice.
For reporting, follow PRISMA-S so readers can repeat your work.
Pilot, Peer Review, And Record Everything
Run small trials of the search and check if known landmark papers appear early in the result list.
Invite a second searcher to review logic, fields, and spelling, using a checklist such as PRESS (Peer Review of Electronic Search Strategies) where available.
Keep a log of decisions so edits stay traceable across updates.
Common Pitfalls And Safer Moves
| Risky move | Why it hurts | Safer move |
|---|---|---|
| Using only free-text | Misses indexed papers and older records | Blend subject headings with keywords |
| Heavy use of NOT | Accidentally removes relevant studies | Refine with fields or proximity instead |
| Copying filters untested | Unknown loss of recall in your topic | Validate filters and cite the source |
| Skipping database translation | Syntax breaks and mapping fails | Match tags, operators, and subject trees |
| Truncating short roots | Pulls noise or gets ignored | Use four or more letters where required |
Reporting: Make Your Search Reproducible
Write exact search lines for each platform in an appendix.
Add dates, limits, numbers retrieved, and any de-dup steps.
Include a PRISMA flow diagram with counts from databases, registers, and other sources.
Share search files in a public repository.
Quick Start Templates You Can Adapt
Both templates use Ovid MEDLINE syntax; match field tags and operators before running them on another platform.
Therapy Question (Asthma And Inhaled Steroids)
(exp Asthma/ OR asthma*.ti,ab.) AND (exp Adrenal Cortex Hormones/ OR exp Budesonide/ OR beclomethasone.ti,ab. OR budesonide.ti,ab. OR (inhal* adj2 corticoster*).ti,ab.) AND (randomi*.ti,ab. OR placebo.ti,ab. OR trial.ti.)
Diagnostic Question (CT For Appendicitis)
(exp Appendicitis/ OR appendicitis.ti,ab.) AND (exp Tomography, X-Ray Computed/ OR computed tomograph*.ti,ab. OR CT.ti.) AND (sensitiv* OR specific* OR ROC).ti,ab.
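As a hedged sketch of the translation step, the diagnostic template might read like this in PubMed syntax (check each mapping before a final run):
(appendicitis[mh] OR appendicitis[tiab]) AND ("tomography, x-ray computed"[mh] OR "computed tomography"[tiab] OR CT[ti]) AND (sensitiv*[tiab] OR specific*[tiab] OR ROC[tiab])
Note that PubMed explodes [mh] terms by default, so no exp prefix is needed.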
Ethics And Bias Checks
State any language or date limits and why they were used.
Screen titles and abstracts in pairs where possible to limit bias from the search stage onward.
Keep audit trails for changes so an update can be compared against the first run.
When To Use Filters
Use validated filters from trusted groups for study designs or human studies.
Cite the source and version. Avoid stacking many limits at once, because interactions are hard to predict.
If you tighten the net, run a recall check against a small gold set to measure loss.
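One widely used example is the Cochrane Highly Sensitive Search Strategy for randomized trials in its sensitivity- and precision-maximizing PubMed version, shown here as a reference point (confirm the current wording in the Cochrane Handbook before use):
(randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab] OR placebo[tiab] OR clinical trials as topic[mesh:noexp] OR randomly[tiab] OR trial[ti]) NOT (animals[mh] NOT humans[mh])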
Field Tags And Where They Help
Field tags limit where a term is found. Title and abstract fields work well for new drug names and device brands.
Subject heading fields pull in indexed records even when the term is missing in the title.
Keyword fields added by authors can help for niche phrases. Use one field set per line and compare yield to an untagged version to judge trade-offs.
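For example, in Ovid MEDLINE syntax (the .kf. author-keyword field is an assumption worth verifying on your platform):
heart attack.ti,ab. catches the lay phrase in titles and abstracts.
exp Myocardial Infarction/ pulls indexed records whatever the title says.
heart attack.kf. checks the keywords authors supplied themselves.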
Grey Sources: What To Add And Why
Peer-reviewed databases miss items such as trial protocols, preprints, dissertations, and regulator reports.
Add ClinicalTrials.gov and WHO ICTRP for registration data.
Check preprint servers that match your topic.
Search theses portals for methods that never reached journals.
Note which sources you used so readers can follow the same path.
De-Duplication And Record Management
Export full records in a consistent bibliographic format from each source.
Include database name, platform, and date.
Combine sets in a reference manager and remove exact matches first, then near matches by title, year, and first author.
Keep a backup of raw exports so any removal step can be reversed later if needed.
Timing And Updates
Run an update search near manuscript submission, or before the final decision in policy work.
Use saved alerts during long projects so fresh records are easy to spot.
When a review is kept live, set a refresh cycle and log each rerun with the same level of detail as the first pass.
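In PubMed, for example, an update can be restricted to records added since the first pass with an entry-date range (dates illustrative; your saved strategy stands in for the full line set):
(your saved strategy) AND ("2024/01/01"[edat] : "3000"[edat])
The 3000 endpoint is the PubMed convention for searching up to the present.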
Work With A Medical Librarian
A librarian trained in health databases can help shape concepts, run peer checks, and translate syntax across vendors.
Invite them as a coauthor when they meet authorship norms.
Shared credit encourages better reporting and long-term upkeep of methods files.
Sensitivity And Precision Checks
Test recall by building a small gold set of known relevant studies from scoping work or citation chasing.
Run your search and confirm that the gold set appears.
If items are missing, add their wording to your lines.
Estimate precision by screening a random slice of the first few hundred records.
Tune proximity windows, phrases, and field tags to raise the share of on-topic hits while holding recall steady.
Stop tuning when new edits no longer lift either metric.
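As a worked illustration with made-up numbers: if 18 of 20 gold-set studies are retrieved, recall is 18/20 = 90 percent. If 30 of the first 300 screened records are on topic, estimated precision is 30/300 = 10 percent. An edit that lifts precision to 12 percent but drops recall to 85 percent is usually a poor trade for a systematic review, where missed studies cost more than extra screening.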