How To Do A Systematic Review Of Literature In Medicine

Define a PICO question, register a protocol, search widely, screen in pairs, appraise bias, synthesize, and report with PRISMA.

Start with a sharp question

Pick a clear clinical problem and turn it into a structured question. Most teams use PICO or a close variant: Patient or population, Intervention or exposure, Comparison, and Outcome. Write one primary question first, then a small set of secondary questions. That tight scope keeps searches precise and makes screening faster.

Draft a short set of eligibility rules that match the question. State study designs you will include, the setting, time frame, languages, and any outcome windows. Keep rules testable and binary so two reviewers would reach the same call. Add two sentinel studies you expect to find; they act as a quick check that your search plan can actually surface the right records.
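
To keep rules binary and testable, some teams encode them as simple checks. A minimal sketch in Python, with made-up field names and thresholds; adapt it to your own protocol:

```python
# Eligibility rules as binary tests; every field name and threshold here
# is a placeholder, not a standard.
RULES = {
    "design_is_rct": lambda r: r["design"] == "RCT",
    "adult_population": lambda r: r["min_age"] >= 18,
    "outcome_window_met": lambda r: r["follow_up_weeks"] >= 12,
}

def eligible(record):
    """Return (decision, failed_rules) so every exclusion has a reason."""
    failed = [name for name, rule in RULES.items() if not rule(record)]
    return len(failed) == 0, failed

# Sentinel studies you expect to include; if one fails, revisit the rules.
sentinels = [
    {"design": "RCT", "min_age": 18, "follow_up_weeks": 24},
    {"design": "RCT", "min_age": 21, "follow_up_weeks": 52},
]
for s in sentinels:
    ok, failed = eligible(s)
    assert ok, f"sentinel rejected by: {failed}"
```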

Doing a systematic review of literature in medicine: core steps

Write and register a protocol

Record the question, rationale, planned methods, and any limits in a protocol document. Then register it in PROSPERO before you run your search. Registration time stamps your plan, reduces bias, and makes the project discoverable. List roles for each team member: content lead, methods lead, statistician, information specialist, and guarantor. Add a change log for deviations.

Map databases and grey sources

Plan a multi-database search. MEDLINE via PubMed or Ovid plus Embase together cover most of the biomedical literature. Add CENTRAL for controlled trials, CINAHL for nursing and allied health fields, and Web of Science or Scopus for citation chasing. List registers and grey sources as well: ClinicalTrials.gov, WHO ICTRP, dissertations, conference proceedings, and preprints when needed. Write the same plan for each source so the approach stays consistent.

Build reproducible search strings

Combine subject headings with text words, synonyms, and spelling variants. Use field tags, Boolean logic, proximity operators, and truncation where the platform allows. Pilot the query and test that your sentinel studies appear. Save exact strategies, dates, and hit counts.
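
One way to script the pilot: query PubMed through the NCBI E-utilities esearch endpoint and confirm the sentinel PMIDs come back. A sketch; the query and the PMIDs below are placeholders:

```python
# Pilot a PubMed string via NCBI E-utilities and check sentinel records.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = (
    '("atrial fibrillation"[MeSH Terms] OR "atrial fibrillation"[tiab]) '
    'AND anticoagul*[tiab] AND randomized controlled trial[pt]'
)
resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 500},
    timeout=30,
)
result = resp.json()["esearchresult"]
print("hit count:", result["count"])  # save this with the run date

sentinel_pmids = {"12345678", "23456789"}  # placeholder IDs
print("sentinels found:", sentinel_pmids & set(result["idlist"]))
```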

Save exact queries

Store each full string with platform, fields, and limits so others can replay it.

Manage records and de-duplicate

Export all results with abstracts and identifiers. Use reference managers or screening tools to remove exact and fuzzy duplicates. Keep a log that shows counts before and after each step. You will need those numbers for the flow diagram later.
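
A minimal de-duplication sketch: match on DOI when present, then on a normalized title key. The record fields are assumptions about your export format:

```python
import re

def title_key(title):
    """Lowercase, strip punctuation, collapse whitespace."""
    t = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return re.sub(r"\s+", " ", t).strip()

def deduplicate(records):
    seen, kept, removed = set(), [], 0
    for r in records:
        keys = {k for k in (r.get("doi"), title_key(r["title"])) if k}
        if keys & seen:
            removed += 1  # log this count for the flow diagram
            continue
        seen |= keys
        kept.append(r)
    return kept, removed

records = [
    {"doi": "10.1000/x1", "title": "Anticoagulants in AF: a trial"},
    {"doi": None, "title": "Anticoagulants in AF: A Trial."},
]
kept, removed = deduplicate(records)
print(f"kept {len(kept)}, removed {removed}")
```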

Milestones, deliverables, and quick wins

Milestone | Deliverable | Quick tip
Protocol ready | Registered record and PDF | Post a public link from PROSPERO on your project page
Search run | Saved strategies and export files | Capture platform, dates, limits, and hit counts in one sheet
Screening complete | List of included studies | Store reasons for exclusion using a fixed menu
Extraction complete | Locked dataset | Pilot the form on five papers before full data pull
Bias appraisal done | Study-level judgments | Calibrate on two rounds, then proceed in pairs
Synthesis ready | Meta-analysis or narrative bundle | Decide model and effect type before you run code
Report drafted | PRISMA tables and flow diagram | Cross-check each item against the checklist

Screen titles and abstracts

Run dual screening for titles and abstracts using your eligibility rules. Resolve conflicts through a third reviewer or a group huddle. Check early agreement, then refine any vague rule. Move all records marked “include” or “unclear” to full-text review. Log every full-text exclusion with a single reason from a predefined list.
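
Cohen's kappa is a quick way to put a number on early agreement. A small sketch with invented decisions from two reviewers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

a = ["include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # low values mean a rule needs work
```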

Set up conflict resolution

Assign a default tiebreaker or a short huddle rule before you start.

Extract data with a piloted form

Design a data form that captures study identifiers, methods, participants, interventions or exposures, comparators, outcomes, time points, and analytic notes. Add fields for risk-of-bias items. Pilot on a small batch, clean wording, then lock the form. Extract in pairs with reconciliation. Where data are missing, try author contact once with a short, specific request.

Judge risk of bias

Pick tools that match your study types. Many teams use RoB 2 for randomized trials and ROBINS-I for non-randomized studies. Diagnostic accuracy studies often use QUADAS-2. Train with examples, run a calibration round, then rate in pairs. Record verbatim quotes so readers can see the basis for each call.

Synthesize findings

Choose an effect measure that fits the outcome: risk ratio, odds ratio, hazard ratio, mean difference, or standardized mean difference. When pooling, decide on a fixed-effect or random-effects model and justify the choice. Report heterogeneity with I² and the between-study variance (τ²). If meta-analysis does not fit due to sparse or mixed data, present a structured synthesis with grouped tables and reasons the data do not pool.
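
Dedicated tools such as RevMan or the R meta-analysis packages are the usual route, but the core arithmetic is compact. A minimal random-effects sketch with the DerSimonian-Laird estimator on log risk ratios; all effects and variances are invented:

```python
import math

def pool_random_effects(y, v):
    """DerSimonian-Laird pooling. y: log effects; v: their variances."""
    k = len(y)
    w = [1 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return mu, se, tau2, i2

log_rr = [-0.22, -0.10, -0.35, -0.05]  # illustrative study effects
var = [0.04, 0.02, 0.09, 0.03]
mu, se, tau2, i2 = pool_random_effects(log_rr, var)
lo, hi = mu - 1.96 * se, mu + 1.96 * se
print(f"RR {math.exp(mu):.2f} (95% CI {math.exp(lo):.2f} to "
      f"{math.exp(hi):.2f}), tau2={tau2:.3f}, I2={i2:.0f}%")
```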

Rate certainty and draft

After synthesis, rate certainty by outcome and by domain, typically with a framework such as GRADE. Then write the story in plain language. Place the main answer up top, show what you found, and explain what it means for clinicians and patients.

Report with PRISMA

Use the PRISMA 2020 statement to structure the report. Follow the checklist, complete the abstract items, and build a flow diagram from your logs. The checklist guides what to report in methods, results, and discussion so readers can audit the work.
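
The flow numbers should reconcile before they go in the diagram. A bookkeeping sketch with placeholder counts:

```python
# PRISMA flow arithmetic from the screening logs; counts are placeholders.
flow = {
    "records_identified": 2481,
    "duplicates_removed": 612,
    "excluded_title_abstract": 1703,
    "fulltext_excluded": {"wrong design": 61, "wrong population": 44,
                          "no usable outcome": 29},
    "included": 32,
}
screened = flow["records_identified"] - flow["duplicates_removed"]
fulltext_assessed = screened - flow["excluded_title_abstract"]

# The chain must add up, or the logs have a gap.
assert fulltext_assessed - sum(flow["fulltext_excluded"].values()) \
    == flow["included"]
print(f"screened {screened}, full text {fulltext_assessed}, "
      f"included {flow['included']}")
```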

How to conduct a medical literature systematic review: workflow

Here is a clean path from start to finish:

  1. Frame the question and write eligibility rules that match it.
  2. Draft the protocol and register the record in PROSPERO.
  3. Design and peer review search strategies for each source.
  4. Export, de-duplicate, and prepare the screening set.
  5. Screen titles and abstracts in pairs; record decisions.
  6. Retrieve full texts; screen again with a reason for each exclusion.
  7. Extract data in pairs with a piloted form and a reconciliation step.
  8. Judge risk of bias with a tool that fits each study type.
  9. Plan synthesis; run meta-analysis where it fits; otherwise group findings.
  10. Rate certainty and craft a clear, structured report with PRISMA.

Search choices that save time

Pick platforms with strong filters

Some databases have rich filters for study design or subject areas. Use trial filters for randomized work and observational filters for cohort and case-control studies. Avoid language or date limits unless justified by the question. Test any limits against your sentinel studies to confirm none drop out.

Write strings that travel

Build a master string with concept blocks, then translate it to each platform. Keep a translation table for operators and field tags. Track every tweak you make so the strategy is fully reproducible later.
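
The translation table can live in code as easily as in a sheet. An illustrative and deliberately incomplete example; verify every entry against each platform's current syntax guide before running:

```python
# Operator and field-tag translations across platforms (illustrative subset).
TRANSLATION = {
    "truncation":      {"PubMed": "*", "Ovid MEDLINE": "*",
                        "Embase.com": "*"},
    "adjacency_3":     {"Ovid MEDLINE": "adj3", "Embase.com": "NEAR/3",
                        "Web of Science": "NEAR/3"},
    "title_abstract":  {"PubMed": "[tiab]", "Ovid MEDLINE": ".ti,ab.",
                        "Embase.com": ":ti,ab"},
    "subject_heading": {"PubMed": "[MeSH Terms]", "Ovid MEDLINE": "exp .../",
                        "Embase.com": "'...'/exp"},
}
```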

Chase citations forward and back

Once you have a core set of included studies, chase citations. Use reference lists to go backward in time and cited-by tools to go forward. This can surface missed trials and protocols.

Bias and heterogeneity: plan, detect, respond

Think about sources of bias

Common concerns include sequence generation, allocation concealment, blinding, missing data, measurement error, and selective reporting. Non-randomized work adds confounding and selection bias. State up front which domains matter most for your outcomes.

Read patterns, not just p values

When pooling, look beyond a single test. Compare direction, magnitude, and overlap of confidence intervals. Use subgroup analyses or meta-regression only if planned in advance.

Check for small-study effects

Plot effect size against precision (a funnel plot) and add significance contour lines. Where patterns suggest asymmetry, run sensitivity checks. Be clear that such checks are signals, not proof of publication bias.
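
Egger's regression test is one common signal check: regress the standardized effect on precision and look at the intercept. A sketch with invented effects and standard errors:

```python
import numpy as np
from scipy import stats

y = np.array([-0.22, -0.10, -0.35, -0.05, -0.40, 0.02])  # log effects
se = np.array([0.20, 0.14, 0.30, 0.17, 0.33, 0.12])

res = stats.linregress(1 / se, y / se)  # intercept flags asymmetry
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(y) - 2)
print(f"Egger intercept {res.intercept:.2f}, p = {p:.2f}")
# Treat as a signal only; the test has low power with few studies.
```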

Data extraction that holds up

Define variables before you pull

Write a data dictionary with coding rules. Clarify how to handle multi-arm trials, crossover designs, cluster trials, repeated measures, and adjusted versus unadjusted results. Pre-specify which time points you will prefer for each outcome.
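
A fragment of such a dictionary, with example variables and coding rules plus a simple row check; none of the names or codes are a standard:

```python
DICTIONARY = {
    "study_id": {"type": str, "rule": "first author + year, e.g. smith2021"},
    "design": {"type": str, "allowed": {"RCT", "cluster-RCT", "cohort"}},
    "n_randomized": {"type": int, "rule": "as randomized, not as analyzed"},
    "outcome_timepoint": {"type": str,
                          "rule": "prefer 12 weeks; log others separately"},
}

def validate(row):
    """Return a list of problems; an empty list means the row passes."""
    problems = []
    for name, spec in DICTIONARY.items():
        value = row.get(name)
        if not isinstance(value, spec["type"]):
            problems.append(f"{name}: expected {spec['type'].__name__}")
        elif "allowed" in spec and value not in spec["allowed"]:
            problems.append(f"{name}: {value!r} not in allowed set")
    return problems

print(validate({"study_id": "smith2021", "design": "case series",
                "n_randomized": 120, "outcome_timepoint": "12 weeks"}))
```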

Handle units and scales

Convert units to a single scale and record any transformations. For continuous outcomes, note the exact measure and scale direction so higher always means the same thing across studies. For proportions near 0 or 1, pick a method that handles extreme values.
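
Two helpers of the kind this step produces. The glucose factor (1 mmol/L ≈ 18.0 mg/dL) is standard, but confirm the factor for each analyte you convert:

```python
def glucose_mmol_to_mgdl(x):
    """Glucose: mg/dL = mmol/L x 18.0 (molar mass ~180 g/mol)."""
    return x * 18.0

def align_direction(mean_difference, higher_is_better):
    """Flip sign so 'higher is better' holds across all studies."""
    return mean_difference if higher_is_better else -mean_difference

print(glucose_mmol_to_mgdl(7.0))                      # 126.0
print(align_direction(-2.5, higher_is_better=False))  # 2.5
```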

Reduce transcription errors

Use copy-paste with checksums where possible. Double entry on a sample can reveal early errors. Keep raw files in a versioned folder so every change is traceable.
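
A checksum manifest takes a few lines of standard-library Python; the file path is illustrative:

```python
import hashlib, json, pathlib

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

# Build once after the export, commit the manifest, re-check before analysis.
files = ["exports/medline_2024-05-01.ris"]  # illustrative path
manifest = {f: sha256(f) for f in files}
pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```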

Write for readers first

Open methods and data

Post your protocol, strategies, and code in a public repository. Share extracted datasets with labeled variables. A clear path from question to code builds trust and makes updates simple.

Structure the story

Lead with the answer to the main question, clearly. Then show the evidence map, the included studies, the risk-of-bias profile, and the synthesis. Use plain terms, short sentences, and clear tables. Avoid jargon where a simple word will do.

Follow reporting standards

The Cochrane Handbook gives detailed methods for searches, selection, bias assessment, synthesis, and updates. Pair it with the PRISMA 2020 checklist so your write-up meets field expectations.

Common pitfalls and fixes

Pitfall | Symptom | Fix
Vague question | Bloated search and noisy screening | Narrow PICO and tighten outcomes
Missing sources | Few trials even with broad topic | Add CENTRAL, trial registers, and citation chasing
Loose screening | Low agreement within the team | Refine rules and run a second calibration
Poor data capture | Gaps or mismatched units | Write a dictionary and convert to one scale
Unclear bias calls | Readers can’t see why you rated a domain | Store quotes and page numbers for each item
Over-eager pooling | Mixed designs, heavy heterogeneity | Switch to structured synthesis with grouped tables
Thin reporting | Reviewers ask for methods details | Fill PRISMA items and share logs, code, and data

When meta-analysis fits

Pool only when studies ask the same question with comparable designs and measures. Match follow-up windows and adjust for cluster or crossover designs when needed. If a meta-analysis goes forward, report the model, estimator, and software. Add a sensitivity run that removes high-risk studies or outliers and show how the result moves.
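
A leave-one-out run is short to script. A sketch with simple inverse-variance pooling and invented effects; a real run would use the same model as the main analysis:

```python
import math

def pool_fixed(y, v):
    """Inverse-variance (fixed-effect) pooling of log effects."""
    w = [1 / vi for vi in v]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

log_rr = [-0.22, -0.10, -0.35, -0.05]
var = [0.04, 0.02, 0.09, 0.03]

full = pool_fixed(log_rr, var)
for i in range(len(log_rr)):
    mu = pool_fixed(log_rr[:i] + log_rr[i + 1:], var[:i] + var[i + 1:])
    print(f"drop study {i + 1}: RR {math.exp(mu):.2f} "
          f"(full set {math.exp(full):.2f})")
```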

Choose effect types well

For binary outcomes, risk ratio or odds ratio both work; pick one and stick with it. For time-to-event data, use hazard ratios. For continuous outcomes on the same scale, use mean difference; across scales, use standardized mean difference. Report units and scales so readers can judge clinical sense.
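
For continuous outcomes on different scales, the standardized mean difference can be computed from summary statistics. A sketch of Hedges' g with illustrative numbers:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample correction."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)  # Hedges' correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

g, var_g = hedges_g(m1=24.1, sd1=6.0, n1=40, m2=27.3, sd2=6.4, n2=42)
print(f"g = {g:.2f}, SE = {math.sqrt(var_g):.2f}")
```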

Plan updates and living work

State how often you plan to rerun the search and what would trigger an early update, such as a large new trial or a new drug class. Version your datasets and code so a new run is simple. A living format can help when the field moves fast and the team can sustain monitoring.

Share beyond the paper

Publish a preprint to invite early feedback. Post plain-language summaries and slide decks for clinicians and patients. Share data on an open portal so others can reuse and extend your work.

Team roles and skills that help

Build a balanced team

Blend content knowledge with methods strength. A clinician frames the question and judges clinical sense. A methodologist shapes rules, bias tools, and synthesis steps. An information specialist builds and audits searches. A statistician picks models and checks assumptions. One person acts as guarantor for accuracy.

Pick tools that fit your workflow

Use a reference manager for storage, a screening app for dual review, a spreadsheet or REDCap form for data, and an analysis stack such as RevMan or an R setup. Keep a short file that lists tool versions so others can rerun your work.
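
The versions file can be generated rather than typed. A minimal sketch using the standard library; the package names are examples:

```python
from importlib.metadata import PackageNotFoundError, version

packages = ["numpy", "scipy", "pandas"]
with open("versions.txt", "w") as f:
    for pkg in packages:
        try:
            f.write(f"{pkg}=={version(pkg)}\n")
        except PackageNotFoundError:
            f.write(f"{pkg}: not installed\n")
```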

Run training and calibration

Start with a small batch of papers and practice your rules. Hold a short huddle to agree on border cases. Repeat until agreement stabilizes. The same pattern works for bias tools and extraction. Ten extra minutes here saves hours later.

Ethics, disclosures, and data sharing

State funding sources, describe any ties with product makers, and point to a statement on conflicts. Share the protocol and any changes. Make a public repository with code, the raw extraction file, a cleaned dataset, and a readme that explains the file layout. If you use patient-level data, describe the permission route and update timeline.