How To Do Systematic Review Research In Medicine

Start with a PICO question, register a protocol, search widely, screen in pairs, extract and appraise, synthesize, and report with PRISMA 2020.

Doing A Systematic Review In Medicine: Step-By-Step

Define The Question (PICO Or PEO)

Pick one well-framed question. Use PICO for interventions: Population, Intervention, Comparison, Outcome. For prognosis or etiology questions, PEO works: Population, Exposure, Outcome. Write down the exact scope, the outcomes that matter, and the timing. State primary and secondary outcomes. List all subgroups you plan to check. Keep the wording stable across your files.

Draft And Register The Protocol

Write a protocol before you touch the search engines. Set objectives, eligibility rules, outcomes, time points, and planned methods for screening, extraction, bias appraisal, and synthesis. Add details on who will do each task and how you will handle disagreements. Then register the protocol on a public registry (for health reviews, PROSPERO is the standard choice) so editors and readers can see your plan and track changes.

Build A Sensitive Search Strategy

Work with an information specialist and the Cochrane Handbook. Translate the question into subject headings and text words. Combine synonyms with OR, join concepts with AND, and add filters only when they are safe for the topic. Avoid quick fixes that cut recall. Plan to search at least two major biomedical databases plus one trials source. Save strategies and dates for your report.
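
To make the boolean logic concrete, here is a minimal Python sketch that assembles a PubMed-style query from concept groups. The terms, field tags, and concept names are illustrative placeholders, not a validated strategy:

    # Sketch: assemble a PubMed-style boolean query from concept groups.
    # All terms below are illustrative examples, not a validated strategy.
    concepts = {
        "population": ['"diabetes mellitus, type 2"[MeSH Terms]', 'diabet*[Title/Abstract]'],
        "intervention": ['metformin[MeSH Terms]', 'metformin[Title/Abstract]'],
    }

    # Synonyms within a concept are joined with OR; concepts are joined with AND.
    blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    print(" AND ".join(blocks))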

Core Sources And What They Add

Source | What You Find | Tips
MEDLINE or PubMed | Core clinical trials and observational studies | Use both MeSH and free text. Test for missed terms.
Embase | Drug and device coverage and European journals | Map to Emtree and watch for duplicates.
CENTRAL or trial registries | Randomized trials and records not yet published | Search by condition and intervention names.

Set Clear Eligibility Criteria

Define inclusion and exclusion rules up front. Use study design, participants, setting, interventions or exposures, comparators, outcomes, time frame, and language. Be careful with language limits if you want global reach. Keep a short list of reasons for exclusion that match your rules. Pilot the criteria on twenty records and refine the wording once, then lock it.

Screen Studies Reproducibly

Import all records into a manager that has deduplication. Two people screen titles and abstracts in parallel. Resolve conflicts by consensus or a third reviewer. Move to full text screening with the same process. Keep a log of numbers at each step so you can build a PRISMA style flow diagram. Tag records for each reason they do not pass.

Extract Data With Discipline

Create a structured form in a spreadsheet or a dedicated tool. Pilot the form on five papers. Capture study identifiers, design, setting, eligibility, arms, sample size, baseline traits, interventions, doses, follow-up, and all outcomes with units and time points. Record effect estimates and standard errors where given. Note funding, conflicts of interest, and protocol deviations. Extract in duplicate with calibration until agreement is steady.
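
As one way to keep fields uniform, the sketch below defines the extraction record as typed Python structures. The field names simply mirror the list above and will need tailoring to your review:

    # Sketch of a structured extraction record; field names mirror the list above.
    from dataclasses import dataclass, field

    @dataclass
    class OutcomeRecord:
        name: str
        time_point: str
        unit: str
        effect_estimate: float | None = None   # as reported, on the study's own scale
        standard_error: float | None = None    # leave as None when not reported

    @dataclass
    class StudyRecord:
        study_id: str                          # e.g. first author and year
        design: str
        setting: str
        sample_size: int | None = None
        arms: list[str] = field(default_factory=list)
        funding: str = ""                      # also note conflicts and protocol deviations
        outcomes: list[OutcomeRecord] = field(default_factory=list)

    record = StudyRecord(study_id="Smith 2021", design="RCT", setting="outpatient")
    record.outcomes.append(OutcomeRecord("HbA1c", "12 weeks", "%", -0.4, 0.1))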

Assess Risk Of Bias

Match the tool to the design. Use RoB 2 for randomized trials, ROBINS-I for non-randomized studies of interventions, and QUADAS-2 for diagnostic accuracy studies. Rate each domain with notes that cite the text. Avoid vague judgments. Make a decision rule for how these judgments will feed into the synthesis and into any summary of certainty.

Medical Systematic Review Methods: From Protocol To Publication

Plan The Synthesis

Decide early whether a meta-analysis is feasible. If studies are comparable in PICO, effect measure, and time point, pooling may make sense. Pick a measure that matches the outcome: risk ratio, odds ratio, mean difference, or standardized mean difference. Use a random-effects model when you expect real between-study variation. A fixed-effect model fits only the narrow set of settings where a single true effect is a fair assumption.
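
For instance, a risk ratio and its confidence interval come straight from the 2x2 counts. The sketch below applies the standard large-sample formula to made-up numbers:

    # Sketch: risk ratio and its log-scale standard error from 2x2 counts.
    import math

    def log_risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
        """Return (log RR, SE of log RR) using the standard large-sample formula."""
        rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
        se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
        return math.log(rr), se

    log_rr, se = log_risk_ratio(12, 100, 20, 100)       # illustrative counts
    low, high = math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se)
    print(f"RR = {math.exp(log_rr):.2f}, 95% CI {low:.2f} to {high:.2f}")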

Effect Models At A Glance

Model | When It Fits | Notes
Random effects | Studies differ in methods or populations | Produces wider intervals and downweights small studies less.
Fixed effect | One common effect is a fair working view | Narrower intervals, but a poor fit when heterogeneity is real.

Handle Heterogeneity

Inspect forest plots by eye. Calculate I² and τ² to gauge spread. Pre-specify subgroup checks and leave post hoc checks to sensitivity work. Do not slice the data too thin. If heterogeneity stays high and the patterns do not make sense, drop pooling and write a narrative synthesis instead.
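
For readers who want to see where these statistics come from, the sketch below runs the standard DerSimonian-Laird calculation, deriving Q, τ², and I² from illustrative per-study effects and standard errors:

    # Sketch: DerSimonian-Laird random-effects pooling with Q, tau^2, and I^2.
    import math

    effects = [-0.80, -0.10, -0.45, 0.20]        # illustrative log risk ratios
    ses     = [0.25, 0.18, 0.28, 0.22]           # matching standard errors

    w = [1 / s**2 for s in ses]                  # inverse-variance (fixed-effect) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    i2 = max(0.0, (q - df) / q) * 100            # percent of spread beyond chance

    w_star = [1 / (s**2 + tau2) for s in ses]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    print(f"Q={q:.2f}, tau^2={tau2:.3f}, I^2={i2:.0f}%, pooled={pooled:.3f} (SE {se_pooled:.3f})")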

Assess Certainty Of Evidence

Summarize how much trust you place in pooled or unpooled results; the GRADE approach is the standard framework here. Note study limitations, inconsistency, indirectness, imprecision, and small-study effects. Use a clear table that states the starting design level and any downgrade or upgrade reasons. Present the plain-language takeaway for each main outcome.

Deal With Missing Or Messy Data

Contact authors for needed numbers such as standard deviations, event counts, or exact time-to-event results. When you impute, state the method and run sensitivity checks. Align units across studies and convert where needed. Avoid double counting by picking one effect per study per analysis.
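
Two standard back-calculations, both described in the Cochrane Handbook, cover the most common gap of a missing standard deviation. The sketch below shows them with made-up inputs:

    # Sketch: two standard back-calculations for a missing SD.
    import math

    def sd_from_se(se, n):
        """SD of individual values from the standard error of the mean."""
        return se * math.sqrt(n)

    def sd_from_ci(lower, upper, n):
        """SD from a 95% CI of a mean; 3.92 = 2 * 1.96."""
        return math.sqrt(n) * (upper - lower) / 3.92

    print(sd_from_se(0.8, 50))        # SE 0.8 with n=50  -> SD about 5.66
    print(sd_from_ci(2.5, 6.5, 40))   # CI 2.5-6.5, n=40  -> SD about 6.45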

Watch For Small Study And Reporting Bias

Search trial registries and conference abstracts. Compare published outcomes against registered outcomes. Use funnel plots when you have ten or more studies. Apply statistical asymmetry tests, such as Egger's regression, with care. Describe any signals and what they mean for your confidence in the body of evidence.
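
As a sketch of the idea behind Egger's regression, the snippet below regresses each study's standardized effect on its precision; an intercept well away from zero signals asymmetry. The numbers are made up, and formal inference needs the intercept's standard error, which a proper stats package provides:

    # Sketch: Egger-style asymmetry check (requires Python 3.10+ for linear_regression).
    from statistics import linear_regression

    effects = [-0.51, -0.22, -0.35, 0.05, -0.60, -0.10, -0.40, -0.28, -0.15, -0.45]
    ses     = [0.34, 0.18, 0.25, 0.30, 0.40, 0.15, 0.28, 0.20, 0.16, 0.33]

    precision = [1 / s for s in ses]
    standardized = [y / s for y, s in zip(effects, ses)]

    slope, intercept = linear_regression(precision, standardized)
    print(f"Egger intercept: {intercept:.2f}")   # far from zero -> possible small-study effects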

Write For PRISMA 2020

Report what you set out to do, what you found, and how you did the work. Include a structured abstract, the PRISMA 2020 checklist with searches and dates, a flow diagram, study tables, risk of bias figures, and results that match the plan. Disclose funding and any role in the review.

Team Roles And Calibration

Name a lead for each task: search, screening, extraction, synthesis, and writing. Run short calibration rounds until the team agrees on rules and coding. Keep a decisions log so you can defend choices during peer review. Use templates for emails, data requests, and forms to keep the pace steady.

Grey Literature And Unpublished Data

Look beyond journals. Search trial registries, preprint servers, dissertations, and regulatory sources. Hand search priority journals and scan reference lists of included studies and related reviews. Email authors for companion papers and missing outcomes. Record these steps so readers can repeat them.

Data Management And Reproducibility

Store raw search outputs, deduplicated sets, screening logs, extraction sheets, and scripts in a shared folder with version control. Use a consistent file-naming plan. Share analytic code when you can. Post the protocol, forms, and data dictionary with the article or in a repository.
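
Purely as an illustration, a folder plan might look like the following; the names are arbitrary, and what matters is that the team uses one scheme consistently:

    review-name/
      01_searches/     raw exports per database, dated strategy files
      02_screening/    deduplicated set, title-abstract and full-text decisions
      03_extraction/   piloted form, completed sheets, data dictionary
      04_analysis/     scripts and forest/funnel plot outputs
      05_manuscript/   drafts, PRISMA checklist, flow diagram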

Common Pitfalls And How To Avoid Them

Scope creep: protect the protocol and file an amendment when true changes are needed.
Single screener: pair up to cut missed studies.
Overuse of filters: test recall before you commit.
Vague bias ratings: cite the lines that back your call.
Mixing designs in one meta-analysis: separate them, or use a model that matches the mix and test the impact.
Thin reporting: follow the checklist and include appendices.

When A Meta Analysis Is Not Possible

Pool only when studies line up. If designs, outcomes, or time points do not align, write a structured narrative. Group by intervention class, dose, route, comparator, and outcome. Keep the narrative tight and stick to data, not opinion. Use tables to bring structure and clarity.

Time And Resources

Make a timeline with milestones: protocol, search complete, screening, extraction, analysis, draft, and submission. Block time for calibration and for slow replies from authors. Assign backups for each role so illness or leave does not stall the work.

Ethics And Registration

Most reviews do not need ethics board approval, since you use public reports, but do follow journal rules. Register the protocol and keep the record updated with any amendments and the final citation. If your team uses patient-level data for an individual participant data (IPD) review, follow the rules for data access and consent.

Software And Tools

Use a reference manager, a screening tool with dual-review features, a data extraction platform, and a stats package that can run meta-analyses and produce forest and funnel plots. Pick tools your team already knows, or budget time for training. Document versions so readers can match your outputs.

Reporting Numbers That Matter

State the start and end dates for every search. Give the full strategy for each database in an appendix. Report the number of records retrieved, deduplicated, screened, excluded at title and abstract, excluded at full text with reasons, and included in each synthesis. Show these totals in the flow diagram.
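
A quick arithmetic check, with made-up counts, shows how these numbers should reconcile before they go into the diagram:

    # Sketch: reconcile PRISMA flow counts (all numbers are made up).
    retrieved = 3412                      # records from all database searches
    duplicates = 987
    screened = retrieved - duplicates     # titles and abstracts screened: 2425
    excluded_title_abstract = 2290
    full_text = screened - excluded_title_abstract     # 135 full texts assessed
    excluded_full_text = 112              # report these with reasons
    included = full_text - excluded_full_text          # 23 studies included

    assert screened == 2425 and full_text == 135 and included == 23
    print(f"screened={screened}, full text={full_text}, included={included}")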

Presenting Results That Clinicians Can Use

Translate pooled effects into absolute terms when possible. Use baseline risks that match the target setting and show both relative and absolute effects. Add short notes on harms and on patient groups where data are thin. Keep figures readable with consistent scales and labels.
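
The conversion is simple arithmetic: with a pooled risk ratio, the absolute risk reduction is the baseline risk times one minus the risk ratio, and the number needed to treat is its reciprocal. A sketch with illustrative numbers:

    # Sketch: translate a pooled risk ratio into absolute terms (illustrative numbers).
    def absolute_effects(baseline_risk, risk_ratio):
        """Absolute risk reduction and number needed to treat at a given baseline risk."""
        arr = baseline_risk * (1 - risk_ratio)
        nnt = 1 / arr
        return arr, nnt

    for baseline in (0.05, 0.15, 0.30):      # low-, medium-, and high-risk settings
        arr, nnt = absolute_effects(baseline, risk_ratio=0.80)
        print(f"baseline {baseline:.0%}: ARR {arr:.1%}, NNT {nnt:.0f}")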

From Submission To Publication

Pick a journal that matches your scope and methods. Follow house style for PRISMA and for data sharing. Expect questions on search dates, protocol registration, bias judgments, and synthesis choices. Answer with quotes from the protocol and with links to your materials. Plan a brief visual abstract and a plain-language summary to widen reach.

Keeping Reviews Fresh

Plan an update cycle for topics where new trials appear often. Set alerts in major databases and trial registries. Record the trigger rules you will use for an update, such as a new trial that changes the pooled effect or safety profile. For fast moving topics, a living review model with rolling searches may fit.

Closing Notes

Systematic review work rewards teams that plan, write things down, and stick to clear methods. If you move in this order—question, protocol, search, screen, extract, appraise, synthesize, and report—your review will be usable, transparent, and ready for peer review.

Practical Search Tactics That Save Time

Combine controlled vocabulary with free text for every concept. Explode subject headings and add field tags for title and abstract where needed. Add spelling variants, plurals, and hyphenation. Build a proximity line to catch phrases that appear near each other. Translate the main strategy from one database to another instead of starting from scratch each time. Run a pilot search and screen a random sample to see if known studies appear. If they do not, add missing synonyms and drop risky filters.

Record Management And Deduplication

Export full records with abstracts and identifiers. Standardize fields, then deduplicate with a transparent rule set. Match on distinct identifiers such as DOI and trial registration first, then on title, authors, year, and journal. Keep the master file intact and mark duplicates instead of deleting them, so you can trace steps later. When you rerun searches before submission, repeat the same rules and store a copy of both the old and new sets.
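
A minimal sketch of such a rule set in Python, with hypothetical field names, might look like the following; note that duplicates are flagged, never deleted:

    # Sketch: flag duplicates without deleting them. Match on DOI first,
    # then on a normalized title/year key. Field names are hypothetical.
    import re

    def keys(record):
        doi = (record.get("doi") or "").lower().strip()
        title = re.sub(r"[^a-z0-9 ]", "", (record.get("title") or "").lower())
        return doi, f"{title}|{record.get('year', '')}"

    def flag_duplicates(records):
        seen = set()
        for rec in records:
            doi, fallback = keys(rec)
            candidates = {k for k in (doi, fallback) if k}
            rec["duplicate"] = bool(candidates & seen)   # mark, never delete
            seen |= candidates
        return records

    demo = [
        {"doi": "10.1000/xyz1", "title": "Trial of A vs B", "year": 2021},
        {"doi": "", "title": "Trial of A vs. B!", "year": 2021},
        {"doi": "10.1000/XYZ1", "title": "Trial of A versus B", "year": 2021},
    ]
    for rec in flag_duplicates(demo):
        print(rec["duplicate"], rec["title"])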

Meta Analysis Details That Matter

When events are rare, pick a method that handles zeros without bias. A risk difference or the Peto odds ratio can help in some settings; just report the choice and test sensitivity against another method. Use the Hartung-Knapp adjustment for random effects when the number of studies is small. Convert medians to means with care and state any formulas used. If you include cluster-randomized trials, adjust for design effects. Always line up time points before pooling and avoid mixing early and late outcomes.
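
For the cluster-trial point, the usual approach divides each arm's sample size by the design effect, 1 + (m - 1) x ICC, where m is the average cluster size. A one-line sketch with made-up numbers:

    # Sketch: shrink a cluster trial's sample size by the design effect.
    def effective_sample_size(n, avg_cluster_size, icc):
        """n divided by the design effect 1 + (m - 1) * ICC."""
        deff = 1 + (avg_cluster_size - 1) * icc
        return n / deff

    # e.g. 400 participants in clusters of 20 with ICC 0.05 -> DEFF 1.95
    print(round(effective_sample_size(400, 20, 0.05)))   # about 205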

Sensitivity And Subgroup Work

Run planned sensitivity checks that test the stability of the main result: remove studies at high risk of bias, switch effect models, correct unit errors, and drop studies with imputed data. Subgroups should match the choices laid out in the protocol, such as dose, age band, disease stage, or setting. Use interaction tests rather than separate meta-analyses when you want to know whether effects differ.
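
A leave-one-out loop is the simplest stability check. The sketch below uses fixed-effect inverse-variance pooling for brevity (swap in the random-effects version in practice), with illustrative numbers:

    # Sketch: leave-one-out sensitivity check with inverse-variance pooling.
    import math

    effects = [-0.80, -0.10, -0.45, 0.20]     # illustrative log risk ratios
    ses     = [0.25, 0.18, 0.28, 0.22]

    def pool(ys, ss):
        w = [1 / s**2 for s in ss]
        est = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
        return est, math.sqrt(1 / sum(w))

    full, _ = pool(effects, ses)
    for i in range(len(effects)):
        est, se = pool(effects[:i] + effects[i + 1:], ses[:i] + ses[i + 1:])
        print(f"drop study {i + 1}: pooled {est:.3f} (full set {full:.3f})")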

Writing Tables And Figures That Earn Trust

Place core tables early: study characteristics, risk of bias by domain, and a summary of findings. Use uniform terms and units across rows. In figures, stick with clear labels and the same order across panels so readers do not hunt for the same study twice. Add footnotes that explain any judgments, imputations, and conversions. Keep captions short and specific. Share editable tables in a public online supplement.