Plan a protocol, search multiple databases, screen in pairs, extract consistently, rate bias, synthesize, GRADE certainty, and report with PRISMA.
You want clear steps, fewer detours, and a result that stands up in peer review. This guide lays out a clinic-friendly method for running a systematic literature review in medicine from scoping to submission. It keeps the language plain, the steps practical, and the deliverables traceable. Use it to plan your work, brief collaborators, and defend choices when editors or supervisors ask tough questions.
What A Systematic Review Does In Clinical Work
A systematic review answers a focused question by finding, appraising, and synthesizing studies with methods set in advance. It differs from a traditional overview because every step is explicit, repeatable, and audited. When done well, it reduces bias, maps uncertainty, and shows where new trials would actually help.
Doing A Systematic Literature Review In Medicine: The Core Steps
Across specialties the workflow is consistent. You set a tight question, register a protocol, build and run searches, select studies in duplicate, extract data with a tested form, judge risk of bias, choose a synthesis plan, grade certainty, then write and share everything with a clear checklist. The table below distills each stage, the concrete outputs, and common tools.
| Stage | What You Do | Outputs & Tools |
|---|---|---|
| Scope & Question | Refine the clinical question using PICO or a variant; set outcomes that matter to patients and decision makers. | One sentence question; primary and secondary outcomes; inclusion and exclusion rules. |
| Protocol | Draft methods before searching; plan screening, extraction, bias tools, and synthesis rules. | Registered protocol number; versioned document in a repository. |
| Searching | Design database strings with subject headings and keywords; include trial registries and grey sources. | Reproducible strings; date limits; search log; export files; de-duplication report. |
| Screening | Two reviewers screen titles/abstracts then full texts against the rules; resolve disagreements by discussion or a third reviewer. | Screening log; reasons for exclusion; PRISMA flow counts. |
| Extraction | Pilot a form; extract outcomes, time points, effect measures, and study features; capture risk of bias data. | Locked extraction template; complete dataset; clarifications from authors if needed. |
| Bias Assessment | Apply a tool matched to study design; rate domains, not just an overall hunch. | Domain-level judgments with support notes; consensus file. |
| Synthesis | Decide on meta-analysis vs structured summary; pick effect models; plan subgroups and sensitivity checks. | Analysis code; forest plots; heterogeneity metrics; small-study checks. |
| Certainty | Use GRADE to judge the body of evidence per outcome. | Summary of findings table with certainty ratings and plain-language statements. |
| Report | Write to a reporting checklist; archive searches, code, and forms. | Complete manuscript; checklists; flow diagram; public repository links. |
Define A Tight Question And Clear Rules
Write the review question as one polished sentence, then break it into the population, intervention or exposure, comparator, and outcomes. For example: in adults with type 2 diabetes (population), does adding drug X to standard care (intervention), compared with standard care alone (comparator), reduce HbA1c at six months (outcome)? Decide the study designs you will include. State time frames, settings, and language limits up front and keep them short unless there is a sound reason to expand. Tie outcomes to patient-relevant measures and prespecify the main time point. For detailed method pointers while drafting eligibility, the Cochrane Handbook is open and thorough.
Register A Protocol Before You Search
Protocol registration prevents scope drift and improves transparency. Include the full search plan, the exact eligibility rules, screening steps, extraction form, all risk of bias tools, and your synthesis choices. Set decision rules for overlapping populations, multi-arm trials, cluster designs, and repeated measures. Name the software you will use for screening and meta-analysis. Version the file and keep a change log. For health topics, register on PROSPERO so others can see your plan and avoid duplication.
Plan Searches You Can Reproduce
Work with a medical librarian if you can. Build strings with both controlled vocabulary and free text. Test synonyms, spelling variants, and drug names. Search at least two major databases that index your field, plus a trial registry and one broad index. Record every date, database, platform, and the full syntax. Export all records and report de-duplication steps so another team can match your counts.
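If your team scripts this step, the de-duplication report can come straight from code. Here is a minimal sketch; it assumes the database exports were merged into one CSV, and the column names (doi, title, year, source) are placeholders for whatever your exports actually contain.

```python
# Minimal de-duplication sketch. Assumes merged exports in one CSV with
# hypothetical columns: doi, title, year, source.
import pandas as pd

records = pd.read_csv("all_exports.csv")
before = len(records)

# Normalize keys so trivial differences do not hide duplicates.
records["doi_key"] = records["doi"].str.lower().str.strip()
records["title_key"] = (
    records["title"].str.lower().str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip()
)

# Prefer DOI matches; fall back to normalized title plus year when DOI is missing.
with_doi = records[records["doi_key"].notna()].drop_duplicates("doi_key")
no_doi = records[records["doi_key"].isna()].drop_duplicates(["title_key", "year"])
deduped = pd.concat([with_doi, no_doi])

# Log the run so another team can match your counts.
print(f"records in: {before}, after de-duplication: {len(deduped)}")
deduped.drop(columns=["doi_key", "title_key"]).to_csv("deduped.csv", index=False)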
Databases And Sources To Cover
Most clinical topics call for MEDLINE via PubMed or another platform, Embase if available, and CENTRAL for randomized trials. Add CINAHL for nursing and allied health, PsycINFO for mental health, and Web of Science or Scopus for citation chasing. Check ClinicalTrials.gov and the WHO ICTRP for trial records and terminated studies. Scan preprints with care and label them clearly in your notes.
Search Strings That Pull Their Weight
Combine subject headings with text words linked by Boolean operators. Use proximity operators when the platform supports them. Truncate carefully to avoid noise. Keep the core strategy in one database, then translate it field by field to the others. Pilot the string on known key papers and tune until those appear near the top.
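As a hedged illustration only, a PubMed strategy for a hypothetical question about metformin in type 2 diabetes might look like the block below; the terms are examples, not a validated filter.

```
("Diabetes Mellitus, Type 2"[Mesh] OR "type 2 diabetes"[tiab] OR T2DM[tiab])
AND ("Metformin"[Mesh] OR metformin[tiab])
AND (randomized controlled trial[pt] OR randomi*[tiab])
```

When you translate to another platform, the field tags change (for example, subject headings are searched with /exp on Embase.com), so port the strategy line by line rather than pasting it wholesale.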
Select Studies With Calibrated Screening
Start with a calibration round on fifty to one hundred records so reviewers apply the rules the same way. During title and abstract screening, err on the side of inclusion; the rules bite at full text. Log reasons for exclusion using a short, consistent list. When in doubt, retrieve the full text and decide in pairs.
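If you want a number to back the calibration round, an agreement statistic helps. Here is a minimal sketch using Cohen's kappa; the decisions and the 0.6 cut-off are illustrative conventions, not fixed rules.

```python
# Quantify inter-reviewer agreement after a calibration round.
# Decisions are coded 1 = include, 0 = exclude; data here are made up.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_b = [1, 0, 1, 1, 1, 0, 0, 0, 0, 0]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")

# A common informal convention: recalibrate if agreement is low.
if kappa < 0.6:
    print("Agreement is low: revisit the eligibility rules before full screening.")
```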
Extract Data That Stays Consistent
Pilot the extraction form on three to five studies and fix any confusing fields. Capture sample sizes, baseline risk, outcome definitions, time points, and the effect measure used. For multi-arm trials, record which arms contribute to each comparison. Note unit of analysis issues and how you will handle them. Store all forms in a single folder with version control.
Minimal Dataset To Capture
- Study ID, country, setting, design, recruitment window.
- Eligibility features (e.g., disease definition, severity, comorbidities).
- Intervention details (dose, schedule, route, co-interventions).
- Comparator details and co-interventions.
- Outcome definitions, time points, measurement instruments.
- Numbers analyzed, missing data handling, analysis population.
- Effect estimates with precision (CI, SD, SE) and any adjustments.
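If you build the extraction form in code rather than a spreadsheet, a locked template can be as simple as a dictionary of agreed field names. The sketch below mirrors the list above; every name is illustrative and should match whatever your protocol defines.

```python
# Illustrative extraction template: one record per study and comparison.
# Locking the keys up front keeps the dataset consistent across extractors.
EXTRACTION_TEMPLATE = {
    "study_id": "",           # e.g., Smith2021
    "country": "",
    "setting": "",
    "design": "",             # RCT, cohort, case-control, ...
    "recruitment_window": "",
    "eligibility": "",        # disease definition, severity, comorbidities
    "intervention": "",       # dose, schedule, route, co-interventions
    "comparator": "",
    "outcome_definition": "",
    "time_point": "",
    "instrument": "",
    "n_analyzed": None,
    "missing_data_handling": "",
    "effect_estimate": None,  # point estimate
    "ci_lower": None,
    "ci_upper": None,
    "adjustments": "",        # covariates adjusted for, if any
}
```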
Judge Risk Of Bias By Domain
Pick tools that fit your designs. For randomized trials, use a domain-based tool that covers randomization, deviations from intended interventions, missing data, outcome measurement, and selection of reported results. For non-randomized studies, use a tool that handles confounding and selection at baseline. For diagnostic accuracy, use a tool that covers patient selection, test conduct, and reference standards. Make notes that back each judgment and resolve differences in pairs.
Choose A Synthesis Plan That Matches The Data
If the data align, run a meta-analysis. State your effect measure in advance: risk ratio or odds ratio for binary outcomes; mean difference or standardized mean difference for continuous outcomes; hazard ratio for time-to-event outcomes. Pick a random-effects model when clinical and method differences suggest more than one true effect, and report the model you used. Quantify heterogeneity with I-squared and tau-squared and show forest plots with study weights. Run preplanned subgroup and sensitivity checks, and probe for small-study effects with funnel plots when you have enough studies, commonly ten or more.
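If your team analyzes in Python, the pooling step itself is compact. The sketch below implements the common DerSimonian-Laird random-effects estimator on made-up log risk ratios; a real analysis would come from your prespecified package and model, this only shows the moving parts.

```python
# Random-effects meta-analysis (DerSimonian-Laird), self-contained sketch.
# Inputs: per-study effect estimates on the log scale and their SEs.
import numpy as np

log_rr = np.array([-0.35, -0.10, -0.48, 0.05, -0.22])  # illustrative values
se = np.array([0.15, 0.20, 0.25, 0.18, 0.12])

v = se**2
w_fixed = 1 / v                                    # inverse-variance weights
pooled_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird between-study variance (tau^2).
q = np.sum(w_fixed * (log_rr - pooled_fixed) ** 2)
df = len(log_rr) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# I^2: share of total variability attributable to heterogeneity.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects pooled estimate and 95% CI, back-transformed to a risk ratio.
w_re = 1 / (v + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = np.exp([pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re])

print(f"RR = {np.exp(pooled_re):.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), "
      f"tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```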
When Meta-Analysis Is Not Sensible
Sometimes outcomes, measures, or follow-up windows do not line up. In that case, group studies by design, outcome, and time point, then write a structured synthesis. Keep the same outcome order across sections so readers can scan. Use tables to display direction and size of effects with short notes on study features that might explain differences.
Rate The Certainty Of Evidence With GRADE
Judge certainty per outcome across the whole body of evidence. Start at high for randomized trials and at low for observational studies, then lower the rating when you see serious issues with bias, inconsistency, indirectness, imprecision, or publication bias. For observational evidence, you can raise ratings for large effects, a dose-response gradient, or when all plausible confounding would reduce the observed effect. Write one concise statement per outcome that links the effect size to the certainty level.
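GRADE is a judgment process, not arithmetic, but the bookkeeping behind the levels can be sketched. The function below only tracks points; the levels and the start-high/start-low convention follow GRADE, while the coding itself is our illustration.

```python
# Bookkeeping sketch for GRADE certainty per outcome.
# The judgments themselves stay qualitative; this only tracks the levels.
LEVELS = {4: "high", 3: "moderate", 2: "low", 1: "very low"}

def grade_certainty(randomized: bool, downgrades: int, upgrades: int = 0) -> str:
    """Start at high (4) for randomized trials, low (2) for observational;
    subtract one point per serious concern (two if very serious), and
    add points for upgrade criteria such as a large effect."""
    start = 4 if randomized else 2
    score = min(4, max(1, start - downgrades + upgrades))
    return LEVELS[score]

# Example: randomized evidence with serious imprecision and inconsistency.
print(grade_certainty(randomized=True, downgrades=2))  # -> "low"
```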
Report With A Checklist And Archive Everything
Write the manuscript to a reporting checklist so reviewers can find each element quickly. Label figures and tables with self-explanatory titles. Include the full search strategies in an appendix, the flow diagram with counts, and any code used for analysis. Share de-identified extraction forms and bias tables in a public repository if your journal allows it. For reporting, keep the PRISMA 2020 checklist handy and follow it line by line.
Steps For A Systematic Literature Review In Medical Research
Here is a compact plan you can paste into a project brief. 1) Question and rules; 2) Protocol with registry; 3) Reproducible search; 4) Dual screening; 5) Pilot and run extraction; 6) Risk of bias by domain; 7) Synthesis fit for the data; 8) Certainty ratings; 9) Reporting and sharing. Use short weekly check-ins to keep decisions documented and files tidy. The table below matches common study designs to widely used domain-based bias tools.
| Design | Tool | What It Covers |
|---|---|---|
| Randomized trials | RoB 2 | Randomization, deviations from intended interventions, missing outcome data, outcome measurement, selection of reported results. |
| Non-randomized interventions | ROBINS-I | Bias due to confounding, selection, classification of interventions, deviations, missing data, measurement, selection of reported results. |
| Diagnostic accuracy | QUADAS-2 | Patient selection, index test, reference standard, flow and timing. |
| Prognostic factors | QUIPS | Study participation, attrition, prognostic factor measurement, outcome measurement, confounding, analysis and reporting. |
Practical Tips That Save Time And Rework
- Name files with a strict pattern so sorting works: YYYYMMDD_source_action.
- Keep a single living log for searches and de-duplication. A simple spreadsheet with one row per run is enough.
- Set rules for multiple reports of one study. Pick a primary report and add linked notes for extensions.
- When data are missing, write to authors once and set a clear wait period; record any replies in the dataset.
- Store all decisions inside the protocol change log, not scattered across messages.
Ethics, Data, And Software
Most reviews use published data and do not need ethics board approval. If you plan to use individual patient data, seek guidance locally. Pick software your team can access and support. Reference managers help with de-duplication, screening tools keep pairs in sync, and meta-analysis packages make analysis reproducible. If you write code, share it with a permissive license.
Peer Review Proofing Checklist
- Title states the design and topic.
- Abstract reports the main effect and certainty for each primary outcome.
- Methods match the registered protocol or explain changes in one place.
- All databases, platforms, dates, and full strings appear in an appendix.
- Dual screening and extraction are evident and documented.
- Bias judgments are domain-based and supported.
- Synthesis rules come before the results, not after.
- Figures are readable on a laptop screen and have legible labels.
- All data behind the figures are shared in a public link.
When Your Topic Is Broad Or Complex
Break a wide topic into prespecified sub-questions or outcomes rather than mixing them in one pool. For complex interventions, group by key components and delivery settings. For fast-moving topics, plan a living review with a maintenance schedule and state update triggers in the protocol. If you hit heavy overlap across studies, use a staged approach: map first, then focus the analytic piece.
Small Set Of Links Worth Bookmarking
Use a reporting checklist built for reviews, keep a handbook open when you face design quirks, and register the protocol so others can see what you planned. These three links are quick routes to each task: the PRISMA 2020 checklist, the Cochrane Handbook, and PROSPERO.
Good reviews read cleanly because the work behind them is tight. Write clear rules, keep records, use domain-based bias tools, and be candid about certainty. Do that, and your summary will help busy clinicians and guideline groups make better calls.