How To Complete A Systematic Review In Medicine | Clean, Clear Steps

A medical systematic review follows a preplanned method to find, judge, and synthesize studies so readers can trust the answer.

What A Medical Systematic Review Is

A medical systematic review is a structured research article that answers a focused question, searches for all relevant studies, checks study quality, and presents a transparent synthesis. It differs from a narrative piece because every step follows a protocol, with reasons written down in advance. This process reduces bias and gives readers a clear view of what the known evidence shows. When done well, another team could repeat the same steps and reach the same set of included studies.

Completing A Systematic Review In Medicine: Step-By-Step

Frame The Question

Start with a tight question that fits your clinical or policy need. PICO is a handy pattern: Population, Intervention, Comparison, Outcome. Add Study design if needed. For diagnostic or prognostic topics, adapt the elements to fit the domain. A clear question guides the search, the screening rules, and the data fields you plan to extract.

PICO Variants

For tests, swap Intervention and Comparison for Index test and Reference standard. For prognosis, set Index condition and Time horizon. For qualitative syntheses, spell out Perspective and Phenomenon of interest. Keep each element short and concrete so the search can mirror the wording.
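One way to keep the elements crisp is to store them as structured data that later feeds the search and the screening rules. A minimal Python sketch, using a hypothetical aspirin question:

```python
# Hypothetical PICO elements; short, concrete wording that the
# search strings can mirror term for term.
pico = {
    "population": "adults with hypertension",
    "intervention": "low-dose aspirin",
    "comparison": "placebo",
    "outcome": "major cardiovascular events",
    "study_design": "randomized trials",  # the optional S in PICOS
}
```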

Pre-Register The Protocol

Write a protocol that states the question, eligibility rules, outcomes, search plan, screening flow, data items, and analysis plan. Register the protocol on a public portal so readers can see what you planned from the start. Many teams use PROSPERO for health topics. Protocol changes can happen, but each change should be dated and explained.

Core Protocol Elements

Include a short background, objectives, eligibility criteria, full search strategy, study selection steps, data items, risk of bias tool, synthesis plan, and a plan to rate certainty. Name roles and who will act as an arbiter when judgments differ.

Build A Reproducible Search

Work with an information specialist or a trained librarian. Write database-specific strings, list all limits, and record the date for each run. Aim to search at least two large databases for interventions, such as MEDLINE, Embase, and CENTRAL, with subject headings and text words. Add CINAHL or others if the topic warrants it. Document every source, including trial registries and citation chasing. Keep a log of all search decisions and test sets.

Core Search Sources And Practical Notes

| Source | Why Use It | Search Notes |
| --- | --- | --- |
| MEDLINE/PubMed | Broad biomedical coverage | Use MeSH and text words; record filters |
| Embase | Strong pharma and European journals | Use Emtree; watch for duplicates |
| Cochrane CENTRAL | Trials register for interventions | Good for randomized trials |
| CINAHL | Nursing and allied health | Subject headings differ from MeSH |
| Trial registries | Unpublished and ongoing studies | Search by condition and intervention |

Export all records, remove duplicates with reference software, and save the raw files. Keep the full search strings in an appendix. Use a search reporting checklist, and include a flow figure that counts records at each step.
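Reference software does the heavy lifting here, but a crude sketch shows the idea behind deduplication: normalize titles so near-identical records from different databases collapse to one key. The records below are invented for illustration.

```python
import re

def normalize(title: str) -> str:
    # Lowercase and strip punctuation so trivially different titles match.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

records = [
    {"title": "Aspirin for Primary Prevention.", "year": 2019, "source": "MEDLINE"},
    {"title": "Aspirin for primary prevention", "year": 2019, "source": "Embase"},
]

seen, deduplicated = set(), []
for rec in records:
    key = (normalize(rec["title"]), rec["year"])
    if key not in seen:
        seen.add(key)
        deduplicated.append(rec)

print(len(deduplicated))  # 1 record survives
```

Real reference managers use fuzzier matching than this; keep the raw export so any merge can be undone.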

Search Strings And Peer Review

Build sets for each PICO element and link them with Boolean logic. Include synonyms, acronyms, and spelling variants. Ask a second librarian to scan the strings. Test recall against a known set of key papers before running the full search.
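The assembly logic can be sketched in a few lines: each PICO element becomes an OR block of synonyms, and the blocks are joined with AND. The terms below are hypothetical, and real strings also need database-specific field tags and subject headings.

```python
# Hypothetical synonym sets, one per PICO element.
population = ["hypertension", "high blood pressure"]
intervention = ["aspirin", "acetylsalicylic acid", "ASA"]

def or_block(terms):
    # Quote multi-word phrases and join synonyms with OR.
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_block(block) for block in [population, intervention])
print(query)
# (hypertension OR "high blood pressure") AND (aspirin OR "acetylsalicylic acid" OR ASA)
```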

Set Eligibility Criteria

Define clear inclusion and exclusion rules before screening begins. Common fields include participants, setting, intervention type and dose, comparison, outcomes, study design, years, and language. Keep rules tight but fair so relevant work is not missed. Pilot the rules on a small set of studies and refine wording where needed.

Handling Multiple Reports

Map multiple articles that stem from the same study to one record. Note which report supplies methods and which supplies outcomes. This avoids double counting when you extract effects.
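A minimal sketch of one way to key the records, with invented identifiers:

```python
# One study, several reports: extract effects once per study_id,
# and note which report supplies which piece.
study = {
    "study_id": "SMITH-2018",  # hypothetical trial
    "reports": [
        {"pmid": "11111111", "supplies": "methods"},
        {"pmid": "22222222", "supplies": "12-month outcomes"},
    ],
}
```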

Screen Titles And Abstracts

Use two independent reviewers for title and abstract screening with a pilot round to calibrate judgments. Resolve conflicts with a short meeting or a third reviewer. Move to full-text screening with the same two-reviewer approach. Log reasons for exclusion at the full-text stage in short phrases that match your eligibility rules.

Minimizing Errors

Run a calibration set of at least fifty records. Track percent agreement and adjust rule wording if confusion appears. Keep a short glossary beside the screening tool.
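Percent agreement and Cohen's kappa are quick to compute from the calibration round. A self-contained sketch with hypothetical include/exclude decisions (1 = include, 0 = exclude):

```python
def agreement_stats(rater_a, rater_b):
    """Percent agreement and Cohen's kappa for two screeners."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal include rate.
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical decisions on a ten-record calibration set.
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(agreement_stats(a, b))  # roughly (0.8, 0.6)
```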

Extract The Data

Design a data form before screening ends. Pilot it on a few studies, tune the fields, and then apply it to all included studies with one extractor and one checker, or two extractors. Capture study methods, population details, intervention and comparator, outcomes, follow-up time, effect estimates, and notes on funding and conflicts. Keep a clean codebook so terms stay consistent.

Data Fields That Pay Off

Record how outcomes were measured, when they were measured, and any imputed values. Note unit of analysis issues such as cluster trials or cross-overs. These details save time during synthesis.
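Fixing the fields in code as well as in the codebook helps keep terms consistent. A sketch of a possible record layout; the field names are assumptions to adapt, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    # Hypothetical field set; align it with the protocol's data items.
    study_id: str
    design: str              # e.g. "parallel RCT", "cluster RCT"
    n_randomized: int
    outcome: str
    timepoint_weeks: int
    effect_estimate: float   # as reported, e.g. a risk ratio
    ci_lower: float
    ci_upper: float
    imputed: bool = False    # flag values the review team filled in
    notes: str = ""
```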

Judge Risk Of Bias

Pick a tool that matches the study design. For randomized trials, tools like RoB 2 guide judgments by domain. For non-randomized designs, ROBINS-I is common. Train the team, pilot a few papers, and record a quote and a judgment for each domain. Disagreements should be settled by consensus or a third voice. Present study-level judgments in tables and figures, and explain how they feed into your synthesis.

Calibration And Quotes

Agree on example quotes that represent low, some concerns, and high risk for each domain. Save these in a shared file so calls stay consistent across reviewers and time.

Plan The Synthesis

Map outcomes and time points, and decide on the effect measures that fit each outcome. State rules for grouping studies and for handling multiple arms. Plan how you will handle missing data. For methods and worked examples, the Cochrane Handbook is a trusted guide. If study designs, populations, and interventions align, a meta-analysis may be possible. If not, present a structured narrative with tables that compare key features and results. State how you will explore diversity across studies.

When Pooling Is Not Sensible

Use vote-counting sparingly and only with clear rules. Prefer grouped tables with consistent metrics and time points. Explain why pooling was set aside and what pattern the tables show.

Run The Meta-Analysis When It Fits

Choose a model that suits the expected diversity across studies. A fixed-effect model works when a single true effect is a sound assumption; a random-effects model is often used when true effects vary across studies. Report the method used to pool results and any adjustments. Quantify between-study variation. Inspect influence with leave-one-out checks when helpful. Avoid mechanical pooling when clinical or methodological differences are too large.
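For orientation, the standard inverse-variance approach with a DerSimonian-Laird estimate of between-study variance fits in a short function. The inputs below are hypothetical log risk ratios; a real analysis should use an established package and the model named in the protocol.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Inverse-variance pooling with DerSimonian-Laird tau^2.

    effects: study estimates on an additive scale (e.g. log risk ratios)
    variances: their squared standard errors
    """
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                              # fixed-effect weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)         # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = 1.0 / (v + tau2)                # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2

# Four hypothetical trials: log risk ratios and their variances.
print(random_effects_pool([-0.3, -0.1, -0.5, 0.05], [0.04, 0.02, 0.09, 0.03]))
```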

Reporting The Numbers

Give the pooled effect with a confidence interval and a plain sentence that states what the number means. Show forest plots and list key study features beside the estimates.
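A bare-bones forest plot can be sketched with matplotlib; the risk ratios below are hypothetical, and dedicated meta-analysis packages add study weights and a pooled diamond.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical risk ratios with 95% CIs; the last row is the pooled estimate.
studies = ["Trial A", "Trial B", "Trial C", "Pooled"]
rr = np.array([0.74, 0.90, 0.61, 0.78])
lo = np.array([0.55, 0.70, 0.38, 0.66])
hi = np.array([0.99, 1.16, 0.98, 0.92])

y = np.arange(len(studies))[::-1]               # top-to-bottom order
plt.errorbar(rr, y, xerr=np.vstack([rr - lo, hi - rr]),
             fmt="s", color="black", capsize=3)
plt.axvline(1.0, linestyle="--", color="grey")  # line of no effect
plt.xscale("log")
plt.yticks(y, studies)
plt.xlabel("Risk ratio (log scale)")
plt.tight_layout()
plt.savefig("forest.png")
```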

Report With A Transparent Checklist

Use a reporting checklist that matches systematic reviews. PRISMA 2020 is the standard for effects of interventions and related questions; use its reporting checklist and templates. Include the checklist, the abstract template, a flow diagram, and all search strings. Link to your protocol and mark any changes. Share data extraction files and code where feasible so readers can reuse the work.

Flow Diagram And Appendices

Include numbers for records identified, screened, excluded with reasons, and included. Place full database strings and any peer-reviewed search forms in an appendix so the search can be repeated.
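The arithmetic should reconcile at each step. A trivial check with placeholder counts:

```python
# Placeholder PRISMA flow counts; swap in the real numbers.
identified = 1480
after_duplicates = identified - 390    # records screened: 1090
full_text = after_duplicates - 980     # full texts assessed: 110
included = full_text - 82              # studies included: 28

assert after_duplicates == 1090 and full_text == 110 and included == 28
```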

Rate Certainty Of Evidence

Summarize each main outcome in a concise table. Rate certainty for each outcome across the body of evidence, taking into account risk of bias, inconsistency, indirectness, imprecision, and publication bias. State reasons for any rating change and keep the language tight and plain. Present the absolute and relative effects so readers can see practical impact.

GRADE Domains And What To Check

| Domain | Main Question | Typical Signals |
| --- | --- | --- |
| Risk of bias | Are study methods sound? | Poor concealment, missing data, selective reporting |
| Inconsistency | Do study results agree? | Wide spread of effects, no overlap of intervals |
| Indirectness | Does the evidence fit the question? | Different population, intervention, comparator, or outcome |
| Imprecision | Is the estimate tight enough? | Small samples, wide intervals, few events |
| Publication bias | Is missing work likely? | Small-study effects, unregistered trials |

Summary Of Findings Tables

Present one table per main outcome group with effect sizes, baseline risks, absolute changes, and certainty ratings. Keep footnotes short and specific so readers can trace each rating decision.
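Absolute effects follow directly from the baseline risk and the relative effect. A worked example with assumed numbers:

```python
# Assumed inputs for illustration only.
baseline_risk = 0.12   # control-group risk over the follow-up window
risk_ratio = 0.78      # pooled relative effect

treated_risk = baseline_risk * risk_ratio
per_1000 = round((treated_risk - baseline_risk) * 1000)
print(per_1000)  # -26: about 26 fewer events per 1000 people
```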

Write Clear Methods And Results

Use short, direct sentences and place the answer up front. In the abstract and the opening of the results, state the number of included studies, the main effect, and the range of certainty. Keep methods in past tense and results in present or past as your style guide prefers. Match figures and tables to the text so readers can scan and grasp the take-home points without digging.

Common Pitfalls And Practical Fixes

Vague Questions

Broad questions pull in too many designs and outcomes. Tighten the PICO and set clear primary outcomes. State a small set of secondary outcomes only if they inform action.

Thin Searches

One database and a few keywords miss key trials. Expand the sources, add subject headings, and peer-review the search. Record every limit and date so the process can be checked.

Shaky Screening

Single-reviewer screening raises error risk. Use two reviewers and a pilot set, and log reasons for exclusion. Keep the flow figure consistent with the counts from your tools.

Mismatched Pooling

Pooling across apples and oranges hides useful signals. Group studies by design, dose, or time frame and run separate syntheses. If pooling still looks shaky, stick to tables and figures.

Helpful Tools And File Hygiene

Reference And Screening

Common tools include EndNote, Zotero, Rayyan, Covidence, and RevMan. Pick one stack, write down the version numbers, and keep a dated export of every stage. Back up the raw records, the deduplicated library, the screening decisions, and the final included set.

Data And Code

Store the data form, the codebook, and any analysis code in a shared folder with version history. Name files with dates and clear labels. Save the outputs that feed the figures and tables so the trail is clear.

Authorship, Conflicts, And Ethics

Set roles early using a fair authorship scheme. List funding and any ties that readers should know about. Most reviews do not need ethics board review, but check local policies for topics that use patient-level data. State how you handled any contact with study authors to request missing details.

Timeline And Work Plan

A lean timeline for a focused intervention topic might look like this: two weeks to frame the question and draft the protocol; two to four weeks for searches and deduplication; four to six weeks for screening; four weeks for extraction and risk of bias; two weeks for synthesis; two weeks for writing and internal checks. Large or complex topics will need more time. Build slack for team meetings and pilot rounds.

Final Checks Before Submission

Files And Appendices

Include the protocol link, full search strings, the flow diagram, risk of bias tables, data extraction form, and any code. Check that tables and figures match the text and that counts add up across the flow.

Clarity And Plain Language

Use short headings, direct verbs, and simple numerators and denominators. Report both absolute and relative effects. State limits of the evidence with plain reasons tied to the GRADE table.

Where To Share

Post the data files and materials on an open repository if allowed. Many journals ask for a data availability statement, so prepare a link and a brief note that lists what files are shared.