How To Do A Systematic Review Using PRISMA | Quick Pro Tips

Set a clear question, register a protocol, run transparent searches, double-screen, extract data, assess bias, synthesize, and report with PRISMA.

PRISMA helps authors tell readers exactly what they did and what they found. It is a reporting guideline, not a method, yet it pairs nicely with good methods from trusted sources. The plan below keeps the work reproducible from the first search string to the last table and figure.

Steps For A Systematic Review With PRISMA

The stages follow a simple rhythm: plan, search, screen, extract, assess, and synthesize. Track counts at each stage so the PRISMA flow diagram writes itself at the end. The table gives you the whole arc in one place.

| Stage | What You Do | Output You Save |
| --- | --- | --- |
| Question | Frame PICO (or a fit-for-purpose variant), name outcomes and settings, state comparators, and list study designs in scope. | A one-line question plus detailed eligibility notes. |
| Protocol | Draft aims, eligibility, search plan, screening rules, data fields, bias tools, and synthesis plan; agree roles and timelines; pre-specify any subgroup plans. | A dated protocol and a registration record. |
| Search | Translate the question into database syntax with keywords and subject headings; include trial registers and preprints when relevant. | Full strategies, run dates, platforms, and export logs. |
| De-duplication | Import records, remove exact and fuzzy duplicates with transparent rules, and keep a log of counts removed. | A clean library and a count report for PRISMA. |
| Title/Abstract | Two reviewers screen in parallel after a short pilot; conflicts go to a third person or consensus chat. | Included list for full-text and reasons for early exclusion. |
| Full-Text | Retrieve full texts, apply the same rules, record a single reason for each exclusion, and capture missing info requests. | Final include set and an exclusions list with codes. |
| Extraction | Use a tested form; extract in duplicate when stakes are high; capture effect sizes and context that matters. | Row-level data plus a codebook and audit trail. |
| Risk Of Bias | Rate study-level bias with a tool matched to the design; resolve differences with rules set in the protocol. | Bias ratings and justifications. |
| Synthesis | Pick a model that fits the data; plan narrative text for areas where models do not fit; check sources of variation. | Meta-analysis outputs or structured narrative tables. |
| Reporting | Write to the PRISMA 2020 checklist; produce a flow diagram; share strategies, forms, and code in a repo. | A clean manuscript, figures, and open files. |

Define The Question (PICO Or Similar)

Write one sentence that captures the population, intervention or exposure, comparator, and outcomes. Add time frame and setting if those matter. Confirm which study designs can answer the question with the least bias and still reflect real practice. Lock the list of primary and secondary outcomes, and state any accepted variants or measurement windows.

Build Your Protocol

State aims and scope, lay out inclusion and exclusion rules, and plan the information sources. Name who will screen, extract, and run the models. Include a change log template for later updates. Register the protocol on PROSPERO or a public registry such as OSF. A public record deters duplication and gives readers a reason to trust the plan. Add a brief plan for data sharing and software.

Design A Reproducible Search

Search at least two large databases suited to the field, plus trial registers when results can lag behind trial completion. Write full Boolean strings with synonyms, truncation, and subject headings, then peer review them. Capture platform, years covered, filters used, and the day each search ran. Export all records with abstracts and IDs intact to streamline screening. Include preprint servers and grey sources when publication lag can skew results.
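One low-effort way to capture that metadata is to log every run as structured data the moment it finishes. The sketch below appends each run to a CSV; the file name and field names are illustrative assumptions, not anything PRISMA prescribes.

```python
# A minimal sketch for logging search runs; file name and fields are
# illustrative assumptions, not a PRISMA requirement.
import csv
from datetime import date
from pathlib import Path

LOG = Path("search_log.csv")
FIELDS = ["database", "platform", "strategy", "filters", "run_date", "records"]

def log_search(database, platform, strategy, filters, records):
    """Append one search run so it can be reported verbatim in the appendix."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "database": database,
            "platform": platform,
            "strategy": strategy,
            "filters": filters,
            "run_date": date.today().isoformat(),
            "records": records,
        })

# Example: record one MEDLINE run with its Boolean string and hit count.
log_search(
    database="MEDLINE",
    platform="Ovid",
    strategy='(exercise OR "physical activity") AND (depression OR "low mood")',
    filters="2010-2025; humans",
    records=1432,
)
```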

Manage Records And De-duplicate

Import records to a manager that can tag, batch, and export clean sets. Remove duplicates with a tested recipe and record both exact and near matches removed. Keep the raw dump safe and never edit it. Name each library by date and stage so any reader can follow the trail later.
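As a rough sketch of a "tested recipe", exact duplicates can be dropped on a normalized title-plus-year key and near matches flagged with a string-similarity check. The 0.95 threshold and the key fields below are assumptions to pilot against your own library, and both counts feed the flow diagram.

```python
# Sketch of exact plus fuzzy de-duplication on title and year.
# The similarity threshold and key fields are assumptions to pilot and tune.
from difflib import SequenceMatcher

def norm(title):
    return " ".join(title.lower().split())

def dedupe(records, threshold=0.95):
    """records: list of dicts with 'title' and 'year'. Returns kept records
    plus counts of exact and fuzzy removals for the PRISMA flow diagram."""
    kept, exact, fuzzy = [], 0, 0
    seen = set()
    for rec in records:
        key = (norm(rec["title"]), rec.get("year"))
        if key in seen:
            exact += 1
            continue
        if any(rec.get("year") == k.get("year")
               and SequenceMatcher(None, norm(rec["title"]), norm(k["title"])).ratio() >= threshold
               for k in kept):
            fuzzy += 1
            continue
        seen.add(key)
        kept.append(rec)
    return kept, {"exact_removed": exact, "fuzzy_removed": fuzzy}

records = [
    {"title": "Exercise for depression: a trial", "year": 2021},
    {"title": "Exercise for Depression: A Trial", "year": 2021},   # exact after normalizing
    {"title": "Exercise for depression - a trial", "year": 2021},  # near match
]
kept, counts = dedupe(records)
print(len(kept), counts)  # 1 {'exact_removed': 1, 'fuzzy_removed': 1}
```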

Title And Abstract Screening

Pilot And Calibration Steps

Calibrate early with a small set before the main pass to align judgments.

Run dual screening for the main set and send conflicts to a tie-breaker. Keep reasons short and rule-based. If language limits apply, state them now and explain why they make sense for the topic and resources on hand.
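A quick way to show the calibration worked is an agreement statistic on the pilot decisions; Cohen's kappa is a common choice. The minimal sketch below assumes simple include/exclude labels and invented pilot data.

```python
# Cohen's kappa on pilot screening decisions, as one calibration check.
# The labels and example decisions are placeholders.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67 on this toy pilot
```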

Full-Text Review

Fetch the full item from publishers, preprint servers, or direct author contact. Apply the same rules. Record one primary reason for exclusion from a short list in the protocol. Save PDFs in a folder named by ID so later checks are easy. Create a living list of studies that need author queries and mark those responses when they arrive.

Doing A Systematic Review Using PRISMA: Screening And Extraction

Now that the set is fixed, move with care. Small slips here ripple into the model and the figures. Work in pairs for steps that change numbers in the main tables or plots.

Data Extraction

Build a form with fields for study ID, design, setting, participant traits, intervention details, comparators, outcomes, measurement windows, effect type, effect size, and precision. Add fields for funding, declarations, and notes on deviations. Pilot the form on three to five studies and refine labels. Extract in duplicate when sample sizes are small, effect sizes are fragile, or the topic carries real-world risk. Store forms and exports in a versioned folder.
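One way to keep the form, the codebook, and the export in sync is to define the record once in code. The field names below mirror the list above and are a starting point to extend, not a fixed standard.

```python
# A starter extraction record; field names mirror the list above and can be
# extended with topic-specific variables.
import csv
from dataclasses import dataclass, fields, asdict
from typing import Optional

@dataclass
class ExtractionRecord:
    study_id: str
    design: str
    setting: str
    participants: str
    intervention: str
    comparator: str
    outcome: str
    measurement_window: str
    effect_type: str          # e.g. "risk ratio", "mean difference"
    effect_size: Optional[float]
    ci_lower: Optional[float]
    ci_upper: Optional[float]
    funding: str = ""
    notes: str = ""

def write_records(records, path="extraction.csv"):
    names = [f.name for f in fields(ExtractionRecord)]
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=names)
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

row = ExtractionRecord(
    study_id="SMITH2022", design="RCT", setting="primary care",
    participants="adults with mild depression", intervention="walking programme",
    comparator="usual care", outcome="PHQ-9 at 12 weeks",
    measurement_window="12 weeks", effect_type="mean difference",
    effect_size=-2.1, ci_lower=-3.4, ci_upper=-0.8,
)
write_records([row])
```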

Assess Risk Of Bias

Match the tool to the design. Randomized trials suit RoB 2. Non-randomized studies of interventions suit ROBINS-I. Cohort or case-control studies outside intervention work may suit tools such as Newcastle–Ottawa. Keep decision rules visible and record quotes that support each rating. Summarize bias at the domain level first, then at the study level if the tool allows.
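Keeping the decision rules visible can be as simple as storing each domain judgment with its supporting quote and deriving a provisional study-level label. The roll-up below is a deliberately simplified bookkeeping aid, not a substitute for the tools' own signalling questions and overall-judgment algorithms, which always take precedence.

```python
# Simplified illustration of storing domain judgments with supporting quotes
# and deriving a provisional study-level label. The real tools' own
# algorithms override this roll-up; treat it as a bookkeeping aid only.

DOMAINS = ["randomization", "deviations", "missing_data", "measurement", "selection_of_result"]

def overall_label(domain_judgments):
    """domain_judgments: dict of domain -> 'low' | 'some concerns' | 'high'."""
    values = [domain_judgments[d] for d in DOMAINS]
    if "high" in values:
        return "high"
    if "some concerns" in values:
        return "some concerns"
    return "low"

study = {
    "randomization": "low",
    "deviations": "some concerns",
    "missing_data": "low",
    "measurement": "low",
    "selection_of_result": "low",
}
quotes = {"deviations": "Participants were aware of allocation (p. 4)."}
print(overall_label(study), quotes)
```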

Plan Your Synthesis

Pick the effect measure that speaks to the audience and the data: risk ratio, odds ratio, mean difference, or a standardized form. Random-effects models handle real variation across settings; fixed-effect models suit narrow, near-identical settings. Inspect forest plots for outliers, check heterogeneity with I² and τ², and run leave-one-out checks when one study drives the line. When models do not fit because definitions or outcomes diverge, write a tight narrative and use structured tables.
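For readers who want the arithmetic behind a random-effects pool, the sketch below implements the DerSimonian-Laird estimator and reports τ² and I². The effect sizes are invented, and a real analysis would normally lean on an established package rather than hand-rolled code.

```python
# DerSimonian-Laird random-effects pooling with tau^2 and I^2.
# Effect sizes and variances below are invented for illustration.
import numpy as np

def random_effects_dl(effects, variances):
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                           # fixed-effect (inverse-variance) weights
    pooled_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled_fe) ** 2)  # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)         # between-study variance
    w_re = 1.0 / (v + tau2)               # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return {
        "pooled": pooled,
        "ci": (pooled - 1.96 * se, pooled + 1.96 * se),
        "tau2": tau2,
        "I2": i2,
    }

# Example: log risk ratios and their variances from five hypothetical trials.
print(random_effects_dl(
    effects=[-0.30, -0.10, -0.45, 0.05, -0.25],
    variances=[0.04, 0.02, 0.09, 0.03, 0.05],
))
```

Leave-one-out checks then amount to calling the same function repeatedly with one study dropped each time.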

Check Reporting Bias And Certainty

Search trial registers for completed but unpublished work and check protocols for outcome switching. Funnel plots can hint at small-study effects when counts are adequate. To grade the overall body of evidence for each outcome, use a transparent method such as GRADE and state reasons for any rating shifts.
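When study counts allow, Egger's regression test is one numeric companion to the funnel plot: it regresses the standard normal deviate on precision and tests whether the intercept differs from zero. The sketch below uses ordinary least squares and invented inputs, and results should be read cautiously with fewer than roughly ten studies.

```python
# Egger's regression test for small-study effects (funnel asymmetry).
# Inputs are invented; interpret cautiously with small study counts.
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    se = np.asarray(std_errors, float)
    y = np.asarray(effects, float) / se   # standard normal deviate
    x = 1.0 / se                          # precision
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = len(y) - 2
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(X.T @ X)
    intercept, se_intercept = beta[0], np.sqrt(cov[0, 0])
    t = intercept / se_intercept
    p = 2 * stats.t.sf(abs(t), df)
    return {"intercept": intercept, "p_value": p}

print(eggers_test(
    effects=[-0.30, -0.10, -0.45, 0.05, -0.25, -0.50, -0.15],
    std_errors=[0.20, 0.14, 0.30, 0.17, 0.22, 0.35, 0.16],
))
```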

Write Up With The PRISMA 2020 Checklist

Use the items as a map from title to appendices. The checklist lists title and abstract items, the reason for the review, detailed methods, the selection results, study features, risk of bias findings, effect estimates, and any extra analyses. Finish with limits, generalizability, funding, and any ties. Keep the flow diagram next to the selection text so readers can cross-check counts. The checklist and templates live on the PRISMA 2020 page.

Build The PRISMA Flow Diagram

Start with database and register counts, add any other sources, then show the number screened, the number retrieved in full, the number excluded with reasons, and the final set. Updated reviews use a slightly different template. The official page provides editable files you can adapt to your record sources.
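The boxes in the diagram are plain arithmetic over the counts already logged at each stage, so a small helper can derive the numbers and flag anything that does not add up before you touch the template. Box names below follow the database-and-register version of the 2020 template loosely, and every number is a placeholder.

```python
# Derive PRISMA 2020 flow-diagram counts from logged stage totals.
# Numbers are placeholders; box names loosely follow the 2020 template.
counts = {
    "records_from_databases": 1432,
    "records_from_registers": 118,
    "duplicates_removed": 391,
    "excluded_title_abstract": 982,
    "reports_not_retrieved": 6,
    "fulltext_exclusions": {"wrong population": 41, "wrong outcome": 58, "wrong design": 37},
}

identified = counts["records_from_databases"] + counts["records_from_registers"]
screened = identified - counts["duplicates_removed"]
sought = screened - counts["excluded_title_abstract"]
assessed = sought - counts["reports_not_retrieved"]
excluded_fulltext = sum(counts["fulltext_exclusions"].values())
included = assessed - excluded_fulltext

assert included >= 0, "Counts do not add up; recheck the screening logs."
print({"identified": identified, "screened": screened, "sought": sought,
       "assessed": assessed, "included": included})
```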

Show Your Searches And Files

Place every strategy string in an appendix with platform, field tags, limits, and the run date. Add a table of included studies with core features and a separate list of excluded full-texts with a short reason. Post forms, code, and data in a public repo. The Cochrane Handbook gives steady method advice that pairs well with PRISMA.

Statistical Notes That Help

Keep unit-of-analysis issues on your radar. Cluster trials need the right adjustment, crossover trials need a plan for carryover, and multi-arm trials need a plan for shared controls. Spell out which conversions you accept for medians, IQRs, and missing SDs, and cite the rule in your methods. When scales differ, use standardized mean differences with a clear direction so larger is always good or always bad, not a mix that hurts clarity.
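Two of the most common conversions can be written down once and cited: estimating an SD from an IQR under approximate normality (divide by about 1.35), and computing a standardized mean difference with the Hedges small-sample correction. The sketch below uses invented numbers and assumes roughly normal data for the IQR rule.

```python
# Common conversions: SD from IQR (assumes approximate normality) and
# Hedges' g as a small-sample-corrected standardized mean difference.
import math

def sd_from_iqr(q1, q3):
    """Approximate SD from the interquartile range for roughly normal data."""
    return (q3 - q1) / 1.35

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

print(round(sd_from_iqr(q1=4.0, q3=9.4), 2))             # 4.0
print(round(hedges_g(10.2, 3.1, 40, 12.0, 3.4, 38), 2))  # negative: first group lower
```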

Pre-plan small sets of subgroups that make sense from the topic and the data. Dose, duration, or risk level often serve well here. State why each subgroup adds value and set a short list before any data leave the library. Keep meta-regression in reserve for sets with enough studies to support it, and state the minimum count you will accept before you run it.

Present results in layers. Show a core model, then sensitivity checks that swap effect measures, switch models, or drop high-risk studies. When one study dominates a plot, give a sentence on why that study differs and what happens when it is removed. Readers want to see both the main line and how sturdy that line stays under simple shifts.

Software And Files

Pick tools that fit your team. Reference managers handle import and de-duplication. Screening can run in purpose-built platforms or in spreadsheets when teams are small. Meta-analysis can run in R packages or in a point-and-click program; the choice matters less than saving code, settings, and seeds so results can be rerun later.
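Whatever the platform, write the seed and the package versions down next to the results; a few lines at the top of an analysis script cover it. The package names and output file name below are placeholders for whatever your analysis actually imports.

```python
# Record the seed and environment alongside results so a rerun matches.
# Package names and the output file name are placeholders.
import json
import platform
import random
from importlib.metadata import version, PackageNotFoundError

import numpy as np

SEED = 20240115
random.seed(SEED)
np.random.seed(SEED)

packages = {}
for name in ["numpy", "scipy", "pandas"]:
    try:
        packages[name] = version(name)
    except PackageNotFoundError:
        packages[name] = "not installed"

with open("run_environment.json", "w", encoding="utf-8") as fh:
    json.dump({"seed": SEED, "python": platform.python_version(),
               "packages": packages}, fh, indent=2)
```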

| Study Type | Bias Tool | What It Judges |
| --- | --- | --- |
| Randomized trials | RoB 2 | Randomization, deviations, missing data, measurement, and selection of the reported result. |
| Non-randomized interventions | ROBINS-I | Confounding, selection, classification, deviations, missing data, measurement, and selection of the reported result. |
| Cohort / case-control | Newcastle–Ottawa or a field-approved tool | Selection, comparability, and outcome or exposure assessment. |

Time Savers And Quality Guards

Run a small pilot for screening and extraction to check that rules make sense to both reviewers. Keep a template email ready for author queries. Use label conventions for files and figures so teams do not overwrite each other’s work. When a deadline forces a single-reviewer step, add a second check on a random slice and report that slice size. Automation can triage records, yet humans make the final calls that change counts.
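For that single-reviewer fallback, drawing the slice with a fixed seed makes the reported percentage match what actually went to the second reviewer. The 10% fraction in the sketch below is an assumption to adjust, not a rule.

```python
# Draw a reproducible random slice of single-reviewer decisions for a
# second check; the 10% fraction is an assumption to adjust and report.
import random

def second_check_slice(record_ids, fraction=0.10, seed=7):
    rng = random.Random(seed)
    k = max(1, round(len(record_ids) * fraction))
    return sorted(rng.sample(record_ids, k))

ids = [f"REC{i:04d}" for i in range(1, 501)]
slice_ids = second_check_slice(ids)
print(len(slice_ids), slice_ids[:5])  # report this slice size in the methods
```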

Common Pitfalls To Avoid

Scope creep: Changing the question mid-way breaks trust and ruins comparability. If a change is unavoidable, flag it in the change log, update the registry record, and explain the shift in the paper.

Thin search: One database is rarely enough. Use at least two, plus trial registers to catch completed but unpublished work and trials still in progress. Write full strings so anyone can rerun them next year.

Vague rules: Eligibility rules that leave room for guesswork yield messy screening. Write crisp rules with examples of borderline cases to guide decisions across the team.

Single extraction with no checks: A second set of eyes on main fields pays back in fewer corrections later. At a minimum, spot-check effect sizes and sample counts.

Mixing designs without a plan: If designs mix in one model, effects can drift. Set plans for subgroups or present separate plots when mixing would mask real signals.

Missing data management: Plan how to handle missing SDs, medians in place of means, and cluster trials. Use accepted conversions and label them in the methods.

Templates You Can Reuse

Protocol Checklist (Short Form)

Question; aims; eligibility (PICO, time, setting, design); information sources; full search strategies; screening plan; extraction form draft; bias tools; effect measures; model choice; plans for variation; missing data plan; reporting bias checks; data sharing; software; roles; timeline; change log rules; registration site.

Extraction Fields (Starter Set)

Study ID; source; country; setting; design; sample size; age group; sex split; intervention details; comparator; outcomes and units; effect measure; effect size with CI; follow-up; funding; notes. Add topic-specific variables as needed, such as dose, device type, or training level.

From Protocol To Publication With Confidence

Stick to the plan, keep a clean audit trail, and write to the checklist. Register the protocol early, store every decision, and make files public at the end. That mix of clear methods and open materials is what builds reader trust.