How To Do A PRISMA Literature Review | Step By Step

Plan your question, search broadly, screen with PRISMA flow, appraise studies, extract data, synthesize, and report with the PRISMA checklist.

You can run a PRISMA review with confidence once you know the pieces and the order. This guide keeps the steps clear, repeatable, and audit-ready. It suits health, education, and many other fields that use structured reviews.

You will see what to plan, how to record each decision, and what to share in the final paper. The format tracks the PRISMA 2020 items and gives plain-language tips that fit day-to-day work.

Core PRISMA tasks, outputs, and handy tools
Step | What you produce | Handy tools
Protocol | A registered plan | PROSPERO, OSF
Search | A database log and strings | Scopus, PubMed, Embase
Screening | Decisions with reasons | Rayyan, Excel
Appraisal | Risk of bias tables | RoB 2, ROBINS-I
Extraction | A clean dataset | Google Sheets, REDCap
Synthesis | Narrative and stats | RevMan, R

Doing A PRISMA Literature Review The Right Way

PRISMA sets reporting rules for systematic reviews and meta-analyses. It asks for a transparent path from question to claims, with enough detail that another team could retrace it. That means writing methods first, building a search that can run again, and showing numbers for every gate in the flow.

Write A Tight Review Question

Start with a question that fits one main aim. Use a structure like PICO or a close variant: define the population, the intervention you test, the comparator, and the outcomes that matter. If you review methods or diagnostic accuracy, swap in study design or index tests as needed.

State the setting, time span, and study designs you will accept. Add clear limits for language, age groups, or regions only when they are needed for the aim. Record each choice in the protocol so readers can see why a record was kept or set aside.

Register Your Protocol

Register the plan before you run the searches. A public record guards against drift and helps others avoid duplicate work. Use a registry that suits your field and topic. In health, many teams use PROSPERO. Other fields use OSF Registries or protocols.io.

Your entry lists the question, eligibility rules, databases, outcomes, and the plan for synthesis. It also lists team roles and any ties to funders. When the plan changes, update the entry and state the change in the paper later.

Plan Your Search Strategy

Map the concepts in your question to keywords and subject headings. Draft strings with synonyms and near terms, connect them with OR, and link concepts with AND. Test recall on a short list of known studies. Tweak spelling, truncation, and proximity operators until the string pulls in those studies and keeps noise low.
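
The OR-within-a-concept, AND-between-concepts pattern can live in one place and be reused across databases. Here is a minimal Python sketch, assuming made-up concept labels and terms; you would still translate the output into each database's own field tags and syntax.

```python
# A minimal sketch of the search-string pattern: synonyms joined with OR
# inside each concept, concepts linked with AND. All terms are placeholders.
concepts = {
    "population": ["adolescent*", "teen*", "youth"],
    "intervention": ['"school-based program*"', '"classroom intervention*"'],
    "outcome": ["anxiety", '"mental health"', "well-being"],
}

def build_query(concepts: dict) -> str:
    """Wrap each concept's synonyms in parentheses with OR, then join with AND."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    return " AND ".join(blocks)

print(build_query(concepts))
# (adolescent* OR teen* OR youth) AND ("school-based program*" OR
# "classroom intervention*") AND (anxiety OR "mental health" OR well-being)
```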

Search at least two large databases plus trial registers when they fit the topic. Add a preprint server only if preprints shape the evidence. Record the database names, the platform, the date, and the full strings. Export all records with abstracts and identifiers.

Keep a pilot log of searches, decisions, and tweaks so the whole team can rerun or update the search smoothly later.

Screen Records With A Clear Rulebook

Move the exports into a tool that can deduplicate and track decisions. Remove exact and near duplicates. Then screen titles and abstracts in pairs. Use your rulebook for fast yes, no, or maybe calls. Resolve ties with a third reviewer.
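
Dedicated screening tools handle duplicate removal for you, but a short script gives you an auditable record of what was dropped. This is a minimal sketch assuming pandas and two hypothetical CSV exports with "doi" and "title" columns; adjust the names to your own export format.

```python
# A minimal sketch of deduplicating exported records before screening.
# File and column names are assumptions; match them to your own exports.
import pandas as pd

records = pd.concat(
    [pd.read_csv("pubmed_export.csv"), pd.read_csv("scopus_export.csv")],
    ignore_index=True,
)

# Normalize keys: DOI when present, otherwise a stripped-down title.
records["doi"] = records["doi"].str.lower().str.strip()
records["title_key"] = (
    records["title"].str.lower().str.replace(r"[^a-z0-9]", "", regex=True)
)
records["dedup_key"] = records["doi"].fillna(records["title_key"])

deduped = records.drop_duplicates(subset="dedup_key")
print(f"{len(records)} records in, {len(deduped)} after deduplication")
deduped.to_csv("records_for_screening.csv", index=False)
```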

Bring maybes to full-text screening. Record a reason for each exclusion, using short, standard codes like wrong design, wrong setting, or wrong outcome. Keep a spreadsheet of reasons so counts in the flow are easy to tally.

Use The PRISMA Flow Diagram

The flow diagram shows how many records you found, how many you removed as duplicates, how many you screened, how many you moved to full text, how many you excluded with reasons, and how many studies made it into the review. Pick the template that fits your sources and whether the review is new or an update.

If you add sources like citation chasing or expert tips, list them in the flow and in the methods. Keep date stamps for each step so the diagram and the text match. Reviewers look for that match.
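
A small script can guard against mismatched counts before you draw the diagram. The numbers below are hypothetical and assume a simple flow with no extra sources; each assert checks one gate.

```python
# A minimal sketch of checking that PRISMA flow counts add up.
# All numbers are made up; replace them with your own logged counts.
flow = {
    "records_identified": 1480,
    "duplicates_removed": 312,
    "records_screened": 1168,
    "excluded_title_abstract": 1050,
    "full_texts_assessed": 118,
    "full_texts_excluded": 84,
    "studies_included": 34,
}

assert flow["records_screened"] == flow["records_identified"] - flow["duplicates_removed"]
assert flow["full_texts_assessed"] == flow["records_screened"] - flow["excluded_title_abstract"]
assert flow["studies_included"] == flow["full_texts_assessed"] - flow["full_texts_excluded"]
print("Flow counts are internally consistent.")
```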

Appraise Study Quality

Pick an appraisal tool that fits the design mix. For randomized trials, RoB 2 is common. For non-randomized studies, ROBINS-I is widely used. For reviews that include tests of accuracy, QUADAS-2 is a fit. Two reviewers judge each domain and reach a consensus. Then decide how the judgments will shape synthesis, such as a sensitivity analysis that drops high-risk studies.

Report judgments per domain, not just a global label. Add short quotes or data that back up each call. A short table keeps it brief and clear.

Extract Data With A Consistent Form

Build a form before you extract. Pilot it on three to five studies and prune fields that add no value. Then extract in pairs, compare, and resolve. Include study traits, sample features, the definition of each outcome, and the numbers that feed your effect sizes. For pre-post designs or cluster trials, record the details you need to compute standard errors.
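
For cluster trials, one common approximation inflates each standard error by the square root of the design effect, 1 + (m - 1) * ICC, where m is the mean cluster size. The sketch below assumes you extracted or borrowed an ICC; all values are placeholders.

```python
# A minimal sketch of adjusting a standard error for clustering via the
# design effect DEFF = 1 + (m - 1) * ICC. All values are placeholders.
def cluster_adjusted_se(se: float, mean_cluster_size: float, icc: float) -> float:
    """Inflate a standard error by the square root of the design effect."""
    deff = 1 + (mean_cluster_size - 1) * icc
    return se * deff ** 0.5

print(cluster_adjusted_se(se=0.12, mean_cluster_size=25.0, icc=0.02))  # about 0.146
```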

Store exact text for main outcomes as well as numbers. Capture contact details so you can write to authors when a table is unclear or data are missing. Keep a column that flags studies with missing or inconsistent values.

Common data fields for extraction and quick tips
Field | What to capture | Tips
Study ID | Author, year, distinct code | Match to your flow counts
Design | Trial, cohort, case series, test accuracy | Align with appraisal tool
Population | N, age, setting | Record inclusion limits
Intervention | Name, dose, duration | Note co-interventions
Comparator | Placebo, usual care, alternative | Record exposure where relevant
Outcomes | Definitions and time points | Keep consistent units
Effect data | Counts, means, SDs, HRs, CIs | Record the source table
Notes | Funding, conflicts, comments | Flag queries sent to authors
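
One way to turn the fields above into a working form is a flat CSV with one row per study arm or outcome. The column names below are illustrative, not a standard; pilot the form and prune what you do not use.

```python
# A minimal sketch of a blank extraction form. All column names are
# illustrative; adjust them during the pilot on three to five studies.
import csv

FIELDS = [
    "study_id", "author", "year", "design", "n_total", "age", "setting",
    "intervention", "dose", "duration", "comparator",
    "outcome_definition", "time_point", "effect_type",
    "effect_value", "ci_lower", "ci_upper", "source_table",
    "funding", "notes", "query_sent_to_authors",
]

with open("extraction_form.csv", "w", newline="") as f:
    csv.DictWriter(f, fieldnames=FIELDS).writeheader()
```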

Synthesize The Evidence

Choose a narrative path when designs or outcomes differ too much for pooling. When pooling fits, state the metric and model up front. Many fields use risk ratio or odds ratio for dichotomous data and mean difference or standardized mean difference for continuous data. Say how you handled cluster or crossover designs.

Check heterogeneity with visual plots and a statistic such as I². Probe outliers with influence checks. If the plan includes subgroups or meta-regression, state the limits you used to avoid data dredging. Show both pooled and study-level effects so readers can see the spread.
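
In practice most teams run the pooling in RevMan or an R package such as metafor, but the arithmetic behind a fixed-effect pool and I² is short enough to sketch. The effect sizes below are made-up log risk ratios, not real data.

```python
# A minimal sketch of inverse-variance fixed-effect pooling with Cochran's Q
# and I². Effect estimates and standard errors are hypothetical log risk ratios.
import numpy as np

yi = np.array([-0.35, -0.10, -0.42, 0.05, -0.22])  # study effect estimates
sei = np.array([0.15, 0.20, 0.18, 0.25, 0.12])     # their standard errors

wi = 1 / sei**2                                    # inverse-variance weights
pooled = np.sum(wi * yi) / np.sum(wi)
pooled_se = np.sqrt(1 / np.sum(wi))

q = np.sum(wi * (yi - pooled) ** 2)                # Cochran's Q
df = len(yi) - 1
i2 = max(0.0, (q - df) / q) * 100                  # I² as a percentage

print(f"Pooled effect {pooled:.3f} (SE {pooled_se:.3f}), Q = {q:.2f}, I2 = {i2:.1f}%")
```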

Present Results With PRISMA Items

Build the paper to match the PRISMA sections. Start with a short abstract that carries the aim, sources, study count, main findings, and limits. In the methods, describe each step from protocol to synthesis in the same words you used in the registry. In the results, open with the flow counts, then the study table, then the appraisal table, and then the synthesis. Make figures large and readable.

Write Clear Methods And Reproducible Appendices

Put the full database strings in an appendix. Add the date run, platform, and filters. Include the rulebook for screening and the blank extraction form. Add any code used to compute effect sizes. If you used review software, say which version and the settings you picked.

State the limits of the review up front. If you cut the search by language or date, give a reason that ties back to the aim. If you could not reach authors for missing data, say how that might skew effect sizes.

PRISMA Review Steps And Flow Diagram Tips

Name each step with the same labels you use in the diagram. Keep counts aligned across text, tables, and the figure. If you update the search before submission, add a short addendum with the new date and changes in counts. If the update did not change the conclusions, say so plainly.

Software And File Hygiene

Pick one place as the source of truth for records and decisions. Back it up. Use a naming scheme for files that includes date, step, and short labels. Track versions for your rulebook, extraction form, and code. Small habits here save time during peer review.
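
If it helps to make the scheme mechanical, a tiny helper can stamp the date, step, and label onto every file. The pattern below is just one possible convention, not a PRISMA requirement.

```python
# A minimal sketch of a date_step_label naming convention for review files.
from datetime import date

def review_filename(step: str, label: str, ext: str = "csv") -> str:
    """Build names like 2025-06-01_screening_title-abstract.csv."""
    return f"{date.today().isoformat()}_{step}_{label}.{ext}"

print(review_filename("screening", "title-abstract"))
```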

Set up a shared folder with read only copies of the protocol, rulebook, and forms. Keep a log of questions that came up and the team’s decisions. This helps new team members and keeps calls consistent.

Keep raw exports, deduped sets, and screened sets in separate folders. Label folders with clear dates and brief tags. Share the plan with coauthors at the start and pin a copy in your workspace. Small habits like these keep the review tidy and cut friction during peer review and final submission.

Common Pitfalls And Simple Fixes

Vague eligibility rules lead to slow screening and many ties. Fix by writing short, testable rules with a few example cases. Poor search strings miss whole lines of work. Fix by testing recall and asking a librarian to review the strings. Missing counts break the flow. Fix by logging numbers at each gate on the day you pass it.

Mixing outcomes or time points produces pooled results that are hard to interpret. Fix by predefining the main time point and running sensitivity analyses for others. Dropping appraisal judgments from the write-up leaves readers guessing. Fix by adding a compact table with domain calls and short justifications.

Authorship, Transparency, And Data Sharing

State roles with a taxonomy like CRediT. Disclose ties to funders and any paid services. Share the dataset and code when you can, or at least share the forms and the rulebook. A short repository link helps readers reuse your work and speeds later updates.

Reporting With The PRISMA 2020 Checklist

The PRISMA 2020 checklist has 27 items that map to title, abstract, introduction, methods, results, and discussion. Work through each item while you draft. Tick off the item number next to the heading in your file so nothing slips. Many journals ask you to upload the filled checklist with page numbers. Keep that file ready from day one.

Use the flow template that matches your sources. The PRISMA website also offers an expanded checklist with extra advice per item. If your review is an update, download the template that adds prior counts and new records. Using the right template saves rounds of author queries.