How to Do a Systematic Review in Nursing: Step by Step

Plan a protocol, register it on PROSPERO, run a reproducible search, screen in pairs, extract and appraise, synthesise, then report with PRISMA.

Nurses ask tough clinical questions every shift. A systematic review turns that curiosity into a reliable answer that teams can act on. This guide gives you a clean path from idea to write-up, with tactics that save time and reduce bias. You’ll see exactly what to write, what to save, and how to keep the work audit-ready.

You don’t need fancy software to get started. You do need a tight question, a protocol that others can follow, and documentation. The steps below match widely used methods and reporting standards so your review lands with readers, peer reviewers, and managers who fund practice change.

Doing a systematic review in nursing: step-by-step

Think of the process as a series of checkpoints. You’ll set your question, pre-register, design the search, screen with a partner, extract data, judge study quality, choose a synthesis path, grade certainty, and write to the PRISMA template. The table below doubles as a protocol planner you can paste into your document.

Protocol planner for a nursing systematic review

| Step | Decide this | Practical tips |
| --- | --- | --- |
| Question | PICO, PICo, or SPIDER framing | Pick one frame that matches your design and stick with it across text, tables, and forms |
| Eligibility | Inclusion and exclusion rules | Define study design, setting, years, language, and outcomes before you search |
| Registration | Public record on PROSPERO | Register once the protocol is stable; cite the record in your manuscript |
| Databases | Which sources you’ll search | MEDLINE, CINAHL, Embase, CENTRAL; add gray sources if needed |
| Strategy | Keywords and subject headings | Draft with a librarian; pilot on one database and refine |
| Deduping | How you’ll remove duplicates | Save raw exports and the deduped file; record counts for your flow chart |
| Screening | Title and abstract, then full text | Two reviewers at both stages with a plan to resolve ties |
| Extraction | Data items and formats | Build a tested form; collect both outcome data and study descriptors |
| Appraisal | Risk-of-bias tool | Match tool to design: RoB 2, ROBINS-I, CASP, or JBI |
| Outcomes | Primary and secondary | State units and time points; plan how to convert if scales differ |
| Synthesis | Meta-analysis or narrative plan | Pre-specify models, effect measures, and subgroup ideas |
| Certainty | Approach to grading | Use GRADE with clear footnotes that point to the reason for each judgement |
| Reporting | Items you will include | Follow PRISMA 2020 with a flow diagram and a search appendix |
| Updates | Plan for refresh | State when you’ll rerun searches and how you’ll flag new trials |

Pick a question that fits nursing practice

Good questions are precise and useful at the bedside. PICO suits intervention effects; PICo works for qualitative evidence about experiences; SPIDER can help with mixed methods and service evaluations. Write the question once as a sentence, once as a table, and once as a search block. That alignment keeps your logic tight from start to finish.

Write and register your protocol

Draft the aims, eligibility rules, databases, search blocks, screening plan, data items, risk-of-bias tools, synthesis plan, and timelines. When the team agrees, register on PROSPERO. The record time-stamps your intent and helps readers see what changed.

Build a search you can rerun

List every database and platform you’ll use and capture the exact search strings. Blend subject headings with text words and add filters only when they’re peer-reviewed and safe for your topic. Save raw exports, the deduped file, and a log that records dates, platforms, and hit counts. That log feeds your PRISMA flow chart and makes updates painless.
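
If your team handles exports by hand, even a small script keeps the counts honest. Here is a minimal deduplication sketch in Python, assuming each export has already been converted to CSV with "title" and "doi" columns; the file names are placeholders for your own.

```python
import csv

# Minimal deduplication sketch. Assumes each database export has already been
# converted to CSV with "title" and "doi" columns; file names are placeholders.
exports = ["medline.csv", "cinahl.csv", "embase.csv", "central.csv"]

seen, unique, counts = set(), [], {}
for name in exports:
    with open(name, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    counts[name] = len(rows)                       # raw hits feed the flow chart
    for row in rows:
        # Prefer the DOI as a key; fall back to a whitespace-normalised title.
        key = (row.get("doi") or "").lower().strip() or \
              " ".join(row["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(row)

print("Raw counts per source:", counts)
print("Records after deduplication:", len(unique))

# Save the deduped set alongside the raw exports for the audit trail.
with open("deduped.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=unique[0].keys())
    writer.writeheader()
    writer.writerows(unique)
```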

Screen in pairs without drift

Run calibration on a small batch and tweak rules before the main screen. Work independently for titles and abstracts, then compare. Resolve ties by discussion or a third reviewer. Repeat for full texts. Keep counts by reason for exclusion so your flow diagram is complete and your decisions hold up under scrutiny.
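
Cohen's kappa is a quick way to quantify agreement after the calibration round. A small sketch with invented screening decisions; the helper function is ours, not part of any screening tool:

```python
# Cohen's kappa for two screeners on the same calibration batch.
# 1 = include, 0 = exclude; the decision lists below are invented for illustration.
def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n          # raw agreement
    p_a, p_b = sum(a) / n, sum(b) / n                          # inclusion rates
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)               # chance agreement
    return (observed - expected) / (1 - expected)

reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
reviewer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")   # 0.60 for these lists
```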

Extract data with a tested form

Define the fields once and test them on a few papers. Typical fields include study design, setting, sample size, demographics, intervention and comparator details, outcome names, units, time points, and numerical results with measures of spread. Add notes for funding and conflicts. Store both the clean spreadsheet and the raw exports.
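
One way to keep the form honest during piloting is a short completeness check against a data dictionary. A sketch, with illustrative field names rather than a required set:

```python
# A data dictionary plus a completeness check for the extraction form.
# The field names are illustrative; adapt them to your protocol.
FIELDS = [
    "study_id", "design", "setting", "sample_size", "intervention",
    "comparator", "outcome_name", "unit", "time_point",
    "effect_estimate", "ci_lower", "ci_upper", "funding", "conflicts",
]

def missing_fields(row: dict) -> list[str]:
    """Return the fields still blank so gaps are caught during piloting."""
    return [f for f in FIELDS if not str(row.get(f, "")).strip()]

pilot_row = {"study_id": "Smith2021", "design": "RCT", "sample_size": 120}
print(missing_fields(pilot_row))    # everything not yet extracted for this paper
```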

Appraise study quality with the right tool

Use RoB 2 for randomised trials, ROBINS-I for non-randomised interventions, and the relevant JBI or CASP checklist for qualitative or cross-sectional designs. Two reviewers judge each domain. Record justifications in plain language so readers can track how you reached each judgement.

Choose your synthesis path

If studies are clinically close and report compatible outcomes, a meta-analysis can produce a pooled estimate. If designs, measures, or contexts differ too much, a narrative approach that groups findings by theme, population, or setting keeps the signal clear without forcing numbers to fit.

When a meta-analysis makes sense

Pick the effect measure that matches your outcome: risk ratio or odds ratio for events; mean difference or standardised mean difference for continuous scores. Plan a random-effects model when you expect variation across studies. Check heterogeneity with I² and tau² and look for reasons it might be high. Run leave-one-out checks and small-study assessments where they apply.
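
To make the moving parts concrete, here is a minimal DerSimonian-Laird random-effects pooling sketch with invented inputs. In practice RevMan or R's meta/metafor packages do the same job with less code; this just shows where tau² and I² come from.

```python
import math

# DerSimonian-Laird random-effects pooling on the log risk-ratio scale.
# Each study contributes (log RR, variance of log RR); the numbers are invented.
studies = [(math.log(0.60), 0.02), (math.log(1.10), 0.03), (math.log(0.75), 0.04)]

y = [e for e, v in studies]
w = [1 / v for e, v in studies]                 # inverse-variance (fixed) weights

# Cochran's Q, then the method-of-moments estimate of tau^2.
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q) * 100               # heterogeneity as a percentage

# Random-effects weights fold tau^2 into each study's variance.
w_re = [1 / (v + tau2) for e, v in studies]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"RR {math.exp(pooled):.2f} (95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")
print(f"tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")   # here: RR 0.78 (0.54 to 1.14), I^2 = 73%
```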

When a narrative synthesis makes sense

State how you’ll group studies, how you’ll summarise direction and size of effect, and how risk-of-bias judgements influence confidence in each statement. Bring tables forward so readers can scan comparisons without flipping back and forth.

Grade certainty across outcomes

Use GRADE to rate each outcome across domains such as risk of bias, inconsistency, indirectness, imprecision, and publication bias. Start at high for randomised trials and at low for observational designs, then rate up or down as the evidence warrants. Present a Summary of Findings table that pairs plain language with numbers.

Report in a way peers can reuse

Write to the PRISMA 2020 structure: title and abstract, background, methods, results, and interpretation, with a flow diagram. Add a search appendix that includes every string, platform, and date. Link to your data and forms so teams in other hospitals can replicate your steps with little friction.

How to conduct a nursing systematic review: workflow

Roles and setup

Line up at least two reviewers, a content advisor, and a librarian or an information specialist. Pick tools you know: a reference manager, a spreadsheet, and a shared folder with version control. Name files with a prefix that sorts well, like 01_protocol, 02_search, 03_screening, 04_extraction, 05_analysis, 06_writeup.
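
If you want that folder tree created the same way on every machine, a few lines of Python will do it; the project name here is only an example.

```python
from pathlib import Path

# Create the sortable folder tree named above; safe to rerun at any time.
root = Path("pressure_injury_review")          # project name is an example
for folder in ["01_protocol", "02_search", "03_screening",
               "04_extraction", "05_analysis", "06_writeup"]:
    (root / folder).mkdir(parents=True, exist_ok=True)
```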

Search strategy that stands up

Start in one database and draft your blocks with a librarian if you can. Mix controlled vocabulary with text words and map synonyms. Add proximity operators to catch phrasing quirks. Harvest terms from sentinel papers and index records. Once the pilot retrieves the right set, translate across databases and keep a table of changes.
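
As a concrete, entirely hypothetical illustration, a pilot block in Ovid MEDLINE syntax might look like the lines below: lines 1 and 4 are exploded subject headings, lines 2 and 5 are text words with proximity operators, and lines 3, 6, and 7 combine the sets. Treat it as a shape to copy, not a vetted strategy, and have a librarian review your own.

```
1  exp Hand Hygiene/
2  (hand adj2 (hygiene or washing or disinfection)).ti,ab.
3  1 or 2
4  exp Cross Infection/
5  ((healthcare or hospital) adj3 infection*).ti,ab.
6  4 or 5
7  3 and 6
```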

PRISMA-ready documentation

As you work, maintain four living files: a search log, a screening log, a data dictionary, and an analysis plan. These files turn messy workflows into clean audit trails. They also shorten peer review because answers to common queries are already in the supplement.

Transparent write-up

Use the PRISMA checklist as your table of contents. In methods, write short, direct sentences. In results, present study flow, an overview table of included studies, risk-of-bias figures, and then the findings by outcome. Keep interpretations separate from raw findings. Call out where judgement could swing either way and point to the data that drove your choice.

Common appraisal and bias tools for nursing reviews

| Tool | Use | What it checks |
| --- | --- | --- |
| RoB 2 | Randomised trials | Randomisation, deviations, missing data, measurement, reporting |
| ROBINS-I | Non-randomised interventions | Confounding, selection, classification, deviations, missing data, measurement, reporting |
| JBI checklists | Qualitative, cross-sectional, case series, and more | Design-specific criteria that flag common threats |
| CASP | Quick appraisal across designs | Ten to twelve questions that guide a reasoned judgement |
| AMSTAR 2 | Reviews of reviews | Protocol, search, selection, appraisal, synthesis, and bias |
| GRADE | Across-study certainty by outcome | Bias, inconsistency, indirectness, imprecision, reporting bias |

Sample methods text you can reuse

Question and protocol. We framed the question using PICO. The protocol was agreed by all authors and registered on PROSPERO (ID: XXXXX).

Search. A librarian helped build database-specific strategies for MEDLINE, CINAHL, Embase, and CENTRAL. We combined controlled vocabulary with text words and recorded the full strings and dates in the appendix.

Screening. Two reviewers screened titles and abstracts in duplicate after a calibration round, then reviewed full texts. Disagreements were resolved by discussion or a third reviewer.

Data extraction. Using a tested form, we extracted study descriptors, intervention details, outcomes, and numerical results. Authors were contacted when data were incomplete.

Risk of bias. We used RoB 2 for trials and ROBINS-I for non-randomised studies, with two reviewers per study. Judgements and quotes that supported them are in the supplement.

Synthesis. When studies were sufficiently alike, we pooled effects using a random-effects model. Otherwise, we grouped findings thematically by outcome and setting.

Certainty. We used GRADE to rate each outcome and built a Summary of Findings table.

Frequent pitfalls and clean fixes

  • Vague question: Tighten the population and outcome. If your sample would include both ICU and community settings, split into two questions.
  • Search drift: Freeze a v1 search, label updates as v2, v3, and keep the strings side by side in the appendix.
  • One-person screening: Add a second screener for a 10–20% sample to check agreement, then scale up to all records.
  • Data gaps: Contact authors with a short, specific request. Save emails in your folder so readers can see what you asked.
  • Forced pooling: If outcomes or follow-up windows don’t line up, park the meta-analysis and build a clear narrative.
  • Messy files: Use a single folder tree and stable file names so teammates can jump in without hunting.

Project timelines that actually work

A lean team can move from protocol to submission in four to six months with steady hours each week. A typical cadence: two weeks for the protocol and registration; two to four weeks for search and deduping; two to four weeks for screening; four weeks for extraction and appraisal; two to four weeks for analysis; four weeks for writing and checks.

Ethics and data sharing

Most reviews do not need formal ethics review because they use published data. If you contact authors for raw data, follow local rules for data handling. Post your forms, search strings, and cleaned extraction sheet on a trusted repository so others can reuse the work and build on it.

Software picks and light stats

Use tools your team knows. RevMan, JASP, R (meta or metafor), or a spreadsheet can all deliver a sound analysis if the inputs are clean. Convert outcomes to common metrics, check direction so higher scores always mean the same thing, and keep one analysis plan that spells out every choice.
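
As a concrete example of those direction and metric checks, the sketch below flips a reversed scale and converts mean differences to a standardised mean difference with Hedges' small-sample correction. All numbers are invented.

```python
import math

# Put every continuous outcome on one scale and in one direction before pooling.
# All numbers here are invented.

def hedges_g(mean_diff, sd1, n1, sd2, n2):
    """Standardised mean difference with Hedges' small-sample correction."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # correction factor
    return (mean_diff / sd_pooled) * j

# Study A: higher score = worse pain, so a negative difference favours treatment.
print(hedges_g(-1.2, 2.1, 40, 2.4, 38))
# Study B: higher score = better comfort, so flip the sign first
# to keep "negative favours treatment" consistent across studies.
print(hedges_g(-0.9, 1.8, 55, 1.7, 52))
```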

When qualitative evidence is the target

For experiences and barriers, use a JBI or CASP route with a search that reaches qualitative databases and gray sources. Screen with the same two-stage method. Extract contexts, participants, phenomena of interest, and key findings with supporting quotes. Synthesise with a structured approach such as meta-aggregation and link each claim to the studies that back it.

Linking your work to daily care

Translate pooled effects into plain speech: absolute risk changes, number needed to treat, or minutes saved per shift. Flag resource needs, training, and equity checks. Nursing teams use reviews when the path from result to action is spelled out without jargon.
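
The arithmetic is short enough to show in full. With invented numbers, a baseline risk of 20% and a pooled risk ratio of 0.75 give an absolute risk reduction of 5% and a number needed to treat of 20:

```python
# Turn a pooled relative effect into bedside numbers; the inputs are invented.
control_risk = 0.20                 # baseline event rate in your own setting
risk_ratio = 0.75                   # pooled RR from the meta-analysis

treated_risk = control_risk * risk_ratio
arr = control_risk - treated_risk   # absolute risk reduction
nnt = 1 / arr                       # number needed to treat

print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")   # ARR = 5%, NNT = 20
```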

Pre-submission checks

  • Every PRISMA item ticked with page numbers
  • PROSPERO ID in the abstract and methods
  • Flow chart totals match your logs
  • Risk-of-bias notes trace back to quotes or tables
  • Data and code shared where allowed
  • Plain language summary that a busy charge nurse can read in one minute

Where to learn more

For reporting items, use the PRISMA 2020 checklist. For methods across planning, searching, bias, and pooling, keep the Cochrane Handbook open while you work. For registration help, follow the steps on the PROSPERO site; the form is short and takes little time to complete.