Can One Person Conduct A Systematic Review? | Editorial Reality

Yes, a single researcher can run a systematic review, but standards expect duplicate screening and extraction to control bias.

Plenty of scholars work with limited time, funding, or collaborators. The question is not whether one person can sit down and synthesize studies. The real issue is whether the end product meets accepted methods so readers can trust the findings. Below, you’ll find what solo work changes, what must stay the same, and the practical safeguards that raise confidence in a one-author review.

What “Systematic” Really Demands

“Systematic” signals a prewritten protocol, a documented search, clear inclusion criteria, duplicate checks at key steps, and transparent reporting. These pieces don’t exist to slow you down; they create a trail others can verify. When one researcher handles every step, the risk is unchallenged decisions. The goal, then, is to keep the method strict enough that readers get a dependable answer.

Solo Versus Team: Tasks, Standards, And Workarounds

A team spreads the workload and guards against mistakes; a solo path needs compensating routines. Use the table below as a quick map of what the gold standard asks for and how a lone reviewer can approximate it without bending the rules.

| Task | Best Practice | Solo Workaround |
| --- | --- | --- |
| Protocol | Register a protocol with clear PICO, outcomes, and plan. | Pre-register and invite open comments; time-stamp all changes. |
| Search | Design broad, reproducible strategies per database. | Pilot search strings, log each run, export queries verbatim. |
| Deduplication | Remove duplicates before screening. | Use two tools (e.g., reference manager + screening app) and compare logs (see the sketch below the table). |
| Title/Abstract Screening | Two independent reviewers; disagreements resolved by a third. | Single screener with sampled second checks via colleague or crowdsourced spot review. |
| Full-Text Screening | Two independent decisions; record reasons for exclusion. | Single screener with audit sample by an external reader; store PDFs and notes. |
| Risk Of Bias | Independent ratings by two people. | Rate twice, blinded to the first rating; document decision rules. |
| Data Extraction | Two extractors independently; reconcile differences. | Extract twice on different days; compare forms; spot-check by a peer if possible. |
| Synthesis | Predefined model; justify subgroup rules. | Stick to the protocol; run sensitivity analyses that test key choices. |
| Reporting | Follow a recognized checklist end-to-end. | Use the full checklist; include appendices with forms and raw decisions. |
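As a concrete example of the deduplication workaround, the short sketch below compares the record sets two tools kept after de-duplication and flags anything only one of them retained. The file names and the doi column are assumptions; match them to whatever your reference manager and screening app actually export.

```python
# Minimal sketch: compare de-duplicated exports from two tools by DOI.
# File names and the "doi" column are assumptions; adapt to your exports.
import pandas as pd

ref_mgr = pd.read_csv("refmanager_export.csv")
screen_app = pd.read_csv("screening_app_export.csv")

def doi_set(frame):
    """Normalise DOIs so formatting differences don't look like mismatches."""
    return set(frame["doi"].dropna().str.strip().str.lower())

a, b = doi_set(ref_mgr), doi_set(screen_app)

print(f"Kept only by the reference manager: {len(a - b)}")
print(f"Kept only by the screening app:     {len(b - a)}")

# Export the disagreements for a manual check before screening starts.
pd.Series(sorted((a - b) | (b - a)), name="doi").to_csv(
    "dedup_disagreements.csv", index=False
)
```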

Running A Systematic Review Alone: What It Takes

This section walks through each stage with tactics that fit a one-author setup while staying inside mainstream methods.

Write And Lock A Protocol

Draft a protocol that nails down the question, eligibility rules, outcomes, and analysis plan. Register it and save a local, time-stamped copy. If you must refine scope later, state the reason and the exact date. That change log proves you didn’t shape methods around the results.

Plan Searches You Can Defend

Cover at least two large bibliographic databases plus trial registries where relevant. Keep full search strings, limits, and dates in an appendix. The Cochrane guidance even notes that independent duplicate searching isn’t required; what matters is a sensitive, documented strategy run by someone trained to do it. That’s friendly to solo work if your search record is meticulous.
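A search log does not need special software; appending one row per run to a plain CSV is enough to make the appendix easy to compile. The sketch below is only an illustration: the file name, the columns, and the example query are assumptions, not a required format.

```python
# Minimal sketch: append one row per search run to a CSV log.
# The log file name and columns are illustrative, not a required format.
import csv
from datetime import date
from pathlib import Path

LOG = Path("search_log.csv")

def log_search(database, query, hits, limits=""):
    """Record a single database search exactly as it was run."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "database", "query", "limits", "hits"])
        writer.writerow([date.today().isoformat(), database, query, limits, hits])

# Example run with placeholder values; paste the query verbatim from the interface.
log_search(
    database="MEDLINE (Ovid)",
    query="(exp Stroke/ OR stroke.ti,ab.) AND exp Exercise/",
    limits="2010-current; English",
    hits=412,
)
```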

Screening Without A Built-In Second Reviewer

Duplicate screening catches missed papers and curbs over-exclusion. If you’re alone, recreate that safety net. One practical route is to single-screen titles and abstracts, then invite a colleague to check a random sample. You can also flag “uncertain” records and send only those for a quick second view. Record every decision and the reason for exclusion at full text.
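One way to assemble that second-view packet is to keep your decisions in a simple table and build the sample programmatically. In the sketch below, the file names, the column names, and the 20% sampling fraction are all assumptions for illustration.

```python
# Minimal sketch: build a second-reader packet from single-screener decisions.
# Assumes a CSV with "record_id" and "decision" columns
# (values: include / exclude / uncertain); names are illustrative.
import pandas as pd

screened = pd.read_csv("title_abstract_decisions.csv")

uncertain = screened[screened["decision"] == "uncertain"]
confident_excludes = screened[screened["decision"] == "exclude"]

# All uncertain records plus a 20% random sample of the confident excludes.
audit_sample = confident_excludes.sample(frac=0.20, random_state=1)
packet = pd.concat([uncertain, audit_sample])

packet.to_csv("second_reader_packet.csv", index=False)
print(f"{len(packet)} records sent for a second view "
      f"({len(uncertain)} uncertain, {len(audit_sample)} sampled excludes)")
```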

Double Data Extraction—Even If You’re One Person

Extract each study twice on different days using separate copies of the form. Compare the two versions and resolve conflicts by re-reading the study. For large tables, run a small program or spreadsheet formulas to spot mismatches. Where you can, ask another researcher to audit a subset.
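If both passes use the same spreadsheet layout, a few lines of code can list every field where they disagree, so you only re-read the studies that need it. The file names and the shared study_id column are assumptions.

```python
# Minimal sketch: list the cells where two extraction passes disagree.
# Assumes both files share a "study_id" column and identical column names.
import pandas as pd

pass1 = pd.read_csv("extraction_pass1.csv").set_index("study_id").sort_index()
pass2 = pd.read_csv("extraction_pass2.csv").set_index("study_id").sort_index()

# DataFrame.compare() keeps only the cells that differ between the two passes.
conflicts = pass1.compare(pass2)

conflicts.to_csv("extraction_conflicts.csv")
print(f"{len(conflicts)} studies have at least one mismatched field")
```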

Risk-Of-Bias Ratings That Stand Up To Scrutiny

Pick the tool that fits your design mix (e.g., RCTs or non-randomized studies). Apply decision rules consistently. To mimic duplicate ratings, rate each domain once, step away, then rate again without seeing the first pass. If a co-reviewer can spare an hour, have them rate a slice so you can report agreement.
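When you do report agreement, percent agreement plus Cohen's kappa covers what most editors want to see. The sketch below computes both from two lists of domain-level judgements; the three category labels and the ratings themselves are placeholders.

```python
# Minimal sketch: agreement between two risk-of-bias rating passes.
# Ratings are placeholders; swap in your own domain-level judgements.
from collections import Counter

pass1 = ["low", "low", "high", "some", "low", "high", "some", "low"]
pass2 = ["low", "some", "high", "some", "low", "high", "low", "low"]

n = len(pass1)
observed = sum(a == b for a, b in zip(pass1, pass2)) / n

# Chance agreement from each pass's marginal frequencies (Cohen's kappa).
c1, c2 = Counter(pass1), Counter(pass2)
expected = sum(c1[label] * c2[label] for label in set(pass1) | set(pass2)) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```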

Synthesis And Sensitivity Checks

Keep the model and subgroup plan tied to the protocol. If data are sparse or heterogeneous, narrate the findings and state why a meta-analysis isn’t a fit. Run sensitivity analyses that test removal of high-bias studies, alternative effect measures, or different random-effects estimators. Report exactly what changed.
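As one concrete way to run such a check, the sketch below pools placeholder log effect sizes with a standard DerSimonian-Laird random-effects model and repeats the pooling with each study left out in turn. It is an illustration of the leave-one-out idea, not a substitute for a full meta-analysis package.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling with a
# leave-one-out sensitivity check. Effect sizes and variances below
# are placeholders (e.g. log risk ratios).
import numpy as np

def dl_pool(y, v):
    """Return the random-effects pooled estimate and its standard error."""
    w = 1.0 / v                                 # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study variance
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se

y = np.array([0.12, 0.35, -0.05, 0.28, 0.40])   # placeholder effects
v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])    # placeholder variances

est, se = dl_pool(y, v)
print(f"All studies: {est:.3f} (95% CI {est - 1.96*se:.3f} to {est + 1.96*se:.3f})")

# Leave-one-out sensitivity: how much does any single study move the estimate?
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    est_i, _ = dl_pool(y[keep], v[keep])
    print(f"Without study {i + 1}: {est_i:.3f}")
```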

What Methods Bodies Say About Duplicate Steps

Leading handbooks and appraisal tools stress duplication at screening and extraction because those steps shape the evidence base. Reporting checklists also push full transparency across the flow diagram and methods. Two helpful anchors are the PRISMA 2020 reporting checklist and the Cochrane Handbook chapters on searching, selection, and data collection; cite them in your methods section so readers can verify you followed them.

See: PRISMA 2020 and Cochrane chapter on searching and selection.

Evidence On Single Versus Duplicate Screening

Research shows that single screening can miss eligible studies, while duplicate screening improves retrieval. That doesn’t mean solo work is banned; it means you need visible safeguards. If capacity is tight, a hybrid is realistic: one screener excludes the clearly ineligible records, and a second person reviews a random sample of those exclusions plus all “maybe” records. Spell out the sampling fraction and the yield from that audit in your results, as in the sketch below.
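Those two numbers are quick to produce if the audit decisions come back in a table. A minimal sketch, assuming hypothetical file and column names:

```python
# Minimal sketch: report the audit sampling fraction and its yield.
# File names and the "second_reader_decision" column are assumptions.
import pandas as pd

excluded = pd.read_csv("single_screener_excludes.csv")
audited = pd.read_csv("second_reader_packet_returned.csv")

fraction = len(audited) / len(excluded)
overturned = (audited["second_reader_decision"] == "include").sum()

print(f"Audited {len(audited)} of {len(excluded)} excluded records "
      f"({fraction:.0%}); {overturned} moved back to full-text review")
```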

Reporting That Builds Trust

Transparency softens the concern about one-author bias. Use a flow diagram that lists counts at each step. Provide a table of excluded full texts with reasons. Share the extraction form, the raw outcome table, and your risk-of-bias judgments in an online supplement. Make every decision reproducible.

What To Disclose In The Manuscript

  • That one person conducted screening, extraction, and risk-of-bias steps.
  • Any audit checks by colleagues, including sample sizes and agreement.
  • All deviations from the protocol with dates and rationale.
  • Data and code availability with links.

When A Solo Review Can Be Reasonable

Some topics have narrow scopes, short evidence trails, or urgent timelines. In those cases, a lone reviewer may deliver a usable summary if the protocol is tight and the audit footprint is visible. Rapid review methods sometimes allow a single screener with prespecified safeguards. If you lean on those shortcuts, label the piece as a rapid approach and explain the trade-offs.

Reasons Editors Push Back On One-Author Reviews

Journals often expect duplicate screening and extraction because appraisal tools rate these as core markers of quality. Common reasons for rejection include no protocol registration, opaque search methods, single-pass extraction with no checks, or missing risk-of-bias tables. Address these ahead of time to avoid a desk rejection.

Minimum Methods If You’re Working Alone

Concrete Safeguards You Can Implement Today

  1. Register a protocol and post the full search strategies.
  2. Log every decision with reasons. Keep PDFs and notes.
  3. Use screening software with audit trails.
  4. Double-enter the extraction yourself; compare versions.
  5. Invite a colleague to check a random 10–20% at each key step.
  6. Share forms, code, and de-identified extraction sheets.

Common Tools That Help A Lone Reviewer

Reference managers handle deduplication and storage. Screening apps track decisions and reasons. Statistics packages run meta-analysis and sensitivity checks. Pick tools that export clean logs so editors can trace your path.

| Stage | Risk When Solo | Mitigation Moves |
| --- | --- | --- |
| Eligibility Screening | Over-exclusion of borderline studies. | Flag “maybe” items; second-reader audit of a random sample. |
| Data Extraction | Transcription errors or selective capture. | Two-pass extraction; spreadsheet checks; subset audit. |
| Risk Of Bias | Lenient or uneven judgments. | Decision rules; blind repeat rating; external spot-check. |
| Synthesis Choices | Model or subgroup choices drive results. | Pre-specified plan; sensitivity analyses; full disclosure. |
| Write-Up | Under-reporting of methods, which limits trust. | Follow a full checklist; add appendices with raw materials. |

Audit-Ready Documentation

Think of your files as a box an editor could open and retrace each step. Keep your protocol, search strings, database logs, screening exports, extraction sheets, risk-of-bias forms, analysis code, and a readme that explains how to reproduce the figures.
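A small script that checks the box is complete before submission costs little and catches forgotten files. The folder layout and file names below are assumptions; adjust them to your own project.

```python
# Minimal sketch: verify the audit folder contains the expected artifacts.
# The paths listed here are illustrative, not a required structure.
from pathlib import Path

EXPECTED = [
    "protocol/protocol_v1.pdf",
    "search/search_log.csv",
    "screening/title_abstract_decisions.csv",
    "screening/fulltext_exclusions_with_reasons.csv",
    "extraction/extraction_pass1.csv",
    "extraction/extraction_pass2.csv",
    "risk_of_bias/ratings.csv",
    "analysis/analysis_code.py",
    "README.md",
]

missing = [p for p in EXPECTED if not Path(p).exists()]
if missing:
    print("Missing before submission:")
    for p in missing:
        print(f"  - {p}")
else:
    print("All expected audit files are in place.")
```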

What The Evidence Base Says About Single-Author Reviews

Single-author pieces exist in the literature, and many are indexed. That shows it can be done and published. Still, acceptance tends to hinge on how well the author signals safeguards and follows mainstream reporting. If your topic is large or complex, recruiting a part-time second reviewer for the critical steps pays dividends.

A Practical Timeline For One Person

Six-Step Plan With Realistic Milestones

  1. Week 1–2: Draft and register the protocol; pilot searches.
  2. Week 3–4: Run full searches; deduplicate; store all records.
  3. Week 5–6: Screen titles/abstracts; send a 10–20% sample for audit.
  4. Week 7–8: Retrieve full texts; record reasons for exclusion; second audit sample.
  5. Week 9–10: Double-enter extraction; rate risk of bias; invite a small audit.
  6. Week 11–12: Synthesize; run sensitivity checks; prepare the PRISMA flow and appendices.

Bottom Line For Solo Reviewers

One researcher can deliver a method-driven evidence synthesis that helps decision-makers. The bar doesn’t drop because the team is small. Keep duplicate checks in spirit, show your work at every step, and point readers to the files that prove it. That’s how a solo project earns trust.