How To Do A Medical Literature Review Step-By-Step | Quick Safe Smart

Plan a focused question, register a protocol, search widely, screen with set criteria, extract carefully, assess bias, and report with PRISMA.

Medical decisions need a clear map of existing evidence. A well-run literature review gives that map. The steps below keep your work tight, transparent, and repeatable, from the first idea to the final manuscript.

Medical Literature Review Step-By-Step: From Idea To Protocol

Set A Precise Question

Start with a single clinical question that fits one main population and one main topic. Use a structure such as PICO or PICOS. Define the Population you care about, the Intervention or exposure, a clear Comparator, the key Outcomes, and any Study design limits. Write one sentence that states the goal and a second that states what you will not include.

Pick The Review Type

Match your goal to a format. A narrative review gives a broad overview. A scoping review maps themes and gaps. A systematic review follows a protocol and aims for a complete search with prespecified steps. Choose once and commit, since methods differ.

Review Types, When To Use Them, And Typical Outputs

Type | Best Use | Usual Output
Narrative | Background, context, key themes | Descriptive synthesis with broad sources
Scoping | Map topics, terms, and gaps | Conceptual map, counts, theme tables
Systematic | Answer a focused question | PRISMA flow, risk-of-bias tables, pooled or narrative findings

Write And Register A Protocol

Draft a protocol before the first search. List the question, databases, full search strings, inclusion and exclusion rules, screening plan, extraction fields, bias tools, and the plan for synthesis. If you plan a systematic review, register on PROSPERO to create a time-stamped public record that guards against post hoc changes and duplication.

Sources And Search Strategy

Select Databases And Source Types

Use at least two major databases to limit missed studies. Good core options include MEDLINE via PubMed’s Advanced Search Builder, Embase, and the Cochrane Library. Add CINAHL for nursing topics, PsycINFO for mental health, Web of Science or Scopus for citation chasing, and trial registries for unpublished or ongoing work. Grey literature such as theses, conference proceedings, and agency reports helps reduce publication bias. Keep a short log of every source you use and why it was chosen.

Design The Search Strings

Turn the PICO terms into synonyms and controlled vocabulary. Combine synonyms with OR, then link the concept sets with AND. Use truncation and quotation marks for exact phrases where needed. For PubMed, test Medical Subject Headings (MeSH) and map them to entry terms. Keep a log of every attempt and save each final string. Run a pilot search and check whether known key studies appear; if not, tune the terms until they do.
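The OR-then-AND logic can be sketched as a short script. The terms below are illustrative placeholders, not a validated search strategy; a real string would need librarian review and database-specific syntax.

```python
# Illustrative PICO synonym sets (placeholders, not a validated strategy).
population = ['"type 2 diabetes"', "T2DM", '"diabetes mellitus, type 2"[MeSH]']
intervention = ["metformin", "biguanide*"]          # * = truncation
outcome = ['"glycated hemoglobin"', "HbA1c"]

def or_block(terms):
    """Join synonyms with OR and wrap the set in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Each concept set is ORed internally, then the sets are linked with AND.
query = " AND ".join(or_block(t) for t in (population, intervention, outcome))
print(query)
```

Saving the script alongside the final string makes it easy to log every attempt and rerun the search on a later date.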

Document Every Choice

Record the date, database, platform, and exact string for each run. Export results with full fields and unique IDs. Store raw exports in a read-only folder. This audit trail lets others repeat your work and helps you correct errors without guesswork.

Screening And Study Selection

Deduplicate And Pilot Your Rules

Import all records into a manager such as EndNote, Zotero, or Covidence and remove duplicates. Before full screening, run a small pilot on 50–100 records to confirm that your rules are clear. Adjust only if both reviewers agree and log the change.
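A rough sense of how deduplication works can be given in a few lines. This sketch keys on a normalized title plus year; real managers such as EndNote or Covidence use richer matching, so treat it as an illustration only.

```python
import re

# Toy records; titles 1 and 2 differ only in case and punctuation.
records = [
    {"id": 1, "title": "Metformin in Type 2 Diabetes.", "year": 2020},
    {"id": 2, "title": "metformin in type 2 diabetes",  "year": 2020},
    {"id": 3, "title": "A different trial",             "year": 2021},
]

def key(rec):
    # Lowercase, strip punctuation, and collapse whitespace for a fuzzy key.
    title = re.sub(r"[^a-z0-9 ]", "", rec["title"].lower())
    return (re.sub(r"\s+", " ", title).strip(), rec["year"])

seen, unique = set(), []
for rec in records:
    k = key(rec)
    if k not in seen:
        seen.add(k)
        unique.append(rec)

print(len(unique))  # 2 unique records remain (ids 1 and 3)
```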

Two Reviewers, Clear Decisions

Use two independent reviewers for titles and abstracts, then for full texts. Mark each record as include, exclude, or unsure at each stage. Break ties with a third reviewer. Record one exclusion reason per study at the full text stage using a short, stable list such as wrong population, wrong design, wrong outcome, or not primary research.

Track Flow With PRISMA

Create a flow diagram that shows the number of records at each step. The PRISMA 2020 templates include ready-to-use figures and checklists that match standard reporting practice.

Data Extraction That Sticks

Build A Reusable Form

Design a form with fields for study ID, setting, participants, design, follow-up, interventions or exposures, comparators, outcomes, effect metrics, and notes. Pilot the form on five to ten studies and tune the wording to cut ambiguity.

Double Extraction For Accuracy

Have two people extract the same studies independently. Resolve differences through agreement and record the resolution. Predefine rules for unit conversions, intention-to-treat data, and how to handle multi-arm trials or crossover designs.

Manage Numbers With Care

Keep original numbers untouched in one sheet and work on a separate analysis sheet for any derived values. Record the exact formulas you use for risk ratios, odds ratios, mean differences, or standardised mean differences. Keep a column that flags imputed or converted data.
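As an example of recording exact formulas, here is a worked 2×2 computation of the risk ratio, odds ratio, and the standard error of the log risk ratio. The event counts are invented for illustration.

```python
import math

# Invented 2x2 counts: events and totals per arm.
events_t, n_t = 12, 100   # treatment arm
events_c, n_c = 24, 100   # control arm

risk_t = events_t / n_t
risk_c = events_c / n_c
risk_ratio = risk_t / risk_c            # 0.12 / 0.24 = 0.5

odds_t = events_t / (n_t - events_t)
odds_c = events_c / (n_c - events_c)
odds_ratio = odds_t / odds_c

# Pooling runs on the log scale; record the formula used in the analysis sheet.
log_rr = math.log(risk_ratio)
se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
print(round(risk_ratio, 2), round(odds_ratio, 2))
```

Keeping these formulas in the analysis sheet, separate from the raw extraction sheet, is what makes the derived values auditable.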

Appraise Study Quality

Match the study design to a validated tool. Rate each domain and give a study-level judgement. Avoid single-reviewer ratings. Pair reviewers and use a consensus rule or a third rater. Keep quotations or page numbers from the source paper to back each call.

Medical Literature Review Step-By-Step: Search To Synthesis

Choose The Right Effect

For binary outcomes, use risk ratio or odds ratio. For time-to-event data, use hazard ratio. For continuous outcomes, use mean difference when scales match and standardised mean difference when they do not. Convert units to a common scale and state the rule in the protocol.
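When scales do not match, the standardised mean difference can be computed as below. This is a sketch with made-up summary statistics, using Cohen's d and the Hedges' g small-sample correction.

```python
import math

# Invented summary statistics for two arms measured on different scales.
mean_t, sd_t, n_t = 14.0, 4.0, 50
mean_c, sd_c, n_c = 16.5, 5.0, 50

# Pooled standard deviation across the two arms.
sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
cohens_d = (mean_t - mean_c) / sp

# Hedges' g applies a small-sample correction factor to d.
j = 1 - 3 / (4 * (n_t + n_c) - 9)
hedges_g = j * cohens_d
print(round(cohens_d, 3), round(hedges_g, 3))
```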

Plan How To Combine Results

Use a fixed effect model when studies are near-identical in question and methods. Use a random effects model when variation across studies is expected. Report the between-study variance (tau²) and the estimator used to obtain it. Present a forest plot with both effect sizes and confidence intervals. Add a table that lists the key design traits for each study so readers can judge whether studies should be pooled.
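One way to see the fixed versus random effects distinction is a minimal inverse-variance pooling sketch with the DerSimonian-Laird tau² estimator. The per-study values are invented, and dedicated software (e.g. RevMan, R's metafor) should be used for real analyses.

```python
import math

# Invented per-study log risk ratios and their standard errors.
log_rr = [-0.69, -0.36, -0.11]
se     = [0.25, 0.20, 0.30]

w = [1 / s**2 for s in se]              # fixed-effect (inverse-variance) weights
fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rr))
df = len(log_rr) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance.
w_re = [1 / (s**2 + tau2) for s in se]
random_eff = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
print(round(math.exp(fixed), 3), round(math.exp(random_eff), 3))
```

Note how the random-effects weights are more even across studies than the fixed-effect weights: extra between-study variance reduces the dominance of large studies.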

Check Heterogeneity And Small-Study Signals

Report Cochran's Q (a chi-square test) and I² with a short note on what they mean for your data. Review the forest plot for direction and spread. If you have ten or more studies, use funnel plots and small-study tests. Treat these as signals, not proof.
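Both statistics fall out of the same inverse-variance weights. A self-contained sketch, with invented study values, assuming effects are on the log risk ratio scale:

```python
# Invented per-study effects (log risk ratios) and standard errors.
effects = [-0.80, -0.05, -0.60, -0.10]
se      = [0.20, 0.18, 0.25, 0.22]

w = [1 / s**2 for s in se]               # inverse-variance weights
pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(wi * (y - pooled)**2 for wi, y in zip(w, effects))
df = len(effects) - 1

# I^2: share of total variation attributable to between-study heterogeneity.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I2 = {i2:.0f}%")
```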

Run Sensitivity And Subgroup Plans You Pre-Stated

Test the strength of your findings by removing outliers, high risk-of-bias studies, or studies with imputed data. Run only the subgroup checks you planned in the protocol, such as age bands, dose, setting, or follow-up length. Explain any extra, unplanned checks as post hoc and label them clearly.

Rate Certainty Of Evidence

Summarise confidence in each key outcome with a transparent method such as GRADE. Weigh risk of bias, inconsistency, indirectness, imprecision, and publication bias. Present one summary table per outcome with plain-language statements and linked numbers.

Write And Report With Confidence

Follow PRISMA For Reporting

Use the PRISMA 2020 checklist to structure the manuscript. Make sure the abstract includes the key items too. Keep methods and results in the past tense for clarity.

Describe Methods So Others Can Repeat Them

Give the full search strings in an appendix. State the versions of all tools and software. Name the databases and platforms, with the dates they were searched. Link the protocol, registration number, data extraction form, and bias tools. A reader should be able to run the same steps and reach the same set of included studies.

Present Clear Figures And Tables

Include the PRISMA flow, a characteristics table, a risk-of-bias summary, and forest plots where needed. When pooling was not possible, give structured narrative text with consistent subheadings such as population, interventions, outcomes, and key messages.

Use Author Roles And Declarations

State who conceived the idea, ran the search, screened, extracted, rated bias, and wrote the draft. Add funding, data-sharing links, and any conflicts.

Time And People: Plan Before You Start

Set Realistic Milestones

Build a simple plan with dates for protocol, searches, screening, extraction, bias assessment, synthesis, and writing. Buffer time for training and disputes. Keep progress visible to the team to avoid drift.

Train The Team

Run short practice rounds for screening, extraction, and bias tools. Agree on rulebooks and store them where all can reach them. Use naming rules for files and versions so that nothing gets lost.

Common Snags And Quick Fixes

Scope Creep

If new lines of inquiry keep appearing, park them in a “next review” list. Protect the main question so you can finish.

Weak Search Yield

Broaden synonyms, drop needless limits, and add citation chasing. Check reference lists of the most relevant studies. Ask a medical librarian to peer-review the strings.

Inconsistent Data

Contact authors with a short, specific request when needed. If no reply, use imputation rules that you prespecified and explain them in the methods.

Mixed Study Designs

Split synthesis by design. Keep RCTs apart from non-randomised studies unless a strong case exists to pool them. Present clear justifications either way.

From Search To Submission: One Page Checklist

  • State one PICO-based question and the review type.
  • Draft and register the protocol.
  • Pick databases and plan grey literature sources.
  • Build and pilot full search strings.
  • Export, deduplicate, and log everything.
  • Screen with two reviewers at each stage.
  • Extract with two reviewers and lock forms.
  • Rate risk of bias with validated tools.
  • Choose the effect metrics and the model.
  • Run planned subgroup and sensitivity checks.
  • Grade certainty of evidence for main outcomes.
  • Write with PRISMA and attach appendices.
  • Share data, code, and forms where allowed.

Transparency, Ethics, And Data Handling

Share What You Can

Post your protocol, extraction templates, and bias forms in a public repository when journal policy allows. Share de-identified data sheets used for plots and pooled estimates. Add clear readme files so another team can follow the trail without guesswork.

Respect Permissions And Rights

Secure access rights for any non-open databases you use. When you reuse figures or tables, seek permission from the copyright holder and credit the source. Keep contact emails short and specific. Store responses and dates so that the record stays complete.

Protect Personal Data

Most reviews use published data only. If you obtain individual patient data from authors, store it on encrypted drives with access limits. Remove names and direct identifiers from working files. Record who can view each folder and when changes were made.

Software, Files, And Version Control

Pick Tools That Fit The Team

Choose one manager for references, one platform for screening, and one spreadsheet or database for extraction. Rayyan, Covidence, EndNote, Zotero, and Excel or REDCap all work well when a team agrees on a single set. Reduce tool hopping, since that causes exports to drift out of sync.

Use Versioned Folders

Create dated folders for raw exports, deduplicated sets, screened sets, and included studies. Freeze each step in a read-only subfolder. Name files with ISO dates and short labels, such as 2025-09-01_medline_raw.csv. A tidy tree saves hours during peer review.
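The folder scheme above can be scripted so every export lands in a consistently named place. A minimal sketch; the stage names simply mirror the stages in the text.

```python
from datetime import date
from pathlib import Path

# ISO date stamp, e.g. 2025-09-01, sorts correctly in any file browser.
stamp = date.today().isoformat()
root = Path("review") / "exports"

# One dated subfolder per pipeline stage.
for stage in ("raw", "dedup", "screened", "included"):
    (root / f"{stamp}_{stage}").mkdir(parents=True, exist_ok=True)

# File names pair the ISO date with a short source label.
export_name = f"{stamp}_medline_raw.csv"
print(export_name)
```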

Back up the full project to two independent locations each week. Keep a changelog with one-line notes for each edit. Store passwords with care.

Reporting Strengths And Limits

Be Frank About Gaps

State limits in a direct style. Mention language filters, short follow-up windows, few trials, or poor reporting in source papers. Explain the likely direction of any bias these issues could inject, and point to questions that need new data.

Link Findings To Practice

End with clear take-home points for clinicians, patients, or policy teams. State what the pooled or narrative findings mean for care now, what should wait for new evidence, and where future trials would have the most value.

Risk-Of-Bias Tool Quick Pick

When methods differ across reports, a shared rating tool keeps judgements consistent. Use one tool per design and train pairs of reviewers before full scoring.

Common Risk-Of-Bias Tools By Study Design

Design | Tool | Main Domains
Randomised trials | RoB 2 | Randomisation, deviations, missing data, measurement, reporting
Non-randomised studies | ROBINS-I | Confounding, selection, classification, deviations, missing data, measurement, reporting
Diagnostic accuracy | QUADAS-2 | Patient selection, index test, reference standard, flow and timing

Keep Learning With Trusted Guides

For methods detail and examples, see the Cochrane Handbook. For reporting, use the PRISMA 2020 statement and checklists. For searching, master filters and field tags in PubMed’s builder and test that your key records appear.