How To Carry Out A Literature Review In Public Health | Fast, Clear, Credible

Plan your question, register a protocol, search widely, screen, appraise, extract, synthesize, and report with PRISMA—tailored to public health.

What A Public Health Literature Review Does

A public health literature review maps what’s known and what’s uncertain across populations, settings, and programs. It links interventions to outcomes, points to fit-for-purpose approaches, and shows where better trials or implementation studies are needed.

Public health pulls from randomized trials, quasi-experimental designs, observational cohorts, surveillance reports, and program evaluations. Your review has to bring these streams together with care, so decision makers can act confidently.

The review type you pick depends on your goal and timeline. The table below compares common approaches.

Public Health Review Types At A Glance

Review Type | When To Use | Main Output
Systematic Review | You need an exhaustive, transparent synthesis against preset eligibility rules | Structured summary of effects and certainty across studies
Meta-analysis | Studies are similar enough to pool numerically | Pooled effect sizes with heterogeneity statistics
Scoping Review | Landscape mapping, concepts, and gaps before narrow questions | Catalogue of evidence types, measures, and topic clusters
Rapid Review | Time-limited decisions where partial shortcuts are acceptable | Time-boxed synthesis with clearly flagged limitations
Narrative Review | Broad overview by topic experts without exhaustive search | Thematic account that organizes themes and theory
Umbrella Review | Multiple existing reviews on related questions | Synthesis of syntheses with high-level conclusions
Realist Review | How and why an intervention works across contexts | Context-Mechanism-Outcome explanations
Qualitative Evidence Synthesis | Experiences, acceptability, and implementation barriers | Themes with confidence assessments
Economic Evidence Review | Costs, cost-effectiveness, and budget impact | Comparative value statements and thresholds
Mixed-Methods Review | You need both effect sizes and lived experience | Integrated quantitative and qualitative findings

Carrying Out A Literature Review In Public Health: Step-By-Step

Step 1 — Frame A Sharp Question

Turn broad aims into a structured question. PICO works for interventions (Population, Intervention, Comparator, Outcome). For exposures or social programs, use PEO (Population, Exposure, Outcome) or SPIDER for qualitative work. Define time frame, settings, and any equity lenses (such as rural vs urban, income groups, or PROGRESS-Plus). Write down inclusion and exclusion rules now.
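As a sketch, the structured question and the eligibility rules it implies can live in one small record the whole team edits together (the field names and example values below are illustrative, not from any real protocol):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQuestion:
    """A PICO-style question plus the eligibility rules it implies."""
    population: str
    intervention: str                 # or exposure, for PEO questions
    comparator: str
    outcomes: list[str]
    settings: list[str] = field(default_factory=list)
    years: tuple[int, int] = (2000, 2025)
    equity_lenses: list[str] = field(default_factory=list)  # e.g. PROGRESS-Plus domains

# Hypothetical helmet-promotion question.
q = ReviewQuestion(
    population="adolescents 10-19",
    intervention="bicycle helmet promotion",
    comparator="no programme",
    outcomes=["head injury", "helmet use"],
    settings=["school", "community"],
    equity_lenses=["rural vs urban", "household income"],
)
```

Writing the rules as data, not prose, makes it obvious when screening decisions drift from the protocol.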

Step 2 — Draft A Protocol

Set the plan before you search. Specify databases, grey sources, screening flow, data fields, and risk-of-bias tools. Pick the review type and timeline. If it’s a full systematic review, register on PROSPERO and keep versioned notes. A clear protocol prevents mid-stream drift.

For methods detail, keep the Cochrane Handbook open while you write the plan.

Step 3 — Design A Reproducible Search

List the core concepts from your question. Under each, write synonyms and controlled-vocabulary terms (MeSH/Emtree/CINAHL Headings). Combine with Boolean logic. Add field tags for title/abstract when you want precision. Test and refine on a known set of seed papers to see what your strategy misses.

Starter String Template

(("adolescent"[MeSH] OR teen* OR youth*) AND ("head protective devices"[MeSH] OR helmet*)
AND ("bicycling"[MeSH] OR bike* OR cycling) AND (injur* OR concussion* OR "head trauma")
NOT animals[MeSH] NOT editorial[pt])

Plan coverage across MEDLINE/PubMed, Embase, CINAHL, Web of Science or Scopus, CENTRAL, and at least two grey sources such as WHO IRIS, government portals, or thesis repositories. Record platform, date run, and the exact string for every source.

Step 4 — Run Searches And Manage Records

Export to a reference manager. De-duplicate across sources. Keep a log with counts by database and for each screening stage; you’ll need these for the PRISMA flow.
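A minimal de-duplication sketch, assuming records have already been exported to plain dicts with a `doi` and `title` (real RIS or CSV exports would be parsed first); the before-and-after counts feed directly into the PRISMA flow:

```python
# De-duplicate across database exports and keep counts for the PRISMA flow.
def dedup_key(rec):
    # Prefer the DOI; fall back to a normalized title.
    doi = (rec.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
    return ("title", title)

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical exports: the same trial surfaces in two databases.
records = [
    {"title": "Helmet laws and head injury", "doi": "10.1000/abc", "source": "PubMed"},
    {"title": "Helmet Laws and Head Injury.", "doi": "10.1000/abc", "source": "Embase"},
    {"title": "School cycling programmes", "doi": "", "source": "CINAHL"},
]
unique = deduplicate(records)
prisma_counts = {"identified": len(records), "after_dedup": len(unique)}
print(prisma_counts)  # {'identified': 3, 'after_dedup': 2}
```

Reference managers do this interactively; scripting it keeps the counts reproducible.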

Step 5 — Screen Titles And Abstracts

Pilot your inclusion rules on fifty records to align reviewers. Then dual-screen titles/abstracts, resolve disagreements, and move to full-text screening. Document reasons for exclusion at full text in standard buckets (wrong population, wrong design, wrong outcome, not primary research, duplicate dataset).
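Reviewer agreement after the pilot is commonly summarized with Cohen's kappa; a self-contained sketch (the include/exclude decisions below are invented):

```python
# Cohen's kappa: chance-corrected agreement between two screeners.
# Each list holds one decision per record, in the same order.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

A kappa below roughly 0.6 after the pilot is a signal to tighten the inclusion rules before full screening, not to push on.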

Step 6 — Extract Data

Build a calibrated form. Capture study identifiers, region and setting, design, population details, intervention or exposure description, comparator, sample size, follow-up, outcome measures and time points, effect estimates, funding, and any implementation notes such as fidelity or reach. Add equity and subgroup variables you plan to analyze.
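One way to keep the form calibrated is a fixed field list with a completeness check, so no study is saved with missing core fields; a sketch with a hypothetical, partially completed record:

```python
# The extraction form as a fixed field list plus a completeness check.
CORE_FIELDS = [
    "study_id", "region", "setting", "design", "population", "intervention",
    "comparator", "sample_size", "follow_up", "outcome", "effect_estimate",
    "funding",
]

def missing_fields(record):
    return [f for f in CORE_FIELDS if record.get(f) in (None, "")]

# Hypothetical record, extracted part-way.
record = {
    "study_id": "Lee-2020", "region": "Southeast Asia", "setting": "community",
    "design": "cohort", "population": "adolescents 12-17",
    "intervention": "helmet promotion", "comparator": "none",
    "sample_size": 1800, "follow_up": "6 months", "outcome": "head injury",
    "effect_estimate": None, "funding": "public",
}
print(missing_fields(record))  # ['effect_estimate']
```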

Step 7 — Appraise Quality And Bias

Use tools matched to design: RoB 2 for randomized trials, ROBINS-I for nonrandomized studies, QUADAS-2 for diagnostic accuracy, CASP or JBI checklists for qualitative work, and AMSTAR 2 when you include existing reviews. Judge overall certainty with GRADE. Keep judgments transparent with quotes or page references.

Step 8 — Synthesize Findings

If studies are sufficiently aligned, perform a random-effects meta-analysis. Use risk ratio or odds ratio for binary outcomes, mean difference or standardized mean difference for continuous ones. Transform rates when needed. Quantify heterogeneity (I², τ²) and explore it with subgroup or meta-regression where justified. When pooling isn’t appropriate, build a structured narrative synthesis: group studies by design, setting, and outcome; use clear tables; avoid vote-counting.
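The standard DerSimonian-Laird random-effects calculation fits in a few lines; this sketch pools hypothetical log risk ratios and reports τ² and I² (in practice dedicated tools such as RevMan or the R metafor package do this, with more diagnostics):

```python
import math

# DerSimonian-Laird random-effects pooling on the log scale.
# `studies` holds (log effect, standard error) pairs.
def random_effects(studies):
    y = [e for e, _ in studies]
    v = [se ** 2 for _, se in studies]
    w = [1 / vi for vi in v]                           # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, tau2, i2

# Three invented log risk ratios with their standard errors.
studies = [(-0.60, 0.12), (0.05, 0.15), (-0.40, 0.10)]
pooled, se, tau2, i2 = random_effects(studies)
print(f"pooled RR = {math.exp(pooled):.2f}, 95% CI "
      f"{math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f}, "
      f"I2 = {i2:.0f}%")
```

With a high I² like this, the pooled number is less informative than the subgroup patterns behind it.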

Step 9 — Translate To Practice And Policy

Summaries should speak to program managers and health officers. Pair effect size with context: baseline risk, resources, feasibility, acceptability, equity impact, and likely spillover benefits or harms. Note implementation facilitators and barriers drawn from the included studies. For a bench-to-program link, check how findings align with The Community Guide.


Step 10 — Report With Clarity

Follow the PRISMA 2020 checklist and include a flow diagram. Describe your protocol, full search strings, selection process, data items, bias judgments, synthesis methods, and certainty of evidence. Provide appendices so others can replicate or update your work. Keep a plain-language summary for non-technical readers.

Common Pitfalls When You Carry Out A Public Health Literature Review

Skipping A Protocol

Without a protocol, selection drift creeps in and trust erodes. Fix: write and timestamp the plan before searching, and stick to it unless you log changes with reasons.

Narrow Database Coverage

Relying on one index misses community trials, school programs, and non-biomedical studies. Fix: combine major health databases with social science and grey sources, and hand-search key journals.

Loose Definitions

If definitions of “adolescent”, “urban”, or “low-income” shift across studies, synthesis falters. Fix: pre-specify operational definitions and code them consistently.

Single-Reviewer Screening

Single-reviewer screening speeds things up but introduces avoidable error. Fix: dual-screen at least a random sample for calibration, then spot-check.

Outcomes That Can’t Be Pooled

Mismatched measures force vote-counting. Fix: choose primary outcomes up front and map alternatives to a common metric when possible.

No Equity Lens

Effects often differ by place or group. Fix: collect equity variables and plan subgroup analyses where justified.

Over-Claiming

Reviews can’t outrun weak designs. Fix: temper statements when evidence comes mostly from before-after studies or small samples.

Risk Of Bias And Appraisal Cheatsheet

Study Design | Core Issues To Check | Suggested Tool
Randomized Trial | Sequence generation, concealment, blinding, incomplete data, selective reporting | RoB 2
Nonrandomized Comparative | Confounding, selection into groups, classification of interventions | ROBINS-I
Cohort Study | Exposure measurement, outcome timing, follow-up completeness | Newcastle-Ottawa or ROBINS-I
Case-Control | Case definition, control selection, exposure ascertainment | Newcastle-Ottawa
Cross-Sectional | Sampling frame, measurement validity, handling of confounders | AXIS or JBI
Diagnostic Accuracy | Index test, reference standard, flow and timing | QUADAS-2
Qualitative | Recruitment, data richness, reflexivity, coherence | CASP or JBI
Economic Evaluation | Perspective, costs captured, time horizon, discounting, sensitivity | CHEERS/Drummond checks
Systematic Review | Search breadth, selection rigor, bias appraisal, synthesis methods | AMSTAR 2

Search Strategy That Works On Public Health Questions

Build Concept Blocks

Write one block per concept (population, intervention or exposure, outcome, setting). Expand each with synonyms and thesaurus terms. Use truncation with care so you don’t pull noise.

Balance Recall And Precision

Start broad, then layer field tags or adjacency operators to cut noise. Test on a set of known relevant papers. Adjust when key items are missing.
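The seed-paper test reduces to a recall calculation; a sketch with invented record IDs:

```python
# Check a draft search strategy against known-relevant "seed" papers.
# IDs below are hypothetical PMIDs.
seeds = {"1001", "1002", "1003", "1004", "1005"}       # known relevant papers
retrieved = {"1001", "1002", "1004", "2001", "2002"}   # what the draft string found

found = seeds & retrieved
recall = len(found) / len(seeds)
missed = sorted(seeds - retrieved)
print(f"recall = {recall:.0%}, missed seeds: {missed}")
# recall = 60%, missed seeds: ['1003', '1005']
```

Each missed seed is worth reading: it usually reveals a synonym or index term the strategy lacks.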

Cover Grey Literature

Indexing lags for program reports, policy briefs, and dissertations. Search agency portals, conference proceedings, and local repositories, and reach out to networks for unpublished evaluations.

Document Everything

Save date run, database name, platform, and the exact string. Snap screenshots for interfaces that don’t export history. This record feeds the PRISMA diagram and keeps the review auditable.
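The log itself can be as simple as one CSV row per run; a sketch with illustrative values (in real use you would append to a file rather than an in-memory buffer):

```python
import csv
import io
from datetime import date

# One audit row per database run, for the PRISMA diagram and later updates.
LOG_FIELDS = ["date_run", "database", "platform", "search_string", "hits"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=LOG_FIELDS)
writer.writeheader()
writer.writerow({
    "date_run": date(2025, 3, 1).isoformat(),
    "database": "MEDLINE",
    "platform": "PubMed",
    "search_string": '("bicycling"[MeSH] OR cycling) AND helmet*',
    "hits": 412,  # illustrative count
})
lines = buffer.getvalue().splitlines()
print(lines[1])
```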

Data, Effect Sizes, And When To Meta-Analyze

Pick Consistent Effect Measures

For binary outcomes, risk ratio is intuitive; odds ratio works across varied baselines; risk difference helps program planners. For continuous outcomes, mind direction and units; convert to standardized units when instruments differ.
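All three binary measures come from the same 2x2 table; a sketch with hypothetical arm counts:

```python
# Risk ratio, odds ratio, and risk difference from one 2x2 table.
# Hypothetical counts: events/total in each arm.
events_i, n_i = 30, 400     # intervention arm
events_c, n_c = 60, 400     # control arm

risk_i = events_i / n_i     # 0.075
risk_c = events_c / n_c     # 0.15

risk_ratio = risk_i / risk_c
odds_ratio = (events_i / (n_i - events_i)) / (events_c / (n_c - events_c))
risk_difference = risk_i - risk_c

print(round(risk_ratio, 2), round(odds_ratio, 2), round(risk_difference, 3))
# 0.5 0.46 -0.075
```

Note the OR (0.46) already sits below the RR (0.50) even at these modest event rates; the gap widens as outcomes get more common, which is why the two should never be mixed in one pool.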

Plan For Heterogeneity

Expect variation in populations, delivery, and follow-up. Use random-effects models, check influence plots, and test whether design or setting explains dispersion. When differences remain large or concepts vary, narrate patterns instead of forcing a pool.

Watch For Small-Study Bias

If you have at least ten studies, consider a funnel plot and small-study tests. Probe for selective reporting by comparing published outcomes against protocols and registry entries, when available.
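An Egger-style regression is one common small-study test: regress the standardized effect on precision and inspect the intercept. A pure-Python sketch with invented effects and standard errors (dedicated meta-analysis packages add the proper p-value from the t distribution):

```python
import math

# Egger-style asymmetry check: regress y/se on 1/se; an intercept far
# from zero suggests small-study effects.
def egger_intercept(effects, ses):
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)   # residual variance
    se_intercept = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, intercept / se_intercept  # t statistic, n-2 df

# Hypothetical pattern: smaller (noisier) studies show larger effects.
effects = [-0.8, -0.6, -0.5, -0.3, -0.25, -0.2, -0.15, -0.1, -0.1, -0.05]
ses     = [0.40, 0.35, 0.30, 0.22, 0.20, 0.15, 0.12, 0.10, 0.08, 0.06]
b0, t = egger_intercept(effects, ses)
print(round(b0, 2), round(t, 2))  # |t| well above ~2 hints at asymmetry
```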

Rate Certainty

Summarize each key outcome with a certainty rating and short rationale so readers see how much weight to place on the estimate.

Writing That Passes PRISMA And Peer Review

Title And Abstract

State the question, design, and population in plain terms. In the abstract, list data sources, dates, eligibility, main outcomes, and the direction of effect.

Methods

Report protocol details, all search strings, screening process, data items, bias tools, and synthesis model. Mention any deviations from plan with a reason.

Results

Provide a PRISMA flow, study characteristics table, risk-of-bias summary, and main effect estimates with confidence intervals. Use figures for clarity—forest plots, bubble plots, harvest plots for narrative syntheses.

Discussion

Explain what the numbers mean for programs and policy, note strengths and limits, and outline research needs linked to the gaps you found.

Sharing

Host your extraction sheet and analytic code in a public repository when possible. It speeds updates and builds trust.

Quick Start Checklist

  • Define a structured question and the review type.
  • Write and register a protocol for systematic work.
  • Design and test a multi-database, grey-literature search.
  • Dual-screen with piloting and track reasons for exclusion.
  • Extract with a calibrated form and pre-set outcomes.
  • Match bias tools to study design and judge certainty.
  • Synthesize appropriately and probe heterogeneity.
  • Report with PRISMA and share materials for reuse.

Final Notes

Good reviews read cleanly, show their work, and keep the audience in view. If you plan the question, log a protocol, search widely, screen with care, appraise with the right tools, and write to PRISMA, your public health review will guide smarter action.