Doing a medical research review: step-by-step
Follow a mapped question, a registered protocol, broad searches, careful screening, bias appraisal, synthesis, and PRISMA reporting from start to finish.
Before you dive into databases, lock in what you’re solving, who’s on your team, and how you’ll call balls and strikes. Then move through the steps below. If your topic is fast-moving, plan short update cycles. If the field is narrow, widen sources and add grey literature to catch small but relevant trials and reports.
Review types at a glance
The table below helps you pick the right approach. It keeps scope and output aligned so you don’t overpromise or undershoot.
| Type | Best Use | Watch-outs |
|---|---|---|
| Narrative review | Broad overview when methods can stay flexible | Prone to selection bias if sources aren’t tracked |
| Systematic review | Focused question with pre-set methods and duplicate screening | Needs a protocol and complete search across databases |
| Meta-analysis | Pool effect sizes when studies are similar enough | Heterogeneity and small-study effects can mislead |
| Scoping review | Map concepts, outcomes, and gaps across a field | Usually no risk-of-bias rating or effect pooling |
| Rapid review | Time-sensitive brief with tightened steps | Trade-offs must be documented and justified |
| Umbrella review | Synthesize existing reviews on a broad topic | Quality of included reviews can limit confidence |
| Diagnostic accuracy review | Measure sensitivity and specificity of tests | Study design and thresholds vary across papers |
| Prognostic review | Assess prediction models or risk factors | Reporting and calibration issues are common |
| Qualitative synthesis | Integrate themes from interviews and focus groups | Requires clear coding and reflexivity |
Plan your review question
Write one focused line using PICO or a close cousin such as PECO or PEO. Spell out the population, the intervention or exposure, the comparator, and the outcomes that matter. Add settings and time frames if they’re make-or-break. Keep that sentence visible to stop scope creep when new studies tempt you off track.
Set tight inclusion and exclusion criteria that match the line above. Decide on study designs you’ll include, minimum follow-up, language limits, and date ranges. Define primary and secondary outcomes now so extraction stays tidy later.
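If your team tracks work in code, a structured record of the criteria can keep screening and extraction aligned with the question. Below is a minimal sketch in Python; the class, field names, and example topic are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

# Illustrative only: the fields and example values are hypothetical,
# not a standard eligibility schema.
@dataclass
class EligibilityCriteria:
    population: str
    intervention: str
    comparator: str
    outcomes_primary: list[str]
    outcomes_secondary: list[str] = field(default_factory=list)
    designs: list[str] = field(default_factory=lambda: ["RCT"])
    min_followup_weeks: int = 0
    languages: list[str] = field(default_factory=lambda: ["en"])
    date_range: tuple[int, int] = (2000, 2025)

criteria = EligibilityCriteria(
    population="adults with type 2 diabetes",
    intervention="SGLT2 inhibitors",
    comparator="placebo or standard care",
    outcomes_primary=["HbA1c change at 24 weeks"],
    outcomes_secondary=["weight change", "hypoglycaemia events"],
    min_followup_weeks=12,
)
print(criteria)
```

Keeping the criteria in one structured object (or the equivalent row in a shared sheet) makes it easy to paste the same definitions into the protocol, the screening guide, and the extraction form.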
Draft your protocol
Write the who, what, where, and how. Name each role, from search to screening to bias appraisal. Lay out databases, grey sources, de-duplication, screening steps, data items, planned syntheses, and subgroup tests you’ll only run if they were planned in advance. Keep versions under control with a shared folder and simple file names that tell the story at a glance.
Register the plan so others can find it and so your team stays honest. Use PROSPERO for most health topics. Update the record when your scope shifts, and cite the registration ID in your final report.
Build your search strategy
Start with one database, write a draft string with keywords and subject headings, then test and refine. Use both free text and terms such as MeSH or Emtree. Capture synonyms and spelling variants. Test recall by checking whether landmark trials show up. If they don’t, fix the string before you scale out.
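A draft strategy kept in a version-controlled file makes the test-and-refine loop concrete. The sketch below shows one way to store a PubMed-style string alongside the benchmark records it must retrieve; the topic, terms, and placeholder identifiers are illustrative, and field tags should be checked against the database’s own syntax.

```python
# A draft PubMed-style string kept under version control; the topic, terms,
# and field tags below are illustrative, not a validated strategy.
PUBMED_DRAFT = """
("myocardial infarction"[MeSH Terms] OR "heart attack"[Title/Abstract])
AND ("aspirin"[MeSH Terms] OR "acetylsalicylic acid"[Title/Abstract])
AND (randomized controlled trial[Publication Type]
     OR randomised[Title/Abstract] OR randomized[Title/Abstract])
""".strip()

# Landmark trials the string must retrieve; if any identifier is missing from
# the result set, revise the string before scaling out to other databases.
BENCHMARK_PMIDS = {"replace-with-real-PMID-1", "replace-with-real-PMID-2"}
```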
Search at least two major databases that fit the topic, such as MEDLINE/PubMed and Embase, plus CENTRAL for trials. Add CINAHL, PsycINFO, Web of Science, or Scopus as needed. Pull grey literature from trial registries and conference proceedings. Save every strategy and search date. Export full records with abstracts and identifiers to a reference manager.
Have a second person peer-review the strategy. PRESS (Peer Review of Electronic Search Strategies) checks prevent missed terms and syntax errors that would sink your recall.
Manage records and screening
De-duplicate the master file, then pilot ten to twenty records to align on inclusion logic. Screen titles and abstracts in duplicate with a conflict process. Move to full-text screening, again in duplicate. Keep a reason log for exclusions at the full-text stage. Prepare a PRISMA flow diagram while counts are fresh.
Use simple tools your team already knows. A spreadsheet works when roles are clear. Citation managers can tag stages. Dedicated platforms help larger teams but don’t replace communication.
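If your exports land in CSV, a short script can handle the first de-duplication pass before screening. This is a minimal sketch assuming columns named doi and title; adjust the column names to your reference manager’s export and still spot-check the result by hand.

```python
import pandas as pd

# Minimal de-duplication sketch; assumes the export has "doi" and "title" columns.
records = pd.read_csv("exported_records.csv")

# Normalise keys so case and whitespace don't hide duplicates.
records["doi_key"] = records["doi"].str.lower().str.strip()
records["title_key"] = (
    records["title"].str.lower().str.replace(r"\s+", " ", regex=True).str.strip()
)

# De-duplicate on DOI only among records that have one, then fall back to
# exact normalised titles across the whole set.
has_doi = records["doi_key"].notna()
doi_pass = pd.concat([
    records[has_doi].drop_duplicates(subset="doi_key"),
    records[~has_doi],
])
deduped = doi_pass.drop_duplicates(subset="title_key")

print(f"{len(records) - len(deduped)} duplicates removed, {len(deduped)} records kept")
deduped.to_csv("records_deduplicated.csv", index=False)
```

Record the before-and-after counts as you go; they feed straight into the PRISMA flow diagram.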
Extract data the smart way
Build one form and test it on three to five papers. Capture study setting, design, arm sizes, follow-up, population traits, interventions, comparators, outcomes, effect measures, and funding. Add fields for notes and queries to authors. Extract in duplicate for the first few papers to shake out gaps, then split the batch with checks on tricky items.
Record exactly how you computed any derived numbers. Save copies of all calculations. If papers report medians and ranges, use accepted formulas to estimate means and standard deviations, and mark those as estimates in your sheet.
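For the median-and-range case, one commonly cited approximation comes from Wan et al. (2014). A small helper like the sketch below keeps the calculation reproducible; verify the formula against the source before relying on it, and flag every output as an estimate.

```python
from scipy.stats import norm

def estimate_mean_sd(minimum: float, median: float, maximum: float, n: int):
    """Estimate mean and SD from the median, range, and sample size.

    Uses the approximations described by Wan et al. (2014) for the
    min/median/max scenario; always label the result as an estimate.
    """
    mean = (minimum + 2 * median + maximum) / 4
    sd = (maximum - minimum) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

# Hypothetical example: a trial arm reporting median 7.5, range 2 to 14, n = 48.
mean, sd = estimate_mean_sd(2, 7.5, 14, 48)
print(f"estimated mean {mean:.2f}, estimated SD {sd:.2f}")
```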
Appraise risk of bias
Pick tools that fit the design. RoB 2 fits randomized trials. ROBINS-I fits non-randomized studies of interventions. QUADAS-2 fits diagnostic accuracy work. Train on one paper together, then rate independently and resolve. Keep domain-level notes so readers can see your logic.
Separate risk-of-bias ratings from study quality or reporting quality. Bias is about systematic error that can shift the direction or size of an effect. A well-written paper can still carry bias, and a brief report can still be sound.
Synthesize findings
Start with a clear narrative that lines up with PICO and outcomes. When studies match on design, population, and measures, a meta-analysis can add precision. Pick fixed or random effects based on your question and between-study variation. Report measures like risk ratio, odds ratio, mean difference, or standardized mean difference as fits the outcome.
Trace heterogeneity with forest plots, I², and clinical judgment. Run planned subgroups and sensitivity checks. Test small-study effects with funnel plots when you have enough studies. Avoid data dredging. Report what you planned and what changed, and why.
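To make the pooling choices concrete, here is a minimal sketch of a DerSimonian-Laird random-effects pool with I² computed from Cochran’s Q. The study values are hypothetical, and a dedicated meta-analysis package (or a statistician) should confirm anything that goes in the paper.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g. log risk ratios) with random effects.

    Returns the pooled effect, its standard error, tau-squared, and I-squared.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                        # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, tau2, i2

# Hypothetical log risk ratios and variances from three similar trials.
log_rr = [-0.22, -0.10, -0.35]
var = [0.04, 0.02, 0.09]
pooled, se, tau2, i2 = dersimonian_laird(log_rr, var)
print(f"pooled RR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f}), "
      f"I² = {i2:.0f}%")
```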
Rate certainty with GRADE
Summarize each key outcome in a short table with effect sizes and confidence intervals. Then grade the body of evidence for each outcome across domains such as risk of bias, inconsistency, indirectness, imprecision, and publication bias. Randomized trials usually start high; observational studies start lower, but strong, consistent effects can raise confidence.
Write plain takeaway lines linked to the certainty rating. Readers should know both the size of the effect and how sure we can be.
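One way to keep effect sizes, certainty ratings, and takeaways side by side is a simple scaffold you fill in per outcome. The sketch below uses hypothetical outcomes and numbers purely to show the shape of the table.

```python
import pandas as pd

# Hypothetical summary-of-findings scaffold; outcomes, numbers, and ratings
# are placeholders, not results from any real review.
sof = pd.DataFrame([
    {"outcome": "Mortality at 30 days", "effect": "RR 0.82 (0.70 to 0.96)",
     "studies": 3, "participants": 4210, "certainty": "Moderate",
     "reason": "downgraded one level for imprecision"},
    {"outcome": "Serious adverse events", "effect": "RR 1.05 (0.88 to 1.25)",
     "studies": 3, "participants": 4180, "certainty": "Low",
     "reason": "downgraded for risk of bias and imprecision"},
])
sof.to_csv("summary_of_findings.csv", index=False)
```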
Write and report with PRISMA
Use the PRISMA 2020 checklist to shape the manuscript and abstract. Include the flow figure, full search strings, eligibility rules, bias tools, and reasons for changes from the protocol. Name any funding and declare conflicts. Share data and code in a public repo if your setting allows it.
Style helps readers stay with you. Lead with the main answer, then give the details. Use short headings, clear tables, and legends that stand on their own.
Medical research literature review methods that work
Keep teams small, roles clear, and files named in a way that makes sense months later. Set calendars with review checkpoints. Use a living notes file that records every decision, from a tricky inclusion to a formula choice during extraction. Small habits like these keep the work smooth and reproducible.
Risk-of-bias tool quick picks
Match tools to study designs so judgments stay consistent across the set.
| Study Design | Primary Tool | Notes |
|---|---|---|
| Randomized trials | RoB 2 | Domain-based with signaling questions |
| Non-randomized interventions | ROBINS-I | Compares to a target trial |
| Diagnostic accuracy | QUADAS-2 | Tailor domains to the test pathway |
| Prognostic models | PROBAST | For development and validation studies |
| Systematic reviews | AMSTAR 2 | Assesses review conduct and reporting |
| Animal studies | SYRCLE RoB | Focuses on allocation, blinding, and attrition |
Handle common pitfalls early
Scope creep ruins timelines. Keep your PICO line in view during screening meetings. Poor search recall leaves blind spots. Peer-review the strategies and rerun the searches near the end to catch late records. Single-reviewer screening raises error rates. Budget time for duplicate checks on titles, abstracts, and full texts.
Vague extraction rules lead to messy tables. Pilot the form and write short rules for tricky cases. Unplanned subgroup hunts create bias. Only run what you wrote in the protocol, or clearly flag the change. Thin reporting makes replication tough. Host search strings, extraction forms, and bias decisions in a public folder and link it in the paper.
Data presentation that lands
Readers want quick clarity. Use one summary of findings table with plain text takeaways next to numbers. Keep forest plots readable: consistent order, matching scales, and clear labels. When outcomes run on different scales, use standardized effects but explain what the units mean in practice.
Pick figures that add insight. Flow diagrams show scope. Funnel plots show small-study effects. Leave out art that doesn’t inform a decision.
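If you build figures in code, a basic forest plot needs only a few lines. The sketch below assumes ratio measures on a log scale; the study names and numbers are placeholders.

```python
import matplotlib.pyplot as plt
import numpy as np

# Minimal forest-plot sketch: consistent study order, log scale for ratio
# measures, clear labels. All names and values are hypothetical.
studies = ["Trial A (2018)", "Trial B (2020)", "Trial C (2022)", "Pooled"]
rr = np.array([0.80, 0.90, 0.70, 0.82])
lower = np.array([0.60, 0.75, 0.45, 0.70])
upper = np.array([1.07, 1.08, 1.09, 0.96])

y = np.arange(len(studies))[::-1]
fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar(rr, y, xerr=[rr - lower, upper - rr], fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", linewidth=1)  # line of no effect
ax.set_xscale("log")
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Risk ratio (log scale)")
fig.tight_layout()
fig.savefig("forest_plot.png", dpi=300)
```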
Ethics, transparency, and authorship
Report funding and any ties that could sway judgments. State how authors qualified for credit and who had access to the data. Share the list of excluded studies on request. If the review touches patient data, confirm approvals and de-identification steps. Respect journal rules on data sharing and disclosures.
Tune the workflow for speed and quality
Templates save time. Store a protocol shell, a PRESS checklist, a PRISMA flow template, and a standard extraction sheet. Use the Cochrane Handbook when you hit a methods snag. It offers step-by-step guidance on searching, bias, synthesis, and reporting.
Short, focused writing sprints beat marathon sessions. Tackle one section at a time: methods, then results, then discussion. Swap edits in tracked changes with brief comments so decisions are easy to audit later.
Keep the review alive
Science moves. Set a calendar reminder to rerun searches on a set interval. If new studies alter the main answer, issue an update with a short note on what changed. For topics like diagnostics or vaccines, a living review can keep teams and readers aligned with the latest evidence without starting from scratch each year.
Final checklist before submission
- Title matches the question and isn’t stuffed with keywords.
- Abstract states the answer, the effect size, and the certainty.
- Protocol is registered and cited.
- All search strings and dates are in an appendix or repo.
- Screening was done in duplicate at both stages with a reason log.
- Extraction form, calculators, and decisions are archived.
- Bias tools fit the study designs and notes support each call.
- Meta-analysis choices match the data and the question.
- GRADE ratings sit next to plain language takeaways.
- PRISMA checklist is complete and uploaded.
Do these things the same way each time and you’ll ship tight, useful reviews that readers can trust and teams can update with less pain.
