State the review type, data sources, search steps, selection, extraction, bias checks, and synthesis plan so another team could repeat your work.
Introduction
A methods section tells readers exactly how the review found, filtered, and combined evidence. Clarity helps peers trust results and lets others repeat the process. The outline below keeps wording tight while meeting journal expectations.
Describing Methodology In A Medical Literature Review: Structure That Works
The safest way to write methods is to mirror the workflow you actually used. Start with the review type and question, move through sources and searches, then explain screening, data handling, and synthesis. End with bias checks and any grading of certainty.
Table: Method Blueprint
| Element | What To State | Example Wording |
|---|---|---|
| Review type & question | Specify review type, question format, protocol and registration if any | “Systematic review guided by a protocol registered with PROSPERO (CRDxxxx). Question framed with PICO.” |
| Eligibility criteria | Target populations, exposures/interventions, comparators, outcomes, settings, designs, years, languages | “Included randomized and cohort studies enrolling adults with type 2 diabetes, published in English from 2000–2025.” |
| Information sources | All databases and other sources, with date of last search | “Searched MEDLINE, Embase, CENTRAL, Scopus, and trial registries to 16 Sep 2025.” |
| Search strategy | Full strings, limits, and who built them | “A librarian designed the strategy; the full MEDLINE string appears in Supplement 1.” |
| Selection process | How many reviewers, tools used, conflict resolution | “Two reviewers screened titles/abstracts in Rayyan; conflicts went to a third reviewer.” |
| Data extraction | Fields captured, forms, pilot process, number of extractors | “Two reviewers used a piloted form to collect study, sample, outcome, and risk-of-bias data.” |
| Risk of bias | Tool name and domain level, judgement process | “Used RoB 2 for trials and ROBINS-I for non-randomized studies, with consensus adjudication.” |
| Effect measures & synthesis | Effect size metrics, model choice, heterogeneity measures, subgroup plans | “Reported risk ratios for dichotomous outcomes; random-effects meta-analysis with Hartung-Knapp; I² for heterogeneity; planned subgroup by dose.” |
| Certainty of evidence | Whether GRADE was used and how | “Summarized certainty with GRADE across studies for each key outcome.” |
How To Write The Methods For A Medical Literature Review: Step-By-Step
Define The Review Type And The Question
Name the review type (systematic, scoping, rapid, umbrella). State the primary question and format, such as PICO for interventions, PECO for exposures, or SPIDER for qualitative aims. If you had a protocol, give the registration ID and where it can be accessed.
Spell Out Eligibility Criteria
Readers need to see exactly what could enter the review. List population features, exposure or intervention details, comparators, primary and secondary outcomes, settings, study designs, publication years, language rules, and reasons you excluded certain designs. Note any minimum follow-up or sample size thresholds. If criteria changed, say when and why.
List All Information Sources
State every database, platform, and non-database source. Typical choices include MEDLINE, Embase, CENTRAL, and Web of Science. Add trial registries and preprint servers if used, plus manual checks such as reference lists or contact with authors. Put a calendar date for the last search to anchor currency.
Document A Reproducible Search Strategy
Say who built the search (such as an information specialist), which fields you searched, and whether you used both keywords and controlled vocabulary. Provide at least one full search string in an appendix or supplement so others can rerun it without guessing. Mention any filters, like human studies or publication type, and justify limits such as English-only.
Describe The Selection Process
Explain how you moved from records to included studies. State the tool you used (e.g., Rayyan or Covidence), whether you removed duplicates before screening, and how many reviewers screened titles and abstracts independently. Tell readers how disagreements were settled and whether you tracked reasons for exclusion at the full-text stage. If you used automation to assist, name the tool and how it influenced decisions.
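For teams that script the deduplication step themselves, a minimal sketch is shown below. The record fields and normalization rules are illustrative assumptions, not a standard; most reviews handle this inside a reference manager or the screening tool itself.

```python
# Minimal deduplication sketch (illustrative): records are dicts with
# hypothetical "doi" and "title" fields.
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so near-identical
    titles from different databases compare equal."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record seen per DOI, falling back to normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": None, "title": "Catheter ablation for atrial fibrillation: a trial"},
    {"doi": None, "title": "Catheter Ablation for Atrial Fibrillation: A Trial."},
]
print(len(deduplicate(records)))  # -> 1
```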
Explain Data Extraction
Name the data fields captured, such as design, setting, sample size, follow-up, outcome definitions, effect estimates, funding, and conflicts of interest. Say whether you piloted the form, how many reviewers extracted data, and whether one person entered data with verification by a second. Note policies for contacting study authors when data were missing or unclear.
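If extraction data live in code rather than a spreadsheet, a structured record can double as documentation of the fields collected. The sketch below is illustrative only; the field names are assumptions mirroring the items listed above, not a required schema.

```python
# Illustrative extraction record; field names are assumptions, and a piloted
# spreadsheet or form tool serves exactly the same purpose.
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    study_id: str
    design: str              # e.g., "RCT", "cohort"
    setting: str
    sample_size: int
    follow_up_months: float
    outcome_definition: str
    effect_estimate: float
    ci_lower: float
    ci_upper: float
    funding_source: str
    conflicts_reported: bool
```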
Define Outcomes And Variables Up Front
Make outcome definitions explicit, including time points and scales. State preferred effect metrics for each outcome and how you handled multiple measures from one study. Describe any rules for choosing one outcome per domain to avoid double counting.
Assess Risk Of Bias
Name the tool matched to design (RoB 2 for randomized trials, ROBINS-I for non-randomized studies, QUADAS-2 for diagnostic accuracy, and so on). Say how many reviewers judged risk and how consensus was reached. Clarify whether you used domain-level or overall judgements and how these shaped synthesis or sensitivity checks.
Choose Effect Measures
State the effect size for each outcome type: risk ratio or odds ratio for binary results, mean difference or standardized mean difference for continuous results, hazard ratio for time-to-event. If you converted units, say how. Note any preference for adjusted estimates from observational studies.
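As a concrete illustration of the binary case, the sketch below computes a risk ratio and its 95% CI from hypothetical arm-level counts using the standard log-scale formula.

```python
# Risk ratio with a 95% CI from 2x2 counts (hypothetical numbers).
# Standard log-scale formula: SE(ln RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2).
import math

events_trt, n_trt = 30, 200      # events and total, treatment arm (assumed)
events_ctl, n_ctl = 50, 210      # events and total, control arm (assumed)

rr = (events_trt / n_trt) / (events_ctl / n_ctl)
se_log_rr = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```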
Lay Out Synthesis Methods
If you performed a meta-analysis, name the model (fixed-effect or random-effects), the between-study variance estimator (e.g., DerSimonian–Laird or REML), and any adjustment such as Hartung-Knapp. Say how you handled variance, correlated outcomes, or cluster designs. Report heterogeneity measures such as tau² and I² and explain any thresholds for concern. Describe prespecified subgroup, meta-regression, and sensitivity plans. If the body of evidence was too mixed for pooling, describe the approach for narrative synthesis, including grouping by exposure, outcome, or design and any visual tools such as harvest plots.
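Dedicated packages are the usual route for the pooling step, but the arithmetic behind a DerSimonian–Laird random-effects model is compact enough to sketch. The effect sizes and standard errors below are hypothetical.

```python
# DerSimonian-Laird random-effects pooling on log risk ratios
# (hypothetical study effects and standard errors).
import math

yi = [-0.30, -0.10, -0.45, 0.05]      # study log risk ratios (assumed)
sei = [0.15, 0.20, 0.25, 0.18]        # their standard errors (assumed)

wi = [1 / s**2 for s in sei]                          # fixed-effect weights
y_fe = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
q = sum(w * (y - y_fe) ** 2 for w, y in zip(wi, yi))  # Cochran's Q
df = len(yi) - 1
c = sum(wi) - sum(w**2 for w in wi) / sum(wi)
tau2 = max(0.0, (q - df) / c)                         # DL between-study variance
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 (%)

wi_re = [1 / (s**2 + tau2) for s in sei]              # random-effects weights
y_re = sum(w * y for w, y in zip(wi_re, yi)) / sum(wi_re)
se_re = math.sqrt(1 / sum(wi_re))
print(f"Pooled RR = {math.exp(y_re):.2f}, tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```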
Check For Reporting Biases
Describe any funnel plot use, small-study tests, or selection models, and the minimum number of studies needed for those checks. State how you searched trial registries and compared protocols to publications.
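Egger's regression test is one common small-study check, usually reserved for syntheses with at least ten studies. A minimal sketch with hypothetical data follows, regressing each study's standardized effect on its precision and testing the intercept; it assumes a recent SciPy version where `linregress` reports the intercept's standard error.

```python
# Egger's test for small-study effects (hypothetical effects and SEs):
# regress standardized effect (y/SE) on precision (1/SE), test the intercept.
from scipy import stats

yi = [-0.30, -0.10, -0.45, 0.05, -0.60, -0.20, -0.35, -0.15, -0.50, -0.05]
sei = [0.15, 0.20, 0.25, 0.18, 0.30, 0.12, 0.22, 0.16, 0.28, 0.10]

snd = [y / s for y, s in zip(yi, sei)]    # standardized effects
precision = [1 / s for s in sei]
fit = stats.linregress(precision, snd)

t_stat = fit.intercept / fit.intercept_stderr
p_intercept = 2 * stats.t.sf(abs(t_stat), df=len(yi) - 2)
print(f"Egger intercept = {fit.intercept:.2f}, p = {p_intercept:.3f}")
```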
Rate Certainty Across Studies
If you used GRADE, say who rated certainty and how ratings influenced emphasis in the summary of findings. List domains considered: risk of bias, inconsistency, indirectness, imprecision, and publication bias. Note any upgrades for large effects or dose-response.
Handle Unit Of Analysis Issues
Explain how you treated multi-arm trials, crossover trials, cluster-randomized designs, and repeated measures. Document any calculations to avoid double counting and any intraclass correlation assumptions for clusters.
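For cluster-randomized trials, a common approximate fix is to deflate the reported sample size by the design effect before analysis. The numbers and ICC below are assumptions for illustration.

```python
# Design-effect adjustment for a cluster-randomized trial (hypothetical values):
# effective n = reported n / (1 + (m - 1) * ICC), so the trial does not carry
# more weight than its effective information supports.
n_reported = 800          # participants reported by the trial (assumed)
avg_cluster_size = 20     # average participants per cluster (assumed)
icc = 0.05                # intraclass correlation assumption

design_effect = 1 + (avg_cluster_size - 1) * icc
n_effective = n_reported / design_effect
print(f"Design effect = {design_effect:.2f}, effective n = {n_effective:.0f}")
```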
Manage Missing Data
Describe rules for imputing missing standard deviations or events, conversions from medians to means, and any contact with authors for raw data. Flag sensitivity checks that drop imputed studies.
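When only a median and IQR are reported, rough conversions are often used. The quartiles below are hypothetical; whichever formula you apply should be named in the text and tested in a sensitivity analysis.

```python
# Approximate mean and SD from a reported median and IQR (hypothetical values).
# Common rough conversions: mean ~ (q1 + median + q3) / 3 and SD ~ IQR / 1.35.
q1, median, q3 = 4.0, 6.0, 9.0   # reported quartiles (assumed)

mean_approx = (q1 + median + q3) / 3
sd_approx = (q3 - q1) / 1.35
print(f"Approximate mean = {mean_approx:.2f}, SD = {sd_approx:.2f}")
```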
Software And Reproducible Assets
List software and versions for screening, extraction, meta-analysis, and figures. Mention code repositories or shared data files if available.
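One lightweight way to capture the environment is to log the Python and package versions next to the analysis outputs. The package list below is an assumption standing in for whatever your pipeline actually uses.

```python
# Record the exact analysis environment alongside the results so the synthesis
# can be rerun later.
import platform
import importlib.metadata as md

packages = ["numpy", "scipy", "pandas"]   # example packages (assumed)
print("Python", platform.python_version())
for pkg in packages:
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```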
Reporting Standards That Editors Expect
Align the methods with established guidance. The PRISMA 2020 checklist sets clear items for reporting searches, selection, and synthesis. The Cochrane Handbook explains approved choices for effect measures, bias tools, and synthesis paths. The EQUATOR Network guide helps you pick design-specific checklists if your review includes diverse study types. Many journals ask you to cite the checklist in the manuscript. Keep your flow diagram numbers aligned with those items.
Write Clear, Reproducible Methods
Names, Dates, And Versions
Give database provider names, interface versions, and the exact date you last searched each source. Name the risk-of-bias tool version, the meta-analysis package, and any add-on modules.
Order Of Operations
Write the steps in the sequence you used: deduplication, screening, full-text review, extraction, bias assessment, synthesis. Clarity about order prevents confusion when readers compare the text with the flow diagram.
Units And Thresholds
State measurement units, cut-points, and clinically relevant thresholds. If scales run in opposite directions, say which way you transformed them so higher values always mean the same thing.
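A common way to harmonize direction is to reflect scores on scales where lower raw values are better; the scale range and scores below are hypothetical.

```python
# Reflect group means on a hypothetical 0-100 scale where lower raw values
# were better, so higher always means better: reflected = max + min - raw.
# SDs are unchanged by reflection; for effect sizes, multiplying by -1 is
# the equivalent step.
scale_min, scale_max = 0, 100
raw_means = [22.0, 35.5, 48.0]          # reported group means (assumed)
reflected = [scale_max + scale_min - m for m in raw_means]
print(reflected)  # [78.0, 64.5, 52.0]
```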
Examples Of Tight, Plain Language
Good methods read like a logbook. Short sentences, active voice, and concrete numbers make review work easy to follow. Here are patterns you can reuse:
- “We searched MEDLINE (Ovid) from 1946 to 16 Sep 2025 using MeSH and text words for ‘atrial fibrillation’ and ‘catheter ablation’.”
- “Two reviewers screened all records independently; conflicts went to a third reviewer.”
- “We used RoB 2 with signalling questions and judged overall risk by domain; disagreements were resolved by consensus.”
- “We fit a random-effects model with Hartung-Knapp adjustment and reported risk ratios with 95% CIs.”
- “We planned subgroup analyses by dose and age group and one sensitivity analysis dropping studies at high risk of bias.”
Table: Common Biases And How You Report Your Checks
| Bias type | What you checked | Where it appears |
|---|---|---|
| Selection bias | Randomization method, allocation concealment, baseline balance | Risk-of-bias table; sensitivity plan |
| Performance/Detection bias | Blinding of participants, carers, and assessors | Risk-of-bias table; subgroup rules |
| Attrition bias | Missing outcome data and handling rules | Extraction form; sensitivity plan |
| Reporting bias | Protocol vs published outcomes, small-study patterns | Registry checks; funnel plot plan |
| Confounding (observational) | Measured covariates and adjustment strategy | ROBINS-I domains; effect preference |
Transparency, Registrations, And Data Availability
Say whether a protocol exists and where it sits. If registered, give the ID and registration date. Describe access to extraction sheets, analytic code, and any automation outputs. Link durable repositories when allowed by the journal.
Ethics And Conflicts
Reviews using only published, de-identified reports typically do not require ethics review, but confirm local policy. State funding sources and how you handled investigator ties from included studies during risk-of-bias judgement.
Visuals That Match The Text
A PRISMA flow diagram should match counts named in the methods. Forest plots need the same effect metrics you declared. Any funnel plot or GRADE summary should reflect choices already stated above.
Quick Methods Checklist
- State review type and question.
- List eligibility criteria.
- List every source and the last search date.
- Provide at least one full search string.
- Describe the screening process and tools.
- Explain extraction fields and reviewer roles.
- Name risk-of-bias tools per design.
- Define effect measures for each outcome.
- Describe meta-analysis model or narrative plan.
- State heterogeneity measures and thresholds.
- List subgroup and sensitivity analyses.
- Plan for reporting bias checks.
- Explain certainty ratings across studies.
- Note software, versions, and data sharing.
- Match figures and tables to the text.
Special Cases And Design-Specific Notes
Qualitative Evidence
State the approach to synthesis (e.g., thematic synthesis or meta-ethnography), the sampling frame for studies, and how you coded and developed themes. Name the tool used to appraise individual studies, such as CASP, state whether you applied GRADE-CERQual to rate confidence in review findings, and explain how that confidence shaped your claims.
Diagnostic Accuracy
Report the index test and reference standard in enough detail to reproduce both. State whether you applied a hierarchical model such as the bivariate or HSROC model. Say how you handled multiple thresholds and whether you corrected for threshold effects. Clarify if you split analyses by setting or specimen type.
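Whatever model you fit, the per-study inputs are sensitivity and specificity pairs derived from each study's 2x2 table; the counts below are hypothetical.

```python
# Per-study sensitivity and specificity from a hypothetical 2x2 table of the
# index test against the reference standard (numbers are illustrative).
tp, fp, fn, tn = 90, 15, 10, 185

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}")
```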
Prognostic And Predictive Models
Say how you extracted model details, including candidate predictors, handling of missing predictors, and internal or external validation. Name the tools used for bias assessment (e.g., PROBAST) and report how you synthesized C-statistics, calibration, or decision curves.
Network Meta-Analysis
List the interventions in the network, transitivity checks, and choices for inconsistency assessment. Name the estimator and software (e.g., frequentist netmeta or a Bayesian model), how you selected priors if Bayesian, and how you ranked treatments. Explain how you treated multi-arm trials and sparse connections.
Equity And Subgroup Reporting
If your review assessed equity or context, state the approach used (e.g., PROGRESS-Plus) and how you planned subgroup summaries by sex, age, income, or region. Give any prespecified thresholds for meaningful subgroup differences and the minimum number of studies required.
Common Wording Pitfalls To Avoid
Steer clear of vague verbs that hide decisions. Replace “we assessed studies” with a concrete action such as “two reviewers applied RoB 2 by domain.” Avoid passive phrases that omit who did the work. Give numbers where you can: how many databases, how many reviewers, how many records deduplicated, how many full texts retrieved. Plain, specific sentences help readers retrace every step without guessing.
Final Checks Before Submission
Read the methods alongside the flow diagram and forest plots; numbers and labels should match. Ask a colleague to rerun your main search string.
