No, systematic reviews state a structured question, not a hypothesis; hypotheses appear only in preplanned subgroup analyses or heterogeneity checks.
Readers ask whether a review needs a formal prediction. Method texts describe reviews as answers to a pre-specified question using transparent methods. Some teams add predictions for planned analyses, yet the core output remains an evidence-based answer to the question.
What A Systematic Review Really Sets Up
A standard review begins with a protocol that fixes the topic, the question, eligibility rules, outcomes, and the analysis plan. The question defines the scope: population, intervention or exposure, comparator, and outcomes. This structure keeps study selection, extraction, and synthesis consistent across the team.
In many fields, that protocol is registered or archived publicly. Registration adds transparency, reduces duplication, and guards against shifting targets once results are seen. It also nudges teams to spell out subgroup plans and sensitivity checks before any data are pooled.
Question Versus Hypothesis In Evidence Synthesis
A hypothesis predicts the direction or size of an effect between variables. A review question asks whether an effect exists, how big it is across studies, and for whom or under what conditions. Because reviews assemble past studies, the aim centers on appraisal and synthesis, not on a new predictive claim.
Meta-analysis can test contrasts and yield p-values or credible intervals. Even then, the phrasing usually reflects a question, such as whether treatment X reduces outcome Y in group Z. Teams may pre-specify subgroup predictions to probe inconsistency, yet the review still revolves around the question.
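To make that arithmetic concrete, here is a minimal fixed-effect pooling sketch in Python. The effect sizes and standard errors are invented for illustration, not drawn from any real review; in practice teams use a dedicated meta-analysis package and typically fit a random-effects model alongside.

```python
import math

effects = [-0.25, -0.10, -0.30]   # log risk ratios from three made-up studies
ses     = [0.12, 0.15, 0.20]      # standard errors of those estimates

weights = [1 / se**2 for se in ses]   # inverse-variance weights
pooled  = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
se_pool = math.sqrt(1 / sum(weights))

# Two-sided p-value and 95% confidence interval for the pooled effect
z = pooled / se_pool
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
ci_low, ci_high = pooled - 1.96 * se_pool, pooled + 1.96 * se_pool

print(f"pooled log RR {pooled:.3f}, 95% CI {ci_low:.3f} to {ci_high:.3f}, p = {p:.4f}")
```

Even with this machinery available, the output answers the question, whether the pooled effect differs from zero and by how much, rather than confirming a stated prediction.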
Broad Table: Review Types And Where Predictions Fit
The table below sketches common evidence syntheses and how predictions tend to appear within each. It shows where a formal prediction helps and where it adds little value.
| Review Type | Core Aim | Role Of Predictions |
|---|---|---|
| Systematic review (interventions) | Answer a structured PICO question across trials | Optional; used for subgroup or heterogeneity plans |
| Meta-analysis | Pool effect sizes for outcomes | Optional; set a few a priori probes |
| Scoping review | Map concepts and evidence range | Not used; scope not effect-size driven |
| Rapid review | Deliver a time-bound summary with pared methods | Optional; brief probes only if preplanned |
| Umbrella review | Synthesize across existing reviews | Occasional; mainly to plan strata across reviews |
| Qualitative evidence synthesis | Summarize lived experience and mechanisms | Not used; question guides theme development |
Are Hypotheses Part Of Systematic Reviews Today?
Answer: they are optional. Method standards call for a clear question and predefined methods. Predictions are encouraged when probing effect modifiers, planned subgroups, or test accuracy thresholds. In many scoping or mapping projects, predictions are out of place.
For reporting, the PRISMA 2020 checklist sets out what to report for objectives, methods, and results; it does not require a stated hypothesis. For planning, the Cochrane Handbook chapter on eligibility shows how the PICO question anchors inclusion rules.
How To Frame The Review Question Cleanly
Pick a question template that matches the domain. PICO or PICOS fits treatment questions; PICo handles qualitative syntheses; PEO suits risk and exposure topics; SPIDER can work for qualitative designs. Write the question in one line, then translate that line into inclusion criteria and search strings.
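As a toy sketch of that translation step, the Python below assembles a boolean string from PICO concept lists. Every term here is a placeholder; a real strategy would be drafted with an information specialist and adapted to each database's syntax.

```python
# Hypothetical PICO concept lists; terms are illustrative only
pico = {
    "population":   ["adults", "older adults"],
    "intervention": ["drug A", "anticoagulant"],
    "comparator":   ["placebo", "standard care"],
    "outcome":      ["mortality", "death"],
}

def concept_block(terms):
    # OR-join synonyms within one concept, quoting multi-word phrases
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# AND-join the four concepts into one boolean search string
search_string = " AND ".join(concept_block(terms) for terms in pico.values())
print(search_string)
# (adults OR "older adults") AND ("drug A" OR anticoagulant) AND ...
```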
State the outcomes that matter to decision-makers. Name primary outcomes and time windows. Flag any minimum follow-up, measurement scales, or thresholds that will drive synthesis choices.
When A Prediction Helps
A prediction can sharpen planned subgroup or meta-regression work. State it before extraction, tie it to a rationale, and keep the number of checks modest to avoid false leads. Common places include dose bands, baseline risk, setting, and study design.
In diagnostic accuracy work, teams sometimes set an a priori threshold or lay out expected trade-offs between sensitivity and specificity. These act like predictions for how accuracy shifts across settings or cut-points.
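The trade-off is plain arithmetic. This sketch computes sensitivity and specificity at one cut-point from an invented two-by-two table; raising the threshold would push one number up and the other down, which is why accuracy reviews pre-specify the cut-point plan.

```python
# Invented counts at a single cut-point
tp, fn = 85, 15    # diseased participants: test positive / test negative
tn, fp = 140, 60   # healthy participants: test negative / test positive

sensitivity = tp / (tp + fn)   # share of diseased correctly flagged
specificity = tn / (tn + fp)   # share of healthy correctly cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```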
Worked Steps: From Question To Synthesis
1. Draft the question with a fitting question template.
2. Write a short protocol that fixes eligibility, outcomes, and analysis.
3. Register or post the protocol.
4. Run the search, screen records in duplicate, and log reasons for exclusion.
5. Extract data in duplicate with calibrated forms.
6. Appraise bias with a field-standard tool.
7. Synthesize: choose fixed, random, or Bayesian models as the data allow.
8. Probe heterogeneity with planned checks (see the sketch after this list); keep unplanned probes labeled as such.
9. Grade certainty and draft practice-ready statements.
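To make step 8 concrete, here is a minimal heterogeneity sketch computing Cochran's Q and the I² statistic, reusing the invented effect sizes from the pooling example above.

```python
import math

effects = [-0.25, -0.10, -0.30]   # same invented log risk ratios as above
ses     = [0.12, 0.15, 0.20]

weights = [1 / se**2 for se in ses]
pooled  = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled effect
q  = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
df = len(effects) - 1
# I² as a percentage; these homogeneous invented data give I² = 0
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df, I² = {i2:.0f}%")
```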
Deep Table: When To State A Prediction
Use this guide to decide whether to add a prediction line in your plan. It lists common scenarios, the value of a prediction, and tips to keep it honest.
| Scenario | Value Of Prediction | Good Practice Tip |
|---|---|---|
| Heterogeneity across risk levels | High | State how effect may shift by baseline risk; limit to a few contrasts |
| Different dose or intensity bands | Medium | Pre-plan a dose gradient or categories and why they matter |
| Multiple care settings or regions | Medium | State a small set of setting strata to probe context |
| Diagnostic accuracy thresholds | High | Fix a cut-point plan or link to decision curves |
| Scoping or mapping questions | Low | Skip predictions; keep aims descriptive |
| Theory-building qualitative syntheses | Low | No predictions; frame the question with PICo or SPIDER |
Common Missteps To Avoid
Writing a prediction after peeking at forest plots bends the process. Skipping protocol registration widens bias risk. Running dozens of unplanned subgroup cuts invites spurious signals. Burying negative or null findings behind vague wording erodes trust.
What Editors And Reviewers Expect To See
A clean question, a public or dated protocol, transparent flow of records, risk of bias tables, a synthesis plan that matches the data, and a short set of answers. If predictions were set in advance, place them near the subgroup or sensitivity sections and report them plainly.
Practical Templates You Can Borrow
Use a checklist to keep reporting tight. A template for methods might include: protocol link, question, question template used, database list, eligibility bullets, outcomes, risk of bias tool, synthesis model, and certainty grading approach.
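One lightweight way to hold that template is as structured data. The sketch below is a hypothetical Python dictionary whose field names mirror the checklist above and whose values are all placeholders, plus a simple completeness check before write-up.

```python
# Hypothetical methods template; every value is a placeholder, not a
# recommendation for any particular review
methods_template = {
    "protocol_link":     "https://example.org/protocol",  # placeholder URL
    "question":          "Does intervention X reduce outcome Y in group Z?",
    "question_template": "PICO",
    "databases":         ["MEDLINE", "Embase", "CENTRAL"],
    "eligibility":       ["randomized trials", "adults", "12+ weeks follow-up"],
    "outcomes":          {"primary": "all-cause mortality", "window": "90 days"},
    "risk_of_bias_tool": "RoB 2",
    "synthesis_model":   "random effects",
    "certainty_grading": "GRADE",
}

# Flag any field left empty before the methods section is drafted
missing = [field for field, value in methods_template.items() if not value]
print("missing fields:", missing or "none")
```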
Bottom Line For Review Teams
The question is the spine. Predictions help when you plan to probe variation across subgroups or thresholds. If your goal is mapping topics or summarizing diverse concepts, skip predictions and keep the question broad yet sharp.
Objective Statements That Work
Write the aim as a one-line objective tied to the question template. Keep it specific enough for clear screening yet not so narrow that only a handful of trials qualify. Good aims name the group, the intervention or exposure, the comparator, and the main outcomes, plus a time window when that matters.
Sample Objective Wording
Examples of crisp aims: “To assess whether oral drug A reduces all-cause mortality within 90 days among adults with condition B compared with placebo or standard care.” “To compare event rates after high versus low dose of intervention C in children with condition D over 12 months.” Each aim can be traced straight to eligibility bullets and data items.
Protocol Registration In Practice
Teams in health sciences often use PROSPERO for public registration. The minimum set covers the question, eligibility, outcomes, search plan, risk of bias tool, and any planned subgroup checks. Other fields post a protocol on an institutional repository or a preprint server. The location matters less than the presence of a date-stamped plan that readers can verify.
What To Pre-Specify
Pre-specify deal-breakers and analysis rules: study designs that qualify, language limits if any, minimum sample sizes, handling of cluster trials, unit-of-analysis fixes, missing data approach, continuity corrections, and eligible effect metrics. Set a short list of subgroup probes with a brief rationale for each.
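Continuity corrections are among the easier rules to fix in advance. The sketch below applies the common add-0.5 (Haldane-Anscombe) adjustment when any cell of a two-by-two table is zero; the counts are invented for illustration.

```python
import math

def log_odds_ratio(events_t, no_events_t, events_c, no_events_c):
    # Haldane-Anscombe fix: add 0.5 to every cell when any cell is zero
    cells = (events_t, no_events_t, events_c, no_events_c)
    if 0 in cells:
        cells = tuple(x + 0.5 for x in cells)
    a, b, c, d = cells
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return lor, se

# Invented counts: zero events in the treatment arm triggers the correction
lor, se = log_odds_ratio(0, 50, 4, 46)
print(f"log OR = {lor:.2f} (SE {se:.2f})")
```

Stating the rule in the protocol, whichever form it takes, keeps the choice from being made after the data are seen.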
Where Predictions Commonly Appear
Predictions show up most in areas with suspected effect modifiers. Baseline risk bands, dose tiers, age groups, disease severity grades, and care settings are common. In methods text, these are framed as a priori subgroup checks or meta-regression terms. When the aim is mapping, such predictions rarely help, so teams keep the plan descriptive.
Edge Cases Across Designs
Network meta-analysis compares several interventions at once. The plan may include a small set of contrasts that the team expects to rank near the top based on mechanism or prior head-to-head data. Even so, the anchor remains the question, with predictions serving only to guide planned probes.
Bias Control Linked To The Plan
Bias tools such as RoB 2 for trials and QUADAS-2 for accuracy studies make sense once the question and outcomes are fixed. A clear plan sets which domains matter and how judgments affect synthesis. Without that plan, assessments can drift or be used selectively.
Writing Results That Match The Aim
Report flow, included studies, main effects with intervals, and direction in plain terms. State when an effect is small, uncertain, or absent. Link subgroup findings back to the few predictions you set. Flag unplanned probes as exploratory so readers can weigh them with caution.
Who Should Draft The Plan
Bring in a subject specialist, a methods lead, an information professional, and a statistician. Each role shapes the plan: clinical nuance, bias control, search strings, and synthesis choices. With clear roles, the plan stays stable from registration through write-up.
When A Prediction Is A Bad Fit
If the field lacks prior signals, a prediction line adds noise. When outcomes and measures vary widely, predictions can push teams toward fragile subgroup cuts. In mapping projects, the aim is breadth and structure, not a directional claim.
Checklist You Can Reuse
Before screening begins, confirm: a one-line aim; a dated protocol; one question template; databases and dates; clear eligibility bullets; primary outcomes; bias tool; synthesis model; a short list of planned probes; and a path for certainty grading. During analysis, stick to the plan; label late additions as exploratory.
