In short, no. A systematic review synthesizes existing studies; a meta-analysis then models each study’s effect size as the outcome and treats study features as moderators.
Readers run into a snag when they carry primary-study language into evidence synthesis. Lab and field studies define an independent variable and a dependent variable inside one study. A systematic review gathers many studies to answer a framed question and, when possible, runs a meta-analysis to pool their results. That shift changes how “variables” work. This guide explains what belongs where, how outcomes and exposures carry over into a review, and when moderator checks behave like “independent variables” in a meta-regression.
What A Systematic Review Actually Measures
A review starts with a structured question, search, and screening. It extracts data from eligible studies and reports what the collected evidence shows. If the included work is similar enough, a meta-analysis estimates a pooled effect. The review itself does not manipulate treatments or exposures; it summarizes them. That means it does not house an independent variable in the experimental sense. The same goes for the dependent variable—there isn’t a single outcome measured on participants by the reviewers. Instead, the review records outcomes that the original studies measured, then aggregates those results.
To keep things precise, most health intervention reviews frame the question with PICO: Population, Intervention, Comparison, and Outcome. That scaffold keeps the selection and extraction tight and makes the end product traceable. See the concise overview of PICO components for definitions that map neatly onto review planning and screening.
Where Variables Live Across Study Types
Here’s a fast map that keeps roles straight across study designs and the evidence synthesis that pools them.
| Study Or Synthesis | What’s Analyzed | Role Of “Variables” |
|---|---|---|
| Experimental Study | Participant outcomes under assigned conditions | Independent variable = treatment; dependent variable = measured outcome |
| Observational Study | Associations across exposures and outcomes | Exposure acts like an independent variable; outcome is recorded |
| Systematic Review (No Meta-analysis) | Narrative synthesis across studies | No single independent or dependent variable; reports outcomes and study features |
| Meta-analysis | Effect sizes from included studies | Dependent variable = effect size; moderators = study or sample features |
Independent Vs. Dependent Variables In Evidence Syntheses: What Fits Where
In a meta-analysis, each included study contributes an effect size such as a risk ratio, mean difference, or standardized mean difference. That effect size becomes the outcome in the meta-analytic model. When analysts probe why study results differ, they add moderators such as dose, follow-up length, or risk-of-bias indicators. In a regression on effect sizes, those moderators act like predictors. So the “independent variable” idea appears at the model level only when running meta-regression, not at the review level by default.
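To make that shift concrete, here is a minimal sketch of what the analysis dataset can look like once extraction is done. The column names and values are hypothetical, not a prescribed template: each row is a study, the effect size column plays the role of the outcome, and the remaining columns are candidate moderators.

```python
# Minimal sketch of the data a meta-analytic model works with.
# Each row is one included study; names and numbers are hypothetical.
import pandas as pd

studies = pd.DataFrame({
    "study":            ["Trial A", "Trial B", "Trial C", "Trial D"],
    "effect_size":      [-0.42, -0.10, -0.31, 0.05],  # e.g., log risk ratios
    "standard_error":   [0.18, 0.12, 0.22, 0.15],
    "dose_mg":          [50, 100, 50, 25],             # candidate moderator
    "followup_weeks":   [12, 24, 12, 8],               # candidate moderator
    "low_risk_of_bias": [True, True, False, False],    # candidate moderator
})

# effect_size is the response in a meta-regression; dose_mg,
# followup_weeks, and low_risk_of_bias can act like predictors.
print(studies)
```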
The PRISMA 2020 guideline explains how reviews should report methods and results so readers can see exactly what outcomes were aggregated, which analyses were prespecified, and how heterogeneity was handled. Clear reporting makes those moderator checks interpretable and reusable.
How PICO Maps To Outcomes In A Review
Population defines who the evidence describes. Intervention and Comparison specify the exposure or treatment contrast. Outcome lists the end points that matter for decisions. During extraction, reviewers capture those outcomes from each study along with the statistics needed to compute an effect. The synthesis then pools those effects. No single participant is measured by the reviewers; the unit under analysis is the study’s effect size.
Outcome Choices And Effect Measures
Reviews pick effect measures that match the data type and decision need. Binary outcomes often use risk ratios or odds ratios. Continuous outcomes use mean differences or standardized mean differences. Time-to-event outcomes use hazard ratios. The choice flows from the included studies and the question; it does not come from manipulating a variable inside the review.
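For a concrete sense of the binary case, the sketch below computes a risk ratio and an odds ratio from an invented 2×2 table, along with the log-scale standard errors that typically feed the pooling step. The counts are made up for illustration only.

```python
import math

# Hypothetical 2x2 table: events and totals in each arm (invented numbers).
events_trt, n_trt = 20, 100   # treatment arm
events_ctl, n_ctl = 30, 100   # control arm

# Risk ratio and the standard error of its logarithm.
rr = (events_trt / n_trt) / (events_ctl / n_ctl)
se_log_rr = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)

# Odds ratio and the standard error of its logarithm.
odds_ratio = (events_trt * (n_ctl - events_ctl)) / (events_ctl * (n_trt - events_trt))
se_log_or = math.sqrt(1/events_trt + 1/(n_trt - events_trt)
                      + 1/events_ctl + 1/(n_ctl - events_ctl))

print(f"RR = {rr:.2f} (log RR = {math.log(rr):.3f}, SE = {se_log_rr:.3f})")
print(f"OR = {odds_ratio:.2f} (log OR = {math.log(odds_ratio):.3f}, SE = {se_log_or:.3f})")
```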
Why Meta-Regression Looks Like A Classic Model
Once effect sizes are in hand, analysts can test whether study-level characteristics explain variation. That is where the language of predictors fits. A meta-regression estimates how the pooled outcome (the effect size) shifts with a change in a moderator such as age group, dose, or trial design. Conceptually, you can read it like a regression of y on x, only y is the effect size and x is a study feature.
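As a minimal sketch of that idea, the code below regresses invented log risk ratios on an invented dose moderator using inverse-variance-weighted least squares. Dedicated meta-analysis software (for example, the metafor package in R) also estimates a between-study variance component; this toy version keeps the simpler fixed-effect form so the y-on-x structure stays visible.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (log risk ratios), standard errors,
# and a study-level moderator (dose in mg). All values are invented.
yi   = np.array([-0.42, -0.10, -0.31, 0.05, -0.25])   # effect sizes
sei  = np.array([0.18, 0.12, 0.22, 0.15, 0.20])        # standard errors
dose = np.array([50, 100, 50, 25, 75])                  # moderator

# Weighted least squares with inverse-variance weights: the effect size is
# the response (y) and the moderator is the predictor (x).
X = sm.add_constant(dose)
fit = sm.WLS(yi, X, weights=1.0 / sei**2).fit()

print(fit.params)   # intercept and slope: change in log RR per mg of dose
print(fit.bse)      # standard errors of those coefficients
```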
Heterogeneity, Subgroups, And Moderators
When effect sizes vary more than chance alone would explain, subgroup splits or continuous moderators can shed light on the pattern. Subgroup checks compare pooled effects inside categories like “short vs. long follow-up.” Continuous moderators model trends such as dose per week or baseline severity. Both approaches treat the effect size as the response and the study feature as the input.
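Here is a compact subgroup sketch under those same assumptions: invented log risk ratios split by a short-versus-long follow-up moderator, pooled within each category with fixed-effect inverse-variance weights.

```python
import numpy as np

# Hypothetical effect sizes (log risk ratios), standard errors, and a
# categorical moderator splitting studies by follow-up length.
yi  = np.array([-0.40, -0.35, -0.05, 0.02, -0.10])
sei = np.array([0.15, 0.20, 0.12, 0.18, 0.16])
follow_up = np.array(["short", "short", "long", "long", "long"])

def inverse_variance_pool(effects, std_errors):
    """Fixed-effect pooled estimate and its standard error."""
    weights = 1.0 / std_errors**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    return pooled, np.sqrt(1.0 / np.sum(weights))

# Subgroup check: the effect size is the response, the moderator the input.
for group in ("short", "long"):
    mask = follow_up == group
    pooled, se = inverse_variance_pool(yi[mask], sei[mask])
    print(f"{group} follow-up: pooled log RR = {pooled:.3f} (SE = {se:.3f})")
```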
Common Moderator Candidates In Meta-Regression
The list below sketches frequent predictors and why they help interpret pooled results.
| Moderator Type | Examples | What It Explains |
|---|---|---|
| Population Features | Age band, baseline risk, setting | Differences in who received the intervention or exposure |
| Intervention Details | Dose, delivery mode, duration | Variation in what was given and how |
| Study Methods | Randomization, blinding, follow-up length | Design choices linked to effect estimates |
What To Call Variables Inside A Review
To keep language tidy, use “outcome” for the end points recorded in the included studies and “effect size” for the statistic pooled across studies. Reserve “moderator” for study-level features examined in subgroup checks or meta-regression. That wording aligns with the synthesis workflow and avoids confusion with primary-study roles.
Practical Walkthrough: From Question To Pooled Result
1) Frame The Question
Write the PICO. Name the Population, define the Intervention and the Comparison, and list the Outcomes that guide inclusion and extraction. Keep outcomes specific enough to match across studies.
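A small, entirely hypothetical sketch of how a PICO record might be written down so the screening criteria stay explicit and the outcomes stay specific:

```python
# Hypothetical PICO record that inclusion and extraction criteria hang off.
pico = {
    "population":   "adults with stage 1 hypertension",
    "intervention": "supervised aerobic exercise for at least 8 weeks",
    "comparison":   "usual care or waitlist",
    "outcomes":     ["systolic blood pressure at 12 weeks",
                     "diastolic blood pressure at 12 weeks"],
}
```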
2) Search And Screen
Run structured searches and screen records against the PICO. Track reasons for exclusion. A PRISMA flow diagram makes the path from records to included studies transparent.
3) Extract Data
Capture study identifiers, participant numbers, intervention details, outcome statistics, and any candidate moderators. Record risk-of-bias items. Decide in advance which outcomes and time points are primary.
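One way to keep extraction consistent is a simple record type. The layout below is a hypothetical sketch, not a prescribed form; the field names are assumptions chosen to mirror the items listed above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of the extraction sheet; field names are hypothetical."""
    study_id: str
    n_intervention: int
    n_comparison: int
    intervention_detail: str              # dose, delivery mode, duration
    outcome_name: str                     # prespecified primary outcome
    timepoint_weeks: int
    effect_statistic: Optional[float]     # as reported, before conversion
    effect_metric: str                    # e.g., "risk ratio", "mean difference"
    risk_of_bias: str                     # e.g., "low", "some concerns", "high"
    moderators: dict = field(default_factory=dict)  # candidate moderators

record = ExtractionRecord(
    study_id="Trial A", n_intervention=100, n_comparison=98,
    intervention_detail="50 mg daily for 12 weeks",
    outcome_name="systolic blood pressure", timepoint_weeks=12,
    effect_statistic=None, effect_metric="mean difference",
    risk_of_bias="low", moderators={"dose_mg": 50, "setting": "outpatient"},
)
```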
4) Compute Effect Sizes
Convert each study’s results into a common metric. For dichotomous outcomes, compute log risk ratios or log odds ratios with standard errors. For continuous outcomes, compute mean differences or standardized mean differences. For time-to-event outcomes, use log hazard ratios.
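The binary case was sketched earlier; for the continuous case, the sketch below computes a mean difference and a standardized mean difference (with the Hedges’ g small-sample correction) from invented summary statistics.

```python
import math

# Hypothetical summary statistics for one study's continuous outcome.
n1, mean1, sd1 = 48, 121.5, 10.2   # intervention arm
n2, mean2, sd2 = 50, 126.0, 11.0   # comparison arm

# Mean difference and its standard error.
md = mean1 - mean2
se_md = math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Standardized mean difference: Cohen's d with a pooled SD, then the
# small-sample correction that turns d into Hedges' g.
sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = md / sd_pooled
j = 1 - 3 / (4 * (n1 + n2) - 9)        # small-sample correction factor
g = j * d
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
se_g = j * se_d

print(f"MD  = {md:.2f} (SE = {se_md:.2f})")
print(f"SMD = {g:.2f} (Hedges' g, SE = {se_g:.2f})")
```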
5) Pool And Inspect Heterogeneity
Combine effects with inverse-variance weights. Report confidence intervals and heterogeneity statistics such as tau-squared (the between-study variance) and I² (the share of variability due to heterogeneity rather than chance). If heterogeneity is substantial, plan subgroup splits or a meta-regression.
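Here is a compact sketch of those calculations with invented effect sizes: fixed-effect inverse-variance pooling, Cochran’s Q, a DerSimonian-Laird tau-squared, I², and a random-effects re-pool. A real analysis would normally lean on dedicated meta-analysis software rather than hand-rolled formulas.

```python
import numpy as np

# Hypothetical per-study effect sizes (log risk ratios) and standard errors.
yi  = np.array([-0.42, -0.10, -0.31, 0.05, -0.25])
sei = np.array([0.18, 0.12, 0.22, 0.15, 0.20])

# Fixed-effect (common-effect) pooling with inverse-variance weights.
w = 1.0 / sei**2
pooled_fe = np.sum(w * yi) / np.sum(w)
se_fe = np.sqrt(1.0 / np.sum(w))

# Heterogeneity: Cochran's Q, DerSimonian-Laird tau-squared, and I-squared.
k = len(yi)
Q = np.sum(w * (yi - pooled_fe) ** 2)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# Random-effects pooling adds tau-squared to each study's variance.
w_re = 1.0 / (sei**2 + tau2)
pooled_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Fixed effect:   {pooled_fe:.3f} +/- {1.96 * se_fe:.3f}")
print(f"tau^2 = {tau2:.4f}, I^2 = {I2:.1f}%")
print(f"Random effects: {pooled_re:.3f} +/- {1.96 * se_re:.3f}")
```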
6) Probe Moderators When Justified
Test prespecified predictors such as dose or risk-of-bias categories. Be cautious with the number of predictors relative to the number of studies. Report the model, coefficients, and fit measures, and keep the interpretation crisp.
Common Misconceptions To Avoid
“A Review Picks Its Own Independent Variable”
Not so. The review pools what the included studies already measured. It does not assign treatments or exposures to participants. The only time a predictor shows up is during moderator checks, where the effect size is the response.
“Dependent Variable Means The Patient Outcome In The Review”
Inside a meta-analysis, the outcome is the effect size from each study. Patient outcomes sit one level down inside the original trials or observational studies.
“Every Review Needs Meta-Regression”
No. Many review questions need only a pooled estimate or even a structured narrative if the studies are too diverse. Moderator checks help when you have enough studies and a clear rationale.
Method Notes And Source Trail
This guide follows standard reporting and synthesis practice. For reporting structure and flow diagrams, see the PRISMA 2020 statement. For pooling logic, effect measures, and meta-regression basics, the Cochrane Handbook chapters on effect measures and meta-analysis outline the models and the way moderators are treated as explanatory variables in a regression on effect sizes. For framing the question, the PICO overview lays out the standard elements used to plan and screen.
Quick Answers To The Core Question
Do Reviews Contain Independent And Dependent Variables?
Not in the experimental sense. A review compiles outcomes from studies and may pool them. The only “dependent variable” in the statistical sense appears when modeling effect sizes in a meta-analysis.
When Do Predictors Enter The Picture?
During subgroup checks or meta-regression. Study or sample features become moderators that help explain differences in effect sizes across studies.
What Should Authors Call Things?
Use “outcomes” for end points, “effect sizes” for the quantities pooled, and “moderators” for study features tested in meta-regression. Keep “independent/dependent” for primary studies or for the meta-regression model description.
Takeaway For Students And New Reviewers
Speak the language of the level you are working at. In a trial, independent and dependent variables make sense because you are linking an intervention to a measured outcome in a single sample. In a review, outcomes and effect sizes are the currency, and moderators are study features that can explain differences across results. That shift keeps your write-up clear and aligns with widely used guidance.
