A medical article critique follows a set structure: appraise the study's methods, judge its trustworthiness, and write an evidence-based evaluation for readers.
You’re reviewing a clinical or biomedical paper, not retelling it. The goal is to help readers weigh the study’s credibility and usefulness. The process works best when you follow a consistent path, cite standards, and show how you reached each judgment. This guide lays out a repeatable workflow that editors and peer reviewers recognize.
Write A Review For A Medical Journal: Step-By-Step
Set up a tidy plan. Read once for the big picture, again for the design and execution, and a third time to verify numbers and claims.
Scope The Question And Audience
Start with the research question and context. State the PICO or equivalent (population, intervention or exposure, comparator, outcome). Say who would act on the findings: clinicians, policy makers, or researchers. Give a single-sentence takeaway early so busy readers know whether to keep reading.
Check Ethical And Reporting Basics
Confirm ethics approval, consent, and data transparency statements. Name the trial registry or protocol if one exists. Match the paper’s reporting to the right guideline set. The ICMJE Recommendations describe authorship, conflicts, data sharing, and trial registration rules used by many journals. For syntheses of studies, the PRISMA 2020 checklist lays out required items for abstracts, methods, and the flow diagram.
One-Page Checklist Before You Draft
| Section | What To Do | Proof You Include |
|---|---|---|
| Title/Abstract | State topic, design, key result, and limits. | Design label, effect size, precision, and one main caveat. |
| Question | Write the study question in PICO/PEO form. | Population, exposure/intervention, comparator, outcomes. |
| Methods | Describe design and why it fits the question. | Prospective or retrospective; randomization or matching; blinding. |
| Participants | Assess recruitment, eligibility, and setting. | Flow diagram numbers and reasons for exclusion or loss. |
| Interventions/Exposures | Check clarity and fidelity. | Dosage, timing, adherence, co-interventions. |
| Outcomes | Judge clinical relevance and measurement quality. | Primary outcome definition and validation. |
| Bias Control | Identify randomization, allocation concealment, blinding. | Sequence generation method and concealment process. |
| Sample Size | Look for power calculation and assumptions. | Alpha, beta, target effect, planned n. |
| Statistics | Check model choice and handling of missing data. | Intention-to-treat, imputation, model diagnostics. |
| Results | Report effect sizes with precision, not only p values. | Risk ratios or mean differences with 95% CIs. |
| Harms | Summarize adverse events transparently. | Denominators, severity, and withdrawals due to harms. |
| External Validity | Judge applicability to real-world settings. | Setting, case mix, resources needed, follow-up. |
| Conflicts/Funding | Note sponsor roles and competing interests. | Who designed, analyzed, or wrote parts of the study. |
| Bottom Line | Offer a balanced verdict. | One-to-two lines on credibility and use. |
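The sample-size row above can be sanity-checked with a quick calculation. Here is a minimal sketch of the standard two-proportion formula (normal approximation), assuming a two-sided alpha of 0.05 and 80% power; the 30% vs 20% risks are made-up illustrative values, not drawn from any particular paper:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm for comparing two proportions (normal approximation)."""
    z = NormalDist()                     # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop from 30% to 20% event risk at alpha = 0.05, power = 80%
print(n_per_group(0.30, 0.20))  # → 291 per arm
```

If a paper's reported n falls well below what its own alpha, beta, and target effect imply, flag the discrepancy in your review.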
Read Methods With A Critic’s Eye
Design And Randomization
Match the design to the question. For treatment effects, randomized trials generally beat observational designs. In a trial, look for sequence generation, allocation concealment, and blinding. Weakness in any of the three can skew estimates.
Participants And Setting
Eligibility rules shape who the results apply to. Note baseline comparability, loss to follow-up, and reasons for attrition. A lopsided drop-out rate can distort estimates even when randomization looked fine at baseline.
Outcome Measurement
Outcomes should be pre-specified, patient-relevant, and measured with valid tools. Surrogates can mislead. Check timing, assessor blinding, and whether multiple testing inflated false positives.
Qualitative Papers
For interviews or focus groups, check sampling strategy, reflexivity, and how themes were derived. Look for a coding process, quotes that map to themes, and steps that guard against selective use of excerpts. Explain what the findings add to practice or policy decisions.
Analysis Choices
Look for intention-to-treat in trials, adjustment for confounders in cohort studies, and attention to clustering when units are grouped. Ask whether model assumptions were checked. When subgroup claims appear, see if they were pre-planned and tested for interaction.
Summarize Results Without Spin
Give effect sizes with confidence intervals. Convert odds ratios to risk ratios when the outcome is not rare, since odds ratios exaggerate risk ratios for common outcomes. Present absolute effects where possible so readers can judge clinical size, not only statistical signals.
Weigh Bias And Certainty
Separate internal and external threats. Internal threats include selection bias, performance bias, detection bias, and attrition. External threats relate to setting and case mix. For syntheses, describe how the authors controlled for small-study effects and selective reporting. Many reviewers use structured tools: CASP checklists help for single studies, and AMSTAR 2 helps for systematic reviews and meta-analyses. Use these as scaffolds, not as scorecards.
Write The Critique In A Clean Structure
Opening Paragraph
One or two sentences place the study in context and state the main finding. Add a plain claim on credibility: “methods support a cautious yes,” or “limits prevent firm inferences.”
Methods Paragraph
Summarize design, setting, participants, interventions or exposures, outcomes, and how missing data were handled. Name any prespecified subgroups. Avoid dense jargon; keep terms correct and brief.
Results Paragraph
Report the primary estimate with precision. Include counts for key events and time windows. Add the most relevant secondary outcome only if it changes decisions or safety profiles.
Appraisal Paragraph
List the two or three biggest strengths and the two or three biggest limits. Tie each point to a part of the paper: concealment, adherence, outcome misclassification, contamination, or follow-up. Avoid vague labels; link each claim to a detail you can cite.
Implications Paragraph
Say who should act, what action makes sense, and what uncertainty remains. Offer one line on research gaps if the question matters but the evidence is thin.
Use Reporting And Appraisal Standards
When you review a synthesis, check that the abstract, search strategy, selection process, risk-of-bias methods, and flow diagram match PRISMA items. For quality grading across outcomes, GRADE may appear, but don’t inflate a low-certainty signal into a firm practice point. For single trials or observational studies, CASP prompts cover validity, results, and applicability. For reviews of interventions, AMSTAR 2 asks about protocol, search, study selection, justification of exclusions, risk-of-bias methods, and synthesis choices; it does not produce a numeric score.
Numbers You’ll Use Again And Again
Effect Sizes That Matter
Risk ratio, risk difference, and number needed to treat or harm carry meaning for clinical decisions. Median differences help when data are skewed. Report both the estimate and its 95% confidence interval so readers see the likely range, not just a point.
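As a worked illustration, a minimal sketch that computes these measures from a 2×2 table; the counts (30/100 events on treatment vs 50/100 on control) are hypothetical, chosen only for the demonstration:

```python
from math import sqrt, log, exp

def effect_sizes(events_tx: int, n_tx: int, events_ctl: int, n_ctl: int):
    """Risk ratio (with 95% CI on the log scale), risk difference, and NNT."""
    risk_tx, risk_ctl = events_tx / n_tx, events_ctl / n_ctl
    rr = risk_tx / risk_ctl
    rd = risk_tx - risk_ctl
    nnt = 1 / abs(rd)  # number needed to treat (or harm)
    # Standard error of log(RR), then back-transform the 95% interval
    se = sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    ci = (exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se))
    return rr, ci, rd, nnt

rr, ci, rd, nnt = effect_sizes(30, 100, 50, 100)
print(f"RR {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), RD {rd:.2f}, NNT {nnt:.0f}")
# → RR 0.60 (95% CI 0.42 to 0.86), RD -0.20, NNT 5
```

Reporting all three together, as the function does, keeps relative and absolute effects side by side for the reader.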
Translating Between Measures
When authors report odds ratios for common outcomes, convert to an approximate risk ratio using a baseline risk. For time-to-event results, present hazard ratios with absolute risks at a set time to anchor interpretation.
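Both translations can be sketched in a few lines. The odds-ratio conversion below uses the widely cited Zhang–Yu approximation, and the hazard-ratio line assumes proportional hazards; the numeric examples are illustrative only, not from any specific study:

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Approximate risk ratio from an odds ratio (Zhang-Yu formula)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

def risk_at_time(baseline_risk_t: float, hazard_ratio: float) -> float:
    """Absolute risk at time t in the exposed group, assuming proportional hazards."""
    return 1 - (1 - baseline_risk_t) ** hazard_ratio

# An OR of 0.50 with a common outcome (40% baseline risk) is a milder RR of 0.625
print(or_to_rr(0.50, 0.40))
# An HR of 0.70 applied to a 20% baseline risk at one year -> about 14.5% absolute risk
print(round(risk_at_time(0.20, 0.70), 3))
```

Anchoring relative measures to a stated baseline risk like this is what lets readers judge clinical size rather than just direction.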
Common Pitfalls When You Review
| Flaw | Question To Ask | Quick Fix In Your Review |
|---|---|---|
| Outcome Switching | Was the reported primary outcome prespecified? | Compare to protocol/registry; flag changes and why they matter. |
| P-Hacking Signals | Were many tests run with few events? | Favor confidence intervals and pre-planned analyses. |
| Unclear Randomization | How was the sequence generated and concealed? | Ask for detail or rate risk as high when concealment is opaque. |
| Loss To Follow-Up | Is attrition uneven or reasons imbalanced? | Show per-group loss; discuss bias direction. |
| Selective Reporting | Do results appear only for favored outcomes? | Cross-check registry and supplement; call it out. |
| Confounding | Were key prognostic factors unmeasured? | State that residual confounding may explain effects. |
| Small Study Effects | Do tiny trials drive a big pooled effect? | Point to funnel plot limits and downgrade certainty. |
| Misleading Graphs | Are axes truncated or scales inconsistent? | Translate figures into plain numbers with consistent denominators. |
Style And Tone That Editors Welcome
Write short paragraphs and use active voice. Cite data, not feelings. Keep the verdict near the end so readers follow your checks.
Submission Prep Checklist
Files And Formatting
Follow the target journal’s instructions. Use their reference style, figure formats, and word count.
Transparency Statements
Add role statements if you worked in a team review: who screened, who extracted, who checked statistics. If you used a checklist, name it and attach a filled version as a supplement.
Ethics And Permissions
If you reproduce a figure or table, secure permission. When you quote from a protocol or registry, cite the record. Keep patient privacy intact when authors included case material.
Speed Moves When Time Is Tight
When a deadline looms, triage the title, abstract, methods, and the main table. Then check randomization or confounding, outcome measurement, and results. Draft your verdict early and refine numbers later.
Template You Can Reuse
Suggested Outline
- Context: One sentence on the clinical or policy backdrop.
- Question: The exact research question.
- Design: Trial, cohort, case-control, cross-sectional, or qualitative.
- Methods: Key features that protect against bias.
- Results: Primary effect with precision and any serious harms.
- Appraisal: Two main strengths and two main limits.
- Implications: What a clinician, policymaker, or researcher should do next and where the uncertainties sit.
Phrase Bank
“Randomization and concealment appear sound, lowering selection bias.” “Outcome assessors were blinded, reducing detection bias.” “Attrition differs by arm and might inflate benefit.” “Adjusted model includes core confounders, but residual bias is possible.” Lift the exact line that fits the paper you are reviewing, then back it with numbers.
Keep Your Evidence Trail
Store notes and calculations with page or figure numbers. Save any baseline risks used for conversions. Keep author replies with your file for later checks.
Why This Approach Works
It mirrors what readers value: clarity, relevance, and fairness. You show the question, the design fit, the numbers, and the verdict. You cite recognized standards so editors can see your method. Most of all, you write for decisions, not for decoration.