A thematic literature review in health sciences draws insights from across studies and weaves them into clear, recurring themes. It suits questions about experiences, barriers, enablers, and how or why an intervention plays out. You’ll map what the research says, surface patterns, and explain them in plain language backed by data extracts.
Doing a thematic literature review in health sciences: quick setup
You don’t need exotic software or a massive team to start. You do need a tight question, a protocol, a thorough search, consistent screening, transparent coding, and a write-up that shows your steps. The table below lays out the moving parts from start to finish.
| Stage | What you do | Practical outputs |
|---|---|---|
| Define | Frame a clear review question and scope using PEO, SPIDER, or a similar approach | One-sentence question, scope limits, inclusion and exclusion rules |
| Plan | Draft a short protocol and plan roles, timelines, and tools | Protocol file, task list, screening form |
| Search | Run database and grey searches with tested strings and limits | Search log, export files, de-duplicated library |
| Screen | Two reviewers screen titles, abstracts, then full texts | PRISMA flow counts, include/exclude notes |
| Extract | Pull study characteristics and all text that answers your question | Extraction sheet, quote bank |
| Appraise | Judge methodological quality using a fit-for-purpose checklist | Appraisal table, notes on confidence |
| Code | Line-by-line coding of findings and relevant context | Codebook, coded excerpts |
| Theme | Group codes, write descriptive themes, then build analytical themes | Theme map, theme definitions |
| Synthesise | Explain relationships, tensions, and what the themes mean for practice or policy | Narrative with tables and visuals |
| Sensitivity | Test how themes hold when lower-quality or edge studies are removed | Sensitivity notes, adjusted theme set |
| Write | Report methods and findings with clear links from data to claims | Manuscript, figures, appendices |
Plan your question and scope
Shape a tight question first. For qualitative aims, PEO (Population, Exposure, Outcome) or SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) work well. Keep the scope narrow enough that themes can be traced across studies without drifting into vague summaries. Write inclusion and exclusion rules that reflect your scope, study types, settings, languages, and time window.
Draft a short protocol so choices are recorded before you search. State the question, eligibility rules, databases, example strings, screening steps, extraction items, appraisal tool, and synthesis method. If a registry fits your review type, register the protocol; if not, place the file in a public repository so readers can see what you planned.
Thematic literature review for health sciences: search and screening
Use multiple databases to reduce blind spots. MEDLINE or PubMed, CINAHL, PsycINFO, Embase, and Web of Science or Scopus usually give strong coverage. Add subject-specific sources when needed, and check trial registries and theses for grey items. Keep a repeatable log of where you searched, the dates, limits, and number of hits.
Build strings with controlled vocabulary and text words. Test and refine them against a pilot set of known studies. Export results to a reference manager, remove duplicates, and move to screening. Two people should screen titles and abstracts, then full texts, with a third on call to break ties. Record reasons for exclusion so your PRISMA flow is airtight. For detailed reporting guidance, follow the PRISMA 2020 checklist.
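Reference managers handle de-duplication for you, but it helps to see the logic. Below is a minimal sketch, assuming records have been exported as dicts with `doi` and `title` fields (both hypothetical field names); it matches on DOI first and falls back to a loosely normalised title. Real tools add fuzzier matching on authors, year, and pagination.

```python
def normalize(text):
    """Lowercase and strip non-alphanumerics so near-identical titles match."""
    return "".join(ch for ch in text.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen per DOI or normalised title."""
    seen, unique = set(), []
    for rec in records:
        keys = set()
        if rec.get("doi"):
            keys.add(rec["doi"].lower().strip())
        if rec.get("title"):
            keys.add(normalize(rec["title"]))
        if keys & seen:
            continue  # duplicate of an earlier record
        seen |= keys
        unique.append(rec)
    return unique

library = [
    {"doi": "10.1000/xyz1", "title": "Self-management in diabetes"},
    {"doi": "10.1000/XYZ1", "title": "Self-management during Ramadan"},  # same DOI, case differs
    {"doi": "", "title": "Self-Management in Diabetes"},                 # no DOI, duplicate title
    {"doi": "10.1000/abc2", "title": "Barriers to fasting care"},
]
print(len(deduplicate(library)))  # 2
```

Record the before-and-after counts in your search log; the difference feeds the PRISMA flow figure.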
Manage records the tidy way
Pick one workspace for the team. Create fields for decisions, reasons, and notes. Use labels for each screening step and for topic tags that might help later in coding. Back up exports at each key milestone: after de-duplication, after title/abstract screening, after full-text screening.
Extract and code data for themes
Extraction should capture both study descriptors and the text that answers your question: participant voices, author interpretations, and any observational detail that bears on the theme. Don’t trim too hard at this stage; you’ll want enough richness to code meaningfully. Include basic design details, setting, sample, and context notes, since these often shape how a theme plays out.
Build a tight codebook
Start with open coding of a subset, line by line. Write short labels that capture meaning, not just topic. Meet to harmonise labels, merge overlaps, and agree on examples and counter-examples for each code. That codebook becomes your guardrail for consistent application by the whole team.
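A codebook doesn't need special software; what matters is that every code carries the agreed fields. The sketch below (with hypothetical codes and field names) stores each entry with a definition, an example, and a counter-example, and checks for gaps before the whole team starts applying it.

```python
REQUIRED_FIELDS = {"definition", "example", "counter_example"}

# Hypothetical entries; real labels come from your team's open coding.
codebook = {
    "fear_of_hypoglycaemia_at_work": {
        "definition": "Self-management altered because of anticipated lows at work.",
        "example": "I fear hypos at work, so I run my sugars high on shift.",
        "counter_example": "General worry about diabetes with no work context.",
    },
    "family_meal_pressure": {
        "definition": "Shared meals push participants off their eating plan.",
        "example": "Family meals derail my plan.",
        "counter_example": "Solo dietary lapses unrelated to social settings.",
    },
}

def incomplete_entries(codebook):
    """List codes missing any agreed field, so gaps get fixed before team-wide coding."""
    return [label for label, entry in codebook.items()
            if not REQUIRED_FIELDS <= entry.keys()]

print(incomplete_entries(codebook))  # []
```

Version the codebook file at each harmonisation meeting so you can show how codes evolved.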
Move from codes to themes
Group codes into clusters that describe what participants did, felt, believed, or encountered. Draft descriptive themes from those clusters, then ask what sits beneath them. That’s where analytical themes come from: a higher-order explanation of why those patterns arise. The three-stage path set out by Thomas and Harden (code text, build descriptive themes, produce analytical themes) is widely used in health research.
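The clustering step can be kept auditable with a simple code-to-theme mapping. This is an illustrative sketch, not a prescribed tool: the mapping, codes, and quotes are all hypothetical, and in practice the mapping is negotiated at a team session, not fixed in advance.

```python
from collections import defaultdict

# Hypothetical mapping from codes to descriptive themes, agreed by the team.
theme_map = {
    "fear_of_hypoglycaemia": "Safety concerns steer treatment choices",
    "shift_work_disruption": "Safety concerns steer treatment choices",
    "family_meal_pressure": "Social obligations compete with the regimen",
}

def group_by_theme(excerpts):
    """Cluster coded excerpts under themes; unmapped codes surface as 'unassigned'."""
    themes = defaultdict(list)
    for code, study, quote in excerpts:
        themes[theme_map.get(code, "unassigned")].append((study, quote))
    return dict(themes)

excerpts = [
    ("fear_of_hypoglycaemia", "Study A", "I fear hypos at work"),
    ("family_meal_pressure", "Study B", "family meals derail my plan"),
    ("shift_work_disruption", "Study C", "night shifts wreck my routine"),
]
clusters = group_by_theme(excerpts)
print(len(clusters["Safety concerns steer treatment choices"]))  # 2
```

The "unassigned" bucket is useful in itself: codes that resist clustering often point to a missing theme or a code that needs splitting.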
A ready-to-use extraction sheet
Here’s a compact sheet you can adapt. Keep it lean so data entry stays quick yet complete enough for coding.
| Field | Why it helps | Example entries |
|---|---|---|
| Study basics | Tracks design, setting, and sample so readers can judge transferability | Qualitative interview; tertiary clinic; n=24 adults with type 2 diabetes |
| Phenomenon | Clarifies what experience or process the paper speaks to | Self-management barriers during Ramadan |
| Main findings | Central author interpretations and participant quotes | “I fear hypos at work”; “family meals derail my plan” |
| Context cues | Notes that shape meaning of a finding | Shift work; fasting hours; clinic education model |
| Quality flags | Signals that raise or lower confidence | Clear sampling; thin reflexivity; limited triangulation |
| Reviewer notes | Emerging ideas and potential links to other codes | Stigma links to disclosure code; ties to access theme |
Appraise study quality without losing nuance
Use a checklist suited to qualitative designs. The JBI tools and CASP are common choices; they steer attention to sampling, data collection, analytic transparency, reflexivity, and ethical handling. Record judgements per item rather than one blunt label, and explain how appraisal shaped your synthesis. For methods detail, the JBI Manual for Evidence Synthesis sets out approaches that fit health topics.
Write themes that carry evidence
Each theme needs a clear name, a short definition, a paragraph that explains the signal, and data extracts that show it. Use quotes and author interpretations side by side so readers can see both raw voice and synthesis. Don’t hide disconfirming cases; flag them and show how they sharpen the theme. If a theme varies by setting or group, say so with crisp sub-sections.
Show your path from data to claims
Readers should be able to trace a straight line: from search and selection to coded data, to theme maps, to the final narrative. Include a PRISMA flow figure with counts and reasons, a table of study characteristics, an appraisal table, and a theme table with supporting quotes. When you need method guidance tailored to qualitative syntheses, see chapter 21 of the Cochrane Handbook, which covers qualitative evidence.
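If your screening decisions live in a structured log, the flow-figure counts fall out mechanically. A minimal sketch, assuming a hypothetical log format where each record notes the stage it was excluded at (or `None` if included):

```python
from collections import Counter

# Hypothetical screening log; excluded_at is None for included studies.
log = [
    {"id": 1, "excluded_at": None, "reason": None},
    {"id": 2, "excluded_at": "title_abstract", "reason": "wrong population"},
    {"id": 3, "excluded_at": "full_text", "reason": "not qualitative"},
    {"id": 4, "excluded_at": "title_abstract", "reason": "wrong population"},
]

def flow_counts(log):
    """Tally the counts a PRISMA 2020 flow figure needs, with full-text exclusion reasons."""
    screened = len(log)
    excluded_ta = sum(1 for r in log if r["excluded_at"] == "title_abstract")
    full_text_assessed = screened - excluded_ta
    ft_reasons = Counter(r["reason"] for r in log if r["excluded_at"] == "full_text")
    included = full_text_assessed - sum(ft_reasons.values())
    return {
        "screened": screened,
        "title_abstract_excluded": excluded_ta,
        "full_text_assessed": full_text_assessed,
        "full_text_excluded": dict(ft_reasons),
        "included": included,
    }

print(flow_counts(log)["included"])  # 1
```

Keeping the reasons in the log, rather than reconstructing them later, is what makes the "counts and reasons" boxes of the figure defensible.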
Link themes to practice and policy without overreach
Keep claims tied to the data. Translate themes into clear takeaways for clinicians, managers, or educators. If you point to action, show the thread from theme to step. Where the evidence is thin, mark it and suggest what new studies would settle the question. That way, your review helps both immediate decisions and the next round of research.
Common snags and quick fixes
Scope creep
When a question balloons mid-stream, park new angles for a later review. Stick to your protocol and eligibility rules. Add a short note in the paper about topics you set aside.
Shallow searches
If early runs miss cornerstone papers, revise strings, add synonyms from index terms, and ask a librarian to spot gaps. Scan reference lists and use citation chasing to round up near-misses.
Topic labels in place of themes
“Barriers” or “education” are buckets, not themes. Push labels to say what’s happening, who’s involved, and why it matters to the outcome at hand.
Over-counting quotes
Numbers can help, but headcounts aren’t the point of a thematic review. Use counts sparingly to signal spread, then lean on depth and explanation.
Weak linkage between data and claims
If a theme reads like commentary, add quotes or richer excerpts. Show the reader how you got there. Where a claim rests on thin evidence, say so.
Method drift
Thematic synthesis has its own logic. Pick one approach and stick with it from coding through write-up, unless you planned a hybrid at protocol stage.
Present visuals that lift clarity
A tight theme map, a table of quotes per theme, and a short logic model can carry a lot of weight. Keep visuals readable in black and white, with labels that match text exactly. If you cluster themes, explain the links in a sentence before the figure so no one gets lost.
Teamwork tips that save hours
Decide early who leads search design, who manages screening, and who owns the codebook. Set short calibration checkpoints: screen the first 200 records together, co-code two or three papers, and compare decisions. Small syncs up-front beat long disputes later.
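Calibration checkpoints are easier to act on when agreement is quantified. A common choice is Cohen's kappa, which corrects raw agreement for chance; the sketch below computes it for two reviewers' include/exclude decisions on a pilot batch (the decision lists are illustrative).

```python
def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two reviewers' screening decisions."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    labels = set(r1) | set(r2)
    expected = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Decisions on the same pilot batch: 1 = include, 0 = exclude.
reviewer_a = [1, 1, 1, 0, 0, 0, 1, 0]
reviewer_b = [1, 1, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(reviewer_a, reviewer_b), 2))  # 0.5
```

A low kappa on the pilot batch is a signal to revisit the eligibility rules together before screening the full set, not a verdict on either reviewer.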
Reporting must-haves
Use a structure that readers recognise: background and question; methods (protocol, search, screening, extraction, appraisal, synthesis); results (study flow, study table, quality table, themes with quotes); and a short section on implications. Keep language plain and direct, and tie every claim to a traceable data extract or table.
Ethics and reflexivity
Even though you’re synthesising published work, reflect on your stance, topic ties, and any constraints that might shape coding and interpretation. State how the team handled disagreements and how you kept participant voices central during synthesis.
Data management and reproducibility
Keep a clean chain of files: protocol, search logs, de-duplication notes, screening exports, extraction sheets, codebook versions, and appraisal forms. Store coded datasets and quote banks in a secure folder with a readme. Share what you can in a repository on acceptance, removing any copyrighted material that can’t be posted.
Final checks before submission
- Does the title reflect the question and method?
- Is the question tight enough to yield themes rather than loose lists?
- Have you logged searches, dates, and counts for each source?
- Can a reader follow selection from hit counts to the included set?
- Does the appraisal table match what you later say about confidence?
- Does each theme include a name, a definition, a short explanation, and quotes?
- Are disconfirming cases shown and used to sharpen claims?
- Do tables and figures use the same labels as the text?
- Do cross-refs to the PRISMA 2020 checklist appear in the right places?
- Have you stated data sharing and any limits?