How To Do A Scoping Literature Review In Medicine | Clinician’s Quick Guide

Plan a clear question, write a short protocol, run a broad search, screen in pairs, chart data, and map findings with PRISMA-ScR reporting.

Scope at a glance

Here is a compact road map you can use from day one.

| Stage | Main output | Practical tip |
| --- | --- | --- |
| Question | PCC sentence and sub-questions | Write one line for each element and keep a parking lot for nice-to-have ideas |
| Protocol | Public plan with methods | Post to OSF and date-stamp edits |
| Search | Full strings and logs | Follow PRISMA-S and save every version |
| Screening | Decision logs and reasons | Pilot in pairs and keep weekly huddles |
| Charting | Clean data table | Pilot the form and lock fields before the main pass |
| Mapping | Grouped summaries and visuals | Pick two or three simple charts over complex figures |
| Reporting | Manuscript and supplements | Match headings to the PRISMA-ScR checklist |

Why choose a scoping review

A scoping review maps what exists, where it sits, and how it clusters across topics, designs, and settings. It suits broad or emerging areas, mixed methods, and questions about definitions, measures, or models. When randomised trials are scarce or outcomes vary widely, a scoping approach helps you size the field and spot gaps without scoring study quality.

Doing a scoping literature review in medicine: step-by-step

The process follows a staged path. You shape a clear question, plan how you will search and select sources, screen titles and abstracts, read the full texts that pass, chart core data, and present the map. Many teams use the JBI PCC format (Population, Concept, Context) to frame the question. That wording keeps scope broad enough for mapping while still giving structure for decisions.

Frame your question with PCC

Write the question in one sentence. List Population, Concept, and Context right under it. Note any limits on time window, language, or setting and explain why they matter to the map. Add two or three sub-questions you plan to summarise in the results, such as how often a measure appears or which settings test a given model.

Build a short protocol

The protocol keeps choices transparent. In two to five pages, state your aim, the PCC elements, inclusion and exclusion lines, databases, date ranges, grey sources, and how many team members will screen at each stage. Add a draft charting form with the fields you intend to extract. Post this plan on an open repository such as OSF before you run the search so readers can see what you planned from the start.

Design a reproducible search

Team up with an information specialist if you can. Write full search strings for each database, including MeSH and free-text terms. Keep filters to a minimum so you do not miss signals. For reporting, follow the PRISMA-S guidance for search methods and store the strings and logs in a shared folder.

Pick databases and grey sources

For medicine, MEDLINE and Embase sit at the core. Add CINAHL for nursing and allied health, PsycINFO for mental health topics, and Web of Science or Scopus for reach. For policy and guidelines, add grey sources such as guideline libraries, trial registries, and major agency sites. Search preprints when speed matters and note that status in your charting form.

Manage records and deduping

Export to a reference manager and remove duplicates before screening. Many tools match on title, author, year, and DOI; still scan a sample to confirm the match rules are not too strict. Keep a log of counts at each step so you can fill the PRISMA flow diagram later without guesswork.
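The matching logic most tools apply can be sketched in a few lines. This is an illustrative sketch, not any specific reference manager's algorithm: it assumes records arrive as dictionaries with `title`, `year`, and `doi` keys (our naming), matches on DOI first, and falls back to a normalised title-plus-year key.

```python
def dedupe(records):
    """Remove likely duplicates: match on DOI when present, otherwise on a
    normalised title + year key. Keys here are illustrative, not a standard."""
    seen, kept = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").strip().lower()
        title = "".join(c for c in rec.get("title", "").lower() if c.isalnum())
        key = doi if doi else (title, rec.get("year"))
        if key not in seen:
            seen.add(key)
            kept.append(rec)
    return kept

records = [
    {"title": "Scoping Reviews: A Primer", "year": 2020, "doi": "10.1/abc"},
    {"title": "Scoping reviews - a primer", "year": 2020, "doi": "10.1/ABC"},
    {"title": "Another Study", "year": 2021, "doi": ""},
]
kept = dedupe(records)
# Log both counts now; they feed the PRISMA flow diagram later.
print(f"retrieved={len(records)} after_dedupe={len(kept)}")  # → retrieved=3 after_dedupe=2
```

Scanning a sample by hand remains essential: keys like these catch case and punctuation variants but miss, for example, the same paper indexed under a translated title.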

Screen titles and abstracts

Screen in pairs with a short pilot first. Calibrate with 20–50 records, meet to sort differences, then start the main pass. Use broad include choices at this stage; push tough calls to full text. Tag reasons for exclusion so the flow diagram and tables read cleanly. Keep weekly huddles to resolve blockers fast.
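One common way to make the calibration meeting concrete, though not one this guide prescribes, is to report Cohen's kappa on the pilot decisions. A minimal sketch, assuming decisions are coded 1 for include and 0 for exclude:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two reviewers' include/exclude decisions."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Pilot of 10 records, two reviewers: 1 = include, 0 = exclude
r1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
r2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(round(cohens_kappa(r1, r2), 2))  # → 0.6
```

A low kappa on the pilot is a signal to tighten the include and exclude lines before the main pass, not a verdict on the reviewers.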

Review full texts

Fetch PDFs in batches and keep a tracker. Two reviewers read each item against the inclusion lines and record the reason for excluding any item at the full-text stage. Move any unclear cases to a third-reviewer tie-break.

Chart the data

Build a charting form that matches your question and sub-questions. Pilot the form on five to ten papers and refine labels so anyone on the team can use it the same way. Capture study type, setting, sample, concept terms, measures, outcomes, and any notes on definitions. Add one free-text field for surprises that do not fit your preset boxes; this often surfaces useful themes.
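A charting form can be locked in code as well as in a spreadsheet. The sketch below uses a dataclass whose field names double as the data dictionary; the names and example values are ours, chosen to mirror the fields listed above.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ChartingRecord:
    """One row of the charting form. Lock these fields after the pilot."""
    study_id: str
    study_type: str                 # e.g. RCT, cohort, qualitative, mixed
    setting: str
    sample: str
    concept_terms: list = field(default_factory=list)
    measures: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)
    definition_notes: str = ""
    surprises: str = ""             # free text for themes outside preset boxes

row = ChartingRecord(
    study_id="example2021",         # hypothetical study
    study_type="cohort",
    setting="primary care",
    sample="n=240 adults",
    concept_terms=["frailty"],
    measures=["named frailty scale"],
    outcomes=["hospital admission"],
)
print(sorted(asdict(row)))          # field names = the data dictionary
```

Because every row must supply the same fields, late ad-hoc columns cannot creep in silently, which protects comparability across reviewers.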

Map and present findings

Summarise counts and ranges first. Then group the evidence by concept, method, population, or setting and show the patterns with simple visuals. Tables that list definitions or measures by domain help readers reuse your work. Keep narrative sections crisp and tied to the PCC and sub-questions you set at the start.

Report with PRISMA-ScR

Follow the PRISMA-ScR checklist when you write. It lists the items that readers clearly expect: what you planned, how you searched, how you selected sources, how you charted data, and how you present results. Match your headings to the items so peer reviewers can find answers in seconds. Add a full search appendix, a filled flow diagram, and the blank charting form as supplements.

How to do a scoping review in medical literature: search and screening

Search strings must balance breadth with precision. Use proximity operators and field tags where supported, and test variants of core terms. Record every change with the date so your audit trail stays tight. For screening, write one-line include and exclude rules the whole team can quote. Examples keep calls consistent: one rule, one clear example that passes, and one that fails.

When a scoping review fits better than a systematic review

Choose a scoping path when the aim is to map types of evidence, clarify concepts, or scope how a field measures outcomes. If the goal is to answer a narrow effect question with pooled estimates, that sits in systematic review land. You can still add a light critical appraisal in a scoping project if your audience needs it; describe the tool and keep it descriptive instead of grading the field.

Team roles and tools

At minimum you need a lead, one information specialist or librarian, and two reviewers for each stage. A third reviewer handles ties. Reference managers handle deduping, while screening platforms track decisions and reasons. Spreadsheets work for small sets; larger sets benefit from a database or review software with exportable logs.

Grey literature without getting lost

Set a list of named sites and repeatable paths. For site searches, use site: filters and note the date and path used. Save PDFs or web captures to a folder with a clear naming rule. Be explicit about what you will not search so scope stays steady.

Handling language and time limits

Language limits can clip the map if a large slice of work sits outside English. If you set a limit, state why and reflect on the impact in the limits section. For time windows, use a start date tied to a meaningful event, such as the first use of a term or the launch of a policy. Log any updates you run and state the last search date on page one.

Write with reuse in mind

Place the data chart and definitions where a reader can lift them for protocol building, guideline drafts, or grant background. Name files with dates and versions and place them in a public folder when you submit. That habit helps readers and boosts trust in your map. Credit the information specialist and any patient or public partners who shaped the question or sources list.

Common pitfalls to avoid

Over-narrow questions sink scoping work. Vague include lines do the same. Missing grey sources can skew the map toward journals only. Skipping a pilot for screening or charting creates drift across reviewers. Late changes to the charting form break comparability; lock it after the pilot unless a strong case calls for an edit and note that change in the write-up.

What goes in each manuscript section

In the Abstract, give the aim, date of last search, sources, counts, and two or three headline patterns. In Methods, mirror your protocol from question to charting and link to the plan. Results carry the map: selection counts, study characteristics, and the grouped summaries tied to the sub-questions. The last section states what the map shows, gaps that surfaced, and how readers can use the outputs.

Ethics, registration, and data sharing

Most scoping projects do not need ethics review since they use public sources. Still, check local rules. Register the protocol on OSF or a similar service, since PROSPERO does not accept most scoping designs. Share the charting form, codebook, and raw counts as open files so others can reuse them and build from your work.

From search to submission: a simple timeline

One to two weeks for scoping the question and writing the protocol. Two to four weeks for building and running searches across sources. Two to three weeks for title and abstract screening. Three to six weeks for full-text review and charting, shaped by volume and team size. Two to three weeks for mapping and writing with visuals and appendices.

Plain writing that still meets reporting standards

Use the PRISMA-ScR checklist as your outline and keep sentences short. Each item maps to a section you already wrote during planning. When you cite, link to core guidance that readers trust and keep the tone neutral. That style helps peer review and makes the paper easy to use as a model for the next team.

Sample include and exclude lines

Write short rules you can apply in seconds. Include: peer-reviewed articles and grey sources that match the PCC. Include all study designs that speak to the concept. Exclude: single-patient case reports unless your map targets them; editorials without data; and sources where the concept is only mentioned in passing. If in doubt at title stage, push to full text and decide there.

Build better search strings

List synonyms for each PCC element. Use truncation to catch word variants and proximity operators to keep linked terms close. Combine subject headings with text words. Test recall by checking whether known sentinel papers appear. If they do not, tweak strings until they do. Save each test run with a label so you can show the trail from draft to final.
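The synonym-listing step above can be scripted so every string version is reproducible. A minimal sketch with made-up terms; the Boolean syntax shown is generic, so adapt truncation and proximity operators to each database:

```python
def block(terms):
    """OR together the synonyms for one PCC element; quote phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Illustrative synonym lists for a hypothetical frailty question
population = ["older adults", "elderly", "aged"]
concept = ["frailty", "frail*"]              # * = truncation where supported
context = ["primary care", "general practice"]

query = " AND ".join(block(t) for t in (population, concept, context))
print(query)
# → ("older adults" OR elderly OR aged) AND (frailty OR frail*) AND ("primary care" OR "general practice")
```

Saving the lists and the generated string together gives you the labelled trail from draft to final that the paragraph above calls for.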

Visuals that help readers

Pick charts that match the story in your data. A simple bar chart can show counts by setting or study type. A bubble plot shows volume on two axes at once. Timelines show how terms or measures rise and fade. Keep colour schemes simple and add clear labels so readers can reuse the figures in presentations and briefs with credit to your team.

Data management and reproducibility

Name files with ISO dates and versions. Keep a data dictionary for charting fields and store both the blank form and the final table. Use shared folders with read-only exports for the logs you will cite in the paper. Small habits like these turn a one-off project into a resource that others can build on without guesswork.
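The naming habit is easy to automate so the pattern never drifts. The pattern below is a suggestion, not a standard:

```python
from datetime import date

def versioned_name(stem, version, ext, when=None):
    """Build a name like 2025-01-15_search-log_v02.csv (ISO date first,
    zero-padded version last, so files sort correctly in any folder view)."""
    when = when or date.today()
    return f"{when.isoformat()}_{stem}_v{version:02d}.{ext}"

print(versioned_name("search-log", 2, "csv", date(2025, 1, 15)))
# → 2025-01-15_search-log_v02.csv
```

Putting the date first means an alphabetical listing is also a chronological one, which keeps audits quick.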

For methods detail and worked examples, see the JBI Manual scoping reviews chapter.

Charting fields you can copy

Use this ready list of fields when you build or refine your charting form.

| Field | Why it helps | Notes |
| --- | --- | --- |
| Study type and design | Groups evidence by method | Examples: RCT, cohort, qualitative, mixed |
| Population and setting | Shows who and where | Include age range, clinic type, country |
| Concept terms and definitions | Clarifies how terms are used | Quote short definitions when provided |
| Measures and instruments | Maps how outcomes are captured | List scale names and any cut-offs |
| Outcomes reported | Reveals range and frequency | Flag primary vs secondary if stated |
| Sample size and timeframe | Shows study scope | Note start and end years for data |
| Notes and surprises | Catches themes you did not plan | Use this to seed new sub-questions |
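The fields above translate directly into a blank charting template. A short sketch that writes the header row of a CSV; the machine-readable column names are our own rendering of the field labels:

```python
import csv, io

# Column names map one-to-one onto the charting fields listed above
FIELDS = [
    "study_type_design",
    "population_setting",
    "concept_terms_definitions",
    "measures_instruments",
    "outcomes_reported",
    "sample_size_timeframe",
    "notes_surprises",
]

buf = io.StringIO()
csv.DictWriter(buf, fieldnames=FIELDS).writeheader()
header = buf.getvalue().strip()
print(header)
```

Share this blank template as a supplement alongside the filled table, as the reporting section recommends, so other teams can reuse the form unchanged.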