How To Do A Literature Review In Psychology | Quick Steps

Pick a clear question, search databases with set rules, screen and extract consistently, then synthesize themes into a clear, transparent review.

What a good literature review delivers

A strong review does four jobs: it carefully maps what scholars have already tested, shows how methods and samples differ, surfaces patterns and gaps, and builds the case for your own study. Readers should finish it knowing why the topic matters, what has been tried, and where your work fits.

Keep scope tight. Define the population, concept, and context before you search. Pick a date range and language rules. Decide which designs, measures, and outcomes qualify. Write these rules down first and stick to them during screening.

Clarify your audience as you set scope. A thesis committee expects transparent methods and complete reporting. A journal review may need stricter rules and a tighter narrative. Match tone and depth to the venue while keeping the same core method.

Review Types And When To Use Them
Review Type | Purpose | Use When
Narrative review | Summarizes and links studies around themes. | You need a broad map and critical commentary.
Scoping review | Charts what exists: topics, designs, measures. | The field feels scattered or emerging.
Systematic review | Uses a protocol, exhaustive search, and documented screening. | You must be thorough and reproducible.
Meta-analysis | Aggregates effect sizes across comparable studies. | Studies report commensurable statistics.

Doing a literature review in psychology: a clear plan

Follow a simple path that keeps bias low and makes your work repeatable. These steps suit theses, journal papers, and capstone projects alike.

Document every move. Keep a dated search log, save each database query, and store export files with clear names. That paper trail lets another reader repeat the work and gives you an easy way to revise months later.

Define the question

State one focused question. Name the population, main concept, and outcome or phenomenon. For intervention topics, a PICO string works well; for qualitative topics, a SPIDER string fits. Write both the plain-English question and the search version.

PICO example: population = adolescents, intervention = mindfulness program, comparison = waitlist, outcome = sleep quality. SPIDER example: sample = adults with chronic pain, phenomenon of interest = coping, design = interviews, evaluation = lived experience, research type = qualitative. Both formats create a tidy plan for search terms.

Plan rules before you search

Draft inclusion and exclusion rules: study type, date span, languages, age groups, settings, and outcomes. Note any minimum sample size or measurement standards. Save this as a short protocol document.

Create a screening form with yes/no fields and a notes box. Decide how you will resolve disagreements if two people screen the same paper.

State primary outcomes and any proxy outcomes up front. If the topic is broad, create tiers of inclusion: for example, include randomized trials and well-matched quasi-experiments, and mark case reports for a short side note.

If two screeners work in parallel, pilot test your form on twenty abstracts and tune the wording before the full pass. Record inter-rater agreement so you can report consistency.
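Inter-rater agreement for two screeners is usually reported as percent agreement or Cohen's kappa. A minimal sketch, using hypothetical pilot decisions (1 = include, 0 = exclude); dedicated tools and spreadsheet formulas work just as well:

```python
# Sketch: Cohen's kappa for two screeners' include/exclude calls.
# The decision lists below are hypothetical pilot data.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (binary labels)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal "include" rates.
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
b = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 2))  # 0.58: moderate agreement
```

Values around 0.6 or higher are commonly treated as acceptable for screening; below that, revisit the wording of your rules before the full pass.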

Build smart search strings

List synonyms for each concept. Group synonyms with OR; connect concepts with AND; put phrases in quotes. Use truncation where it helps. Example: “sleep quality” AND rumination AND adolescent*. Reuse that concept grid across databases so your search stays consistent.
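The concept grid can be kept as a small script so every database gets the same string. A minimal sketch, assuming the synonym lists below (they are illustrative, not prescribed terms):

```python
# Sketch: build one boolean search string from a concept grid.
# Synonyms within a concept are OR-joined; concepts are AND-joined.

def build_query(concept_grid):
    """Quote multi-word synonyms as phrases; leave * truncation as-is."""
    groups = []
    for synonyms in concept_grid.values():
        terms = [f'"{s}"' if " " in s else s for s in synonyms]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

grid = {
    "outcome": ["sleep quality", "sleep problems"],
    "process": ["rumination", "repetitive thinking"],
    "population": ["adolescent*", "teenager*"],
}
print(build_query(grid))
```

Editing the grid and regenerating keeps the changelog honest: each revision of the string maps to one visible change in the synonym lists.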

Use controlled vocabulary where available. In APA PsycInfo, the database thesaurus supplies official index terms that boost recall and precision. Combine those index terms with free-text keywords to catch new papers not yet indexed.

Map spelling variants and abbreviations. Pair British and American spellings with your platform's wildcard (for example, behavio?r and randomi?ed in Ovid, where ? matches zero or one characters). Add common acronyms to the keyword list and test whether they add noise. Revise iteratively while keeping a changelog.

Boolean and field codes

Learn one platform well, then translate. Title/abstract fields cut noise when a concept is too broad. Proximity operators (e.g., NEAR/3) help when two ideas need to appear close together.

Each platform has quirks. Ovid uses adj operators; EBSCO uses N and W; Scopus offers W/n. Skim the help page for your platform and note the field code for title and abstract.

Pick databases and grey sources

Core databases for this field include APA PsycInfo, PubMed, Web of Science, and Scopus. For education-related topics, add ERIC. Scan Google Scholar for preprints and theses; check ProQuest Dissertations and OSF for grey literature when publication bias is a worry.

Why these sources? The APA database concentrates discipline-specific journals and dissertations. PubMed brings strong biomedical coverage and MeSH indexing. Web of Science and Scopus enable forward citation chasing so you can see who cited a seminal paper. ERIC adds teacher-focused studies and reports.

Do citation chasing both ways. Backward chaining mines the reference lists of your final set; forward chaining tracks newer papers that cite them.

Screen, extract, and organize

Export all results to a reference manager such as Zotero or EndNote and remove duplicates. Screen titles and abstracts against your rules, then screen full texts. Log every reason for exclusion. A PRISMA-style flow diagram helps you show counts from search to final set.

Create a data sheet for the final set before you read in depth. Capture citation, country, sample, design, measures, and outcomes. Add columns for quality items you care about, such as randomization, attrition, blinding, or validated scales.

Use two independent screeners when possible for titles/abstracts and resolve conflicts by discussion or a third vote. If you work solo, rescreen a sample after a break to catch drift.

During full-text screening, capture exact quotes for reasons to exclude. That precision speeds write-up and avoids revision debates later.

For deduplication, sort by DOI, title, and year. Watch for records with minor title changes between preprint and final publication; keep the final peer-reviewed version unless a preprint holds extra data.
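A rough sanity check for duplicates can be scripted; a minimal sketch, assuming hypothetical record dicts (reference managers like Zotero and EndNote have their own dedup tools, so treat this as a second pass):

```python
# Sketch: flag likely duplicates by DOI and by normalized title,
# so a preprint with a slightly edited title still matches its
# published version. Process published versions first so the
# kept record is the final one.
import re

def norm_title(title):
    """Lowercase and strip punctuation/whitespace so minor edits match."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def dedupe(records):
    seen, kept = set(), []
    for rec in records:
        keys = {norm_title(rec["title"])}
        if rec.get("doi"):
            keys.add(rec["doi"].lower())
        if keys & seen:
            continue  # matches an earlier record on DOI or title
        seen |= keys
        kept.append(rec)
    return kept

records = [
    {"doi": "10.1/abc", "title": "Mindfulness and Sleep", "year": 2021},
    {"doi": None, "title": "Mindfulness and sleep.", "year": 2020},  # preprint
    {"doi": "10.1/abc", "title": "Mindfulness and Sleep", "year": 2021},
]
print(len(dedupe(records)))  # 1: preprint and exact duplicate both flagged
```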

Synthesize without bias

Group studies by design, population, or measure and build theme summaries. Contrast consistent patterns with outliers and offer plausible reasons such as differences in samples, follow-up length, or instruments. Avoid cherry-picking; cite all studies in each theme.

Weigh study quality while you synthesize. Give more space to robust designs and well-reported methods. Flag issues like small samples or unclear measurement. Name limits without dismissing the evidence base.

Pick a synthesis mode that fits your set of studies. If designs and measures align, compute or report effect sizes and compare their spread. When designs differ, write a tight narrative that still names magnitudes and directions so the reader can gauge size, not just significance labels.
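When effect sizes do align, even a descriptive summary of their spread is informative. A minimal sketch with hypothetical Cohen's d values; this is not a formal meta-analysis, which would weight each study by its precision:

```python
# Sketch: describe the spread of reported effect sizes (Cohen's d).
# The d values below are made up for illustration.
from statistics import mean, stdev

d_values = [0.42, 0.31, 0.58, 0.12, 0.49]

print(f"k = {len(d_values)} studies")
print(f"mean d = {mean(d_values):.2f}")
print(f"SD = {stdev(d_values):.2f}")
print(f"range = {min(d_values):.2f} to {max(d_values):.2f}")
```

Reporting the range alongside the mean lets readers gauge consistency, not just central tendency.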

Create a matrix where rows are studies and columns are themes or variables. Fill cells with one-line findings and notes on quality. That matrix becomes your outline and reduces duplicate reading.
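The matrix lives happily in a spreadsheet, but it can also be sketched in a few lines; the study names, themes, and cell notes below are placeholders:

```python
# Sketch: a study-by-theme matrix as nested dicts, rendered as a grid.
# Rows are studies, columns are themes; cells hold one-line findings.

def render(matrix, themes):
    lines = [f"{'Study':<12}" + "".join(f"{t:<16}" for t in themes)]
    for study, cells in matrix.items():
        lines.append(f"{study:<12}"
                     + "".join(f"{cells.get(t, '-'):<16}" for t in themes))
    return "\n".join(lines)

matrix = {
    "Smith 2020": {"Sleep": "d=0.4, RCT", "Mood": "ns, small N"},
    "Lee 2021":   {"Sleep": "d=0.6, quasi"},  # missing cells print as "-"
}
print(render(matrix, ["Sleep", "Mood"]))
```

Scanning down a column shows the evidence for one theme; scanning across a row shows everything one study contributes.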

State uncertainty plainly. Use ranges and confidence intervals when available. When confidence is low, say so and give the reason: small samples, short follow-up, weak measures, or inconsistent definitions.

Structure the write-up

Shape the paper so readers can scan fast and still grasp the takeaways.

Introduction

Open with the specific problem the review addresses. Define core terms. State the question and why the answer matters for theory or practice. End the section with a one-sentence overview of methods: databases searched, date span, and core rules.

Methods

Report databases, date of last search, all search strings, and screening rules. Mention tools used for deduplication, citation management, and data extraction. Include a flow diagram and a table listing the final studies if your assignment permits.

Results

Start with counts: how many records at each step, how many final studies, and basic features of that set. Then write your themes. For each theme, summarize the evidence, point to standout studies, and note limits.

Discussion

Tie themes back to the question. Spell out practical implications, open questions, and precise next steps for research. Keep claims modest and grounded in the data you reported.

Style and citations

Use past tense for findings and present tense for general truths. Paraphrase with care and cite every claim that traces to a source. Keep reference entries consistent and complete, following APA Style.

Use short paragraphs and front-load each one with the core point. Keep sentences under twenty-five words when you can. Prefer active voice. Use hedging words sparingly and only when the data demand caution.

Quote sparingly. Paraphrase and cite instead, and avoid patchwriting. If a definition is standard, quote it once and then shift back to paraphrase with citation.

Follow heading levels and number style from the manual so readers can scan. Explain every abbreviation at first use. When reporting statistics, include the test name, test value, degrees of freedom when relevant, and the p-value or interval.
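If you assemble many statistics, a small formatter keeps the reporting consistent. A minimal sketch for t-tests, with made-up numbers; the same pattern extends to F, chi-square, and correlations:

```python
# Sketch: format a test statistic line with test name, value, df, and p,
# in the spirit of APA reporting (leading zero dropped from p).

def report_t(t, df, p):
    """Return a one-line t-test report; p below .001 is floored."""
    p_text = "p < .001" if p < 0.001 else f"p = {p:.3f}".replace("0.", ".")
    return f"t({df}) = {t:.2f}, {p_text}"

print(report_t(2.31, 58, 0.024))  # t(58) = 2.31, p = .024
```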

Figures and tables that lift clarity

Two or three well-chosen visuals can make the review effortless to skim. A flow diagram shows the path from records to the final set. A characteristics table lists designs, samples, and measures at a glance. A theme map links clusters of findings so readers can see how ideas connect.

Keep captions plain and informative. Make each figure readable in grayscale, and cite the source if you adapt a template. Place each visual near the paragraph that references it so readers never hunt for context.

When to stop searching

Set stopping rules in advance. Common triggers: two consecutive update searches that yield no new eligible studies; forward citation checks on the newest five papers add nothing; alerts run for four weeks with no fresh matches; or the project deadline arrives.

Report those rules and the actual stop date. If a new high-impact paper appears after your stop date, you can add a short note in the Discussion to show awareness without re-running the full process.

For live projects, schedule a final mini-search on submission week. Run the top two databases and a quick citation check to catch any last-minute additions.

Literature review in psychology: common pitfalls

Many setbacks stem from planning and record-keeping, not a lack of effort. Scan this list early and again before submission.

  • A scope that balloons because the question is vague.
  • Too few databases or no controlled vocabulary.
  • Search strings that miss synonyms, acronyms, or spelling variants.
  • Screening drift after the first few papers.
  • Data extraction that skips measures or timing details.
  • Theme summaries that ignore studies that do not fit the story.
  • Claims that stretch beyond the data.

Quick checklist before you submit

  • The question is specific and matches the final set of studies.
  • Rules, search strings, and dates appear in the Methods section.
  • A flow diagram records counts at each step.
  • Every included study appears at least once in the Results narrative.
  • Tables and figures are labeled and referenced in text.
  • Verb tense, numbers, and abbreviations match APA Style.
  • References are complete and consistently formatted.

Data Extraction Fields That Save Time
Field | What To Capture | Tip
Study basics | Author, year, journal; country; funding. | Match to your reference list structure.
Design & sample | Design, N, age range, setting. | Note recruitment and inclusion quirks.
Measures & outcomes | Instruments, timing, effect size or key findings. | Record statistics exactly as reported.
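An extraction sheet can be started as a plain CSV so it opens anywhere. A minimal sketch; the column names follow the fields above but are placeholders to adapt to your protocol:

```python
# Sketch: seed a data extraction sheet as CSV with one example row.
# Field names are illustrative; rename to match your protocol.
import csv, io

FIELDS = ["author_year", "country", "design", "n", "age_range",
          "measures", "outcome_summary", "quality_notes"]

buffer = io.StringIO()  # swap for open("extraction.csv", "w", newline="")
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "author_year": "Smith 2020", "country": "UK", "design": "RCT",
    "n": 120, "age_range": "13-17", "measures": "PSQI",
    "outcome_summary": "d = 0.40 for sleep quality",
    "quality_notes": "attrition 12%; assessors blinded",
})
print(buffer.getvalue())
```

One row per study, filled as you read the full texts, becomes the characteristics table in your write-up with almost no extra work.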

Ethics and transparency

State funding and any conflicts. If your own prior study appears in the set, disclose that link. Avoid self-citation padding; cite only when a source truly fits the theme.

Run a light plagiarism check on your draft and fix anything that looks too close to a source. Keep all PDFs, notes, and exported files so you can answer reviewer questions quickly.

Report the exact date of each search and the platform used. If a database offers both basic and advanced interfaces, note which one you used.

Plan time like a project

Block time by phase: question and rules, searching, screening, extraction, synthesis, writing, and polish. Estimate hours for each based on the scale of your topic and the number of databases. Build slack between phases for advisor feedback.

Automate where you can: save search alerts during drafting so new papers land in your inbox, not in a last-minute scramble. Version your manuscript and data sheet so you never lose edits.