User Research Plan: The Step-by-Step Template (with Examples) for 2026
A modern user research plan template — the 8 sections every plan needs, the questions stakeholders actually want answered, and how AI-moderated voice interviews compress the average 42-day research timeline into a single week.
Koji Team
May 15, 2026
What a user research plan is — in one sentence
A user research plan is the single document that turns a fuzzy product question ("are users struggling with onboarding?") into a structured, time-boxed study with clear goals, a defined audience, an explicit method, and a date by which the team will have an answer they can act on.
If that sounds bureaucratic, consider the alternative. Dscout's 2025 research timelines report found that the average research project takes 42 days end-to-end, that 22.4% of researchers report not having enough time, and that the single biggest source of slippage, accounting for 36.3% of delays, was poorly scoped recruitment and operations. Most of those failures trace back to one thing: skipping the plan.
This is the modern template — eight sections, the questions each one needs to answer, and how AI-moderated voice interviews compress the timeline from 42 days to 5-7.
Why a research plan still matters in 2026
The case for a written research plan is not bureaucracy. It is alignment:
- It forces the team to commit to the decision the research will inform before the research happens. (If the team won't commit, the research is theatre.)
- It is the document a CFO, a CEO, or a skeptical engineer can read in 4 minutes and either approve, push back on, or fund.
- It is the artifact that survives the inevitable scope creep — 19.6% of project delays trace back to scope drift, and a written plan is the only thing that stops it.
The 2025 UX Research Budget report found that 29% of researchers get less than $10,000 a year to do the entire job, and 50% operate on $1,000/month or less. In that environment, every research dollar must be defensible. A research plan is the defence.
The 8-section user research plan template
Every research plan — for a discovery study, a concept test, a usability evaluation, or a churn investigation — fits the same eight-section structure. The depth changes. The structure does not.
1. Project overview & background
The "why now" of the project. Two short paragraphs maximum:
- The product or business question that triggered the research
- The decision that will be made once results are in
- The team and stakeholders involved (PM, design lead, exec sponsor)
If you cannot fit this in two paragraphs, the project is too vague — go back and narrow it.
Example: "Onboarding completion has dropped from 64% to 51% since the v3 redesign. The product team is debating whether to roll back the redesign, iterate on it, or push through. This study will give the PM a recommendation within two weeks. Stakeholders: Lena (PM), Marc (Design Lead), Ana (Onboarding Eng), Priya (CPO, sponsor)."
2. Research goals & questions
The most important section. Most plans live or die here.
- Research goal = what you want to learn. One or two sentences.
- Research questions = the 4-7 specific, answerable questions that, taken together, satisfy the goal.
Bad: "Understand how users feel about onboarding." Good: "Identify the specific steps in onboarding where users drop off, why they drop off, and what they expected to happen instead."
Each research question should be specific enough that you'll know when you've answered it. "Feel" is not measurable. "Drop off at which step" is.
3. Method & approach
The "how" — and the single section that has changed most in 2026.
State:
- The method (in-depth interview, usability test, survey, concept test, diary study)
- The modality (voice, text, in-person)
- Whether it's moderated, unmoderated, or AI-moderated
- The rationale (one sentence on why this method, not another)
A modern plan for the onboarding example above might read: "30 AI-moderated voice interviews via Koji: 15 with users who completed onboarding in the last 30 days, blended with 15 with users who started but did not complete. Voice modality chosen because completion friction is often emotional and verbal; typed surveys would lose 70% of the signal."
For help choosing the right method, see our methodology selection guide and the breakdown of interview modes in Koji.
4. Participant criteria & recruitment
Recruitment failures cause more than a third of all research delays — so this section has to be precise. Specify:
- Inclusion criteria (must-have characteristics)
- Exclusion criteria (disqualifiers — e.g., "no current employees of the company")
- Sample size (how many per segment, with rationale)
- Recruitment source (existing user list, intercept, paid panel, mixed)
- Incentive (amount, mechanism, IRS-reporting considerations)
For sample-size rationale by method, see how many user interviews you need and the customer interview cadence guide.
5. Discussion guide / question list
The exact questions, in order, with probes. Whether you're writing a moderator guide for a live interview, a task list for an unmoderated test, or a prompt set for an AI-moderated session, this is where it lives.
A strong discussion guide blends Koji's six question types:
- Open-ended — "Walk me through your first day with the product." (the qualitative core)
- Scale — "On a scale of 1-7, how confident did you feel by the end of onboarding?"
- Single choice — "Which of these steps felt longest?"
- Multiple choice — "Which of these tools were you switching between during setup?"
- Ranking — "Rank these three improvements by which would matter most to you."
- Yes/no — "Did you finish setting up before your first work session?"
The structured types pin down the quantitative spine of the study; the open-ended questions give it qualitative depth. Reports automatically visualize each one with the right chart type (distribution for scales, bar chart for choice, pie for yes/no), so a 30-interview study produces both stories and numbers in one pass.
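To make the blend concrete, here is a minimal sketch of the guide above expressed as a data structure. This is illustrative Python, not Koji's actual guide format; only the six question types and the example prompts come from the list above, and the option lists are hypothetical placeholders added so the sketch runs.

```python
from dataclasses import dataclass, field
from enum import Enum

class QuestionType(Enum):
    OPEN_ENDED = "open_ended"          # the qualitative core; invites a walk-through
    SCALE = "scale"                    # e.g. a 1-7 confidence rating
    SINGLE_CHOICE = "single_choice"
    MULTIPLE_CHOICE = "multiple_choice"
    RANKING = "ranking"
    YES_NO = "yes_no"

@dataclass
class Question:
    type: QuestionType
    prompt: str
    options: list[str] = field(default_factory=list)  # empty for open-ended, scale, yes/no

# The example prompts from the list above, in interview order.
# Option lists are hypothetical, included only for illustration.
guide = [
    Question(QuestionType.OPEN_ENDED, "Walk me through your first day with the product."),
    Question(QuestionType.SCALE, "On a scale of 1-7, how confident did you feel by the end of onboarding?"),
    Question(QuestionType.SINGLE_CHOICE, "Which of these steps felt longest?",
             options=["Account setup", "Data import", "Team invites"]),
    Question(QuestionType.MULTIPLE_CHOICE, "Which of these tools were you switching between during setup?",
             options=["Spreadsheet", "Docs", "Team chat"]),
    Question(QuestionType.RANKING, "Rank these three improvements by which would matter most to you.",
             options=["Faster import", "Clearer progress", "Skippable steps"]),
    Question(QuestionType.YES_NO, "Did you finish setting up before your first work session?"),
]

for q in guide:
    print(q.type.value, "->", q.prompt)
```

Laying the guide out this way makes the quantitative spine visible at a glance: every structured question is a chart waiting to happen, and every open-ended prompt is a transcript waiting to be themed.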
See our discussion guide template and 60+ customer interview question examples for ready-to-use phrasing.
6. Timeline & milestones
A 5-bullet timeline. Pad nothing, hide nothing:
- Kickoff & plan approval: day 0
- Recruitment complete: day X
- Interviews complete: day Y
- Analysis complete: day Z
- Findings readout & decision: day Z+2
The traditional 42-day average breaks down to roughly 14 days recruitment, 14 days fieldwork, 10 days analysis, 4 days delivery. AI-moderated platforms collapse this to 5-7 days total because:
- Interviews run in parallel, 24/7, across time zones
- Transcription is real-time and free
- Analysis runs automatically — themes, segments, verbatim quotes — within minutes of the last interview ending
If your plan still has a 6-week timeline in 2026, the plan — not the research — is what's outdated.
7. Analysis approach
How you'll turn raw conversations into a decision. State explicitly:
- The coding approach (open coding, axial coding, framework analysis, AI-assisted thematic analysis)
- Who codes (you, a partner, the platform)
- The deliverable format (report doc, slide readout, dashboard)
Modern AI-native platforms collapse this section dramatically. Koji's automatic thematic analysis clusters open-ended responses, generates a verbatim-grounded codebook, and surfaces segment-specific themes — so the analysis question becomes "which themes do we prioritize?" rather than "how do we extract themes from 30 transcripts?" See how to analyze user interview data for the modern workflow.
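For the technically curious: "clusters open-ended responses" generally means embedding each response as a semantic vector and grouping nearby vectors into candidate themes. Here is a minimal sketch of that general technique, assuming the sentence-transformers and scikit-learn libraries; it illustrates the idea, not Koji's internal pipeline.

```python
# pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# A handful of illustrative open-ended responses.
responses = [
    "I gave up when the data import hung for ten minutes.",
    "The import step never finished so I closed the tab.",
    "I didn't understand why I had to invite teammates before trying it.",
    "Inviting my whole team felt premature on day one.",
]

# Embed each response as a semantic vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Cluster the vectors; each cluster is a candidate theme,
# and its member responses are the verbatim grounding.
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(embeddings)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print(f"  - {text}")
```

Because every theme carries its member quotes with it, the resulting codebook stays auditable: a stakeholder who doubts a theme can read (or listen to) the verbatims behind it.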
8. Decisions enabled & risks
The section nobody writes — and the section that determines whether the research will get used. List:
- The decisions this study will inform. (If you can't list at least one, stop and rescope.)
- The decisions this study will NOT inform. (Manages stakeholder expectations.)
- Known risks. ("Holiday timing may slow recruitment," "we only have access to current users, not lapsed.")
This section is also the place to name your anti-research-bias plan: how you'll avoid leading participants, how you'll handle conflicting findings, and what disqualifies a finding from making it into the final report.
A worked example: the onboarding drop-off study
Putting all eight sections together, a complete plan for the onboarding example above fits on a single page:
Project: Onboarding completion has dropped from 64% to 51% post-redesign. The PM needs a recommendation in 2 weeks.
Goal: Identify which onboarding steps drive drop-off and what users expected instead.
Questions: Where do users drop off? Why? What did they expect to happen? What workaround did they invent (if any)? Is the drop-off concentrated in a specific persona?
Method: 30 AI-moderated voice interviews via Koji — 15 completers, 15 drop-offs. Hybrid mode: voice with structured scales and choice questions at key checkpoints.
Participants: Users who started onboarding in the last 30 days. Exclude employees and internal beta testers. Sample weighted 50/50 between completers and drop-offs to enable comparison.
Discussion guide: [link]. Includes scale rating per step, ranked friction points, and an open-ended "walk-through" prompt.
Timeline: Plan approved day 0. Interviews launched day 1. Fieldwork complete day 5. Analysis complete day 6. Readout day 7. (Total: 7 days.)
Analysis: Automatic theming + segment cut by completer/drop-off + verbatim quote bank per friction point.
Decisions enabled: Whether to roll back, iterate, or persist with v3 redesign. Which specific step to prioritize for iteration.
Seven days. One page. One decision. That is what a 2026 research plan looks like.
Common research-plan failure modes
- No decision attached. "We want to understand users." Research without a decision is theatre.
- Goals confused with questions. "Understand churn" is a goal. The questions are what specifically you'll ask each participant.
- Sample size pulled from the air. Specify the rationale, even if it's "5 per persona per Nielsen 1993."
- Recruitment treated as a sub-task. Recruitment failure is the #1 cause of research delays. Plan it as carefully as the interviews themselves — see our recruitment guide.
- No timeline. "When it's done" is not a timeline. Stakeholders will fill the vacuum with their own expectations.
- Plan never updated. The plan is a living document. If methods change, update it. The plan is the contract with stakeholders, not a ceremony.
Why Koji compresses the entire timeline
The traditional 42-day research timeline breaks down across four big chunks: recruitment (14 days), fieldwork (14 days), analysis (10 days), delivery (4 days). AI-moderated research collapses three of the four:
- Fieldwork — 30 interviews in parallel finish in 3 days instead of 14, because the AI moderator runs as many sessions as you have participants, 24/7.
- Analysis — automatic thematic analysis, segment cuts, and verbatim quote extraction run in minutes, not the 51% of project time researchers say they spend on synthesis.
- Delivery — one-click reports are ready the moment fieldwork ends, ready to share with the whole stakeholder list.
Recruitment is the only chunk Koji doesn't eliminate, and even there, removing scheduling overhead (no booking, no time-zone juggling, no no-shows) recovers 4-5 days: participants can take their interview the moment they qualify, so recruitment overlaps with fieldwork instead of preceding it.
The math: a study that genuinely required a 6-week plan in 2023 is now a 7-day plan in 2026, and the deliverable is more defensible because every theme is grounded in a verbatim quote a stakeholder can listen to.
Get started
Take the eight-section template above. Fill it in for the one decision you're facing this month. If it fits on a single page and names a specific decision, you have a plan worth funding. Then launch the interviews in Koji and have your answer before the end of next week.
That is what a 2026 user research plan looks like in practice.