{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-15T15:02:00.615Z"},"content":[{"type":"blog","id":"4403601b-cb48-406e-bc5b-576ec313c8ab","slug":"user-research-plan-template-2026","title":"User Research Plan: The Step-by-Step Template (with Examples) for 2026","url":"https://www.koji.so/blog/user-research-plan-template-2026","summary":"A user research plan turns a fuzzy product question into a structured, time-boxed study. 8-section template: project overview, goals & questions, method, participants, discussion guide, timeline, analysis approach, decisions enabled & risks. Key 2026 data: average research project takes 42 days (Dscout 2025), 36.3% of delays trace to recruitment failures, 19.6% to scope creep, 29% of researchers get <$10K/year, 50% operate on <$1K/month. AI-moderated voice interviews (Koji) compress timeline to 5-7 days by parallelizing fieldwork, automating thematic analysis, and producing one-click reports. Plan must name the decision it will inform; research without a decision is theatre.","content":"## What a user research plan is — in one sentence\n\nA user research plan is the single document that turns a fuzzy product question (\"are users struggling with onboarding?\") into a structured, time-boxed study with clear goals, a defined audience, an explicit method, and a date by which the team will have an answer they can act on.\n\nIf that sounds bureaucratic, consider the alternative. 
The 2025 Dscout research timeline study found the average research project takes **42 days** end-to-end, with **22.4%** of researchers reporting they didn't have enough time, and the single biggest source of slippage — accounting for **36.3%** of delays — was poorly scoped recruitment and operations ([Dscout 2025 research timelines report](https://dscout.com/people-nerds/research-timelines)). Most of those failures trace back to one thing: skipping the plan.\n\nThis is the modern template — eight sections, the questions each one needs to answer, and how AI-moderated voice interviews compress the timeline from 42 days to 5-7.\n\n## Why a research plan still matters in 2026\n\nThe case for a written research plan is not bureaucracy. It is alignment:\n\n- It forces the team to commit to **the decision the research will inform** before the research happens. (If the team won't commit, the research is theatre.)\n- It is the document a CFO, a CEO, or a skeptical engineer can read in 4 minutes and either approve, push back on, or fund.\n- It is the artifact that survives the inevitable scope creep — **19.6% of project delays** trace back to scope drift, and a written plan is the only thing that stops it.\n\nThe 2025 UX Research Budget report found that **29% of researchers** get less than $10,000 a year to do the entire job, and **50%** operate on $1,000/month or less. In that environment, every research dollar must be defensible. A research plan is the defence.\n\n## The 8-section user research plan template\n\nEvery research plan — for a discovery study, a concept test, a usability evaluation, or a churn investigation — fits the same eight-section structure. The depth changes. The structure does not.\n\n### 1. Project overview & background\n\nThe \"why now\" of the project. 
Two short paragraphs maximum:\n\n- The product or business question that triggered the research\n- The decision that will be made once results are in\n- The team and stakeholders involved (PM, design lead, exec sponsor)\n\nIf you cannot fit this in two paragraphs, the project is too vague — go back and narrow it.\n\n**Example:** *\"Onboarding completion has dropped from 64% to 51% since the v3 redesign. The product team is debating whether to roll back the redesign, iterate on it, or push through. This study will give the PM a recommendation within two weeks. Stakeholders: Lena (PM), Marc (Design Lead), Ana (Onboarding Eng), Priya (CPO, sponsor).\"*\n\n### 2. Research goals & questions\n\nThe most important section. Most plans live or die here.\n\n- **Research goal** = what you want to learn. One or two sentences.\n- **Research questions** = the 4-7 specific, answerable questions that, taken together, satisfy the goal.\n\nBad: *\"Understand how users feel about onboarding.\"*\nGood: *\"Identify the specific steps in onboarding where users drop off, why they drop off, and what they expected to happen instead.\"*\n\nEach research question should be specific enough that you'll *know* when you've answered it. \"Feel\" is not measurable. \"Drop off at which step\" is.\n\n### 3. Method & approach\n\nThe \"how\" — and the single section that has changed most in 2026.\n\nState:\n- The method (in-depth interview, usability test, survey, concept test, diary study)\n- The modality (voice, text, in-person)\n- Whether it's moderated, unmoderated, or AI-moderated\n- The rationale (one sentence on why this method, not another)\n\nA modern plan for the onboarding example above might read: *\"30 AI-moderated voice interviews via Koji — 15 with users who completed onboarding in the last 30 days and 15 with users who started but did not complete. 
Voice modality chosen because completion friction is often emotional and verbal — typed surveys would lose 70% of the signal.\"*\n\nFor help choosing the right method, see our [methodology selection guide](/docs/choosing-a-methodology) and the breakdown of [interview modes in Koji](/docs/interview-mode-guide).\n\n### 4. Participant criteria & recruitment\n\nRecruitment failures cause more than a third of all research delays — so this section has to be precise. Specify:\n\n- **Inclusion criteria** (must-have characteristics)\n- **Exclusion criteria** (disqualifiers — e.g., \"no current employees of the company\")\n- **Sample size** (how many per segment, with rationale)\n- **Recruitment source** (existing user list, intercept, paid panel, mixed)\n- **Incentive** (amount, mechanism, IRS-reporting considerations)\n\nFor sample-size rationale by method, see [how many user interviews you need](/docs/how-many-user-interviews) and [the customer interview cadence guide](/docs/customer-interview-cadence).\n\n### 5. Discussion guide / question list\n\nThe exact questions, in order, with probes. Whether you're writing a moderator guide for a live interview, a task list for an unmoderated test, or a prompt set for an AI-moderated session, this is where it lives.\n\nA strong discussion guide blends Koji's six structured question types:\n\n- **Open-ended** — *\"Walk me through your first day with the product.\"* (the qualitative core)\n- **Scale** — *\"On a scale of 1-7, how confident did you feel by the end of onboarding?\"*\n- **Single choice** — *\"Which of these steps felt longest?\"*\n- **Multiple choice** — *\"Which of these tools were you switching between during setup?\"*\n- **Ranking** — *\"Rank these three improvements by which would matter most to you.\"*\n- **Yes/no** — *\"Did you finish setting up before your first work session?\"*\n\nThe structured types pin down the quantitative spine of the study; the open-ended questions give it qualitative depth. 
Reports automatically visualize each one with the right chart type (distribution for scales, bar chart for choice, pie for yes/no), so a 30-interview study produces both stories *and* numbers in one pass.\n\nSee [our discussion guide template](/docs/discussion-guide-template-user-interviews) and [60+ customer interview question examples](/docs/customer-interview-questions-examples) for ready-to-use phrasing.\n\n### 6. Timeline & milestones\n\nA 5-bullet timeline. Pad nothing, hide nothing:\n\n- Kickoff & plan approval: day 0\n- Recruitment complete: day X\n- Interviews complete: day Y\n- Analysis complete: day Z\n- Findings readout & decision: day Z+2\n\nThe traditional 42-day average breaks down to roughly 14 days recruitment, 14 days fieldwork, 10 days analysis, 4 days delivery. AI-moderated platforms collapse this to 5-7 days total because:\n\n- Interviews run in parallel, 24/7, across time zones\n- Transcription is real-time and free\n- Analysis runs automatically — themes, segments, verbatim quotes — within minutes of the last interview ending\n\nIf your plan still has a 6-week timeline in 2026, the plan — not the research — is what's outdated.\n\n### 7. Analysis approach\n\nHow you'll turn raw conversations into a decision. State explicitly:\n\n- The coding approach (open coding, axial coding, framework analysis, AI-assisted thematic analysis)\n- Who codes (you, a partner, the platform)\n- The deliverable format (report doc, slide readout, dashboard)\n\nModern AI-native platforms collapse this section dramatically. Koji's automatic thematic analysis clusters open-ended responses, generates a verbatim-grounded codebook, and surfaces segment-specific themes — so the analysis question becomes *\"which themes do we prioritize?\"* rather than *\"how do we extract themes from 30 transcripts?\"* See [how to analyze user interview data](/docs/how-to-analyze-qualitative-data) for the modern workflow.\n\n### 8. 
Decisions enabled & risks\n\nThe section nobody writes — and the section that determines whether the research will get used. List:\n\n- **The decisions this study will inform.** (If you can't list at least one, stop and rescope.)\n- **The decisions this study will NOT inform.** (Manages stakeholder expectations.)\n- **Known risks.** (\"Holiday timing may slow recruitment,\" \"we only have access to current users, not lapsed.\")\n\nThis section is also the place to name your **anti-research-bias plan**: how you'll avoid leading participants, how you'll handle conflicting findings, and what disqualifies a finding from making it into the final report.\n\n## A worked example: the onboarding drop-off study\n\nPutting all eight sections together, a complete plan for the onboarding example above fits on a single page:\n\n> **Project:** Onboarding completion has dropped from 64% to 51% post-redesign. The PM needs a recommendation in 2 weeks.\n>\n> **Goal:** Identify which onboarding steps drive drop-off and what users expected instead.\n>\n> **Questions:** Where do users drop off? Why? What did they expect to happen? What workaround did they invent (if any)? Is the drop-off concentrated in a specific persona?\n>\n> **Method:** 30 AI-moderated voice interviews via Koji — 15 completers, 15 drop-offs. Hybrid mode: voice with structured scales and choice questions at key checkpoints.\n>\n> **Participants:** Users who started onboarding in the last 30 days. Exclude employees and internal beta testers. Sample weighted 50/50 between completers and drop-offs to enable comparison.\n>\n> **Discussion guide:** [link]. Includes scale rating per step, ranked friction points, and an open-ended \"walk-through\" prompt.\n>\n> **Timeline:** Plan approved day 0. Interviews launched day 1. Fieldwork complete day 5. Analysis complete day 6. Readout day 7. 
*(Total: 7 days.)*\n>\n> **Analysis:** Automatic theming + segment cut by completer/drop-off + verbatim quote bank per friction point.\n>\n> **Decisions enabled:** Whether to roll back, iterate, or persist with v3 redesign. Which specific step to prioritize for iteration.\n\nSeven days. One page. One decision. That is what a 2026 research plan looks like.\n\n## Common research-plan failure modes\n\n- **No decision attached.** \"We want to understand users.\" Research without a decision is theatre.\n- **Goals confused with questions.** \"Understand churn\" is a goal. The questions are what specifically you'll ask each participant.\n- **Sample size pulled from the air.** Specify the rationale, even if it's \"5 per persona per Nielsen 1993.\"\n- **Recruitment treated as a sub-task.** Recruitment failure is the #1 cause of research delays. Plan it as carefully as the interviews themselves — see our [recruitment guide](/docs/customer-discovery-interviews-at-scale).\n- **No timeline.** \"When it's done\" is not a timeline. Stakeholders will fill the vacuum with their own expectations.\n- **Plan never updated.** The plan is a living document. If methods change, update it. The plan is the contract with stakeholders, not a ceremony.\n\n## Why Koji compresses the entire timeline\n\nThe traditional 42-day research timeline breaks down across four big chunks: recruitment (14 days), fieldwork (14 days), analysis (10 days), delivery (4 days). 
AI-moderated research collapses three of those four chunks:\n\n- **Fieldwork** — 30 interviews in parallel finish in 3 days instead of 14, because the AI moderator runs as many sessions as you have participants, 24/7.\n- **Analysis** — automatic thematic analysis, segment cuts, and verbatim quote extraction run in minutes, not the 51% of project time researchers say they spend on synthesis.\n- **Delivery** — one-click reports are ready the moment fieldwork ends, shareable with the whole stakeholder list.\n\nRecruitment is the only chunk Koji doesn't eliminate — and even there, removing scheduling overhead (no booking, no time-zone juggling, no no-shows) recovers 4-5 days.\n\nThe math: a study that genuinely required a 6-week plan in 2023 is now a 7-day plan in 2026, and the deliverable is *more* defensible because every theme is grounded in a verbatim quote a stakeholder can listen to.\n\n## Get started\n\nTake the eight-section template above. Fill it in for the one decision you're facing this month. If it fits on a single page and names a specific decision, you have a plan worth funding. 
Then launch the interviews in Koji and have your answer before the end of next week.\n\nThat is what a 2026 user research plan looks like in practice.","category":"Tutorial","lastModified":"2026-05-15T03:20:39.034521+00:00","metaTitle":"User Research Plan: Step-by-Step Template (with Examples) | 2026 | Koji","metaDescription":"A modern user research plan template for 2026 — the 8 sections every plan needs, sample-size rationale, recruitment, timeline, and how AI-moderated voice interviews compress the average 42-day research project to one week.","keywords":["user research plan","user research plan template","research plan template","ux research plan","user research plan example","how to write a research plan","research plan 2026","ux research planning"],"aiSummary":"A user research plan turns a fuzzy product question into a structured, time-boxed study. 8-section template: project overview, goals & questions, method, participants, discussion guide, timeline, analysis approach, decisions enabled & risks. Key 2026 data: average research project takes 42 days (Dscout 2025), 36.3% of delays trace to recruitment failures, 19.6% to scope creep, 29% of researchers get <$10K/year, 50% operate on <$1K/month. AI-moderated voice interviews (Koji) compress timeline to 5-7 days by parallelizing fieldwork, automating thematic analysis, and producing one-click reports. Plan must name the decision it will inform; research without a decision is theatre.","aiKeywords":["user research plan","research plan template","ux research plan","research planning","research methodology","koji research plan","ai research"],"aiContentType":"template","faqItems":[{"answer":"A user research plan is the single document that turns a product question (\"are users struggling with onboarding?\") into a structured, time-boxed study with clear goals, a defined audience, an explicit method, and a date by which the team will have an actionable answer. 
It is the artifact stakeholders can read in 4 minutes and either approve, push back on, or fund.","question":"What is a user research plan?"},{"answer":"Eight sections: (1) Project overview & background, (2) Research goals & questions, (3) Method & approach, (4) Participant criteria & recruitment, (5) Discussion guide or question list, (6) Timeline & milestones, (7) Analysis approach, (8) Decisions enabled & risks. The structure is consistent across discovery, usability, concept testing, and churn studies — only the depth changes.","question":"What sections should a user research plan include?"},{"answer":"One page for most product-team studies. Long plans get skipped; one-pagers get read. The discipline of fitting it on one page forces clarity on what the research is actually trying to learn and what decision it will inform. Save longer plans for large multi-method, multi-quarter research programs.","question":"How long should a user research plan be?"},{"answer":"The 2025 Dscout research timeline study found an average of 42 days end-to-end, split roughly into 14 days recruitment, 14 days fieldwork, 10 days analysis, and 4 days delivery. AI-moderated platforms like Koji compress this to 5-7 days total by running interviews in parallel 24/7, automating thematic analysis, and producing one-click reports — fieldwork drops from 14 days to about 3.","question":"How long does a user research project take?"},{"answer":"For qualitative interviews: 5-8 per persona per flow surfaces about 80% of themes (Nielsen's long-standing finding). For unmoderated usability tests aiming at quantitative completion rates: 15-30 per variant. For AI-moderated continuous research: 30-100+ is now financially feasible because marginal cost approaches zero. 
Always state the rationale in the plan — \"5 per persona per Nielsen 1993\" is a defensible answer.","question":"How many participants should I include in a user research plan?"},{"answer":"A research goal is what you want to learn at a high level (\"understand why onboarding completion dropped\"). Research questions are the 4-7 specific, answerable sub-questions that satisfy the goal (\"At which step do users drop off? Why? What did they expect to happen instead?\"). Goals frame the project; questions structure the interview. Most weak plans confuse the two and end up with neither.","question":"What is the difference between a research goal and a research question?"}],"relatedTopics":["User Research Plan","Research Planning","UX Research","Research Methodology","Research Templates","Research Operations","Discovery Research"]}],"pagination":{"total":1,"returned":1,"offset":0}}