{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-09T07:15:09.430Z"},"content":[{"type":"documentation","id":"08f8067b-b810-49be-b584-bc66087a2bd9","slug":"discussion-guide-template-user-interviews","title":"Discussion Guide for User Interviews: Template, Structure, and 30+ Example Questions (2026)","url":"https://www.koji.so/docs/discussion-guide-template-user-interviews","summary":"A discussion guide is the structured outline of topics, questions, and probes that keeps a semi-structured user interview on track without becoming a script. The proven four-section structure — introduction, warm-up, core questions, wrap-up — allocates roughly 5/10/40/5 minutes of a 60-minute interview. Each core question should be open-ended, anchored in past behavior, and shipped with 2–3 standard probes. AI-moderated platforms like Koji turn the discussion guide into a live study, running it consistently across every participant with quality scoring and real-time theme extraction.","content":"## What is a discussion guide?\n\nA discussion guide (also called an interview guide or moderator guide) is the structured outline a researcher uses to run a semi-structured user interview. It lists the topics to cover, the core questions to ask under each topic, and the probes to use when an answer needs to go deeper.\n\nA discussion guide is *not* a script. A script forces every word; a discussion guide gives the moderator a flexible roadmap that ensures every interview covers the same ground without strangling the conversation.\n\n> \"In a semistructured interview, the interviewer uses an interview guide (also referred to as a discussion guide). 
Unlike an interview script — which is used in structured interviews — an interview guide can be used flexibly: interviewers can ask questions in any order they see fit, omit questions, or ask questions that are not in the guide.\" — Nielsen Norman Group, *Writing an Effective Guide for a UX Interview*\n\nIf user interviews are the most common method in product research, the discussion guide is the most common artifact. Learning to write one well is the single biggest skill upgrade most product teams can make.\n\n## Why discussion guides matter (the BLUF)\n\nWithout a discussion guide, even experienced interviewers drift: questions vary across sessions, key topics get skipped under time pressure, and synthesis becomes a nightmare because the data is not comparable. A good discussion guide solves all three problems while leaving room for the moderator to follow interesting threads.\n\nA well-built guide does five things at once:\n\n1. Translates research objectives into questions a participant can actually answer.\n2. Sequences questions so the conversation builds naturally.\n3. Allocates time across topics so the most important questions never get cut.\n4. Gives the moderator standard probes for going deeper without leading.\n5. Makes synthesis fast because every interview covers the same comparable ground.\n\nA discussion guide that fails any of these will quietly destroy a research study.\n\n## The four-section structure\n\nAlmost every effective discussion guide follows the same four-section structure. The names vary; the structure does not.\n\n### 1. Introduction (5 minutes)\n- Welcome and rapport-building\n- Brief explanation of the study purpose (vague enough not to bias)\n- Confidentiality and recording consent\n- Permission to ask \"why\" repeatedly\n- One sentence setting expectations on length\n\n### 2. 
Warm-up / context (5–10 minutes)\n- 3–5 easy, factual questions about the participant's background\n- Establish their role, context, and current behavior\n- Ground the rest of the conversation in their lived reality\n\n### 3. Core questions (25–40 minutes)\n- 5–8 main questions tied directly to research objectives\n- Each main question paired with 2–3 standard probes\n- Sequenced from broad to specific\n- Sensitive or evaluative questions placed after rapport is established\n\n### 4. Wrap-up (5 minutes)\n- \"Anything we did not cover that you want to mention?\"\n- Summarize the participant's key points to invite correction\n- Thank, log incentive, and close\n\nFor a 60-minute interview, that allocates roughly 5 / 10 / 40 / 5 minutes. For a 30-minute interview, halve every section but keep the four-part structure.\n\n## How to write each section\n\n### Writing the introduction\n- Keep it short. Participants are anxious in the first two minutes; long preambles make it worse.\n- Be vague about hypotheses. \"We are exploring how teams handle X\" is fine; \"we want to validate our new pricing\" is not.\n- Always explicitly say the participant's honest opinion — including criticism — is what is most useful. This single sentence increases candor measurably.\n\n### Writing warm-up questions\nThree to five easy, factual, low-stakes questions. Their job is twofold: build rapport, and ground the rest of the interview in concrete context.\n\nExamples:\n- \"Tell me about your role and what a typical week looks like.\"\n- \"How long have you been doing this work?\"\n- \"Walk me through the last time you did [behavior under study].\"\n- \"What tools do you currently use for [task]?\"\n\nThe \"walk me through the last time\" question is the most powerful warm-up in product research. It anchors everything that follows in a real, recent, concrete moment instead of in generalities.\n\n### Writing core questions\nThis is where most discussion guides go wrong. 
Three rules cover most failures:\n\n**Rule 1: Open-ended, not yes/no.** Yes/no questions yield no insight. Reframe \"Do you find onboarding difficult?\" as \"Walk me through the last time you onboarded a new person.\" Behavior, not opinion.\n\n**Rule 2: Past behavior, not future intent.** What people say they will do is a poor predictor of what they actually do. \"How did you handle the last time X happened?\" beats \"Would you use a feature that did Y?\"\n\n**Rule 3: Specific, not abstract.** \"What do you think about workflow tools?\" is unanswerable. \"What did you do the last time a workflow broke at 5pm?\" is answerable.\n\nSequence core questions from broad and exploratory to narrow and evaluative. Anchor early questions in past behavior; save concept reactions and direct evaluations for the second half, after the participant's real context is on the table.\n\n### Writing standard probes\nEach core question should ship with 2–3 standard probes you can use to go deeper without leading. Default probes that work in almost any study:\n\n- \"Tell me more about that.\"\n- \"Can you give me a specific example?\"\n- \"Walk me through what happened next.\"\n- \"What did you do then?\"\n- \"Why was that important?\"\n- \"What would have been different if [variable]?\"\n- \"What was going through your mind in that moment?\"\n\nProbes are deliberately content-neutral. 
They do not suggest an answer; they invite depth.\n\n### Writing the wrap-up\nTwo questions do most of the work:\n\n- \"Is there anything we did not cover that you think is important for me to know?\" — surfaces topics you missed.\n- \"If you could change one thing about [topic], what would it be?\" — forces a final prioritization that often reveals the most useful insight of the session.\n\n## 30+ example discussion guide questions\n\n### Discovery / generative interviews\n\n- \"Walk me through the last time you [behavior under study].\"\n- \"What were you trying to accomplish that day?\"\n- \"What did you try first? What happened?\"\n- \"Where did you get stuck, if at all?\"\n- \"What did you do as a workaround?\"\n- \"Who else was involved, and what was their role?\"\n- \"When does this happen most often?\"\n- \"What would have made that easier?\"\n\n### Pain point exploration\n\n- \"What is the most frustrating part of [process]?\"\n- \"Tell me about a time it went really badly.\"\n- \"What did that cost you — time, money, relationships?\"\n- \"Why have you not solved this already?\"\n- \"What have you tried that did not work?\"\n\n### Jobs-to-be-Done switch interviews\n\n- \"Tell me about the day you decided you needed something new.\"\n- \"What was happening that pushed you to look for a solution?\"\n- \"What were you using before? Why did it stop being enough?\"\n- \"What were you hoping the new tool would let you do?\"\n- \"What almost stopped you from switching?\"\n\n### Concept testing\n\n- \"Looking at this, what is your immediate reaction?\"\n- \"What do you think this is for?\"\n- \"Who do you think this is built for?\"\n- \"What is unclear or confusing?\"\n- \"If this existed today, would it replace, supplement, or be irrelevant to what you currently do? 
Why?\"\n\n### Pricing and willingness to pay\n\n- \"If you saw this product at [price A], what would you think?\"\n- \"At what price would this be too expensive to consider?\"\n- \"At what price would you start to question its quality?\"\n- \"What would justify a higher price?\"\n\n### Wrap-up / reflection\n\n- \"Looking back over our conversation, what stands out as most important?\"\n- \"What is the one change to [topic] that would matter most to you?\"\n- \"Is there anyone else I should be talking to about this?\"\n- \"Is there anything you wish I had asked?\"\n\n## Time allocation: the 60-minute discussion guide template\n\nFor a 60-minute interview:\n- **0–5 min** Introduction, consent, expectations\n- **5–15 min** Warm-up: role, context, last-time-they-did-X\n- **15–25 min** Core block 1: behavioral history (past behavior, pain points)\n- **25–40 min** Core block 2: deeper probing on key topics\n- **40–55 min** Core block 3: concept reactions, evaluation, prioritization\n- **55–60 min** Wrap-up, anything-we-missed, thanks\n\nFor a 30-minute interview, drop one core block and tighten the warm-up.\n\n## Common discussion guide mistakes\n\n- **Too many questions.** A 60-minute interview cannot meaningfully cover more than 8 main questions plus probes. Most over-stuffed guides cause the moderator to skip the most important questions at the end.\n- **Leading questions.** \"Don't you find X frustrating?\" pre-loads the answer. Reframe as \"How do you feel about X?\" or, better, \"Tell me about the last time you used X.\"\n- **Abstract questions.** \"How do you think about productivity?\" gets you platitudes. Anchor everything in concrete past events.\n- **Questions stacked together.** \"Tell me about your workflow, the tools you use, and why you chose them\" gives the participant three questions in one — they will answer one and skip the others.\n- **Skipping probes.** Without standard probes, depth depends entirely on the moderator's mood and energy. 
Pre-writing probes makes consistency a feature of the guide, not a personality trait.\n- **No wrap-up question.** \"Is there anything we did not cover?\" finds insights every single time. Skipping it leaves data on the table.\n- **Treating the guide as a script.** A skilled moderator follows the participant's most interesting thread first, then circles back to cover remaining ground. A guide that is read line-by-line produces stilted, surface-level data.\n\n## How Koji turns a discussion guide into a live study\n\nTraditional discussion guides die at the operational layer: scheduling 12 interviews, coordinating moderators, fighting with calendars, transcribing recordings, and synthesizing the result over weeks. A great guide written by a senior researcher often gets executed inconsistently by junior moderators because the operational load is too heavy.\n\nKoji turns the discussion guide into the study itself:\n\n- **AI-moderated voice or text interviews.** Your discussion guide is loaded into Koji's AI moderator, which runs interviews 24/7 in voice or text. Every participant gets the same opening, the same core questions, and the same standard probes — but the moderator adapts follow-ups dynamically based on the participant's actual answers.\n- **Structured questions inline.** Koji's six structured question types — open_ended, scale, single_choice, multiple_choice, ranking, yes_no — can be mixed into the conversational flow. 
A scale rating sits naturally between two open-ended probes, capturing both signal and story in one session.\n- **Quality scoring on every question.** Each interview is automatically scored 1–5, so the team can see which questions in the guide are pulling weight and which are producing thin data — and revise the guide between waves.\n- **Methodology frameworks built in.** Koji supports Mom Test, JTBD, discovery, exploratory, and lead-magnet frameworks out of the box, so a JTBD switch interview guide is not a blank document — it starts with the right structure and probes baked in.\n- **Real-time theme extraction.** The discussion guide's questions are automatically themed across all completed sessions, so by the time the 10th participant finishes, you already have the cross-session pattern instead of weeks of manual coding ahead.\n\nThe work shifts: from operating interviews to *designing the right discussion guide*. Which is, after all, the part where research judgment actually compounds.\n\n## A discussion guide checklist\n\n- [ ] Research objectives written down before drafting questions\n- [ ] 5 / 10 / 40 / 5 minute structure or equivalent ratio\n- [ ] No more than 8 core questions for a 60-minute interview\n- [ ] Every question is open-ended, behavior-anchored, and concrete\n- [ ] At least 2–3 standard probes per core question\n- [ ] Sensitive or evaluative questions placed after rapport\n- [ ] At least one \"anything we missed\" wrap-up question\n- [ ] Guide piloted with 1–2 participants before main study (see [Pilot Study Guide](/docs/pilot-study-user-research-guide))\n- [ ] Guide stored as a versioned artifact, not a one-off doc\n\n## Related Resources\n\n- [Structured Questions Guide](/docs/structured-questions-guide) — the six question types you can mix into your discussion guide for inline scale, ranking, and choice data\n- [How to Conduct User Interviews](/docs/how-to-conduct-user-interviews) — the full interview playbook your discussion guide 
supports\n- [How to Moderate User Interviews](/docs/how-to-moderate-user-interviews) — moderator skills, rapport, and probing techniques\n- [Probing and Follow-up Questions](/docs/probing-and-follow-up-questions) — deep-dive on the standard probes that should ship with every guide\n- [User Interview Questions](/docs/user-interview-questions) — broader question library beyond the discussion guide\n- [Pilot Study in User Research](/docs/pilot-study-user-research-guide) — pilot your discussion guide before the main study to catch broken questions\n- [Avoiding Bias in Interviews](/docs/avoiding-bias-in-interviews) — the bias patterns most often baked into discussion guides","category":"Interview Techniques","lastModified":"2026-05-08T03:21:11.665249+00:00","metaTitle":"Discussion Guide Template for User Interviews: Structure & 30+ Questions (2026) | Koji","metaDescription":"A discussion guide outlines the topics, questions, and probes for a semi-structured user interview. Get the four-section template, 30+ copy-ready example questions, time allocation, and how AI moderation runs your guide as a live study.","keywords":["discussion guide","interview guide","moderator guide","user interview script","discussion guide template","semi-structured interview","user interview questions","UX research","Koji"],"aiSummary":"A discussion guide is the structured outline of topics, questions, and probes that keeps a semi-structured user interview on track without becoming a script. The proven four-section structure — introduction, warm-up, core questions, wrap-up — allocates roughly 5/10/40/5 minutes of a 60-minute interview. Each core question should be open-ended, anchored in past behavior, and shipped with 2–3 standard probes. 
AI-moderated platforms like Koji turn the discussion guide into a live study, running it consistently across every participant with quality scoring and real-time theme extraction.","aiPrerequisites":["user-interview-questions","how-to-conduct-user-interviews"],"aiLearningOutcomes":["Write a discussion guide using the four-section structure (intro, warm-up, core, wrap-up)","Allocate time across sections for 30- and 60-minute interviews","Convert research objectives into open-ended, behavior-anchored core questions","Pair every core question with 2–3 standard, non-leading probes","Mix Koji's structured question types into a conversational discussion guide for richer data"],"aiDifficulty":"beginner","aiEstimatedTime":"15 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}