
Discussion Guide for User Interviews: Template, Structure, and 30+ Example Questions (2026)

A discussion guide is the structured outline of topics, questions, and probes that keeps a user interview on track without making it feel like a script. Learn the four-section structure, time allocation, 30+ example questions you can copy, and how AI moderation turns a discussion guide into a live conversational study.

What is a discussion guide?

A discussion guide (also called an interview guide or moderator guide) is the structured outline a researcher uses to run a semi-structured user interview. It lists the topics to cover, the core questions to ask under each topic, and the probes to use when an answer needs to go deeper.

A discussion guide is not a script. A script prescribes every word; a discussion guide gives the moderator a flexible roadmap that ensures every interview covers the same ground without strangling the conversation.

"In a semistructured interview, the interviewer uses an interview guide (also referred to as a discussion guide). Unlike an interview script — which is used in structured interviews — an interview guide can be used flexibly: interviewers can ask questions in any order they see fit, omit questions, or ask questions that are not in the guide." — Nielsen Norman Group, Writing an Effective Guide for a UX Interview

If user interviews are the most common method in product research, the discussion guide is the most common artifact. Learning to write one well is the single biggest skill upgrade most product teams can make.

Why discussion guides matter (the BLUF)

Without a discussion guide, even experienced interviewers drift: questions vary across sessions, key topics get skipped under time pressure, and synthesis becomes a nightmare because the data is not comparable. A good discussion guide solves all three problems while leaving room for the moderator to follow interesting threads.

A well-built guide does five things at once:

  1. Translates research objectives into questions a participant can actually answer.
  2. Sequences questions so the conversation builds naturally.
  3. Allocates time across topics so the most important questions never get cut.
  4. Gives the moderator standard probes for going deeper without leading.
  5. Makes synthesis fast because every interview covers the same comparable ground.

A discussion guide that fails any of these will quietly destroy a research study.

The four-section structure

Almost every effective discussion guide follows the same four-section structure. The names vary; the structure does not.

1. Introduction (5 minutes)

  • Welcome and rapport-building
  • Brief explanation of the study purpose (vague enough not to bias)
  • Confidentiality and recording consent
  • Permission to ask "why" repeatedly
  • One sentence setting expectations on length

2. Warm-up / context (5–10 minutes)

  • 3–5 easy, factual questions about the participant's background
  • Establish their role, context, and current behavior
  • Ground the rest of the conversation in their lived reality

3. Core questions (25–40 minutes)

  • 5–8 main questions tied directly to research objectives
  • Each main question paired with 2–3 standard probes
  • Sequenced from broad to specific
  • Sensitive or evaluative questions placed after rapport is established

4. Wrap-up (5 minutes)

  • "Anything we did not cover that you want to mention?"
  • Summarize the participant's key points to invite correction
  • Thank, log incentive, and close

For a 60-minute interview, that allocates roughly 5 / 10 / 40 / 5 minutes. For a 30-minute interview, halve every section but keep the four-part structure.
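The scaling above is simple enough to sketch as a small helper. This is purely illustrative — the function and section names are this sketch's own, not part of any tool mentioned in this article:

```python
# Illustrative sketch: scale the 5 / 10 / 40 / 5-minute structure
# to any interview length while preserving the four-section ratio.

SECTIONS = [
    ("Introduction", 5),
    ("Warm-up / context", 10),
    ("Core questions", 40),
    ("Wrap-up", 5),
]

def allocate(total_minutes: int) -> list[tuple[str, float]]:
    """Scale the 60-minute template to a different interview length."""
    base = sum(minutes for _, minutes in SECTIONS)  # 60
    return [
        (name, round(minutes * total_minutes / base, 1))
        for name, minutes in SECTIONS
    ]

for name, minutes in allocate(30):
    print(f"{name}: {minutes} min")
```

For a 30-minute session this yields 2.5 / 5 / 20 / 2.5 — the same shape, half the length.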

How to write each section

Writing the introduction

  • Keep it short. Participants are anxious in the first two minutes; long preambles make it worse.
  • Be vague about hypotheses. "We are exploring how teams handle X" is fine; "we want to validate our new pricing" is not.
  • Always tell participants explicitly that their honest opinion — criticism included — is what is most useful. This single sentence reliably increases candor.

Writing warm-up questions

Three to five easy, factual, low-stakes questions. Their job is twofold: build rapport, and ground the rest of the interview in concrete context.

Examples:

  • "Tell me about your role and what a typical week looks like."
  • "How long have you been doing this work?"
  • "Walk me through the last time you did [behavior under study]."
  • "What tools do you currently use for [task]?"

The "walk me through the last time" question is the most powerful warm-up in product research. It anchors everything that follows in a real, recent, concrete moment instead of in generalities.

Writing core questions

This is where most discussion guides go wrong. Three rules cover most failures:

Rule 1: Open-ended, not yes/no. Yes/no questions yield no insight. Reframe "Do you find onboarding difficult?" as "Walk me through the last time you onboarded a new person." Behavior, not opinion.

Rule 2: Past behavior, not future intent. What people say they will do is a poor predictor of what they actually do. "How did you handle the last time X happened?" beats "Would you use a feature that did Y?"

Rule 3: Specific, not abstract. "What do you think about workflow tools?" is unanswerable. "What did you do the last time a workflow broke at 5pm?" is answerable.

Sequence core questions from broad and exploratory to narrow and evaluative. Anchor early questions in past behavior; save concept reactions and direct evaluations for the second half, after the participant's real context is on the table.

Writing standard probes

Each core question should ship with 2–3 standard probes you can use to go deeper without leading. Default probes that work in almost any study:

  • "Tell me more about that."
  • "Can you give me a specific example?"
  • "Walk me through what happened next."
  • "What did you do then?"
  • "Why was that important?"
  • "What would have been different if [variable]?"
  • "What was going through your mind in that moment?"

Probes are deliberately content-neutral. They do not suggest an answer; they invite depth.

Writing the wrap-up

Two questions do most of the work:

  • "Is there anything we did not cover that you think is important for me to know?" — surfaces topics you missed.
  • "If you could change one thing about [topic], what would it be?" — forces a final prioritization that often reveals the most useful insight of the session.

30+ example discussion guide questions

Discovery / generative interviews

  • "Walk me through the last time you [behavior under study]."
  • "What were you trying to accomplish that day?"
  • "What did you try first? What happened?"
  • "Where did you get stuck, if at all?"
  • "What did you do as a workaround?"
  • "Who else was involved, and what was their role?"
  • "When does this happen most often?"
  • "What would have made that easier?"

Pain point exploration

  • "What is the most frustrating part of [process]?"
  • "Tell me about a time it went really badly."
  • "What did that cost you — time, money, relationships?"
  • "Why have you not solved this already?"
  • "What have you tried that did not work?"

Jobs-to-be-Done switch interviews

  • "Tell me about the day you decided you needed something new."
  • "What was happening that pushed you to look for a solution?"
  • "What were you using before? Why did it stop being enough?"
  • "What were you hoping the new tool would let you do?"
  • "What almost stopped you from switching?"

Concept testing

  • "Looking at this, what is your immediate reaction?"
  • "What do you think this is for?"
  • "Who do you think this is built for?"
  • "What is unclear or confusing?"
  • "If this existed today, would it replace, supplement, or be irrelevant to what you currently do? Why?"

Pricing and willingness to pay

  • "If you saw this product at [price A], what would you think?"
  • "At what price would this be too expensive to consider?"
  • "At what price would you start to question its quality?"
  • "What would justify a higher price?"

Wrap-up / reflection

  • "Looking back over our conversation, what stands out as most important?"
  • "What is the one change to [topic] that would matter most to you?"
  • "Is there anyone else I should be talking to about this?"
  • "Is there anything you wish I had asked?"

Time allocation: the 60-minute discussion guide template

For a 60-minute interview:

  • 0–5 min Introduction, consent, expectations
  • 5–15 min Warm-up: role, context, last-time-they-did-X
  • 15–25 min Core block 1: behavioral history (past behavior, pain points)
  • 25–40 min Core block 2: deeper probing on key topics
  • 40–55 min Core block 3: concept reactions, evaluation, prioritization
  • 55–60 min Wrap-up, anything-we-missed, thanks

For a 30-minute interview, drop one core block and tighten the warm-up.

Common discussion guide mistakes

  • Too many questions. A 60-minute interview cannot meaningfully cover more than 8 main questions plus probes. Most over-stuffed guides cause the moderator to skip the most important questions at the end.
  • Leading questions. "Don't you find X frustrating?" pre-loads the answer. Reframe as "How do you feel about X?" or, better, "Tell me about the last time you used X."
  • Abstract questions. "How do you think about productivity?" gets you platitudes. Anchor everything in concrete past events.
  • Stacked (double-barreled) questions. "Tell me about your workflow, the tools you use, and why you chose them" gives the participant three questions in one — they will answer one and skip the others.
  • Skipping probes. Without standard probes, depth depends entirely on the moderator's mood and energy. Pre-writing probes makes consistency a feature of the guide, not a personality trait.
  • No wrap-up question. "Is there anything we did not cover?" reliably surfaces insights the rest of the guide missed. Skipping it leaves data on the table.
  • Treating the guide as a script. A skilled moderator follows the participant's most interesting thread first, then circles back to cover remaining ground. A guide that is read line-by-line produces stilted, surface-level data.

How Koji turns a discussion guide into a live study

Traditional discussion guides die at the operational layer: scheduling 12 interviews, coordinating moderators, fighting with calendars, transcribing recordings, and synthesizing the result over weeks. A great guide written by a senior researcher often gets executed inconsistently by junior moderators because the operational load is too heavy.

Koji turns the discussion guide into the study itself:

  • AI-moderated voice or text interviews. Your discussion guide is loaded into Koji's AI moderator, which runs interviews 24/7 in voice or text. Every participant gets the same opening, the same core questions, and the same standard probes — but the moderator adapts follow-ups dynamically based on the participant's actual answers.
  • Structured questions inline. Koji's six structured question types — open_ended, scale, single_choice, multiple_choice, ranking, yes_no — can be mixed into the conversational flow. A scale rating sits naturally between two open-ended probes, capturing both signal and story in one session.
  • Quality scoring on every question. Each interview is automatically scored 1–5, so the team can see which questions in the guide are pulling weight and which are producing thin data — and revise the guide between waves.
  • Methodology frameworks built in. Koji supports Mom Test, JTBD, discovery, exploratory, and lead-magnet frameworks out of the box, so a JTBD switch interview guide is not a blank document — it starts with the right structure and probes baked in.
  • Real-time theme extraction. The discussion guide's questions are automatically themed across all completed sessions, so by the time the 10th participant finishes, you already have the cross-session pattern instead of weeks of manual coding ahead.

The work shifts from operating interviews to designing the right discussion guide — which is, after all, the part where research judgment actually compounds.

A discussion guide checklist

  • Research objectives written down before drafting questions
  • 5 / 10 / 40 / 5 minute structure or equivalent ratio
  • No more than 8 core questions for a 60-minute interview
  • Every question is open-ended, behavior-anchored, and concrete
  • At least 2–3 standard probes per core question
  • Sensitive or evaluative questions placed after rapport
  • At least one "anything we missed" wrap-up question
  • Guide piloted with 1–2 participants before main study (see Pilot Study Guide)
  • Guide stored as a versioned artifact, not a one-off doc

Related Articles

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Probing and Follow-Up Questions: Going Deeper in Research Interviews

Learn the different types of probing questions — clarification, elaboration, and contrast — and when to use each to get richer qualitative data from your participants.

Avoiding Bias in Research Interviews

Understand the most common biases in qualitative research — confirmation bias, leading questions, and social desirability — and learn proven techniques to minimize their impact on your data.

How to Moderate User Interviews: Skills, Probes, and the Question Flow That Surfaces Real Insights

A practical guide to moderating user interviews — rapport building, listening ratios, probing techniques, and how AI-moderated interviews remove the human variability that limits research quality.

How to Write User Interview Questions That Surface Real Insights

A practical guide to writing user interview questions that uncover genuine insights — covering open vs closed questions, common mistakes (leading, double-barreled, hypothetical), and how Koji's 6 structured question types combine qualitative and quantitative research.

How to Conduct User Interviews: The Complete Step-by-Step Guide

A complete step-by-step guide to planning, conducting, and analyzing user interviews—covering discussion guide writing, participant recruitment, facilitation techniques, sample size, and modern AI-powered approaches.

Pilot Study in User Research: How to Pre-Test Your Methodology Before Going Live (2026)

A pilot study is a small-scale rehearsal of your full research project that catches broken questions, biased prompts, and recruiting issues before they invalidate your real data. Learn when to run one, how many participants you need, what to test, and how AI-moderated platforms compress the pilot loop from weeks to hours.