How to Write User Interview Questions That Surface Real Insights

Bottom line: The quality of your user research findings is only as good as the questions you ask. Open-ended, behavior-focused questions surface real insights; leading, hypothetical, and double-barreled questions produce bias. This guide covers the question types that work, the mistakes that contaminate research, and how Koji's 6 structured question types help you collect both qualitative depth and quantitative breadth in a single study.

Why Question Quality Determines Research Quality

You can have perfect participant recruitment, a skilled facilitator, and hours of recordings — and still get worthless data. If your questions are leading, hypothetical, or double-barreled, participants will answer what you asked rather than tell you what you need to know.

"Conducting a good interview is actually about shutting up." — Erika Hall, Co-Founder, Mule Design; Author of Just Enough Research

The gap between mediocre and excellent interview questions is the difference between confirmation and discovery. Mediocre questions confirm what you already believe. Excellent questions reveal what you didn't know you didn't know.

The Fundamental Rule: Ask About Behavior, Not Opinion

The most important principle in writing user interview questions comes from Erika Hall: "The first rule of user research: never ask anyone what they want!"

Why? Because people are unreliable predictors of their own future behavior. Ask someone if they would use a feature, and they'll often say yes — because saying yes is socially easy and cognitively simple. Watch someone actually try to use the feature, and you'll often see something entirely different.

Instead of asking about opinions, desires, or hypothetical futures, ask about past behaviors and real experiences.

Instead of (opinion/hypothetical) → Ask this (behavioral):

  • "Would you use a dashboard feature?" → "Tell me about the last time you wanted to check on your progress."
  • "Is the navigation confusing?" → "Walk me through how you found the settings page last week."
  • "Do you like the onboarding flow?" → "Describe your first experience using the product."
  • "What would make you pay for this?" → "Tell me about the last subscription tool you cancelled — what made you stop?"

Open-Ended vs. Closed Questions

Open-ended questions invite narrative responses — they cannot be answered with "yes," "no," or a single word. They are your primary tool for discovery.

Maria Rosala of Nielsen Norman Group explains: "Open-ended questions result in deeper insights. Closed questions provide clarification and detail, but no unexpected insights."

Open-ended questions allow you to find more than you anticipated. They capture how users think about problems, what language they use, what context they bring, and what workarounds they've invented.

Examples of open-ended questions:

  • "How do you currently manage your research projects?"
  • "Walk me through what happened the last time you needed to share findings with stakeholders."
  • "What's your process for deciding which user to interview first?"

Closed questions produce yes/no or single-word answers. They are efficient for confirmation, quantification, and follow-up clarification — but will not surface unexpected insights.

Examples of appropriate closed questions (used as follow-ups):

  • "Was that the first time you'd tried that approach?"
  • "Would you describe yourself as a daily user?"
  • "Did you complete the task the way you expected to?"

The rule: Lead with open-ended questions. Use closed questions to clarify and quantify — never as your primary discovery tool.

Question Types to Avoid

1. Leading Questions

A leading question suggests the "correct" answer through its framing, tone, or embedded assumptions.

  • Leading: "How satisfied are you with our excellent customer service?"

  • Neutral: "How would you describe your experience with our customer service team?"

  • Leading: "Did you find it easy to navigate the menu?"

  • Neutral: "What was your experience navigating the menu?"

Leading questions don't reveal user truth — they confirm researcher bias. They produce results that "don't represent your sample's opinions and wrongly confirm your own bias" (Maze, 2024).

How to catch them: Read each question and ask yourself: Does this question signal what answer I'm hoping for? If yes, rewrite it.

2. Double-Barreled Questions

A double-barreled question asks about two things simultaneously but only receives one answer — making the data uninterpretable.

  • Double-barreled: "Was the checkout process fast and easy?"
  • Fixed (split into two questions): "How would you describe the checkout speed?" and "How easy was it to complete the checkout?"

Double-barreled questions "create ambiguity and make the data potentially unusable, leading to unwise business decisions based on incorrect information" (Qualtrics).

The rule: One question = one concept. Whenever you write "and" in a question, check whether you're asking two things.

3. Hypothetical Questions

Hypothetical questions ask participants to predict their own future behavior or preferences in imagined scenarios.

  • Hypothetical: "If we added a collaboration feature, would you use it?"
  • Better: "Tell me about the last time you needed to share your work with a colleague. What did you do?"

People are systematically poor at predicting their own behavior. Hypothetical answers reflect what participants wish they would do, not what they actually do.

Exception: Hypotheticals can work in concept testing when paired with actual prototypes. Showing someone a mockup and asking "How would you use this?" is grounded in a tangible artifact — much more reliable than pure imagination.

4. Unscaffolded "Why" Questions

"Why" questions can produce defensive, rationalized, or socially acceptable answers rather than authentic responses.

  • Unscaffolded: "Why do you use this app?"

  • Scaffolded: "What originally brought you to this app? What keeps you coming back?"

  • Unscaffolded: "Why did you cancel your subscription?"

  • Scaffolded: "Walk me through your thinking when you decided to cancel. What was happening at that point?"

The scaffolded versions anchor participants in memory and narrative, producing more truthful and detailed responses.

The Interview Question Structure: A Proven Framework

1. Opener Questions (Rapport Building)

Start with easy, non-threatening questions that help participants relax and give you background context.

  • "Tell me a bit about your role and what you work on day to day."
  • "How long have you been using [product/approach]?"
  • "What does a typical [relevant activity] look like for you?"

2. Contextual / Behavioral Questions (Core Discovery)

Move into the heart of your research — asking about real past experiences in relevant contexts.

  • "Walk me through the last time you [relevant task]. Start from the beginning."
  • "Tell me about a time when [pain point scenario] happened. What did you do?"
  • "How do you currently handle [problem area]? Show me if you can."

3. Follow-Up Probing Questions (Depth)

Use these to go deeper whenever participants mention something interesting.

Standard probing questions:

  • "Tell me more about that."
  • "Can you give me an example?"
  • "What did you do next?"
  • "What were you expecting to happen?"
  • "You mentioned [X] — what did you mean by that?"
  • "How often does that happen?"

Steve Portigal, author of Interviewing Users: "There's a big difference between talking to people and leading an interview." Probing is where the skill lies — knowing when to follow up and when to move on.

4. Exploratory Questions (Breadth)

Catch anything your script didn't anticipate.

  • "What else affects how you approach [topic]?"
  • "Is there anything about [topic] we haven't talked about that you think is important?"
  • "What do you wish existed that doesn't?"

5. Closing Questions

Leave space for participants to add context you didn't think to ask for.

  • "Is there anything you'd want us to know that we didn't ask about?"
  • "If you could change one thing about [product/process], what would it be?"
  • "Who else should we be talking to about this?"

How Koji's Structured Questions Solve the Open vs. Closed Dilemma

Traditional interviews force a choice: go deep with open-ended questions or go broad with standardized closed questions. You can't easily do both in the same session without making it feel like two different interviews.

Koji eliminates this tradeoff with 6 structured question types that combine qualitative and quantitative data collection in a single AI-moderated session:

  1. Open Ended — Pure qualitative. Koji's AI asks follow-up probing questions automatically based on participant responses, replicating the value of a skilled human moderator. Responses generate thematic summaries and representative quotes in the report.

  2. Scale — Numeric ratings (NPS 0–10, CSAT 1–5, effort scores). Responses aggregate into distribution charts — instantly comparable across participants and studies. The AI anchors follow-ups: "You gave that a 3 — what would need to change for it to be a 5?"

  3. Single Choice — Participants select one option from a list. Results render as frequency bar charts. Ideal for "Which of these describes your primary use case?" style questions.

  4. Multiple Choice — Participants select all applicable options. Results render as stacked frequency charts. Useful for "What tools do you currently use?" type questions.

  5. Ranking — Participants order items by preference. Results show average rank position per item — powerful for feature prioritization research (see the aggregation sketch just after this list).

  6. Yes / No — Binary questions that produce pie/donut chart visualizations. Useful for qualifying conditions, confirming behaviors, and measuring binary outcomes.
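To make the quantitative side concrete, here is a minimal Python sketch of the standard aggregations these question types imply: an NPS calculation for Scale responses and average rank position for Ranking responses. The arithmetic is generic, the response data shapes are assumed for illustration, and none of this reflects Koji's internal implementation.

```python
from collections import Counter

def nps(scores: list[int]) -> float:
    """Net Promoter Score from 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def average_ranks(rankings: list[list[str]]) -> dict[str, float]:
    """Average rank position per item (1 = ranked first),
    given each participant's ordered list of items."""
    totals: Counter = Counter()
    for ranking in rankings:
        for position, item in enumerate(ranking, start=1):
            totals[item] += position
    return {item: total / len(rankings) for item, total in totals.items()}

# Example: five scale responses and three participants ranking three features
print(nps([9, 10, 7, 3, 8]))  # 20.0
print(average_ranks([
    ["export", "search", "alerts"],
    ["search", "export", "alerts"],
    ["export", "alerts", "search"],
]))  # export averages 1.33, search 2.0, alerts 2.67
```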

The AI interviewer adapts its conversation style to each question type — presenting scale questions as interactive widgets in text mode, asking follow-up questions after open-ended responses, and validating responses before moving on.

This means you can write a single study that opens with behavioral open-ended discovery, captures quantitative ratings mid-interview, and closes with a qualitative "Is there anything else?" — all without a human facilitator scheduling 30-minute calendar slots.
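As an illustration of that flow, a hypothetical study definition is sketched below: open-ended discovery first, quantitative questions in the middle, and a qualitative close. The field names and structure are assumptions made for this example, not Koji's actual configuration schema.

```python
# Hypothetical study definition mixing the question types above.
# Field names and structure are illustrative, not Koji's actual schema.
study = [
    {"type": "open_ended",
     "prompt": "Walk me through the last time you shared research findings "
               "with stakeholders. Start from the beginning."},
    {"type": "scale", "min": 0, "max": 10,
     "prompt": "How likely are you to recommend your current research tool "
               "to a colleague?"},
    {"type": "single_choice",
     "prompt": "Which of these describes your primary use case?",
     "options": ["Discovery interviews", "Usability testing", "Surveys"]},
    {"type": "ranking",
     "prompt": "Rank these features by how much you would miss them.",
     "options": ["Transcripts", "Thematic summaries", "Charts"]},
    {"type": "open_ended",
     "prompt": "Is there anything you'd want us to know that we didn't ask about?"},
]
```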

According to the 2024 State of User Research Report, AI adoption for qualitative analysis jumped from 20% to 56% in a single year — reflecting how rapidly practitioners are adopting AI-assisted question design and analysis tools.

Building Your Interview Script: A Practical Checklist

Before running sessions, review your discussion guide against this checklist (a rough script automating a few of these checks follows the list):

  • Every question is about past behavior, not future intent
  • No leading questions (re-read each one looking for embedded assumptions)
  • No double-barreled questions (split any question with "and")
  • Hypotheticals are anchored in tangible artifacts (not pure imagination)
  • "Why" questions are scaffolded with context
  • Probing follow-ups are ready for each core question
  • The script has a clear funnel: broad opener → contextual discovery → specific probing → closing
  • Total estimated time is 45–60 minutes maximum (fatigue degrades data quality after 60 minutes)
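A few of these checks are mechanical enough to automate. The sketch below is a rough heuristic linter for draft questions; the patterns are illustrative only, and a flagged question still needs human judgment.

```python
import re

# Heuristic red-flag patterns drawn from the checklist above.
RED_FLAGS = {
    "possible double-barreled (contains 'and')": re.compile(r"\band\b", re.I),
    "hypothetical (future-intent stem)": re.compile(r"^(would|if we|imagine)\b", re.I),
    "leading (suggests an answer)": re.compile(r"\b(don't you think|easy|excellent)\b", re.I),
    "binary stem (invites yes/no)": re.compile(r"^(did you|was it|do you like)\b", re.I),
}

def lint_question(question: str) -> list[str]:
    """Return the list of red flags triggered by a draft question."""
    return [flag for flag, pattern in RED_FLAGS.items()
            if pattern.search(question.strip())]

for q in ["Was the checkout process fast and easy?",
          "If we added a collaboration feature, would you use it?",
          "Walk me through the last time you shared your work."]:
    print(q, "->", lint_question(q) or ["looks clean"])
```

The 'and' check in particular produces false positives (a question about "research and development" is not double-barreled), so treat flags as prompts to re-read, not verdicts.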

Common Research Biases in Question Writing

Six biases identified by Erika Hall in Just Enough Research:

  1. Design bias — Questions assume your product's framing is correct
  2. Sample bias — Questions only work for certain user types
  3. Interviewer bias — Your tone or reactions signal preferred answers
  4. Sponsor bias — Participants sense what you want to hear
  5. Social desirability bias — Participants give "good" answers rather than true ones
  6. Hawthorne effect — Participants behave differently when observed

Mitigation: Map your team's assumptions before writing questions. For every assumed answer, write a question that would disprove it.

Indi Young, researcher and author of Mental Models: "I try to be very open about my bias, my team's bias. I don't want that affecting the data. We want the data to be able to say what it has to say. We want to actually hear other people's perspectives, understand other people's perspectives."

Question Stems That Work

Use:

  • "How..."
  • "What..."
  • "Walk me through..."
  • "Tell me about a time when..."
  • "Describe your process for..."
  • "What happened next?"

Avoid:

  • "Did you..." (binary; invites yes/no)
  • "Was it..." (binary)
  • "Do you like..." (opinion, not behavior)
  • "Would you ever..." (hypothetical)
  • "Don't you think..." (leading)
