How to Write User Interview Questions That Get Real Answers
Most interview questions are too narrow, too leading, or too hypothetical. Here is a practical guide to writing questions that unlock genuine insight from your participants.
Koji Team
March 26, 2026
Researchers spend weeks planning a study, recruit carefully, and then ask questions that produce polished, surface-level answers instead of the honest truth. Here is how to write questions that actually work.
Why Question Quality Matters More Than Anything Else
The quality of your interview questions determines the quality of your insights. You can recruit the perfect participants, run a flawless session, and still walk away with nothing useful if your questions were the wrong ones.
According to Nielsen Norman Group, the most common failure mode in user interviews is the researcher inadvertently leading the participant — through word choice, framing, or the order of questions — toward confirming what the researcher already believes.
A 2024 study from Glaut found that AI-moderated interviews produced 129% more words per response than traditional static surveys. The difference was not the format — it was the quality of follow-up probing. Better questions, asked at the right moment, produce richer data.
The Core Principle: Ask About Experience, Not Opinion
The single most important rule in user interview question design: ask about what people did, not what they think they would do.
Opinions and hypotheticals are unreliable. People are notoriously bad at predicting their own future behavior. But memories of past experience, while imperfect, are far more grounded in reality.
Instead of: "Would you pay for a feature that did X?"
Ask: "Walk me through the last time you tried to do X. What did you do?"
Instead of: "How important is Y to you?"
Ask: "Tell me about a time when Y caused a problem for you. What happened?"
This principle — popularized by the Jobs-to-be-Done framework and the customer discovery work of Steve Blank — consistently produces more actionable insights than opinion-based questioning.
Question Types That Work
1. Grand Tour Questions (Start Here)
Grand tour questions invite participants to walk you through their experience in their own words. They are low-pressure, open-ended, and reveal structure you didn't know to ask about.
Examples:
- "Walk me through how you currently handle [problem area]."
- "Tell me about the last time you had to [do the thing you're researching]."
- "Describe a typical week when it comes to [topic]."
These questions are your opening move. Let the participant talk for 3–5 minutes uninterrupted. You'll often get your most valuable insights here, before you ask anything specific.
2. Specific Instance Questions
After a grand tour, drill into a specific moment. Vague memories produce vague answers. Specific memories produce specific, actionable data.
Examples:
- "You mentioned that happens every week — can you tell me about a specific time it happened recently?"
- "Think about the last time you [did X]. What triggered it?"
- "Walk me through exactly what you did step by step."
The more specific the episode, the more you can trust the data.
3. Probing Questions
Probing questions are your follow-up moves. They go deeper without leading the participant.
The most powerful probing questions:
- "Tell me more about that."
- "What happened next?"
- "How did that make you feel?"
- "Why did you do it that way and not another way?"
- "What were you expecting to happen instead?"
Note what these questions have in common: they're short, neutral, and create space for the participant to go wherever they need to go. Avoid suggesting the answer in your probe.
4. Native Language Questions
Ask participants what words they use, not the words you use. This matters enormously for product copy, marketing, and feature naming.
Examples:
- "How would you describe this to a colleague?"
- "If you were searching for a solution to this, what would you type into Google?"
- "What do you call this step in your workflow?"
The answers often reveal that your internal terminology is completely different from how customers think about the problem.
Question Types That Don't Work
Leading Questions
Leading questions embed the answer in the question itself.
Avoid: "Don't you find it frustrating when you can't find your research notes?"
Use instead: "Tell me about a time you needed to find something from a past research session."
Double-Barreled Questions
Double-barreled questions ask two things at once, which means participants answer only one — and you don't know which.
Avoid: "How do you currently handle analysis and how long does it take?"
Use instead: "How do you currently handle analysis? [Wait for answer.] And how long does that typically take?"
Hypothetical Questions
Hypothetical questions invite speculation, not experience.
Avoid: "If we added X feature, would that help you?"
Use instead: "Have you ever tried to do X? What happened?"
Yes/No Questions
Yes/no questions close down conversation when you need it to open up.
Avoid: "Do you find the current process frustrating?"
Use instead: "What has your experience been with the current process?"
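The red flags in this section are mechanical enough to screen for automatically. Below is a minimal, hypothetical sketch in Python that checks a draft question for the patterns described above (yes/no openers, leading words, and double-barreled constructions). The word lists and heuristics are our own illustration, not an exhaustive or validated linter.

```python
import re

# Illustrative heuristics only: these word lists are assumptions,
# not a complete catalogue of problematic question patterns.
YES_NO_OPENERS = ("do you", "don't you", "would you", "is it", "are you",
                  "have you", "did you", "can you", "could you")
LEADING_WORDS = ("frustrating", "annoying", "love", "hate", "amazing", "obviously")

def lint_question(question: str) -> list[str]:
    """Return a list of warnings for a draft interview question."""
    q = question.strip().lower()
    warnings = []
    if q.startswith(YES_NO_OPENERS):
        warnings.append("yes/no opener: closes down conversation")
    if any(word in q for word in LEADING_WORDS):
        warnings.append("leading word: implies a judgment")
    # Two question marks, or an "and" joining two asks,
    # suggests a double-barreled question.
    if q.count("?") > 1 or re.search(r"\band how\b|\band what\b|\band why\b", q):
        warnings.append("double-barreled: asks two things at once")
    return warnings

# Flags the "Avoid" examples from this section; passes the rewrites.
print(lint_question("Don't you find it frustrating when you "
                    "can't find your research notes?"))
print(lint_question("Tell me about a time you needed to find "
                    "something from a past research session."))
```

A check like this is no substitute for reviewing your guide with a colleague, but it catches the most common slips before a pilot session does.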
How to Structure a 45-Minute Interview
A well-structured interview moves from broad to specific, building psychological safety before asking hard questions.
0–5 minutes: Warm-up. Explain the purpose, confirm recording consent, and set expectations. Ask a non-threatening opening question: "Tell me a bit about your role and what you work on day to day."
5–20 minutes: Context building. Use grand tour and specific instance questions to understand their world. Don't ask about your product or solution yet.
20–35 minutes: Deep dive. Probe the specific problem area you're researching. This is your highest-value time. Ask follow-up questions relentlessly.
35–40 minutes: Direct questions. Now you can ask more direct questions — reactions to concepts, hypothetical comparisons. Your participant has enough context to give grounded answers.
40–45 minutes: Closing. "Is there anything important about this topic that I didn't ask about?" This question frequently produces the most valuable insight of the session.
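The structure above can be encoded as data so a discussion guide can be sanity-checked before a session. This is a hypothetical sketch; the section names and boundaries come from the timeline above, and the helper is illustrative.

```python
# The five time blocks of the 45-minute structure, as (start, end, name).
SECTIONS = [
    (0, 5, "Warm-up"),
    (5, 20, "Context building"),
    (20, 35, "Deep dive"),
    (35, 40, "Direct questions"),
    (40, 45, "Closing"),
]

def covers_session(sections, total=45):
    """Confirm the blocks are contiguous and cover the full session."""
    expected_start = 0
    for start, end, _name in sections:
        if start != expected_start or end <= start:
            return False
        expected_start = end
    return expected_start == total

assert covers_session(SECTIONS)
```

Writing the guide down this explicitly also makes it easy to notice when one block (usually the deep dive) deserves more of the budget than you gave it.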
How Many Questions to Write
Write 5–8 questions for a 45-minute session. That sounds like too few — it isn't.
Each question should open a 5–10 minute conversation. If you have 15 questions, you're doing a survey, not an interview. You'll rush through topics instead of going deep on what matters.
According to the User Interviews State of User Research 2025 report, researchers who conducted 5 or fewer interviews reported significantly lower confidence in their findings than those who conducted 10 or more. Quality compounds across that volume: better questions in each session produce better data across all sessions.
Using AI to Ask Better Questions
AI-moderated interview platforms like Koji solve a persistent problem: follow-up question quality depends on who's running the session. Human moderators have bad days, get distracted, and forget to probe when something interesting comes up.
Koji's AI conducts every interview with consistent quality — probing when participants go shallow, following tangents that reveal unexpected themes, and maintaining a natural conversational tone throughout. The result is more consistent, higher-quality data across all participants.
According to the Maze Future of User Research Report 2026, demand for qualitative research grew 66% year-over-year while research team headcount stayed flat. AI-moderated interviews like Koji allow teams to meet that demand without hiring more researchers.
When designing your Koji study, the same principles apply: start with grand tour questions, focus on experience over opinion, and keep your question list to 5–8 items. The AI handles the follow-up probing automatically.
A Sample Question Set
Here is an example question set for a product discovery study exploring why users churn from a B2B SaaS product:
- "Walk me through what your team uses [product category] for on a weekly basis."
- "Tell me about a time when the tool you were using for this really let you down. What happened?"
- "How did you decide to try [your product]? What were you hoping it would do for you?"
- "Think back to a specific moment when you almost stopped using it. What was going on?"
- "What does your workflow look like now compared to before you started using [product]?"
- "If you were recommending a solution like this to a colleague, how would you describe what it does?"
Notice: no yes/no questions, no hypotheticals, no leading language. Six questions that could sustain a rich 45-minute conversation.
Key Takeaways
- Ask about past experience, not future hypotheticals
- Start broad with grand tour questions, then drill into specific instances
- Your best follow-up is often just "tell me more about that"
- Write 5–8 questions for a 45-minute session — going deep beats going wide
- Avoid leading language, double-barreled questions, and yes/no questions
- Ask what words participants use to describe the problem — their language is your copy
Want AI to ask your questions for you? Try Koji free — design your study, and Koji's AI conducts the interviews with expert follow-up probing built in.
Frequently Asked Questions
How many questions should a user interview have? Write 5–8 questions for a 45-minute session. Each question should open a 5–10 minute conversation. More questions means rushing through topics instead of going deep on what matters.
What makes a good user interview question? Good questions ask about real past experiences rather than opinions or hypotheticals. They are open-ended, avoid suggesting the answer, and create space for the participant to take the conversation where they need to go.
How do you avoid leading questions in user interviews? Avoid embedding assumptions in your question. Instead of "Don't you find it frustrating when X?" ask "Tell me about a time X happened. What was that like?" Review your question list and remove any word that implies a positive or negative judgment.
What is the most important question in a user interview? The closing question: "Is there anything important about this topic that I didn't ask about?" This question frequently produces the most valuable insight of the session because it gives participants permission to surface what mattered most to them.
Can AI conduct user interviews as effectively as a human moderator? AI-moderated interviews like Koji produce consistent follow-up probing across every participant — something human moderators do variably depending on energy, focus, and experience level. For discovery research at scale, AI moderation often produces more consistent data quality. For highly sensitive topics or complex relationship-building scenarios, human moderation still has advantages.