How Long Should User Interviews Be? Length and Duration Best Practices
How long user interviews should run by research type, with target durations for discovery, evaluative, and pulse research. Includes the science of fatigue, how AI moderation changes the math, and a calculator for picking your target length.
Quick answer: target durations by research type
| Research type | Target length | Hard ceiling |
|---|---|---|
| Foundational / discovery (moderated) | 45-60 min | 90 min |
| Evaluative / solution (moderated) | 20-30 min | 45 min |
| Generative AI-moderated (Koji) | 8-15 min | 20 min |
| Pulse / continuous AI-moderated | 5-10 min | 12 min |
| Quick screener interview | 2-5 min | 7 min |
These targets assume the interview is well-designed. A bloated 60-minute discovery interview that should have been 30 minutes still bleeds quality after minute 35 — duration alone doesn't buy depth.
Why length matters more than people think
Interview length affects three things simultaneously:
- Completion rate — the percentage of started interviews that finish. Longer = lower.
- Answer depth — substantive answers vs. one-word responses. Peaks in the middle of an interview, then declines.
- Cost — researcher time, scheduling overhead, transcription, analysis, and incentive size all scale with length.
The classic mistake is assuming that more time yields more insight. Past a certain point you get diminishing returns and degraded data: fatigued respondents answer faster, less specifically, and less honestly. Shorter, sharper interviews often outperform long, meandering ones.
The fatigue curve
Across thousands of user interviews, answer quality follows a predictable arc:
- Minutes 0-5: Warming up. Answers are short and surface-level.
- Minutes 5-15: The sweet spot. Respondents are engaged and substantive. The richest data lives here.
- Minutes 15-30: Sustained engagement is possible but requires good moderation. Depth declines slowly.
- Minutes 30-45: Fatigue starts. Answers shorten. Stories get formulaic.
- Minutes 45+: Diminishing returns on most research goals. Only deeply emotional or technical topics sustain depth past this point.
This is why a well-designed 10-minute AI-moderated interview can match a 30-minute traditional one — you're extracting the same minutes 5-15 sweet spot, just without the surrounding overhead.
Foundational discovery: 45-60 minutes
When to use this length: opening a brand new research area, building a persona from scratch, understanding a complex workflow, or doing customer development on a problem space you've never explored.
Budget breakdown:
- Opening (5 min) — consent, warm-up, context setting
- Background story (10-15 min) — who they are, what their world looks like
- Current behavior (15-20 min) — how they solve the problem today, what they've tried
- Pain and opportunity (10-15 min) — where it breaks, what they wish was different
- Closing (5 min) — anything we missed, referrals
Going past 60 minutes adds little. If you're consistently running over, your guide is too broad — split it into two studies.
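The budget above can be scaled to any target length. Here's a small sketch: the section shares are derived from the midpoints of the ranges listed above, so they're illustrative defaults, not a fixed standard.

```typescript
// Scale the discovery-interview section budget to a target length.
// Shares are the midpoints of the ranges above, normalized to sum to 1.
type Section = { name: string; share: number };

const DISCOVERY_SECTIONS: Section[] = [
  { name: "Opening",              share: 5    / 52.5 },
  { name: "Background story",     share: 12.5 / 52.5 },
  { name: "Current behavior",     share: 17.5 / 52.5 },
  { name: "Pain and opportunity", share: 12.5 / 52.5 },
  { name: "Closing",              share: 5    / 52.5 },
];

function scaleBudget(totalMinutes: number, sections: Section[] = DISCOVERY_SECTIONS) {
  return sections.map(s => ({
    name: s.name,
    minutes: Math.round(s.share * totalMinutes * 10) / 10, // one decimal place
  }));
}
```

For a 45-minute guide, `scaleBudget(45)` allocates 15 minutes to current behavior and roughly 4 minutes each to opening and closing — the same proportions, compressed.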
Evaluative / solution interviews: 20-30 minutes
When to use this length: testing a prototype, validating a positioning narrative, evaluating a feature concept, or pricing research.
The goal is targeted reaction to specific stimuli, not open-ended exploration. Most evaluative interviews collapse into 2-3 core questions surrounded by probing, so 20-30 minutes is plenty.
AI-moderated generative interviews: 8-15 minutes
This is the sweet spot for most modern customer research. Platforms like Koji can run discovery-style interviews in roughly a third of the time a human moderator needs because:
- No scheduling overhead. The respondent starts whenever it suits them — no 5-minute "let me get set up" tax at the start.
- No note-taking pauses. The AI captures everything in real time without slowing the conversation.
- Adaptive coverage. If a respondent has clearly covered a topic in their answer to Q2, the AI doesn't mechanically ask Q3 about the same thing.
- Probing only where it matters. Each question has a `maxFollowUps` setting (0-3). Strategic questions probe deep; routine ones move on quickly.
Result: a 10-minute Koji interview routinely yields the substantive content of a 25-30 minute human-moderated session.
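To make the per-question probing concrete, here's a hypothetical sketch of a guide definition. The `maxFollowUps` field comes from this article; the surrounding type names and other fields are illustrative, not Koji's actual API.

```typescript
// Hypothetical question schema with per-question probing depth.
type QuestionType = "open_ended" | "scale" | "single_choice" | "ranking";

interface Question {
  text: string;
  type: QuestionType;
  maxFollowUps: 0 | 1 | 2 | 3; // 0 = never probe, 3 = deep probing
}

const guide: Question[] = [
  // Strategic question: probe deep.
  { text: "Walk me through the last time this problem cost you real time.",
    type: "open_ended", maxFollowUps: 3 },
  // Routine question: capture the answer and move on.
  { text: "How satisfied are you with your current workaround? (1-5)",
    type: "scale", maxFollowUps: 0 },
  // Useful context, moderate probing.
  { text: "What else have you tried?",
    type: "open_ended", maxFollowUps: 2 },
];
```

The pattern to notice: probing depth varies per question rather than being a single global setting, which is what keeps total duration predictable.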
Pulse and continuous AI-moderated: 5-10 minutes
For ongoing research programs — monthly NPS follow-ups, post-purchase debriefs, beta feedback loops, support-ticket deep-dives — shorter is better. Respondents will come back for a 5-minute conversation; they won't for a 30-minute one. See the customer interview cadence guide for designing the rhythm.
A typical 7-minute Koji pulse interview has:
- 1 contextual opener
- 2-3 core questions (mixing open-ended and structured)
- 1 strategic question with deep probing
- 1 closing prompt
That's it. Resist the urge to "while we have them, ask about X".
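One way to enforce that restraint is to lint the guide's shape before launch. This is a sketch of an illustrative convention, not a Koji feature: the role labels and the 6-question ceiling are assumptions chosen to keep a pulse interview in the 5-10 minute band.

```typescript
// Sanity-check that a pulse guide matches the shape above:
// 1 opener, 2-3 core questions, 1 strategic question, 1 closing prompt.
type Role = "opener" | "core" | "strategic" | "closing";

function isValidPulseGuide(roles: Role[]): boolean {
  const count = (r: Role) => roles.filter(x => x === r).length;
  return (
    count("opener") === 1 &&
    count("core") >= 2 && count("core") <= 3 &&
    count("strategic") === 1 &&
    count("closing") === 1 &&
    roles.length <= 6 // hard ceiling keeps the interview short
  );
}
```

`isValidPulseGuide(["opener", "core", "core", "strategic", "closing"])` passes; tacking on a fourth core question ("while we have them...") fails the check.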
The opening and closing budget
Two universal rules regardless of interview length:
- Opening should be 5-10% of total time. For a 10-minute interview, that's 30-60 seconds. For a 60-minute interview, 3-6 minutes. Anything more eats into the substantive minutes.
- Closing should be 5-10% of total time. Reserve time for "is there anything you wish I'd asked" — this single question routinely surfaces the most surprising insights in a study.
The middle 80-90% is your core research time — that's where the questions in your guide get answered.
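The 5-10% rule turns directly into the length calculator promised at the top of this article. A minimal sketch:

```typescript
// Given a target interview length, return the opening, closing,
// and core-time budgets implied by the 5-10% rule above.
function timeBudget(totalMinutes: number) {
  const lo = totalMinutes * 0.05; // 5% floor
  const hi = totalMinutes * 0.10; // 10% ceiling
  return {
    openingMinutes: [lo, hi] as const,
    closingMinutes: [lo, hi] as const,
    coreMinutes: [totalMinutes - 2 * hi, totalMinutes - 2 * lo] as const, // 80-90%
  };
}
```

`timeBudget(10)` gives 0.5-1 minute each for opening and closing; `timeBudget(60)` gives 3-6 minutes each, with 48-54 minutes of core research time.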
Compressing without losing depth
The mistake most teams make when trying to shorten interviews is cutting questions. The better move is to change question types and rely on AI probing.
Example: a 30-minute interview with 12 open-ended questions can compress to a 12-minute interview by:
- Converting 5 of the questions to structured types (scale, single_choice, ranking) — see the structured questions guide. These take 15-20 seconds each vs. 90-120 seconds for open-ended.
- Keeping the 4 most strategic questions as open-ended with `maxFollowUps: 2` — depth where it matters.
- Dropping the 3 questions that were really "nice to know" — they belong in a separate study or a survey.
Same insight surface area, less than half the time. This is the fundamental advantage of AI-native research tools: you don't have to choose between depth and speed.
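You can sanity-check a compression plan with the per-question timings quoted above. This estimator uses the midpoints of those ranges (an assumption; real timings vary) and covers question time only, so opening, closing, and follow-up probes add on top.

```typescript
// Estimate core question time from the question mix.
const SECONDS_PER_QUESTION = {
  open_ended: 105,  // midpoint of 90-120s
  structured: 17.5, // midpoint of 15-20s (scale, single_choice, ranking)
};

function estimateMinutes(openEnded: number, structured: number): number {
  const seconds =
    openEnded * SECONDS_PER_QUESTION.open_ended +
    structured * SECONDS_PER_QUESTION.structured;
  return Math.round(seconds / 60);
}
```

`estimateMinutes(12, 0)` gives 21 minutes of raw question time (the original 30-minute interview once probing and overhead are included); `estimateMinutes(4, 5)` gives about 8 minutes, which lands around 12 minutes once `maxFollowUps: 2` probing on the strategic questions is added.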
How to set expectations on the landing page
State the expected duration explicitly on the interview landing page. Always.
- Good: "About 10 minutes. Choose voice or text."
- Bad: "A few questions about your experience."
Stating duration upfront lifts completion rates 5-15 percentage points because respondents self-select correctly. Always slightly under-promise: if your design target is 8 minutes, say "under 10".
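The under-promise rule is mechanical enough to encode. A sketch, where rounding up to the next multiple of 5 is an illustrative convention, not a rule from this article:

```typescript
// Turn a design-target duration into a landing-page promise,
// always rounding up so the promise slightly under-promises.
function durationPromise(designTargetMinutes: number): string {
  const ceiling = Math.ceil(designTargetMinutes / 5) * 5;
  return designTargetMinutes === ceiling
    ? `About ${ceiling} minutes.`
    : `Under ${ceiling} minutes.`;
}
```

So an 8-minute design target yields "Under 10 minutes." and a 10-minute target yields "About 10 minutes."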
Measuring actual interview length
In your Koji study analytics, look at:
- Median duration — your typical interview length
- P75 / P95 duration — your long tail; if these are 2-3x median, your AI probing is over-firing on a subset of respondents
- Duration vs. quality score — does longer correlate with higher quality? Often not past minute 15
If your P95 is creeping up, lower maxFollowUps on the questions that show the most variance.
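The three stats above are easy to compute yourself from exported durations. A minimal sketch using the nearest-rank percentile method (one of several common conventions):

```typescript
// Compute median / P75 / P95 from completed-interview durations (minutes).
function durationStats(minutes: number[]) {
  const sorted = [...minutes].sort((a, b) => a - b);
  const pct = (p: number) =>
    sorted[Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)];
  return { median: pct(50), p75: pct(75), p95: pct(95) };
}
```

For example, durations of `[8, 9, 10, 10, 11, 12, 14, 22, 25, 31]` give a median of 11 and a P95 of 31 — a P95 nearly 3x the median, exactly the over-firing signature described above.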
Common length mistakes
- Padding with rapport. "How was your weekend?" wastes the first 90 seconds when the respondent is most alert. Skip it; the AI handles warmth conversationally.
- Asking "while I've got you" questions. Every extra question you add costs 60-120 seconds and a few points of completion.
- Treating long = thorough. A 10-minute focused interview beats a 40-minute meandering one on insights per minute.
- Not telling participants the length. Cardinal sin. Every interview landing page should state expected duration.
- Using deep probing (`maxFollowUps: 3`) on every question. Reserve it for the 1-2 most strategic questions.
Related Resources
- Structured Questions Guide — how question types affect interview duration
- How Many User Interviews Do You Need? — sample size vs. length tradeoffs
- Probing and Follow-Up Questions — controlling probing depth
- AI-Moderated Interviews — why AI moderation compresses time
- Semi-Structured Interview Guide — balancing structure and flexibility
- How to Conduct User Interviews — moderation fundamentals
- Customer Interview Cadence — when shorter recurring interviews beat one big one
- Remote Interview Best Practices — running interviews at distance
Related Articles
AI-Moderated Interviews: How Automated Research Works (And Why It Works Better)
Understand how AI-moderated interviews work, when to use them over human-moderated sessions, and how to get the most from automated qualitative research.
Structured Questions in AI Interviews
Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.
Probing and Follow-Up Questions: Going Deeper in Research Interviews
Learn the different types of probing questions — clarification, elaboration, and contrast — and when to use each to get richer qualitative data from your participants.
Remote Interview Best Practices for Qualitative Research
Everything you need to run high-quality remote research interviews — from technical setup and rapport building to maintaining participant engagement over video, phone, or asynchronous channels.
How to Conduct User Interviews: The Complete Step-by-Step Guide
A complete step-by-step guide to planning, conducting, and analyzing user interviews—covering discussion guide writing, participant recruitment, facilitation techniques, sample size, and modern AI-powered approaches.
How Many User Interviews Do You Need? The Sample Size Guide for Qualitative Research
Discover the right number of user interviews for your research. Learn about data saturation, theoretical saturation, and practical frameworks for knowing when you've collected enough qualitative data.
Semi-Structured Interviews: The Complete Guide
Learn how to design, run, and analyze semi-structured interviews — the gold standard for qualitative research that balances structure with flexibility.
Customer Interview Cadence: How Often Should You Talk to Users? (2026)
Set the right customer interview cadence for your team — from one a week (Teresa Torres' baseline) to daily continuous discovery — and how AI moderation makes higher cadences sustainable.