How Many Customer Interviews Do You Really Need?
Learn exactly how many customer interviews you need for meaningful insights. From the 5-user myth to AI-powered scaling, get your sample size right.
Koji Team
You've finally gotten buy-in for customer research. You've crafted your interview guide, identified your target customers, and you're ready to start learning. But then comes the question that trips up even experienced researchers: How many interviews do I actually need?
Ask five different experts and you'll get five different answers. "Five users is enough!" "No, you need at least 20!" "It depends on saturation!"
The confusion is understandable. Sample size guidance has historically been shaped by the constraints of traditional research—where every interview costs significant time and money. But those constraints are changing, and so should our thinking about sample sizes.
Let's cut through the noise and give you a practical framework for determining the right number of customer interviews for your research goals.
The "5 Users" Myth: Where It Came From and Why It Doesn't Always Apply
You've probably heard the famous claim: "Testing with 5 users uncovers 85% of usability problems." This insight, popularized by Jakob Nielsen of the Nielsen Norman Group, has become one of the most cited (and misapplied) guidelines in product research.
Here's the thing: Nielsen's research was specifically about usability testing—watching users interact with an interface to find problems. It was never meant to apply to all types of customer research.
What Nielsen Actually Said
Nielsen's original recommendation came with important caveats that often get overlooked:
- It applies to finding interface issues, not understanding customer needs, motivations, or behaviors
- It assumes a homogeneous user group—if you have distinct user segments, you need 5 per segment
- It's meant for iterative testing—test 5 users, fix issues, test another 5, repeat
Nielsen himself clarified: "You will likely need to test with 15 users to uncover all the usability problems in a design—but it is better to run 3 tests with 5 users each, fixing the problems uncovered in each test before testing again."
Why Customer Interviews Are Different
Customer interviews aren't about finding interface bugs. They're about understanding:
- What problems customers are trying to solve
- How they currently approach those problems
- What motivates their decisions
- What barriers they face
- What language they use to describe their needs
This type of discovery research requires enough conversations to identify patterns and validate that those patterns are real—not just individual opinions. And that typically requires more than 5 interviews.
Understanding Data Saturation: When Have You Heard Enough?
Rather than fixating on a magic number, experienced researchers focus on data saturation—the point where new interviews stop revealing new insights.
Think of it like panning for gold. The first few pans bring up plenty of nuggets. But eventually, each new pan yields less and less. When you're consistently coming up empty, you've likely found what there is to find.
How to Recognize Saturation
You're approaching saturation when:
- Themes repeat consistently across interviews
- New interviews confirm what you've already heard rather than adding new dimensions
- You can predict how customers will respond to certain questions
- Your research questions feel answered with confidence
What Research Shows About Saturation
Academic studies on data saturation have converged on some helpful guidelines:
| Population Type | Typical Saturation Point |
|-----------------|--------------------------|
| Homogeneous, focused scope | 9-12 interviews |
| Moderately diverse | 15-20 interviews |
| Highly diverse or broad scope | 25-30+ interviews |
Research by Griffin and Hauser found that 20-30 interviews typically uncover 90-95% of customer needs. Meanwhile, a systematic review of saturation studies found that 9-17 interviews typically reach saturation for most qualitative research.
The key insight? Saturation depends heavily on how diverse your customers are and how broad your research questions are.
Sample Size Recommendations by Research Type
Different research goals require different sample sizes. Here's a practical framework:
Usability Testing: 5-8 Participants
If you're testing whether customers can complete specific tasks in your product, 5-8 participants per user segment will reveal most usability issues. This is where Nielsen's original guidance applies.
When this works:
- Testing specific features or flows
- Identifying interface problems
- Validating design solutions
Discovery Interviews: 12-20 Participants
When you're trying to understand customer problems, needs, and behaviors, plan for more conversations. Discovery research requires identifying patterns that hold true across multiple customers.
When this applies:
- Understanding customer problems and pain points
- Exploring how customers currently solve problems
- Identifying unmet needs
- Validating problem hypotheses
Segmented Research: 8-12 Per Segment
If you're researching across distinct customer segments (e.g., enterprise vs. SMB, new users vs. power users), you need enough interviews within each segment to identify segment-specific patterns.
The math:
- 3 segments x 10 interviews each = 30 total interviews
- This ensures you can distinguish real segment differences from individual variation
Concept Validation: 15-25 Participants
When testing new product concepts or value propositions, you need enough feedback to feel confident about patterns. Positive feedback from 3 people isn't validation—it could be coincidence.
Target metrics:
- Strong positive signals from 60%+ of participants suggest real potential
- Consistent objections from 30%+ signals a problem worth addressing
- Mixed results require more investigation
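To make these thresholds concrete, here is a minimal sketch of how a team might tally validation signals. The function name and the hard 60%/30% cutoffs are illustrative assumptions that simply encode the rules of thumb above; this is a tallying aid, not a statistical test.

```python
def validation_signal(n_participants, n_positive, n_objections):
    """Classify concept-validation results using rough rule-of-thumb cutoffs.

    Hypothetical helper: the 60% positive and 30% objection thresholds are
    heuristics, not significance tests.
    """
    positive_rate = n_positive / n_participants
    objection_rate = n_objections / n_participants
    if positive_rate >= 0.60:
        return "strong potential"
    if objection_rate >= 0.30:
        return "problem worth addressing"
    return "mixed: investigate further"

# 14 of 20 participants responded strongly (70%):
print(validation_signal(20, 14, 3))  # strong potential
```

Note that the rates only become meaningful at the 15-25 participant scale recommended here; at 5 participants a single enthusiastic response swings the percentage by 20 points.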
Market Research: 30-50+ Participants
For broader market understanding—like sizing opportunities or understanding competitive dynamics—you need larger samples to draw meaningful conclusions.
Consider these benchmarks:
- 30 minimum for directional insights
- 50+ for more confident conclusions
- 100+ to support quantitative analysis
The Traditional Constraint: Why Teams Interview Too Few
Here's the uncomfortable truth: most teams conduct fewer interviews than they should. Not because they don't understand the value, but because of practical constraints:
Time Investment
A single 45-minute interview actually requires:
- 15-30 min scheduling and coordination
- 45-60 min conducting the interview
- 30-60 min taking notes and initial analysis
- Time for synthesis and pattern identification
Total: 2-3 hours per interview
At 20 interviews, that's 40-60 hours of researcher time—more than a full work week.
Access Challenges
Finding and recruiting the right customers takes effort:
- Identifying qualified participants
- Getting responses to outreach
- Scheduling around availability
- Managing no-shows (typically 10-20%)
Cost Considerations
Between researcher time, incentives ($50-200 per participant), and opportunity cost, traditional interviews often run $500-1,500 per interview at fully-loaded cost.
At those economics, teams make trade-offs. They interview fewer customers than ideal, prioritize only the highest-stakes research, or skip customer interviews altogether.
How AI Changes the Sample Size Equation
This is where things get interesting. AI-powered interview tools fundamentally change the economics of customer research.
What AI Interviewers Enable
With AI handling the conversation:
- Interviews run 24/7 — Customers participate on their schedule, not yours
- No scheduling coordination — Send a link, get responses
- Automated transcription and analysis — Insights emerge faster
- Consistent interview quality — Every conversation follows your guide
- Lower marginal cost — Each additional interview costs little more than the incentive
The New Math
- Traditional approach: 10 interviews x $1,000/interview = $10,000
- AI-powered approach: 50 interviews x $100/interview = $5,000
Result: 5x the insights for half the cost.
This doesn't just mean more interviews—it means:
- Confidence in patterns: When 35 of 50 customers say the same thing, you know it's real
- Segment-level insights: Interview enough customers to understand each segment
- Continuous learning: Run ongoing research, not just periodic projects
- Statistical validity: Large enough samples to support quantitative analysis alongside qualitative insights
When Human Interviews Still Matter
AI interviewers are powerful, but they're not always the right choice:
- Sensitive topics requiring careful rapport-building
- Complex exploratory research with unpredictable directions
- High-stakes stakeholder interviews where personal connection matters
- Deeply technical discovery requiring expert follow-up questions
The best research programs use both: AI interviews for scale and breadth, human interviews for depth and nuance.
A Practical Framework for Determining Your Sample Size
Rather than memorizing numbers, use this decision framework:
Step 1: Define Your Research Type
| Research Type | Minimum Sample | Recommended Sample |
|---------------|----------------|--------------------|
| Usability testing | 5 | 8 per segment |
| Discovery interviews | 12 | 15-20 |
| Concept validation | 15 | 20-25 |
| Segment research | 8/segment | 12/segment |
| Market research | 30 | 50+ |
Step 2: Assess Audience Diversity
- Homogeneous audience (similar contexts, needs, behaviors): Use minimum samples
- Moderately diverse (some variation in use cases): Add 25-50%
- Highly diverse (distinct segments, varied needs): Double the minimum or segment your research
Step 3: Consider Your Confidence Needs
- Exploratory/directional insights: Minimum samples acceptable
- Informing significant decisions: Recommended samples
- High-stakes or contentious topics: Add 20-30% buffer
Step 4: Plan for Attrition
Recruit 15-20% more than your target to account for:
- No-shows
- Technical issues
- Poor-quality responses
- Participants who don't match criteria
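Steps 1 through 4 amount to a back-of-the-envelope planning calculation. Here is an illustrative Python sketch: the lookup tables encode the rules of thumb above (a mid-range recommended sample, a diversity multiplier, a confidence buffer, and ~18% attrition padding), and `interviews_to_recruit` is a hypothetical helper, not a standard formula.

```python
import math

BASE_SAMPLES = {          # Step 1: recommended sample by research type
    "usability": 8,       # per segment
    "discovery": 18,
    "concept_validation": 22,
    "segment": 12,        # per segment
    "market": 50,
}

DIVERSITY_MULTIPLIER = {  # Step 2: audience diversity
    "homogeneous": 1.0,
    "moderate": 1.35,     # add 25-50%
    "high": 2.0,          # double, or split into segmented research
}

CONFIDENCE_BUFFER = {     # Step 3: stakes of the decision
    "exploratory": 1.0,
    "significant": 1.1,
    "high_stakes": 1.25,  # add 20-30%
}

ATTRITION = 1.18          # Step 4: recruit ~15-20% extra

def interviews_to_recruit(research_type, diversity, confidence, segments=1):
    """Estimate how many participants to recruit, rounding up."""
    base = BASE_SAMPLES[research_type] * segments
    target = base * DIVERSITY_MULTIPLIER[diversity] * CONFIDENCE_BUFFER[confidence]
    return math.ceil(target * ATTRITION)

# Discovery research, moderately diverse audience, significant decision:
print(interviews_to_recruit("discovery", "moderate", "significant"))  # 32
```

Treat the output as a recruiting budget, not a contract: Step 5 below still decides when you actually stop.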
Step 5: Monitor Saturation as You Go
Don't commit to a rigid number upfront. Instead:
- Start with your minimum viable sample
- Analyze in batches (every 5-10 interviews)
- Track whether new themes are emerging
- Stop when you're consistently hearing familiar patterns
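The batch-and-check loop above can be sketched in a few lines. In this illustrative example, each interview is coded as a set of theme labels (the coding itself is the researcher's or an AI tool's job); the hypothetical helpers only track how many genuinely new themes each batch surfaces and apply a simple stopping rule.

```python
def new_themes_per_batch(interviews, batch_size=5):
    """interviews: list of theme sets, one per interview, in interview order.
    Returns the count of never-before-seen themes contributed by each batch."""
    seen, counts = set(), []
    for start in range(0, len(interviews), batch_size):
        batch = interviews[start:start + batch_size]
        fresh = set().union(*batch) - seen  # themes this batch adds
        counts.append(len(fresh))
        seen |= fresh
    return counts

def reached_saturation(interviews, batch_size=5, quiet_batches=2):
    """Stop when the last `quiet_batches` batches added no new themes."""
    counts = new_themes_per_batch(interviews, batch_size)
    return len(counts) >= quiet_batches and all(c == 0 for c in counts[-quiet_batches:])

# Hypothetical coding of 10 discovery interviews:
coded = [{"pricing", "onboarding"}, {"pricing"}, {"integrations"},
         {"onboarding"}, {"pricing"},             # batch 1: 3 new themes
         {"pricing"}, {"onboarding"}, {"integrations"},
         {"pricing"}, {"onboarding"}]             # batch 2: nothing new
print(new_themes_per_batch(coded))  # [3, 0]
```

A declining curve like `[3, 0]` is the "panning for gold" signal in data form: once consecutive batches add nothing, you are likely at saturation.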
Common Sample Size Mistakes (And How to Avoid Them)
Mistake 1: Stopping at 5 Because "Nielsen Said So"
The fix: Recognize that the 5-user guideline applies to usability testing, not discovery research. For understanding customer needs, plan for at least 12 interviews.
Mistake 2: One-Size-Fits-All Sample Sizes
The fix: Match your sample size to your research type and audience diversity. A B2B startup with one customer segment needs fewer interviews than an enterprise product serving five distinct personas.
Mistake 3: Not Accounting for Segments
The fix: If you're researching across segments, you need critical mass within each segment—not just total interviews. 20 interviews across 4 segments means only 5 per segment, which isn't enough.
Mistake 4: Treating Each Study in Isolation
The fix: Build cumulative knowledge. If you interviewed 15 enterprise customers last quarter, you might only need 8 more for a focused follow-up study.
Mistake 5: Prioritizing Quantity Over Quality
The fix: More interviews only helps if they're well-conducted. Ensure your interview guide is solid, your participants match your criteria, and your analysis is rigorous.
Making Customer Research Sustainable
The real goal isn't to nail the "right" sample size for a single study. It's to build a sustainable research practice where customer insights continuously inform decisions.
Shift from Projects to Programs
Instead of occasional deep-dive studies, consider:
- Continuous discovery: Regular small-batch interviews (5-10/week)
- Always-on feedback channels: Embedded interview opportunities in your product
- Rapid research sprints: Quick-turn studies for specific questions
Use the Right Tool for Each Job
| Need | Best Approach |
|------|---------------|
| Broad pattern identification | AI interviews (30-50+) |
| Deep exploration | Human interviews (8-12) |
| Quick validation | AI interviews (15-20) |
| Sensitive topics | Human interviews (10-15) |
| Ongoing pulse | AI interviews (continuous) |
Build Institutional Knowledge
Don't let insights die in slide decks. Create:
- A searchable repository of interview insights
- Cross-study synthesis and pattern tracking
- Shared understanding of core customer segments
Key Takeaways
- The "5 users" rule is for usability testing, not customer interviews. Discovery research typically requires 12-20 participants to reach saturation.
- Saturation matters more than arbitrary numbers. Monitor when new interviews stop revealing new insights, and stop there.
- Match sample size to research type. Usability testing (5-8), discovery (12-20), concept validation (15-25), and market research (30-50+) all have different needs.
- Account for audience diversity. Diverse populations require larger samples. If researching segments, ensure adequate coverage within each.
- AI changes the economics. When interviews cost less, you can afford the sample sizes that give you real confidence in your findings.
- Think programs, not projects. Sustainable research practices beat occasional heroic efforts.
The question isn't really "how many interviews do I need?" It's "how can I make customer conversations a regular part of how we build products?" When research is easy and affordable, the sample size question becomes much less fraught.
Start Talking to More Customers Today
With Koji, you can run 50 customer interviews for the same effort traditional methods require for 5. Our AI interviewer conducts thoughtful, conversational interviews at any scale—giving you the sample sizes you need for confident decision-making.
Stop guessing about what customers want. Start knowing.
See how Koji works | Start your free trial
This article was adapted for Koji from industry research on sample sizes in qualitative user research. For the original research, see the Nielsen Norman Group's work on interview sample sizes and academic studies on data saturation in qualitative research.