Product-Market Fit Research: How to Go Beyond the 40% Survey (2026)
The Sean Ellis 40% survey tells you if you have product-market fit. AI-powered customer interviews tell you why — and what to do about it. Here is the complete PMF research framework for 2026.
Koji Team
April 26, 2026
The Short Answer
Product-market fit (PMF) research is the systematic process of measuring whether your product meets a real market need with enough intensity to drive sustainable growth. The Sean Ellis 40% benchmark — the threshold at which at least 40% of users would be "very disappointed" if they could no longer use your product — is the industry-standard starting point. But a one-question survey only tells you if you have fit. AI-powered customer interviews tell you why — and what to change if you do not.
Finding product-market fit is the defining challenge for every early-stage startup. Marc Andreessen called it "the only thing that matters." Yet most founders try to measure it with a five-question survey, get ambiguous results, and do not know what to change. This guide covers the key research methods for measuring PMF, from the classic Sean Ellis survey to AI-moderated interview frameworks, so you can move from uncertainty to clear direction.
What Is Product-Market Fit?
Product-market fit means your product solves a problem that enough people care about enough to pay for it, recommend it, and be genuinely disappointed if it went away. It is the inflection point where growth stops requiring constant effort and starts pulling itself forward via retention, referrals, and word-of-mouth.
The signal of fit: users are upset at the idea of losing your product. The signal of no fit: users churn without strong feelings, or they tolerate your product but would not miss it.
The Sean Ellis 40% Benchmark
In the late 2000s, growth strategist Sean Ellis — who led growth at Dropbox, Eventbrite, and LogMeIn — noticed a pattern after working with nearly 100 startups. Companies where 40% or more of users said they would be "very disappointed" if the product disappeared almost always achieved sustainable growth. Companies below 40% almost always struggled.
The methodology is a single survey question:
"How would you feel if you could no longer use [product]?"
- Very disappointed
- Somewhat disappointed
- Not disappointed
- N/A — I no longer use [product]
Across the startups Ellis analyzed, the 40% threshold consistently separated companies that went on to grow sustainably from those that stalled, and it has since become the industry's standard early benchmark for fit.
The critical catch: the survey tells you whether you have fit. It does not tell you why — or what to do if you are at 33% and want to reach 50%.
Why Static PMF Surveys Are Not Enough
Most teams run the Ellis survey, get a number like 32%, and then debate it in a weekly meeting. The problem is not the survey. The problem is that surveys generate signals, not explanations.
Consider what you do not know after a PMF survey:
- Why are users very disappointed? Which specific feature or outcome drives their attachment?
- Who are the "very disappointed" users — what segments are most engaged?
- What would it take to move a "somewhat disappointed" user to "very disappointed"?
- What are your best customers doing with your product that others are not?
A 2025 industry study found that 95% of researchers now use AI tools to fill this gap. The most effective approach is AI-moderated customer interviews that automatically probe these questions at scale — generating the qualitative insight that surveys cannot reach.
4 PMF Research Methods Ranked by Depth
Method 1: The Sean Ellis Survey — Signal, Not Insight
- What it gives you: A PMF score from 0% to 100%
- What it misses: Why users feel that way, which segments have fit, what to change
- Best for: Establishing a baseline and tracking progress over time
- Time required: 1–2 days to collect 40+ responses
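Scoring the survey is simple arithmetic. Here is a minimal sketch in Python (the data is hypothetical, and it follows the common convention of excluding N/A respondents from the denominator):

```python
from collections import Counter

def pmf_score(responses: list[str]) -> float:
    """Sean Ellis PMF score: share of active users answering
    'very disappointed', with N/A respondents excluded."""
    counts = Counter(responses)
    active = sum(n for answer, n in counts.items() if answer != "n/a")
    if active == 0:
        return 0.0
    return counts["very disappointed"] / active * 100

# Hypothetical batch of 40 responses
responses = (
    ["very disappointed"] * 14
    + ["somewhat disappointed"] * 16
    + ["not disappointed"] * 6
    + ["n/a"] * 4
)
print(f"{pmf_score(responses):.0f}%")  # 14 of 36 active users ≈ 39%
```

Whether to exclude N/A answers is a judgment call; excluding them keeps the score focused on people who actually use the product, which is how the benchmark is usually applied.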
Method 2: Retention Curve Analysis — Behavior, Not Opinion
- What it gives you: Objective data on whether users stick around after signup
- What it misses: Why they stay or leave
- Best for: Validating PMF signals with behavioral data; identifying the activation moment
- Time required: Requires 3–6 months of user data in a product analytics tool
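If you do not have a product analytics tool wired up, the underlying computation is straightforward. A sketch (user IDs and dates are hypothetical): for each week after signup, count the fraction of users who were active that week. A curve that flattens at some nonzero level is the behavioral PMF signal; a curve that decays to zero is not.

```python
from datetime import date

def weekly_retention(signups: dict[str, date],
                     activity: dict[str, list[date]],
                     weeks: int = 8) -> list[float]:
    """Fraction of signed-up users active in each 7-day window after signup."""
    total = len(signups)
    curve = []
    for w in range(weeks):
        active = sum(
            1 for user, start in signups.items()
            if any(0 <= (d - start).days - 7 * w < 7
                   for d in activity.get(user, []))
        )
        curve.append(active / total)
    return curve

signups = {"alice": date(2026, 1, 1), "bob": date(2026, 1, 1)}
activity = {
    "alice": [date(2026, 1, 2), date(2026, 1, 9)],  # active weeks 0 and 1
    "bob": [date(2026, 1, 3)],                       # active week 0 only
}
print(weekly_retention(signups, activity, weeks=2))  # [1.0, 0.5]
```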
Method 3: Traditional Customer Interviews — Deep, But Slow
- What it gives you: Rich qualitative insight into user motivation, alternatives considered, and the critical use cases that create attachment
- What it misses: Scale — traditional interviews are slow, expensive, and subject to moderator bias
- Best for: Deep exploration of why your top users love the product
- Time required: 2–4 weeks to recruit, schedule, conduct, and analyze 20–30 interviews
Method 4: AI-Moderated PMF Interviews — Deep AND Fast
- What it gives you: Everything traditional interviews provide — at 10x the speed, with no moderator bias, and automatic thematic analysis across all respondents
- Best for: Teams that need PMF insight fast and cannot afford a four-week research cycle
- Time required: 48–72 hours from question to insight
The PMF Interview Framework: Questions to Ask
The following question framework works for both traditional and AI-moderated interviews. The goal is to identify the critical use cases, alternatives considered, and emotional attachment that define your PMF segment.
Opening: Establish context
1. Walk me through how you first started using [product]. What problem were you trying to solve?
2. What were you using before? How well was that working for you?
Core: Discover the value
3. Tell me about the last time [product] really helped you. What happened?
4. What would you do if [product] were not available tomorrow? What would you switch to?
5. What does [product] let you accomplish that you could not before?
Depth: Measure emotional attachment
6. How would you feel if [product] disappeared? Walk me through that.
7. Is there anything about [product] that frustrates you or that you wish worked differently?
Probing: Identify the PMF segment
8. Have you recommended [product] to anyone? What did you tell them?
9. Who else on your team or in your network uses [product]? How do they use it differently from you?
An AI-moderated version probes dynamically. When a user says "I would be devastated if I lost it," the AI follows up: "What specifically would you miss most?" When someone mentions a workaround, it asks: "Tell me more about why you needed that workaround."
How to Analyze PMF Interview Data
After collecting interviews, the analysis goal is to identify:
- The PMF segment — which users are "very disappointed" and what makes them different
- The critical use case — the specific job-to-be-done that creates genuine attachment
- The alternatives considered — what users would switch to (reveals your real competitive set)
- The improvement themes — what is preventing "somewhat disappointed" users from becoming "very disappointed"
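Once each transcript has been coded, the four outputs above reduce to a cross-tab of themes by disappointment segment. A minimal sketch (the segment labels and theme tags are hypothetical; in practice they come from manual coding or a platform's automatic analysis):

```python
from collections import Counter, defaultdict

def themes_by_segment(interviews: list[dict]) -> dict[str, Counter]:
    """Count coded themes within each disappointment segment.
    Each interview is a dict with a 'segment' label
    ('very' / 'somewhat' / 'not') and a list of 'themes' tags."""
    table = defaultdict(Counter)
    for iv in interviews:
        table[iv["segment"]].update(iv["themes"])
    return dict(table)

interviews = [
    {"segment": "very", "themes": ["bulk-export workflow", "saves hours weekly"]},
    {"segment": "very", "themes": ["bulk-export workflow"]},
    {"segment": "somewhat", "themes": ["nice UI", "onboarding friction"]},
]
# Themes concentrated in the "very" segment point at the critical use case;
# themes unique to "somewhat" point at what blocks deeper attachment.
print(themes_by_segment(interviews))
```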
In traditional research, this analysis takes days of transcript review and manual tagging. AI-moderated platforms like Koji extract these themes automatically after each interview — identifying patterns across all respondents and surfacing representative quotes with no manual work.
See how Koji thematic analysis works →
Read how to structure your interview questions →
What Good PMF Research Looks Like in Practice
A B2B SaaS team runs a PMF survey and gets 38% "very disappointed" — just below the threshold. Instead of guessing what to change, they run 40 AI-moderated PMF interviews with their most active users over 72 hours.
Koji's analysis surfaces three themes:
- Power users (roughly 25% of respondents) are using a specific workflow that 75% of users have never discovered — their attachment score is extremely high
- Casual users like the product but have not found this workflow — they are "somewhat disappointed"
- The main barrier to deeper adoption is a UX friction point in the onboarding sequence
Result: the team adds an onboarding moment that introduces the power workflow on day 3. PMF score moves from 38% to 52% within 60 days. No new features built — just a research-driven onboarding change.
Running PMF Research with Koji
Koji is an AI-moderated interview platform built to run PMF research at scale without requiring a research team or expensive panel recruitment.
Here is how a PMF study works on Koji:
- Create a study — write your PMF questions using Koji's six question types: open-ended for motivation, scale for satisfaction (1–10), yes/no for binary signals, single-choice and multiple-choice for segment attributes, and ranking for priorities
- Share the interview link — with existing users via email, in-app prompt, or Slack message
- AI moderates every interview — asking your questions, probing follow-ups, and adapting to each participant's answers
- Automatic analysis — themes, quotes, and patterns are extracted across all responses immediately
- One-click report — shareable with your team and stakeholders in minutes
PMF insight that used to take four weeks takes 48–72 hours with Koji. And because the AI asks every participant the same core questions without moderator influence, the data is cleaner and more comparable across respondents.
Start your first PMF study on Koji →
Read the docs on setting up your first interview study →
PMF Research Checklist
Before declaring a PMF investigation complete, verify you have answered:
- [ ] What percentage of active users would be "very disappointed" if the product disappeared?
- [ ] Who are the "very disappointed" users — what segment, use case, or behavior defines them?
- [ ] What is the one use case or outcome that drives the strongest attachment?
- [ ] What are users switching to when they churn? (Reveals real competitive threats)
- [ ] What would move "somewhat disappointed" users to "very disappointed"?
- [ ] What is the primary barrier to more users discovering the core value?
If you cannot answer all six, you need more qualitative interviews.
Frequently Asked Questions
What is the Sean Ellis PMF test? The Sean Ellis PMF test is a single survey question: "How would you feel if you could no longer use [product]?" If 40% or more of users answer "very disappointed," the product likely has product-market fit. Sean Ellis developed this benchmark after analyzing 100+ startups and observing consistent correlation between this score and sustainable business growth.
What PMF score should I aim for? The 40% benchmark is the industry standard for a meaningful PMF signal. However, the score alone does not tell you what to change. Qualitative interviews with the "very disappointed" segment reveal which features or outcomes create attachment — and what would bring more users to that level.
How many customer interviews do you need to measure PMF? The classic recommendation is 5–10 interviews to start identifying patterns. For reliable PMF analysis, 20–40 interviews with active users give much stronger signal. With AI-moderated interviews, 40 conversations can be conducted and analyzed in 48 hours rather than four weeks.
What is the difference between PMF research and customer discovery? Customer discovery happens before product-market fit, exploring whether the problem is real and worth solving. PMF research happens after launch, measuring whether your current product is solving that problem at a level that drives retention. Both benefit from qualitative interviews, but the questions and timing differ.
Can AI replace human moderators in PMF interviews? AI-moderated platforms like Koji produce equivalent qualitative depth to human-moderated interviews for standard PMF research questions. AI eliminates moderator bias, schedules automatically, and analyzes results instantly — making it the preferred approach for scale. For highly sensitive topics or executive-level stakeholder interviews, human moderation may still be preferred.
What should I do after finding product-market fit? Once you identify your PMF segment and critical use case, the playbook is: (1) onboard new users directly into the workflow that creates attachment, (2) build features that deepen that use case, (3) run continuous discovery interviews to monitor whether fit holds as you scale. Koji supports continuous discovery automatically — you can run weekly interview batches without adding research headcount.