How to Measure Product-Market Fit with the Sean Ellis Test (and Go Deeper)
The complete guide to measuring product-market fit. Learn how to run the Sean Ellis "very disappointed" test, combine it with qualitative interviews, and use Koji to understand not just whether you have PMF but why.
Product-market fit is the most important milestone for any startup. Before PMF, everything is a hypothesis. After PMF, you have permission to scale. But how do you know when you've reached it?
The most widely used quantitative measure is the Sean Ellis test: ask users "How would you feel if you could no longer use [product]?" If 40%+ say "very disappointed," you have product-market fit.
The number is useful, but incomplete. Say 40% of your users are very disappointed. Why? What specifically would they miss? What jobs does the product do for them that nothing else can? And what about the other 60%? Are they close to disappointed, or indifferent?
Koji turns the PMF survey from a data point into a research program.
The Sean Ellis Test
The standard PMF survey has four questions:
Question 1 (Single Choice): "How would you feel if you could no longer use [product]?"
- Very disappointed
- Somewhat disappointed
- Not disappointed (it's not really that useful)
- N/A (I no longer use it)
Question 2 (Open-ended): "What type of people do you think would benefit most from [product]?"
Question 3 (Open-ended): "What is the main benefit you receive from [product]?"
Question 4 (Open-ended): "How can we improve [product] for you?"
The 40% Threshold
Sean Ellis analyzed hundreds of startups and found that companies where 40%+ of surveyed users would be "very disappointed" consistently went on to achieve strong growth. Below 25% typically indicates weak PMF. Between 25% and 40% is the danger zone: you have something, but you need to sharpen it.
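The threshold check itself is simple arithmetic, but at typical survey sample sizes the uncertainty around the score matters. As an illustrative sketch (not part of the Sean Ellis method itself), here is the PMF score with a Wilson 95% confidence interval, which behaves better than the naive interval for small samples:

```python
import math

def pmf_score(very_disappointed: int, total: int, z: float = 1.96):
    """Return the PMF score and a Wilson 95% confidence interval."""
    p = very_disappointed / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return p, (center - margin, center + margin)

# Example: 46 of 100 surveyed users answered "very disappointed"
score, (low, high) = pmf_score(46, 100)
print(f"PMF score: {score:.0%}, 95% CI: {low:.0%} to {high:.0%}")
```

With 46 of 100 respondents "very disappointed," the interval still straddles the 40% line; that is exactly why the sample-size guidance later in this guide matters before declaring PMF.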
Why the Standard PMF Survey Is Not Enough
The Sean Ellis test tells you whether you have PMF. It doesn't tell you:
- Why specific users are very disappointed (what job does the product do for them?)
- What the "somewhat disappointed" group needs to become "very disappointed"
- Who your true target customer is within your user base
- Where the product falls short for different segments
- How to prioritize your roadmap to strengthen PMF
This is where Koji transforms the PMF survey from a metric into an actionable research program.
Building the PMF Study in Koji
Core Structure
Q1: PMF Score (Single Choice) "If you could no longer use [product], how would you feel?"
- Very disappointed
- Somewhat disappointed
- Not disappointed
- I no longer use it
- Configure: Single choice with probing enabled
Q2: Core Value (Open-ended, high probing) "What's the main benefit you get from [product]?"
- Probing depth: 3
- AI instruction: "Dig into specific workflows and outcomes. Get concrete examples of what the product enables them to do."
Q3: Alternatives (Open-ended) "What would you use instead if [product] didn't exist?"
- Probing depth: 2
- AI instruction: "Understand the competitive landscape from the user's perspective. What specific alternatives have they tried?"
Q4: Ideal Customer (Open-ended) "Who do you think would benefit most from [product]?"
- Probing depth: 1
- AI instruction: "This reveals how users perceive the product's target audience."
Q5: Improvement (Open-ended, high probing) "If you could change one thing about [product], what would it be?"
- Probing depth: 2
- AI instruction: "Separate nice-to-haves from critical improvements. Ask about impact on their workflow."
Q6: Usage Context (Single Choice) "How often do you use [product]?"
- Daily / Several times a week / Weekly / Monthly / Rarely
- No probing needed; this is segmentation data
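The six-question structure above can be captured as plain data. This is an illustrative sketch only: it is not Koji's actual configuration format, and every field name here is an assumption chosen for readability.

```python
# Hypothetical plain-data representation of the PMF study above.
# Field names ("probing_depth", "ai_instruction", etc.) are assumptions,
# not Koji's real schema.
PMF_STUDY = [
    {"id": "pmf_score", "type": "single_choice", "probing": True,
     "text": "If you could no longer use [product], how would you feel?",
     "options": ["Very disappointed", "Somewhat disappointed",
                 "Not disappointed", "I no longer use it"]},
    {"id": "core_value", "type": "open_ended", "probing_depth": 3,
     "text": "What's the main benefit you get from [product]?",
     "ai_instruction": "Dig into specific workflows and outcomes."},
    {"id": "alternatives", "type": "open_ended", "probing_depth": 2,
     "text": "What would you use instead if [product] didn't exist?"},
    {"id": "ideal_customer", "type": "open_ended", "probing_depth": 1,
     "text": "Who do you think would benefit most from [product]?"},
    {"id": "improvement", "type": "open_ended", "probing_depth": 2,
     "text": "If you could change one thing about [product], what would it be?"},
    {"id": "usage_frequency", "type": "single_choice",
     "text": "How often do you use [product]?",
     "options": ["Daily", "Several times a week", "Weekly",
                 "Monthly", "Rarely"]},
]
```

Keeping the study as data makes it easy to review question order, probing depth, and instructions in one place before launch.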
Advanced: Segment-Aware Probing
Koji's AI adapts its follow-up questions based on the PMF answer:
For "Very Disappointed" users: The AI probes for what makes the product indispensable. "You said you'd be very disappointed. Can you tell me about a specific moment when [product] was critical for you?" This captures your product's competitive moat in the user's own words.
For "Somewhat Disappointed" users: The AI explores the gap. "What would need to change for this to be something you absolutely couldn't live without?" This reveals your PMF roadmap.
For "Not Disappointed" users: The AI investigates why. "What's preventing [product] from being more useful for you? Is it missing something, or is it just not the right fit?" This distinguishes between fixable problems and wrong-segment users.
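The branching logic above amounts to routing the first follow-up prompt on the PMF answer. The sketch below is a simplified model of that idea; the dispatch mechanism is an assumption (Koji's AI generates follow-ups dynamically rather than from a fixed table), but the prompt text mirrors the examples above.

```python
# Simplified model of segment-aware probing: pick the opening follow-up
# based on the PMF answer. A fixed lookup table is an assumption; a real
# conversational system would generate and adapt these dynamically.
FOLLOW_UPS = {
    "Very disappointed":
        "Can you tell me about a specific moment when [product] was critical for you?",
    "Somewhat disappointed":
        "What would need to change for this to be something you absolutely "
        "couldn't live without?",
    "Not disappointed":
        "What's preventing [product] from being more useful for you? "
        "Is it missing something, or is it just not the right fit?",
}

def next_probe(pmf_answer):
    # "I no longer use it" gets no probe in this sketch
    return FOLLOW_UPS.get(pmf_answer)
```

The useful design point survives the simplification: each answer category has a distinct research goal (moat, roadmap, fit), so each deserves a distinct line of questioning.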
Sample Sizes and Timing
- Minimum sample: 40 responses from active users (used product in last 2 weeks)
- Ideal sample: 100-200 for segment-level analysis
- Timing: After users have had enough time to experience core value (typically 2-4 weeks of active use)
- Frequency: Quarterly for early-stage. Semi-annually post-PMF.
- Exclude: Churned users, free trial users who never activated, internal team members
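To see why 40 responses is a floor rather than a target, it helps to look at the margin of error around a 40% score at different sample sizes. This is a standard normal-approximation calculation, shown here purely for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation 95% margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (40, 100, 200):
    print(f"n={n}: 40% score has margin of error ±{margin_of_error(0.40, n):.1%}")
```

At n=40 the margin is roughly ±15 percentage points, so a measured 40% is statistically indistinguishable from 28%; at n=200 it tightens to about ±7 points. That is the quantitative case for the 100-200 ideal sample above.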
Analyzing PMF Results in Koji
Koji's report automatically generates:
Quantitative
- PMF score: % "Very Disappointed" with confidence interval
- Distribution chart: Breakdown across all four categories
- Segment analysis: PMF score by user cohort, plan, tenure, use case
- Trend tracking: How PMF score changes over time
Qualitative
- Core value themes: What do "very disappointed" users value most? Koji clusters their responses into actionable themes.
- Gap analysis: What do "somewhat disappointed" users need? These are your highest-ROI product investments.
- Alternative mapping: What would users switch to? This reveals your real competitive landscape.
- Improvement priority: Which requested improvements correlate most with higher PMF scores?
Best Practices
Survey the right people
Only survey users who have activated and used the product recently. New signups who haven't experienced core value will skew results downward. Users who churned 6 months ago have stale opinions.
Don't chase the number
The goal isn't to get to 40%. The goal is to understand who finds your product indispensable and why, then find more of those people and deepen the value for everyone else.
Segment ruthlessly
Your overall PMF score might be 30%, but your PMF among product managers who use feature X might be 65%. Find the segment with the strongest PMF and double down.
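Segment-level PMF is just the same score computed per cohort. A toy sketch with made-up response data (the segment labels and answers below are illustrative, not real results):

```python
from collections import defaultdict

# Toy data: each response pairs a segment label with a PMF answer.
responses = [
    {"segment": "Product manager", "answer": "Very disappointed"},
    {"segment": "Product manager", "answer": "Very disappointed"},
    {"segment": "Product manager", "answer": "Somewhat disappointed"},
    {"segment": "Designer",        "answer": "Not disappointed"},
    {"segment": "Designer",        "answer": "Very disappointed"},
]

def pmf_by_segment(rows):
    """Share of 'Very disappointed' answers within each segment."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [very_disappointed, total]
    for r in rows:
        counts[r["segment"]][1] += 1
        if r["answer"] == "Very disappointed":
            counts[r["segment"]][0] += 1
    return {seg: vd / total for seg, (vd, total) in counts.items()}

print(pmf_by_segment(responses))
```

In this toy data, product managers score 67% against an overall 60%; with real samples the spread between segments is often much wider, and that spread is the signal to act on.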
Combine quant and qual
This is Koji's superpower. The PMF number tells you where you stand. The conversation tells you what to do about it. No other tool gives you both in a single, natural interaction.
Why Koji Is Ideal for PMF Research
- Conversational depth that traditional PMF survey tools can't match
- AI-driven segment analysis connecting PMF scores to user characteristics
- Probing that adapts based on the PMF answer, automatically exploring what matters most
- Scale without sacrifice since you can run 500 PMF conversations in the time it takes to schedule 5 user interviews
- Quote extraction that gives you the exact customer language to use in marketing
- Mixed methods in one flow combining the quantitative PMF metric with qualitative understanding
Related Articles
How to Run Usability Testing Surveys That Improve Your Product
The complete guide to usability testing surveys and post-task questionnaires. Learn how to combine SUS scores, task success rates, and conversational feedback to identify exactly where your UX breaks down.
How to Build an Onboarding Survey That Reduces Time-to-Value
The complete guide to user onboarding surveys and experience feedback. Learn how to identify friction points, measure activation milestones, and optimize the first-run experience using Koji's conversational feedback.
How to Run Pricing Research Surveys: Van Westendorp, Gabor-Granger, and Conjoint Analysis
The complete guide to pricing research methodologies. Learn how to determine optimal price points using Van Westendorp, test price sensitivity with Gabor-Granger, and combine quantitative pricing data with qualitative value perception using Koji.
How to Run Feature Prioritization Surveys That Build Products Users Actually Want
Learn how to run feature prioritization surveys using RICE, Kano, MoSCoW, and opportunity scoring frameworks. Combine quantitative ranking with AI-driven qualitative depth to build what users truly need.
How to Collect Beta Testing Feedback That Ships Better Products
Learn how to design beta testing feedback surveys that catch bugs, validate features, and gather early adopter insights. Combine structured SUS scoring with conversational AI follow-up for richer beta data.
How to Run Concept Testing Surveys Before You Build the Wrong Thing
Learn how to run concept testing surveys using monadic and sequential designs, concept scoring frameworks, and purchase intent scales. Use AI-driven interviews to uncover hidden objections before you invest.