
Solution Interviews: How to Validate Product Ideas Before You Build

A practical guide to running solution interviews — the second half of customer development — to test whether your proposed solution actually solves a real, painful problem.

Solution interviews are the conversation you have after problem interviews and before you write a line of code: you put a low-fidelity version of your proposed solution in front of qualified prospects to test whether it solves a real problem at a price they will actually pay. Steve Blank's customer development model splits problem and solution interviews into two distinct phases for a reason — problem interviews tell you whether the pain is real; solution interviews tell you whether your answer is the right one. Skipping either step is the most common reason validated problems still produce dead products.

This guide covers when to run solution interviews, the structure of a high-signal solution interview, the four signals you are looking for, and how to run 20+ solution interviews in a week using Koji's AI moderator instead of stacking your calendar with 30-minute calls.

What is a solution interview?

A solution interview is a structured conversation in which you present a hypothetical or low-fidelity solution to a customer who has already confirmed they have the underlying problem. The goal is not to sell. The goal is to learn whether the solution sparks the kind of intense, behavioral interest that predicts purchase — and to surface the objections that will kill conversion if you ignore them.

Solution interviews live inside Steve Blank's customer development methodology, the four-step process — Customer Discovery → Customer Validation → Customer Creation → Company Building — that underpins the lean startup. Inside the discovery step, the sequence is:

  1. State the hypothesis — who has the problem, what pain it causes.
  2. Test the problem — problem interviews to confirm the pain is real, frequent, and urgent.
  3. Test the solution — solution interviews to confirm your proposed answer fits.

The artifacts you bring to a solution interview are deliberately rough: a clickable prototype, a static landing page, a hand-drawn user flow, or even a written description. Fidelity is the enemy here. The lower the fidelity, the easier it is for the customer to push back on the underlying idea instead of getting distracted by visual polish.

When to run solution interviews

Run solution interviews after at least 10 problem interviews have produced strong signal that the pain is real, but before you have invested any meaningful engineering time. The right moment is when:

  • You can describe the problem in the customer's own words.
  • You have at least one specific solution hypothesis (not "an app for X" — a concrete proposal with at least three named features).
  • You can list two or three plausible alternatives the customer could pick instead.
  • You have not yet committed to building.

If you have already shipped, you are past solution interviewing — what you need now is usability testing, concept testing, or continuous discovery.

Solution interview structure (the 5-section script)

A solid solution interview runs about 30 minutes and follows this arc:

1. Re-confirm the problem (5 min)

Don't assume the participant remembers the problem context. Open by asking them to walk through the last time they faced the underlying pain. This re-anchors the conversation in their reality before you introduce your solution. If they struggle to recall a specific instance, you're talking to the wrong person — politely end the interview.

2. Present the solution at low fidelity (5 min)

Show the prototype, sketch, or description. Keep it brief, neutral, and free of marketing language. Resist the urge to "sell" or list benefits — describe what the solution does and let the participant react. Two sentences is plenty.

3. Capture the unguided reaction (10 min)

This is the most valuable section. Open with "Tell me what you're thinking" and then shut up. Long silences are fine. The first 60 seconds of unguided reaction is where genuine signal lives — once you start prompting, you're leading. Probe only on specifics ("you said it's 'interesting' — interesting how?") not on judgments.

4. Test pricing and commitment (5 min)

This is what separates solution interviews from concept tests. You are not asking "would you buy this?" — you are testing for behavioral commitment. Three questions in increasing order of power:

  • Price anchor: "What would you expect something like this to cost?"
  • Commitment ask: "If I had this ready in 30 days, would you give me an email address to be notified at launch?"
  • Pre-order ask (advanced): "Would you put a $1 deposit on it today?"

A verbal "yes" is worth almost nothing. An email address is worth a little. A credit card is worth a lot. The whole point of the pricing section is to escalate toward an action that costs the participant something, however small.

5. Surface alternatives and objections (5 min)

Close by asking what they would do if your solution didn't exist, what would stop them from using it, and who else they'd need to convince. The answers tell you about competitive alternatives (often surprising) and adoption blockers you haven't thought of.

The four signals you are looking for

A solution interview produces four signals, in roughly increasing order of value:

  1. Comprehension. Did the participant understand what the solution does without you explaining twice? If not, your positioning is broken.
  2. Excitement. Did they ask follow-up questions or volunteer use cases? Polite interest is not excitement.
  3. Pull, not push. Did they try to extract more information from you, or did you have to keep the conversation alive? Pull is the strongest qualitative signal of fit.
  4. Behavioral commitment. Did they hand over an email, a deposit, a calendar invite, or an introduction to a buyer? This is the only signal that survives optimism bias.

If you walk away from 10 solution interviews with comprehension and politeness but no pull and no commitment, your solution does not fit — and no amount of polishing the wireframes will change that.
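If you tag each interview against these four signals as you complete it, the go/no-go read falls out of simple arithmetic. Here is a minimal sketch of that tally — the weights, thresholds, and verdict strings are illustrative assumptions for this article, not a Koji feature:

```python
from dataclasses import dataclass

@dataclass
class Interview:
    comprehension: bool  # understood without a second explanation
    excitement: bool     # asked follow-ups / volunteered use cases
    pull: bool           # tried to extract more information from you
    commitment: bool     # email, deposit, calendar invite, or intro

def fit_readout(interviews: list[Interview]) -> str:
    n = len(interviews)
    pull = sum(i.pull for i in interviews)
    commit = sum(i.commitment for i in interviews)
    # Comprehension gates the result; pull and commitment decide it.
    if sum(i.comprehension for i in interviews) < 0.8 * n:
        return "positioning broken: fix the pitch before judging fit"
    if commit >= 0.3 * n and pull >= 0.5 * n:
        return "strong fit signal: proceed to build"
    if pull >= 0.3 * n:
        return "weak pull: iterate the solution, re-test"
    return "no fit: polite interest only"

# Ten interviews: six with pull and commitment, four politely interested.
batch = [Interview(True, True, True, True)] * 6 + \
        [Interview(True, True, False, False)] * 4
print(fit_readout(batch))  # → strong fit signal: proceed to build
```

The key design choice mirrors the prose: excitement and comprehension never produce a "build" verdict on their own, because only pull and behavioral commitment survive optimism bias.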

Solution interview vs. problem interview vs. concept test

|  | Problem Interview | Solution Interview | Concept Test |
| --- | --- | --- | --- |
| When | Before any solution exists | After problem is validated, before build | After build, before launch |
| You bring | Open questions about the problem | Low-fidelity prototype + hypothesis | Working concept or feature |
| Looking for | Real, frequent, painful problem | Pull and commitment signals | Comprehension and usability |
| Sample | 10–20 | 15–30 | 50–200 |
| Risk if skipped | Build for a fake problem | Build the wrong solution | Launch a confusing product |

For a deeper read on the adjacent stages, see customer discovery interviews (problem interviews) and concept testing methodology (post-build).

How to run 20+ solution interviews in a week with Koji

The traditional knock against solution interviews is calendar gravity. Booking 20 separate 30-minute calls eats two weeks of founder time and self-selects toward people willing to take a stranger's call — usually not your target customer.

Koji removes the scheduling layer entirely:

  • Embed your prototype or solution mock as context inside the study so the AI moderator can describe it to each participant in their language.
  • Run text or voice mode. Voice produces richer reactions; text scales to participants who don't want a call. See Voice vs Text Interview for when to use each.
  • Three-layer AI probing ensures every "interesting" gets followed by "interesting how, specifically?" — the question that converts polite interest into real signal. The mechanics are documented in the AI Probing Guide.
  • Six structured question types let you embed price anchoring (scale), commitment asks (yes/no), and alternative ranking (ranking) without breaking the conversational flow.
  • Personalized links from your CRM mean every interview opens with the participant's name, company, and prior context already loaded.
  • Real-time results mean you don't wait for field to close — the report updates as each interview completes, so you can iterate the prototype mid-study and run a second batch by Friday.

A founding team using Koji can ship a tight problem-then-solution sequence — 12 problem interviews, 20 solution interviews, opportunity scoring — in two weeks of elapsed time. That is less time than most teams spend deciding what to build.

Common solution interview mistakes

Solution interviews fail for predictable reasons. The big four:

  • Showing high-fidelity mocks. Beautiful designs invite design feedback instead of solution feedback. Stay rough.
  • Pitching instead of presenting. "And the best part is…" is a tell. Describe and then shut up.
  • Skipping the commitment ask. Verbal interest without behavioral commitment is the most-cited source of false positives in early-stage validation.
  • Talking to the wrong people. Solution interviews are only as honest as your screener. If you can't qualify against the underlying problem, your participants will be polite about a solution they will never need.

For a broader catalog of failure modes, see User Research Mistakes.

Related Articles

Voice vs Text Interview: When to Use Each Mode

Choosing between voice and text mode for your AI interview? This guide breaks down response depth, completion rate, audience fit, and cost — plus a decision matrix that tells you which mode wins for each research scenario.

How Koji's AI Follow-Up Probing Works: Going Deeper Than Any Survey

Understand how Koji's AI interviewer automatically asks follow-up questions to go deeper on every answer — and how to configure probing depth, custom instructions, and anchor behavior for scale questions.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Concept Testing: The Complete Methodology Guide

How to evaluate product and marketing ideas with target audiences before development — covering methods, metrics, sample sizes, and AI-powered approaches.

Prototype Testing and Concept Validation: A Researcher's Complete Guide

Learn how to validate product concepts and prototypes through research interviews before committing to build. Covers when to use each approach, question frameworks, and how AI interviews scale concept validation 10x faster.

Customer Development: The Complete Guide to Steve Blank's 4-Step Methodology (2026)

Master Steve Blank's Customer Development methodology — Discovery, Validation, Creation, and Company Building. Learn the framework that prevents the #1 reason startups fail and how AI-native research platforms like Koji compress months of customer interviews into days.

The Mom Test: How to Talk to Customers Without Being Misled

Learn Rob Fitzpatrick's Mom Test methodology to ask questions that even your mother can't lie to you about.

User Research Mistakes: 14 Pitfalls That Sabotage Your Insights (2026)

The most common user research mistakes that lead to misleading insights — and how to avoid each one with better methodology and AI-powered interviews.

Customer Discovery Interviews: The Complete Guide

Learn how to conduct customer discovery interviews to validate your product ideas before building. Covers Steve Blank methodology, question frameworks, sample sizes, and common mistakes.

How to Validate Product-Market Fit Through Qualitative Interviews

Learn how to design and run customer interviews specifically focused on measuring and moving your product-market fit score.