
Customer Discovery Workshop: The Step-by-Step Playbook with Templates (2026)

A complete, time-boxed customer discovery workshop playbook — agenda, exercises, templates, and the AI-native interview pipeline that turns workshop hypotheses into evidence within days. Designed for founders, product trios, and discovery teams running their first or fiftieth workshop.

A customer discovery workshop is a structured 1–2 day session where a cross-functional team converges on three things: who the customer is, what problem you are solving for them, and how you will validate the answer with real customer evidence in the next 30 days. It is the most reliable way to prevent the team from spending six months building the wrong thing — and it pays for itself many times over because, as Steve Blank's research on customer development has popularized, every hour of customer discovery saves an estimated 5–20 hours of wasted development time.

This guide gives you a full agenda for a one-day customer discovery workshop, the exercises that drive convergence, the artifacts you should leave with, and the post-workshop interview pipeline that turns hypotheses into evidence — including how AI-native research with Koji collapses the validation cycle from weeks to days.

TL;DR — what a customer discovery workshop produces

By the end of a well-run workshop you should walk out with five concrete artifacts:

  1. A shortlist of customer segments with one prioritized "primary."
  2. A problem statement for that segment, in the form "When [situation], [primary] wants to [job], so they can [outcome]."
  3. A list of riskiest assumptions ranked by impact and uncertainty.
  4. An interview plan — who to talk to, what to ask, how many.
  5. A 30-day validation timeline with named owners.

Workshops that produce only a vision board and a workshop photo are workshops that failed. The deliverable is decisions, not energy.

When to run a customer discovery workshop

Three triggers make it worth pausing for a workshop:

  • Pre-build. You have an idea or a roadmap bet and you have not yet committed engineering capacity. A workshop now saves quarters later.
  • Mid-pivot. Your data is telling you the current direction is wrong, and the team is misaligned on what to do next.
  • Post-traction stagnation. You hit early product-market fit and have plateaued. The workshop reframes who your next customer is, not your current one.

Avoid running a discovery workshop when the team has already committed publicly to a direction — workshops cannot un-commit a roadmap, and the result is theater. The pre-condition is genuine willingness to change course based on what you learn.

Who should be in the room

Five to nine people is the right size. Larger groups dilute decisions; smaller groups miss perspectives. According to product discovery practitioners, the optimal workshop size is "5-10 participants to maintain engagement while ensuring diverse perspectives."

The required roles:

  • Product manager — owns the synthesis and the decisions.
  • Designer — represents the user and the experience.
  • Engineer — pushes back on what is and isn't actually buildable.
  • Founder or executive sponsor — present so decisions don't unravel after the workshop.
  • Customer-facing voice — sales, customer success, or support; they have heard more customer language than the rest of the room combined.

Optional but strongly recommended: one or two real customers in at least one session. Most workshops fail because the team imagines the customer rather than listening to one.

A one-day customer discovery workshop agenda

The full version below assumes a single-day workshop (8 hours including breaks). Compress to a half-day by skipping Blocks 3 and 7; expand to two days by adding an interview-script-writing block on day two.

Block 1 (45 min): Frame the question

Start by writing the One Question the workshop is convening to answer. Examples:

  • "Which of three candidate customer segments should we build for first?"
  • "Why are paid signups dropping off in week two, and what would change it?"
  • "Should our next product bet target buyers, end-users, or admins?"

Without a single named question, every exercise drifts. Pin the question to the wall. Every decision must reduce uncertainty about it.

Block 2 (60 min): Customer segment mapping

List every customer segment the team has ever talked about. For each, fill in:

  • Who they are (job title, company size, life situation)
  • What they currently use to solve the problem
  • How urgent the problem is for them (1–5)
  • How much they are willing to pay (range)
  • How easy they are to reach (1–5)

Score on urgency × reachability. The segment with the highest combined score is your primary. The runner-up is your fallback if the primary doesn't hold up to validation.
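If you want the scoring to be explicit rather than eyeballed, the arithmetic takes only a few lines. A minimal sketch in Python; the segment names and ratings here are invented for illustration, not real data:

  # Block 2 segment prioritization: urgency x reachability, both rated 1-5.
  # Segment names and ratings below are hypothetical placeholders.
  segments = [
      {"name": "Solo founders",        "urgency": 5, "reachability": 3},
      {"name": "Product trios",        "urgency": 4, "reachability": 4},
      {"name": "Enterprise PMO leads", "urgency": 3, "reachability": 2},
  ]

  for s in segments:
      s["score"] = s["urgency"] * s["reachability"]

  ranked = sorted(segments, key=lambda s: s["score"], reverse=True)
  primary, fallback = ranked[0], ranked[1]
  print(f"Primary:  {primary['name']} (score {primary['score']})")
  print(f"Fallback: {fallback['name']} (score {fallback['score']})")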

Block 3 (60 min): The Job to Be Done

For the primary segment, write the Jobs to Be Done statement: "When [situation], [persona] wants to [functional job], so they can [emotional/social outcome]."

Push the team to write at least three drafts. The first draft is always too generic; by the third the team has converged on the specifics that make the job different from competitors' interpretations.

Block 4 (lunch — but you are still working)

Working lunch. Spend 30 minutes individually reading existing customer transcripts, support tickets, and sales call notes. Each person flags one quote that surprised them. Share back over coffee.

Block 5 (75 min): Assumption mapping

List every assumption the current direction depends on. Categorize as:

  • Customer assumptions: "They have this problem." "They will pay." "They notice it weekly."
  • Solution assumptions: "They will use this feature." "This UI will make sense to them."
  • Business assumptions: "We can reach them via channel X." "They will renew."

Plot each assumption on a 2x2: impact (high/low) × certainty (high/low). The high-impact, low-certainty quadrant is your validation backlog.
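The quadrant logic is easy to capture in a few lines if the team tracks assumptions in a sheet or script rather than on stickies. A sketch with invented assumptions, purely illustrative:

  # Block 5 assumption mapping: keep only high-impact, low-certainty items.
  # The assumptions listed here are hypothetical examples.
  assumptions = [
      ("They notice the problem weekly",      "high", "low"),
      ("They will pay for a standalone tool", "high", "low"),
      ("This UI will make sense to them",     "low",  "low"),
      ("We can reach them via channel X",     "high", "high"),
  ]

  validation_backlog = [
      statement
      for statement, impact, certainty in assumptions
      if impact == "high" and certainty == "low"
  ]
  print("Validation backlog:", validation_backlog)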

Block 6 (45 min): Riskiest assumption test design

For the top three riskiest assumptions, write the test that would falsify each one. Each test must specify:

  • The signal that would prove the assumption true
  • The signal that would prove it false
  • The minimum sample size
  • The deadline

A good test is one where the team would genuinely change direction based on the result. If the test cannot change the answer, it is theater.
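One way to enforce that discipline is to refuse any test that cannot be written down with all four fields filled in. A minimal sketch of such a record; the field names mirror the checklist above and the values are placeholders:

  # Block 6 riskiest-assumption test record; values are hypothetical.
  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class AssumptionTest:
      assumption: str
      signal_if_true: str      # what would prove the assumption true
      signal_if_false: str     # what would prove it false
      min_sample_size: int
      deadline: date

  test = AssumptionTest(
      assumption="The primary segment will pay for a standalone tool",
      signal_if_true="3+ of 5 interviewees describe budget already spent on workarounds",
      signal_if_false="Interviewees describe the problem as an occasional annoyance at most",
      min_sample_size=5,
      deadline=date(2026, 3, 31),
  )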

Block 7 (45 min): Interview plan

For the riskiest customer-facing assumptions, design the discovery interviews: who to talk to, what to ask, and how many per persona.

The default sample size is 5 interviews per persona — Steve Blank, Rob Fitzpatrick, and Teresa Torres all converge on roughly this number for early-stage discovery. Run more if signals are noisy.

Block 8 (30 min): Commit to a 30-day timeline

Before anyone leaves, name owners and dates for:

  • Recruiting interviews (who, by when)
  • Running interviews (target completion date)
  • Synthesizing findings (workshop reconvenes for results review)
  • Roadmap implications (decision deadline)

Workshops that don't produce a 30-day timeline produce nothing. The single most predictive variable for workshop ROI is whether someone owns the validation work the day after.

The exercises in more detail

How might we questions

After the assumption mapping, reframe the riskiest assumptions as How Might We questions to open up the solution space. Example: assumption "users will accept a 7-day onboarding" becomes "How might we shorten time-to-value such that users see results within 24 hours?"

Empathy mapping

For workshops with limited customer access, run an empathy map for the primary persona. It is a stand-in for real customer evidence — its purpose is to surface what the team thinks the customer feels and to expose the gaps that the upcoming interviews need to fill.

Opportunity solution tree

For mature teams running their second or third workshop, replace the segment mapping with an Opportunity Solution Tree. It produces a more structured connection between the desired outcome, the customer opportunities, and the candidate solutions — at the cost of being less accessible to first-time workshop attendees.

The post-workshop validation pipeline

The workshop does not produce evidence. It produces a list of things to test. The next 30 days are where the actual validation happens — and this is where most workshop momentum dies, because traditional customer interview pipelines (recruit → schedule → moderate → transcribe → analyze) take 4–6 weeks to produce findings, by which point the team has moved on.

AI-native research changes the math. With a platform like Koji, the post-workshop pipeline collapses to:

  • Day 1: Convert each riskiest assumption into a study brief. Koji's AI consultant interprets the brief and generates discussion guide questions automatically — including structured questions for quantitative checkpoints (yes_no, scale, single_choice) and open-ended questions for the qualitative depth.
  • Day 2–7: Recruit asynchronously. Use existing customer lists, in-product intercepts, or external recruiters. Koji's AI moderates conversations 24/7 — participants complete their interview when convenient, with no scheduling friction.
  • Day 8–10: Real-time thematic analysis runs as interviews complete. The moment the last interview ends, the team has the report — themes, quality scores, verbatim quotes attached to each finding.
  • Day 11: Reconvene the workshop attendees. Run a 90-minute synthesis session against the report. Each riskiest assumption has now been confirmed, falsified, or refined.

That cadence is what makes the workshop produce decisions instead of decks. Teams using AI-assisted research tools report up to 60% faster time-to-insight, and on workshop pipelines specifically the savings are larger because the bottleneck (scheduling-bound moderation) is the part that AI removes most cleanly.

Common workshop mistakes

Treating the workshop as the deliverable. The workshop produces hypotheses. The validation produces evidence. A team that conflates these two ships beautiful workshops and bad products.

Over-staffing. A 14-person workshop is a town hall, not a discovery session. Cap at 9.

Skipping pre-reading. Send the team a customer transcript, three support tickets, and a 1-pager on existing data 48 hours before. The workshop is for synthesis, not first-time learning.

No customer in the room. If you cannot get a real customer for at least 30 minutes, send someone to record three interviews the week before and play 5-minute clips during Block 5.

Punting the assumption test. "Let's discuss this offline" is the death sentence for workshop momentum. Every assumption gets a test, an owner, and a date — or it is removed from the list.

No reconvening. Without a Day-30 reconvene, the workshop output sits unused. Schedule the reconvene before anyone leaves the room.

A reusable post-workshop study brief template

For each riskiest assumption, fill in:

  • Assumption being tested: [statement]
  • Confidence today: [1–5]
  • What would change confidence: [evidence that would prove or falsify]
  • Target persona: [primary segment from Block 2]
  • Sample size: [5 for directional; 12+ for confident reads]
  • Methodology: [Mom Test, JTBD switch interview, problem exploration]
  • Owner: [name]
  • Synthesis date: [date]

Drop this template into Koji's research brief flow and the AI consultant will generate the discussion guide automatically. The team's job becomes reviewing and adjusting, not writing from scratch.
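If the team keeps briefs in version control, the same template translates directly into a structured record that can be pasted into whichever brief flow you use. A sketch: field names follow the template above, values are placeholders, and this is an illustrative structure, not Koji's actual brief schema.

  # Post-workshop study brief as data; values are placeholders, not a real study.
  study_brief = {
      "assumption":           "Users will accept a 7-day onboarding",
      "confidence_today":     2,            # 1-5
      "what_would_change_it": "Interviewees describe abandoning setup before day 3",
      "target_persona":       "Primary segment from Block 2",
      "sample_size":          5,            # 5 for directional; 12+ for confident reads
      "methodology":          "Mom Test problem exploration",
      "owner":                "Owner name",
      "synthesis_date":       "2026-04-15",
  }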

Related Articles

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Open-Ended Interview Questions: 100+ Examples and How to Use Them

A comprehensive library of open-ended interview questions for product discovery, UX research, customer feedback, employee experience, and more — plus how to write your own.

Empathy Map: The Complete Guide to Building User Empathy

Learn how to create an empathy map from scratch — the 6-section framework, step-by-step process, common mistakes, and how AI-powered interviews with Koji give you richer empathy data in less time.

Customer Development: The Complete Guide to Steve Blank's 4-Step Methodology (2026)

Master Steve Blank's Customer Development methodology — Discovery, Validation, Creation, and Company Building. Learn the framework that prevents the #1 reason startups fail and how AI-native research platforms like Koji compress months of customer interviews into days.

The Mom Test: How to Talk to Customers Without Being Misled

Learn Rob Fitzpatrick's Mom Test methodology to ask questions that even your mother can't lie to you about.

How Might We Questions: The Complete Framework for Turning Insights Into Innovation Opportunities

Master the How Might We (HMW) question framework — its origin from Min Basadur and IDEO, the linguistic logic of why it works, the seven HMW patterns, common mistakes, real examples like P&G Coast, and how AI-native research lets you generate sharper HMWs from real customer evidence.

Jobs to Be Done Framework: The Complete Guide

The definitive guide to the Jobs to Be Done (JTBD) framework — its history, two schools of thought, how to write JTBD statements, famous examples, how to conduct JTBD research, and how AI interviews enable JTBD at scale.

Research Brief Template: How to Define Your Research Before You Start

A complete research brief template with sections for problem context, participant profile, methodology, and success criteria — the foundation of any effective user research project.

Opportunity Solution Tree: The Complete Guide to Continuous Product Discovery

Learn how to build and use the Opportunity Solution Tree (OST) framework — Teresa Torres' visual map for connecting business outcomes to validated customer solutions through continuous discovery. Includes step-by-step instructions, templates, and how Koji automates the evidence-collection process.

Continuous Discovery: How to Run Weekly Customer Interviews Without Burning Out

Continuous discovery is the practice of conducting customer interviews every week as part of your normal workflow. This guide explains how to build an always-on research practice that actually scales.