The Continuous Discovery Handbook: How Product Teams Run Weekly Customer Interviews (2026)
64% of software features are rarely or never used. Continuous discovery — weekly customer interviews baked into your product workflow — is the antidote. Here's how to implement it in practice.
Koji Team
April 4, 2026
64% of software features are rarely or never used.
That statistic, first reported by the Standish Group and later corroborated by Pendo's analysis of 6,800 products, represents roughly $29.5 billion in wasted R&D annually. Teams built things. They shipped them. Customers ignored them.
The root cause is not poor engineering. It is disconnected product development: teams making roadmap decisions based on assumptions, stakeholder requests, and internal opinions — without a continuous signal from the people they are building for.
Continuous discovery is the antidote. And for modern product teams, AI is finally making it operationally achievable.
What Is Continuous Discovery?
Teresa Torres, product coach and author of Continuous Discovery Habits (2021), defines it precisely:
"At a minimum, weekly touchpoints with customers by the team that's building the product, where they conduct small research activities in pursuit of a desired product outcome."
The key words are weekly, the team (not just the researcher), and in pursuit of a desired outcome (not generic insight-gathering).
Continuous discovery is not a quarterly research project. It is not a usability test before shipping. It is a structural habit built into how product teams operate every single week.
The current reality falls far short of this standard. According to the UserInterviews State of User Research 2024, only 44% of teams conduct continuous research. The majority — 56% — do research only project-by-project or not at all. Meanwhile, 46% of product strategy is driven by senior leadership direction or sales and support feedback, not customer data (ProductPlan, 2024).
This is the gap continuous discovery closes.
The Opportunity Solution Tree
Teresa Torres developed the Opportunity Solution Tree (OST) as the primary organizing tool for continuous discovery. It maps four levels:
- Desired outcome — the business or product metric your team is trying to move
- Opportunity space — customer needs, pain points, and desires discovered through interviews
- Solutions — product ideas mapped to specific opportunities
- Assumptions and experiments — the cheapest test to validate each solution before building
The OST prevents the most common failure mode in product development: jumping from "a customer mentioned this" directly to "let's build it," skipping the step of understanding whether that need is widespread, addressable, and worth solving.
Without this structure, teams build features that solve one person's problem — or worse, features that nobody specifically requested but that seemed logical in a planning meeting.
The Weekly Interview Habit
The foundational practice is simple: one customer interview per week, every week.
Not one per sprint. Not one before a major release. Every week.
Torres argues this cadence is achievable by any product team, even those without dedicated research staff. One interview per week generates approximately 50 customer conversations per year — a meaningful, growing knowledge base that informs decisions in real time.
Unlike episodic research (big sprint, long gap, another sprint), weekly interviews give your team live signal that can influence decisions as they are being made. The team that interviewed three customers this week has better context for tomorrow's prioritization meeting than the team that ran a big study three months ago.
A 2022 CDH Benchmark Survey of nearly 2,000 respondents found that even in Teresa Torres' own practitioner community — which skews heavily toward teams trying to do this well — only 45% had talked to a customer in the past week. The gap between intention and practice is real. But it is closeable.
Setting Up Your Continuous Discovery System
Step 1: Define Your Outcome
Before scheduling any interviews, name the specific metric your team is trying to move. "Improve the product" is not an outcome. "Increase week-2 retention from 42% to 55%" is an outcome.
Your interviews should be focused on understanding the customer behaviors, needs, and pain points that connect to this outcome. Unfocused discovery generates interesting notes. Outcome-focused discovery generates decisions.
Step 2: Build Your Interview Pipeline
The operational bottleneck for most teams is finding people to talk to. The UserInterviews State of User Research 2024 found that 61% of researchers struggle with recruiting — and for teams without a dedicated researcher, this is even harder.
Strategies that work in practice:
In-product recruitment: Intercept users after a key action. "You just completed your first project — would you answer a few questions?" Convert a small percentage into interview participants on an ongoing basis.
Customer success referrals: CS teams talk to customers constantly. A monthly ask — "Can you introduce us to someone willing to talk to the product team?" — builds a steady pipeline over time.
Research panel building: After any interview, ask participants if they would be willing to talk again in 3–6 months. Most say yes. This builds your own private panel without paying for a recruitment panel.
AI-powered async interviews: Configure an AI interviewer to run conversations at scale — participants complete interviews on their own schedule, without any scheduling or moderator overhead. Koji is built specifically for this model.
Step 3: Run Story-Based Interviews
The most common error in product team interviews: asking customers what they want. According to Nielsen Norman Group research, leading questions, closed questions, and hypothetical questions are the three most frequent mistakes — and all three produce unreliable data.
Story-based interviews focus on past behavior, not future intentions:
- "Tell me about the last time you [did the thing you are researching]."
- "What were you trying to accomplish?"
- "What made that hard?"
- "What did you do to work around it?"
- "What did that cost you in time or money?"
These questions surface the actual opportunity space — real problems customers have encountered and cared enough to work around — rather than wish lists that often do not reflect actual behavior or willingness to pay.
Step 4: Update Your Opportunity Solution Tree
After each interview, spend 15–20 minutes updating your OST:
- What new opportunities (customer needs, pain points, desires) did you hear?
- Do they map to existing opportunities on your tree, or are they genuinely new?
- Did any interviews challenge or invalidate existing assumptions?
- Do new opportunities suggest solution directions worth exploring?
The OST is a living document. It should evolve weekly as you gather new signal. Teams that use the OST as a static artifact defeat its purpose.
Step 5: Test Assumptions Before Building
For every solution on your OST, identify the riskiest assumption — the one that, if wrong, would cause the solution to fail. Design the cheapest possible test:
- A landing page that describes the feature and measures sign-ups
- A prototype walkthrough with three customers
- A "fake door" button in the UI that surfaces intent before the feature is built
- A short async interview asking specifically about the assumption
Research shows teams can validate 80% of critical assumptions using experiments that cost just 5–10% of full development investment. Teams using this approach validate assumptions 5–8x faster than teams that build complete solutions before testing.
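One way to keep a fake-door test honest is to fix the success threshold before shipping the button, then evaluate the result against it mechanically. The function below is a hypothetical sketch of that evaluation; the traffic numbers and the 3% threshold in the example are invented.

```python
def fake_door_result(views: int, clicks: int, threshold: float) -> dict:
    """Summarize a fake-door test against a pre-registered intent threshold."""
    if views <= 0:
        raise ValueError("need at least one view to evaluate the test")
    rate = clicks / views
    return {
        "click_rate": round(rate, 4),
        "threshold": threshold,
        "validated": rate >= threshold,
    }

# 2,000 users saw the button, 38 clicked; the team required 3% intent to build.
result = fake_door_result(views=2000, clicks=38, threshold=0.03)
print(result)  # click rate 1.9%, below the 3% bar: assumption not validated
```

Writing the threshold down first is the point. Deciding what "enough intent" means after seeing the data invites the confirmation bias discussed below.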
The Real Cost of Skipping Discovery
The data on what happens when teams skip continuous discovery is sobering.
$29.5 billion in wasted R&D annually — Pendo's analysis of 6,800 products found that only 6.4 out of every 100 features drive 80% of user activity. Best-in-class products reach 15.6 per 100. The rest is shipped and ignored.
40–50% of development effort goes to features customers rarely or never use, according to multiple studies cited by the Standish Group and corroborated by Pendo's benchmarks.
$2 trillion in annual global software rework costs — the downstream cost of building without validating. Teams that invest 2–3 days in rigorous problem framing upstream reduce scope changes by 60%.
The 1:10:100 Rule, widely cited across product and engineering disciplines, captures the underlying logic: every $1 spent on research upstream saves $10 in development costs and $100 in post-launch support. This is not a precise formula — it is a directional principle that every experienced product leader has felt.
How AI Is Transforming Continuous Discovery in 2026
The biggest barrier to continuous discovery is operational: finding time for recruiting, scheduling, note-taking, and synthesis. These are not strategic activities, but they consume the hours most teams do not have.
According to the UserInterviews State of User Research 2025, AI usage in research jumped from 56% to 80% in a single year — the fastest adoption rate in any research practice area. The most common applications:
- Automated transcription (the first and most universal use case)
- Thematic synthesis across multiple interviews simultaneously
- AI-generated discussion guides and research briefs
- Automatic tagging and clustering of feedback across sources
For teams practicing continuous discovery, AI interview platforms like Koji change the operational model entirely. Instead of scheduling and moderating individual sessions, you configure your study once and the AI interviewer conducts conversations asynchronously. Thematic analysis runs automatically across all sessions — not one at a time, but across your entire interview dataset.
Dovetail reported that customers including Amazon, Canva, Meta, Notion, and Mayo Clinic saved over 38 hours per week using AI-powered research features. Manual interview synthesis — which typically takes 2–3 hours per session — compresses to minutes.
Teams using AI-assisted research are running 5x more studies in the same time window as teams using traditional manual methods, according to early 2025 data from Sprig.
This does not replace the human judgment needed to interpret findings and make product decisions. But it removes the operational friction that causes most teams to do less research than they know they should.
Common Pitfalls in Continuous Discovery
Pitfall 1: Interviewing for confirmation, not discovery
The most dangerous interview is one where you go in believing you already know the answer. Confirmation bias — filtering what you hear through your existing hypothesis — is the dominant error in founder- and PM-run research (documented extensively by Rob Fitzpatrick in The Mom Test). The antidote: write down your assumptions before the interview, then actively look for evidence that contradicts them.
Pitfall 2: Building an insight graveyard
Teams that do research but do not connect it to decisions create an insight graveyard — a growing archive of reports nobody reads. The OST prevents this by mapping every opportunity to a specific product outcome. If an insight does not connect to a desired outcome, it is filed, not acted upon.
Pitfall 3: Researcher as gatekeeper
One of Torres' most important contributions is the principle that the entire product trio — PM, designer, and engineer — should attend customer interviews, not just the researcher. Only 16.8% of product trios reported that all three members participated in their last interview (CDH Benchmark Survey, 2022). Engineers who have directly heard a customer's pain point make very different technical decisions than engineers who received a secondhand summary.
Pitfall 4: Waiting for certainty before acting
Some teams fall into analysis paralysis — waiting until they have heard the same thing from 20 customers before acting. Continuous discovery is not about eliminating uncertainty. It is about maintaining a steady signal that reduces uncertainty over time. Ship small bets, measure, learn, iterate.
Measuring the Impact of Continuous Discovery
According to the Maze State of User Research 2024, teams that conducted research bi-weekly or more frequently were 2.4x more likely to report high confidence in their product roadmap decisions.
Teams with higher research frequency also correlated with stronger organizational buy-in: 75% of active research teams reported high peer buy-in (up from 66% the prior year), and 57% had strong leadership buy-in (up from 43%), per UserInterviews 2024 data.
The simplest case for continuous discovery is the one built from absence: companies that do not talk to customers regularly build the wrong things. The Standish Group documented this in 2002. Pendo confirmed it in 2024 with 6,800 products. The teams that avoid the $29.5 billion waste problem are the ones that maintain the habit.
Getting Started This Week
You do not need to redesign your entire research practice to start. One interview this week is better than a perfect system next quarter.
- Name your current outcome: What metric is your team trying to move in the next 90 days?
- Identify three customers to talk to: CS referrals, in-product intercept, or LinkedIn outreach.
- Schedule 30 minutes each: Use the story-based questions above. Take notes. Do not pitch.
- Update or create your OST: Map what you heard to opportunities.
- Repeat next week.
For teams that want to scale this habit without adding moderator overhead, Koji's AI interviewer lets you run continuous research asynchronously — gathering and synthesizing customer conversations without scheduling a single call. You get from question to insight in hours, not weeks.
Start your first continuous discovery study on Koji →
Frequently Asked Questions
What is continuous discovery in product management?
Continuous discovery is the practice of running at minimum one customer interview per week, conducted by the full product team, to maintain a persistent feedback loop with real customers. Defined by Teresa Torres in Continuous Discovery Habits (2021), it uses an Opportunity Solution Tree to connect customer insights to specific product outcomes.
How many teams actually practice weekly customer interviews?
According to the UserInterviews State of User Research 2024, only 44% of teams conduct continuous research. The majority still research on a project-by-project basis. Even in Teresa Torres' own practitioner community, only 45% had talked to a customer in the past week.
Why do most teams struggle to maintain a continuous discovery cadence?
Time is the top barrier (cited by 43% of teams in the Maze 2024 survey), followed by difficulty recruiting the right participants. Manual recruiting, scheduling, and synthesis each consume significant hours. AI-powered interview tools are now the primary way teams reduce this overhead — AI adoption in research reached 80% in 2025.
What is the difference between continuous discovery and traditional user research?
Traditional research is project-based: a big study at the start, perhaps a usability test before launch. Continuous discovery is an always-on practice that generates ongoing insight as the product evolves. Traditional research informs a specific decision; continuous discovery shapes how the team thinks about every decision.
How does Koji support continuous discovery?
Koji's AI interviewer conducts voice and text conversations with participants asynchronously, without requiring a human moderator. Automatic thematic analysis surfaces patterns across all interviews simultaneously. This lets product teams maintain a weekly research cadence without adding scheduling or synthesis overhead to their workflow.