Customer Interview Cadence: How Often Should You Talk to Users? (2026)
Set the right customer interview cadence for your team — from one interview a week (Teresa Torres' baseline) to daily continuous discovery — and learn how AI moderation makes higher cadences sustainable.
The right customer interview cadence is at least one interview per week per product team — and most teams should be running three to five. Teresa Torres' continuous discovery research is unambiguous on the lower bound: if you go a full month without talking to a customer, you spend that month making decisions on stale assumptions. The upper bound is set by what you can synthesize: there is no point collecting more conversations than you can read. With AI moderation eliminating the scheduling cost, the practical sweet spot for a product trio in 2026 is 3–10 customer interviews per week, run async, with synthesis happening in real time as each interview completes.
This guide gives you cadence benchmarks by team size and stage, a decision framework for picking yours, and the operational playbook for sustaining whatever cadence you choose.
TL;DR cadence benchmarks
| Team / stage | Minimum | Healthy | High-performing |
|---|---|---|---|
| Pre-PMF founder | 5 / week | 10 / week | 15+ / week |
| Product trio (PM + design + eng) | 1 / week | 3 / week | 5–10 / week |
| Mature product team | 1 / week | 2–4 / week | 5+ / week |
| UX research function | 5 / week | 10 / week | 20+ / week |
| Enterprise / regulated | 1 / month | 1 / week | 2 / week |
The minimums are the floor below which decisions outpace insight. The "healthy" column is where most well-run teams live. The "high-performing" column is where AI moderation pays for itself many times over — those cadences are essentially impossible to sustain manually.
Why weekly is the canonical floor
The modern consensus on cadence comes from Teresa Torres' Continuous Discovery Habits and the Product Talk research that preceded it. Her position: each product trio should have at least one weekly touch-point with customers, conducted by the team building the product, in pursuit of a current product outcome.
The argument for weekly is structural, not aesthetic. A monthly cadence means you spend three weeks of every four making decisions on assumptions you haven't tested. A weekly cadence keeps your model of the customer current with your roadmap. Below weekly, decisions outpace evidence. Above weekly, each additional interview yields diminishing insight — unless you are running a deeper study, where the right sample size becomes 10–20.
The inverse is also true: teams that try to "save up" research into one big quarterly study almost always discover that the questions they wrote 12 weeks ago are the wrong questions now. Cadence is not just about volume — it is about staying in sync.
How to pick your cadence
Three variables drive the answer: stage, team structure, and synthesis capacity.
Stage
- Pre-PMF or new market. You have more open questions than time to answer them. Run as many interviews as your synthesis capacity allows — 10+ per week is normal for an early founding team.
- Active product development. Each squad should hit at least 1/week, ideally 3–5/week, to keep the build cycle informed.
- Mature product, optimizing. 1–2 per week is usually enough, supplemented by occasional deeper studies.
- Enterprise or regulated. Cadence is gated by access to participants, not time. 1/week is aspirational; 1/month is the realistic floor.
Team structure
A product trio (PM + designer + engineer) should run cadence per trio, not per company. Five trios at one interview per week is five interviews per week of company throughput, not one. This is where the math becomes painful for teams using traditional moderated interviews — five trios scheduling separate calls means five different recruitment ops and five sets of calendar coordination per week.
Synthesis capacity
The rule that nobody likes hearing: if you are not synthesizing each interview within 48 hours, more interviews will hurt more than they help. Unsynthesized interviews don't change decisions; they pile up as guilt. Pick a cadence you can actually metabolize. With Koji, synthesis runs in real time as each interview completes — every conversation generates structured themes, quotes, and quality scores automatically — which is what lets teams sustain cadences that would otherwise be unmanageable.
The four cadence patterns
Most successful cadences fall into one of four patterns:
1. Weekly Discovery (the Teresa Torres baseline)
One interview per trio per week. Synthesis at the end of each week. New assumptions tested next week. The lowest viable cadence for a team that wants to stay current.
2. Always-On Discovery
A standing study runs continuously, recruiting from in-product traffic or your CRM. Interviews come in throughout the week without the team initiating each one. This is where async AI interviews shine — the published study runs on its own and the team reviews completed interviews in their normal workflow.
3. Sprint-Aligned Discovery
Research is timed to product cycles: 5–10 interviews in week 1 of a sprint to scope the work, then 3–5 in week 2 to validate decisions. Useful for teams already running rigorous sprint cadences.
4. Continuous + Deep
The most mature pattern: continuous always-on discovery (1–3/week) supplemented by periodic deep studies (15–30 interviews) when a major decision approaches. The continuous stream catches drift; the deep studies answer specific strategic questions.
Operational playbook for sustained cadence
Most teams pick a cadence and miss it within four weeks. The blockers are predictable, and they are operational, not strategic.
Eliminate scheduling
Moderated calls need a calendar invite, a meeting link, a reminder, a no-show buffer, and a follow-up. At three interviews per week, that's 3–6 hours of operational overhead before any insights arrive. Async AI interviews remove this entire layer — see Asynchronous User Interviews for how. With Koji, a participant clicks a link, the AI moderator runs the conversation in voice or text, and the transcript is in your dashboard 30 seconds after they finish.
Recruit continuously, not per-study
The second-biggest cadence killer is per-study recruitment. Teams that sustain a weekly cadence build a continuous recruitment funnel — usually a research widget embedded in their product or a recurring CRM segment that drips invitations. Invitations should go out automatically, not manually, as in the sketch below.
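As a sketch of what "automatically" can mean in practice: a small weekly job that pulls a CRM segment and mails each contact a personalized study link. Every endpoint, URL, and field name here is a placeholder, not a documented CRM or Koji API.

```typescript
// drip-invites.ts — a hypothetical weekly recruitment drip.
// All endpoints and field names below are placeholders, not a
// documented CRM or Koji API.

type Contact = { email: string; name: string };

// Pull this week's slice of a recurring CRM segment (placeholder endpoint).
async function fetchSegment(segmentId: string): Promise<Contact[]> {
  const res = await fetch(`https://crm.example.com/segments/${segmentId}/contacts`);
  if (!res.ok) throw new Error(`CRM export failed: ${res.status}`);
  return res.json();
}

// Stub: swap in your transactional email provider here.
async function sendEmail(to: string, body: string): Promise<void> {
  console.log(`-> ${to}: ${body}`);
}

// Run this on a weekly schedule (cron, CI job, etc.) so invitations
// go out automatically, not manually.
export async function runWeeklyDrip(segmentId: string, batchSize = 25) {
  const contacts = await fetchSegment(segmentId);
  for (const { email, name } of contacts.slice(0, batchSize)) {
    // One personalized study link per contact; the URL shape is illustrative.
    const link = `https://interviews.example.com/s/STUDY_ID?participant=${encodeURIComponent(email)}`;
    await sendEmail(email, `Hi ${name}, could we borrow 10 minutes this week? ${link}`);
  }
}
```

Scheduling this job weekly, rather than kicking off recruitment per study, is what keeps the funnel continuous.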
Standardize the question set
Writing a fresh discussion guide every week is unsustainable. Maintain a "core" guide (the questions you ask in every interview to track over time) and a small "spotlight" section that rotates with the current research question. Koji's structured questions make the core comparable across interviews — every NPS rating, feature priority, or willingness-to-pay number ladders into the same chart automatically.
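One way to make the core/spotlight split concrete is to keep the guide as data, so the core stays frozen while the spotlight rotates each week. The shape below is illustrative only, not Koji's actual study format:

```typescript
// guide.ts — one illustrative shape for a core + spotlight guide.
// The structure is hypothetical; it is not Koji's study format.

type Question =
  | { kind: "open"; prompt: string }
  | { kind: "nps"; prompt: string }
  | { kind: "scale"; prompt: string; min: number; max: number };

interface DiscussionGuide {
  // Asked in every interview, every week, so answers chart over time.
  core: Question[];
  // Rotates with the current research question; rewrite only this part.
  spotlight: Question[];
}

const week12: DiscussionGuide = {
  core: [
    { kind: "nps", prompt: "How likely are you to recommend us to a colleague?" },
    { kind: "open", prompt: "What did you come to the product to do this week?" },
    { kind: "scale", prompt: "How easy was that to accomplish?", min: 1, max: 5 },
  ],
  spotlight: [
    { kind: "open", prompt: "Walk me through the last time you exported a report." },
  ],
};
```

Because the core block never changes, week-over-week comparisons come for free; the weekly writing effort shrinks to the spotlight questions.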
Synthesize as interviews complete
The synthesis-batching tax is real: if you wait until end-of-month to read 20 interviews, you get 20 interviews of recall lag and a deeply boring afternoon. Real-time theme detection means each interview produces a usable summary the moment it finishes. By Friday you have a week's worth of digested evidence, not an undifferentiated pile.
Make the loop visible
The cadence dies fast when the team can't see the impact of the conversations they're running. Pipe finished-interview notifications into Slack (Slack Research Insights Integration) so the trio sees insights land in real time. Every two weeks, link a roadmap decision back to the specific interview that informed it. Visibility creates the gravity that sustains the habit.
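A minimal sketch of that pipe: a tiny relay that receives a completed-interview webhook and reposts it to a Slack incoming webhook. Slack's incoming-webhook format (a JSON POST with a `text` field) is real; the interview payload fields are assumptions about what the event might carry.

```typescript
// slack-relay.ts — forward completed-interview webhooks to Slack.
// Slack's incoming-webhook format is real; the interview payload
// fields (participant, qualityScore, summary) are assumptions about
// what a completed-interview event might carry.

import http from "node:http";

const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL!; // from your Slack app config

http
  .createServer(async (req, res) => {
    if (req.method !== "POST") {
      res.writeHead(405).end();
      return;
    }
    let body = "";
    for await (const chunk of req) body += chunk;
    const event = JSON.parse(body); // assumed payload shape, see comment above
    await fetch(SLACK_WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `Interview complete: ${event.participant} (quality ${event.qualityScore})\n${event.summary}`,
      }),
    });
    res.writeHead(204).end();
  })
  .listen(8080);
```

Point the study's webhook at this relay's URL and each finished interview lands in the channel within seconds of completion.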
How Koji enables high-cadence research
The single biggest reason teams fail their cadence target is operational, not motivational. Koji is built around the cadence problem:
- Always-on async interviews mean a published study collects participants 24/7 without a moderator. See Always-On User Interviews.
- Three-layer AI probing ensures every interview produces real depth, not just answers — so you're not running more interviews to make up for shallow ones. (AI Probing Guide)
- Real-time research insights mean themes, quotes, and quality scores surface as interviews complete, not weeks later. (Real-Time Research Insights)
- Quality gating ensures that only interviews scoring 3 or higher consume credits — junk interviews never enter your dataset and never cost you anything.
- Six structured question types let you compare quantitative signals (NPS, ranking, scale) across every week's interviews automatically.
- Participant management at scale. Personalized links, CRM imports, and embed widgets all run from one workspace (Managing Research Participants).
With these in place, the marginal cost of moving from one interview a week to ten is essentially zero — which is what lets a small team operate at the cadence of a full research function.
Related Resources
- Structured Questions in AI Interviews — the six question types that make weekly comparisons possible
- Continuous Discovery: Weekly Customer Interviews — Teresa Torres' methodology in depth
- Always-On User Interviews — the 24/7 standing study pattern
- Asynchronous User Interviews — why async beats moderated for cadence
- Real-Time Research Insights — synthesis as interviews complete
- In-Product Research Recruiting — the recruitment funnel that sustains cadence
- Slack Research Insights Integration — pipe insights to where decisions get made
Related Articles
Real-Time Research Insights: How to See Themes, Quotes, and Quality Scores as Interviews Complete
Stop waiting weeks for analysis — modern AI research platforms surface themes, structured-question distributions, sentiment, and quality-scored quotes the moment each interview ends. Here is how real-time research insights work in Koji and how to design studies that take advantage of them.
Send Research Insights to Slack: Real-Time Customer Interview Notifications via Webhooks
Pipe customer interview insights from Koji into your Slack workspace in real time. Use Koji webhooks to notify a #research channel the moment an interview completes, post quote highlights to #product-feedback, or alert #cs-alerts when a churn signal is detected. Step-by-step setup with a working Slack incoming webhook recipe.
Managing Research Participants: The Complete Guide to Koji's Recruit Tab
How to track, filter, import, and export research participants in Koji — including personalized links, quality management, and CRM integration.
How Koji's AI Follow-Up Probing Works: Going Deeper Than Any Survey
Understand how Koji's AI interviewer automatically asks follow-up questions to go deeper on every answer — and how to configure probing depth, custom instructions, and anchor behavior for scale questions.
Structured Questions in AI Interviews
Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.
In-Product Research Recruiting: Recruit Customer Interview Participants From Inside Your App
Stop paying recruiting panels for participants you already have. Learn how to recruit research participants directly from your product using embedded prompts, in-app banners, email triggers, and personalized AI interview links. Faster, cheaper, and more representative than external panels — with zero scheduling friction.
Asynchronous User Interviews: The Complete Guide to Async Research
Learn how asynchronous user interviews work, why they outperform scheduled sessions for scale, and how AI makes async research as rich as live interviews.
User Research Mistakes: 14 Pitfalls That Sabotage Your Insights (2026)
The most common user research mistakes that lead to misleading insights — and how to avoid each one with better methodology and AI-powered interviews.
Continuous Discovery: How to Run Weekly Customer Interviews Without Burning Out
Continuous discovery is the practice of conducting customer interviews every week as part of your normal workflow. This guide explains how to build an always-on research practice that actually scales.
Always-On User Interviews: Run 24/7 With an AI Moderator
Run user interviews around the clock without a researcher on every call. An AI moderator interviews participants whenever they show up — across timezones, in voice or text, with results scored and themed automatically.