Always-On User Interviews: Run 24/7 With an AI Moderator
Run user interviews around the clock without a researcher on every call. An AI moderator interviews participants whenever they show up — across timezones, in voice or text, with results scored and themed automatically.
The Bottom Line
Always-on user interviews mean you publish one shared interview link, and an AI moderator handles every participant who shows up — at 9am or 3am, in your language or theirs, in voice or text. With a tool like Koji, your research stops being throttled by your calendar. A moderator-led study that traditionally takes three weeks (recruit → schedule → interview → transcribe → analyze) compresses into hours, because all five bottleneck stages run continuously and in parallel.
If you have ever lost momentum waiting for the next interview slot to free up, this is the operating model that fixes it.
What "Always-On" Actually Means
Traditional moderated research has five sequential bottlenecks: recruit, schedule, moderate, transcribe, analyze. Each one waits for a human. Always-on interviews collapse all five into a continuous, autonomous loop:
- Recruit continuously — share one Koji interview link in-product, in email, on social, or in a community. Anyone who clicks can interview themselves immediately.
- No scheduling step — the AI moderator is available 24/7. Participants in any timezone start whenever they want.
- AI conducts the interview — voice or text, in 30+ languages, following your structured plan with up to three probing follow-ups per question.
- Auto-transcribe and theme — every conversation is transcribed and mapped to your structured questions in real time.
- Real-time report — themes, quotes, and per-question distributions update as new interviews complete.
Nothing waits for a researcher. The researcher reads completed, scored, themed conversations whenever they like.
Why Async-Moderated Beats Both Surveys and Live Interviews
Most teams frame the choice as a survey (fast, scalable, shallow) or a moderated interview (deep, slow, expensive). Always-on AI-moderated interviews are a third option that takes the best of both:
| Dimension | Surveys | Live moderated | Always-on AI moderated |
|---|---|---|---|
| Time to first response | Hours | Days | Minutes |
| Depth (follow-up probing) | None | Skilled human | AI probes 1–3× per question |
| Throughput | Unlimited | 4–8/day | Unlimited |
| Researcher hours per session | 0 | 1.5–3 | ~5 minutes (review only) |
| 24/7 coverage | Yes | No | Yes |
| Auto themed analysis | No | No | Yes |
| Quantitative aggregation | Yes | Manual | Automatic |
The critical unlock is probing depth without human moderation. Surveys can never ask "why did you say that?" — and that is the answer that actually moves the strategy needle. Koji’s AI moderator probes up to three follow-ups per question, configured per-question via the maxFollowUps setting in your interview plan.
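The per-question follow-up budget can be pictured as a small plan structure. The field names below are illustrative, not Koji's documented schema; only the maxFollowUps setting and its 0–3 range come from the text above.

```python
# Hypothetical plan shape: each question carries its own maxFollowUps value,
# mirroring the per-question setting described above.
plan = [
    {"text": "What role best describes you?", "type": "single_choice", "maxFollowUps": 0},
    {"text": "Walk me through the last time you hit this problem.", "type": "open_ended", "maxFollowUps": 3},
]

def follow_up_budget(question: dict) -> int:
    # Clamp to the 0-3 range described above; default to one follow-up.
    return max(0, min(3, question.get("maxFollowUps", 1)))
```

Screener-style questions get a budget of 0 so the AI moves on quickly, while the questions you most care about get the full three probes.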
How Koji Implements Always-On Interviews
A few product details make Koji’s always-on model work in practice:
1. Self-serve interview link
Every study generates a public link. Participants land on a branded intake page (configurable headline, description, CTA, optional consent text) and choose voice or text mode. There is no scheduling, no Calendly, no rescheduling — they start the interview the moment they decide to.
2. Voice or text interaction mode
The interactionMode configuration on your study controls which modes are offered: voice (powered by ElevenLabs voice agents), text (Gemini-powered conversational interviewer), or both. You can default to one and let participants switch.
3. Configurable screener and intake
Before the interview begins, an optional intake form filters participants (role, segment, experience, custom attributes). This replaces panel-platform screening and keeps off-target respondents from consuming credits.
4. Quality-gated billing
Koji only consumes a credit for conversations that score 3 or higher on the post-interview quality check. Abandoned and nonsense responses are filtered out — meaning your always-on link can stay public without burning credits on spam.
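The gating rule above is simple enough to state as a predicate. The threshold of 3 comes from the article; the function itself is an illustrative sketch, not Koji's billing code.

```python
# Minimal sketch of the quality gate: a conversation consumes a credit only
# if its post-interview quality score is 3 or higher.
QUALITY_THRESHOLD = 3

def consumes_credit(quality_score: int) -> bool:
    return quality_score >= QUALITY_THRESHOLD
```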
5. Auto-themed real-time reports
As interviews complete, a research report aggregates themes, quotes, and per-question distributions. The report refreshes on demand (5 credits per refresh) or you can read individual transcripts as they finish.
6. Multilingual coverage
The AI moderator runs interviews in 30+ languages, switching based on participant choice or the defaultLanguage you set. A single link can interview a French participant at 9am Paris time and a Singaporean participant at 3am their time — with both transcripts translated and themed in your working language.
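Taken together, the settings named in this section (interactionMode, defaultLanguage, and an optional intake screen) might look like a single study configuration. The dict layout below is an assumption for illustration, not Koji's actual API.

```python
# Illustrative study configuration combining the settings described above.
study = {
    "title": "Onboarding discovery",
    "interactionMode": ["voice", "text"],  # offer both; participants can switch
    "defaultLanguage": "en",
    "intake": {
        "headline": "Help us improve onboarding",
        "consentText": "This interview is recorded and analyzed.",
        "screener": [{"field": "role", "required": True}],
    },
}
```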
Use Cases Where Always-On Wins
Always-on AI interviews shine in five concrete operating contexts:
Continuous discovery
Stop running quarterly "discovery sprints." Keep an always-on link active in your product or onboarding email. New users get interviewed during their first week, you never miss a wave of fresh signal, and synthesis becomes a weekly habit instead of a quarterly cliff.
NPS root-cause investigation
When NPS scores come in, automatically route detractors and promoters to a Koji interview link with a scale question for the score and open_ended follow-ups for "what made you say that?" — running 24/7. You get qualitative depth on every score, not just a sample.
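The study shape this suggests is a two-question plan: a scale question captures the 0–10 score, then an open_ended follow-up asks why. The type names (scale, open_ended) come from the article; the dict layout is illustrative.

```python
# Sketch of an NPS root-cause plan: score first, then a probed "why".
nps_plan = [
    {"type": "scale", "text": "How likely are you to recommend us?", "min": 0, "max": 10},
    {"type": "open_ended", "text": "What made you give that score?", "maxFollowUps": 2},
]
```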
Churn exit interviews
Trigger an interview link in your churn email. Customers who would never schedule a 30-minute moderated call will spend 5 minutes with an AI right after they cancel. The conversion rate on async exit interviews is often 3–5× higher than scheduled calls.
International expansion research
Launching in a new region? An always-on link in the local language interviews participants in their working hours, while you stay on yours. No 6am video calls.
Founder-led customer research
Founders rarely have time for 30 hours of moderated calls per quarter. An always-on Koji link in onboarding, paired with a 10-minute weekly review of new transcripts, replaces most of that calendar.
Design Tips for Always-On Studies
- Keep it short. Async participants tolerate 5–10 minutes; cut anything that does not move the needle.
- Open with a high-impact open-ended question. Hook the participant before any structured questions.
- Use structured questions for anything you want to chart. Scale, single_choice, multiple_choice, ranking, and yes_no questions auto-aggregate; open-ended questions auto-theme.
- Tune maxFollowUps per question. Set 0 for screeners, 1 for standard depth, 2–3 for the questions you most want to dig into.
- Set a clear consent footer. For public links, surface privacy and recording terms in the intake screen.
- Refresh reports weekly, not nightly. Each refresh is 5 credits — once a week is usually enough for an always-on link.
- Watch the screener completion rate. If too many participants drop at the screener, your filter is too tight or too clunky.
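The tips above distinguish structured questions (auto-aggregated into charts) from open-ended ones (auto-themed). A small helper can partition a plan along that line; the type names come from the article, but the helper itself is a hypothetical sketch.

```python
# Question types the article says auto-aggregate into charts.
STRUCTURED_TYPES = {"scale", "single_choice", "multiple_choice", "ranking", "yes_no"}

def split_plan(questions):
    # Partition a plan into chartable vs. auto-themed questions.
    charted = [q for q in questions if q["type"] in STRUCTURED_TYPES]
    themed = [q for q in questions if q["type"] == "open_ended"]
    return charted, themed
```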
What This Replaces
With always-on AI interviews dialed in, most teams retire several legacy line items:
- Calendly + scheduling tools (no more 1:1 booking)
- Panel platforms for self-serve studies (in-product link beats panel recruiting)
- Transcription services (built in)
- Manual coding spreadsheets (auto-themed)
- Quarterly research sprints (replaced by continuous coverage)
Not every research study should be async — but for the 70% that should, an always-on link saves you 80% of the calendar time and cuts cost by an order of magnitude.
Getting Started
- Create a study and write 4–8 questions (mix of open_ended and structured types)
- Set interactionMode to allow voice + text, with voice as default
- Configure an intake screen (optional screener fields)
- Publish the interview link
- Drop the link into your in-product onboarding, post-purchase email, NPS survey, or community
- Check the report once a week — read scored, themed conversations as they roll in
The whole setup typically takes 30 minutes. From that point forward, every participant who clicks the link runs a fully moderated interview without your calendar getting involved.
Related Resources
- Setting Up Voice Interviews — voice mode configuration
- Structured Questions Guide — the 6 question types that auto-aggregate
- Async User Interviews — the broader async research methodology
- How to Automate User Research — the operating model around always-on links
- Screener Questions Guide — filtering participants before they consume credits
- Generating Research Reports — how the real-time report aggregates findings
Related Articles
Generating Research Reports
Create comprehensive aggregate reports across all your interviews — including summaries, themes, recommendations, and statistics.
How to Set Up AI Voice Interviews: A Researcher's Complete Guide
Step-by-step guide to configuring, testing, and optimizing voice interview studies in Koji — from research brief to launch.
Structured Questions in AI Interviews
Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.
Screener Questions for User Research: A Complete Guide
Learn how to write effective screener questions that find the right research participants — and how Koji's intake forms and AI interviews make screening faster and more natural.
Asynchronous User Interviews: The Complete Guide to Async Research
Learn how asynchronous user interviews work, why they outperform scheduled sessions for scale, and how AI makes async research as rich as live interviews.
How to Automate User Research: Build a Pipeline That Runs 24/7
A step-by-step guide to automating user research — from setting up AI-moderated interviews to continuous discovery pipelines that generate insights every week without manual effort.