
How Koji's AI Follow-Up Probing Works: Going Deeper Than Any Survey

Understand how Koji's AI interviewer automatically asks follow-up questions to go deeper on every answer — and how to configure probing depth, custom instructions, and anchor behavior for scale questions.

The defining difference between an AI interview and a survey isn't voice support or a conversational UI. It's probing — the ability to follow up on what a participant says, ask the natural next question, and dig into the reasoning behind an answer.

Every Koji interview includes built-in AI probing. Understanding how to configure and optimize it is the key to unlocking research-grade insights at survey-level scale.

What Is Probing?

In traditional qualitative research, a skilled moderator listens carefully to each answer and follows up with questions like:

  • "Can you tell me more about that?"
  • "What did you mean when you said [X]?"
  • "What was happening right before that moment?"
  • "Why is that important to you?"

This is probing. It's what separates a surface-level answer from a genuine insight.

Koji's AI interviewer does this automatically — for every question, across every participant, at any scale. Whether you're running 5 interviews or 500, every participant gets the same quality of probing follow-up that a skilled human moderator would provide.

How Probing Is Configured

Probing behavior is set per question in your interview plan. Each question has a probing configuration with three settings:

maxFollowUps (0–3)

Controls how many follow-up questions the AI may ask after the participant answers.

  • 0: Ask the question, record the answer, move on — no probing
  • 1: One follow-up (the default) — good for most questions
  • 2: Two follow-ups — for questions where the "why" is the core insight
  • 3: Deep probing — reserved for your most critical research questions

Setting maxFollowUps to 0 is useful for purely quantitative questions (scale, yes/no) where you want the rating without a probing conversation, or for screening questions where you only need the factual answer.
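As a rough sketch, a per-question probing setup might look like the following. The dict structure and question texts are illustrative assumptions, not Koji's actual schema; only the `maxFollowUps` field and its 0–3 range come from this guide.

```python
# Hypothetical per-question probing configs (illustrative structure only).

core_question = {
    "text": "What was the biggest barrier to getting started with us?",
    "probing": {"maxFollowUps": 2},  # the "why" is the core insight here
}

screener = {
    "text": "Do you currently use our product at work?",
    "probing": {"maxFollowUps": 0},  # factual screener: record and move on
}

# Sanity check: maxFollowUps must stay within the documented 0-3 range.
for q in (core_question, screener):
    assert 0 <= q["probing"]["maxFollowUps"] <= 3
```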

instructions (Optional Text)

Custom guidance for how the AI should probe this specific question. Without instructions, the AI uses general best-practice probing techniques drawn from qualitative research methodology. With instructions, you can direct it precisely.

Examples of effective probing instructions:

  • "Ask what specifically triggered that decision."
  • "If they mention a competitor, ask which one and what they liked about it."
  • "Focus on the emotional experience, not just the functional outcome."
  • "If they describe a negative experience, ask what would have made it better."
  • "Don't move on until you've understood the specific moment in their workflow where this problem occurs."

Good probing instructions are the difference between a study that surfaces generic themes and one that produces specific, actionable insights.
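Extending the same illustrative dict shape, custom instructions are just a free-text field alongside `maxFollowUps`. The question wording below is a hypothetical example; the instruction text is adapted from the examples above.

```python
# Illustrative only: a question with custom probing instructions
# (not Koji's actual schema).

churn_question = {
    "text": "What led you to cancel your subscription?",
    "probing": {
        "maxFollowUps": 2,
        "instructions": (
            "Ask what specifically triggered that decision. If they mention "
            "a competitor, ask which one and what they liked about it."
        ),
    },
}
```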

anchor (Scale Questions Only)

For scale questions, the anchor option tells the AI to use the participant's specific rating as the basis for the follow-up. Instead of a generic "tell me more," the AI asks a rating-aware question:

  • Score 9–10: "You gave us a [X] — what specifically makes your experience so positive?"
  • Score 6–8: "You gave us a [X] — what would change that to a 9 or 10?"
  • Score 1–5: "You gave us a [X] — what's been the biggest source of frustration?"

Anchoring produces dramatically more useful data than generic follow-ups, because participants are responding to their own stated position — not a formulaic probe.
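The anchoring logic above can be sketched as a simple band lookup. The function name and exact band boundaries as code are my assumption; the score ranges and follow-up wording are taken from the list above.

```python
def anchored_follow_up(score: int) -> str:
    """Pick a rating-aware probe based on the participant's 1-10 score.

    Illustrative sketch of the documented anchor bands, not Koji's
    actual implementation.
    """
    if score >= 9:
        return (f"You gave us a {score} — what specifically makes "
                "your experience so positive?")
    if score >= 6:
        return f"You gave us a {score} — what would change that to a 9 or 10?"
    return (f"You gave us a {score} — what's been the biggest "
            "source of frustration?")
```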

The Probing Flow in Practice

Here's how probing plays out in a real Koji interview:

Question: "What was the biggest barrier to getting started with us?"

Participant: "Honestly, the setup was kind of confusing."

AI follow-up (with probing instructions: "Ask about the specific moment and what would have helped"):

"What part of the setup was most confusing for you — and what would have made it easier?"

Participant: "The API documentation assumed I already knew how webhooks work. I'm a product manager, not an engineer. A step-by-step guide would have helped."

Second follow-up (if maxFollowUps is 2):

"Did you eventually figure it out on your own, or did you get help?"

Participant: "I ended up asking our engineering team. But I almost gave up before that."

Without probing, you'd have: "Setup was confusing." With probing, you have: "Product managers can't follow webhook documentation — nearly churned — needed engineering support to proceed." These are very different research findings.

Probing in Text vs. Voice Mode

Text Mode

In text mode, probing questions appear as new chat messages from the AI, continuing the natural conversation thread. The participant types their response. This creates a written record that's easy to search and reference later in your transcript.

Voice Mode

In voice mode, probing questions are spoken aloud by the AI in a natural conversational tone. There's no awkward pause or shift — the probing flows seamlessly from the initial question. Participants often don't realize they're being probed at all; it feels like a normal conversation.

Voice mode probing tends to produce more emotional and spontaneous responses. Text mode probing produces more considered and detailed responses. Choose based on your research goals and participant audience.

What the AI Does When Probing

Koji's AI interviewer is built to follow qualitative research best practices:

Follows the thread. If a participant mentions something unexpected but significant, the AI can follow that thread — especially in exploratory or hybrid mode. You don''t lose interesting detours by being locked to a rigid script.

Stays neutral. The AI doesn't lead or suggest answers. It probes with open-ended questions that invite elaboration without steering the participant toward a particular conclusion.

Knows when to move on. When a participant has given a thorough answer and follow-ups aren''t producing new information, the AI moves to the next question rather than repeating the same probe.

Respects boundaries. If a participant declines to elaborate ("I'd rather not get into that"), the AI acknowledges this and moves on without pressure.

Maintains context. The AI remembers what was said earlier in the interview. A follow-up question can naturally reference something mentioned 10 minutes ago.

Probing and Interview Mode

How aggressively the AI probes also depends on the interview mode you select:

  • Structured: Stays close to your interview questions; probes within each question but returns to the guide afterward
  • Exploratory: Follows interesting threads more freely; may pursue an unexpected topic if the participant reveals something significant
  • Hybrid: Starts structured, goes exploratory when something particularly interesting surfaces

See the Interview Mode Guide for more detail on choosing the right mode for your research goals.

How Probing Results Appear in Reports and Transcripts

In Transcripts

Every probing exchange is captured in the full interview transcript — the initial question, the participant's answer, each follow-up question, and each follow-up response. You can read the complete conversational arc for any question in the Recruit tab.

In Structured Answers

For structured questions (scale, choice, ranking, yes/no), Koji extracts follow-up insights — a list of question-and-answer pairs from the probing exchange, with an optional distilled insight for each. This makes it easy to see both the structured value (the rating or selection) and the qualitative context (what probing revealed) in one place.

In the Research Report

The research report surfaces the most important probing insights in two ways:

  1. Representative quotes — Memorable statements from probing follow-ups, shown under each question's analysis section
  2. Themes — If multiple participants expressed similar things during probing follow-ups, the AI surfaces this as a pattern

The Generating Research Reports guide explains the full report structure.

Configuring Probing for Common Research Goals

Validation Research

When you have a hypothesis and want to validate it:

  • maxFollowUps: 1
  • Instructions: "Focus on whether they confirm or challenge [hypothesis]. Ask for their direct experience rather than their opinion."

Discovery Research

When you're exploring unknown territory:

  • maxFollowUps: 2–3
  • Instructions: "Follow threads that seem emotionally significant or that the participant seems eager to discuss."
  • Use exploratory or hybrid interview mode

Exit and Churn Research

When you need to understand why someone left:

  • maxFollowUps: 2
  • Instructions: "Ask what moment triggered the decision to leave. Ask what would have kept them."
  • Enable anchor probing for any scale questions: "You gave us a [X] — what would have changed that?"

Feature Usage Research

When you want to understand how a specific feature is being used:

  • maxFollowUps: 1–2
  • Instructions: "Ask about their specific workflow — what are they doing before and after using this feature, and what is the trigger that brings them to it?"
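The four recipes above can be collected into a set of presets. The dict below is a hypothetical convenience, assuming the same illustrative config shape as earlier sketches; the `maxFollowUps` values and instruction texts are taken from the recipes, while the preset keys and the `anchor` flag name are my assumptions.

```python
# Hypothetical per-goal probing presets (illustrative, not Koji's schema).

PROBING_PRESETS = {
    "validation": {
        "maxFollowUps": 1,
        "instructions": ("Focus on whether they confirm or challenge the "
                         "hypothesis. Ask for their direct experience "
                         "rather than their opinion."),
    },
    "discovery": {
        "maxFollowUps": 3,  # upper end of the 2-3 recommended for discovery
        "instructions": ("Follow threads that seem emotionally significant "
                         "or that the participant seems eager to discuss."),
    },
    "churn": {
        "maxFollowUps": 2,
        "instructions": ("Ask what moment triggered the decision to leave. "
                         "Ask what would have kept them."),
        "anchor": True,  # rating-aware follow-ups on any scale questions
    },
    "feature_usage": {
        "maxFollowUps": 2,  # upper end of the 1-2 recommended here
        "instructions": ("Ask about their specific workflow before and after "
                         "using this feature, and what triggers its use."),
    },
}

# All presets stay within the documented 0-3 follow-up range.
assert all(0 <= p["maxFollowUps"] <= 3 for p in PROBING_PRESETS.values())
```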

Best Practices

Write specific probing instructions. "Probe deeper" is not useful. "Ask about the specific moment when they decided to abandon the checkout, and what they did instead" is useful. Concrete instructions produce concrete insights.

Match probing depth to question importance. Not every question deserves three levels of follow-up. Reserve maxFollowUps of 2–3 for your most critical research questions.

Use anchor probing for all scale questions. A generic "tell me more about that rating" is weaker than "You gave us a 4 — what would it take to make that a 7?" Anchoring converts ratings into action items.

Test your probing before deploying. Run a pilot interview to see how the AI probes your questions before you send the study to all participants. You may find your instructions need adjustment to produce the insights you're after.

Don't over-probe quantitative questions. For yes/no or single-choice questions where you just need the answer, set maxFollowUps to 0 or write minimal instructions.

Frequently Asked Questions

Does the AI always probe, or only sometimes? The AI probes whenever a participant gives an answer with room for deeper exploration, up to the maxFollowUps limit. If a participant gives an extremely thorough answer that already covers what the probing questions would ask, the AI recognizes this and moves on rather than asking redundant follow-ups.

Can I disable probing for specific questions? Yes. Set maxFollowUps to 0 for any question where you only need the surface answer — for example, a screening question, a yes/no qualifier, or a quantitative rating where the number is the only data point you need.

Will participants know the AI is following a probing strategy? Generally, no. Koji's probing is designed to feel like natural conversation. The AI responds to what the participant actually says rather than following a rigid script. Most participants experience Koji interviews as conversations with an unusually attentive and curious interviewer.

Can the AI probe based on something said earlier in the interview? Yes, in exploratory and hybrid modes. The AI maintains full conversation context. If a participant mentioned switching from a competitor 15 minutes earlier, the AI can reference that later: "Earlier you mentioned switching from [X] — can you tell me more about what drove that decision?"

How does probing work for multiple-choice questions? After a participant selects their choices, the AI probes into the selections. For example: "You mentioned [option A] and [option B] — can you tell me more about why those two stand out for you?" The instructions field lets you direct probing toward specific selections.

What if a participant refuses to elaborate? The AI gracefully accepts this. A response like "I'd rather not go into that" is met with a natural acknowledgment and a move to the next question. The AI never pressures participants.

Related Resources