
How to Analyze AI-Moderated Interview Results

A complete guide to analyzing interview results from AI-moderated sessions — read the quality score, the per-question structured answers, the auto-generated themes, and the cross-interview report. With Koji, analysis is done before you open the transcript.

What it means to "analyze" an AI-moderated interview

When you run a traditional user interview, you (the researcher) record it, transcribe it, code it, write a summary, and finally aggregate it with other interviews. Analysis takes 6 to 8 hours per interview — and 80% of that time is mechanical work, not insight generation.

With AI-moderated interviews on platforms like Koji, that ratio inverts. By the time the participant clicks "finish", Koji has already:

  • Scored the interview's quality on three dimensions
  • Mapped each answer back to the specific question in your brief
  • Generated cycle-1 theme codes (2-5 word labels grounded in verbatim quotes)
  • Updated the cross-interview research report with the new data

So "analyzing AI-moderated interview results" doesn't mean coding transcripts from scratch — it means reading the synthesized output, validating the surprises, and deciding what to do next.

The 4 layers of analysis you get automatically

Koji produces four analytical layers for every interview, ordered from "first thing you see" to "deepest detail":

  1. Quality score — a 1-5 rating with rationale, used to flag bad interviews before they pollute your data.
  2. Structured answers per question — each question from your brief gets a structured value (number, choice, ranking) plus a qualitative answer extracted from the conversation.
  3. Theme codes per open-ended answer — short labels like "Onboarding friction" or "Trust in platform", each grounded in a specific message in the transcript.
  4. Cross-interview report — distribution charts, theme clusters, and a synthesized narrative across all respondents in the study.

The rest of this guide walks through each layer: what it contains, what to look for, and when to escalate to the raw transcript.

Layer 1: read the quality score first

Every completed interview gets an overall quality score from 1 to 5, broken down into:

  • Relevance — How much of the conversation was on-topic for the research goal
  • Depth — How substantive the respondent's answers were (single-word vs. story-driven)
  • Coverage — How many key questions/topics from the brief were actually addressed

A score of 3 or higher means trustworthy data. Conversations scoring below 3 are treated as low-quality and don't consume credits — Koji's quality gate filters them automatically so you're not paying for noise.
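The gate behaves like a simple threshold filter. A minimal sketch (field names and shapes here are illustrative assumptions, not Koji's actual API):

```typescript
// Sketch of the quality gate: interviews scoring below 3 overall are
// excluded from analysis (and, per the docs, don't consume credits).
// The record shape and field names are assumptions for illustration.

interface ScoredInterview {
  id: string;
  quality: { overall: number; relevance: number; depth: number; coverage: number };
}

function passesQualityGate(interview: ScoredInterview): boolean {
  return interview.quality.overall >= 3; // 3 or higher = trustworthy data
}

const batch: ScoredInterview[] = [
  { id: "i1", quality: { overall: 4, relevance: 5, depth: 4, coverage: 4 } },
  { id: "i2", quality: { overall: 2, relevance: 2, depth: 1, coverage: 3 } },
];

// Only i1 survives the gate; i2 is filtered out as noise.
const usable = batch.filter(passesQualityGate);
```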

What to do at this layer:

  • Glance at the score and rationale. If everything's 4-5, move to layer 2.
  • If a score is 2 or lower, read the rationale — sometimes a low score reflects a hostile or rushed respondent, sometimes it reveals a problem with the guide itself (e.g., a confusing question causing drop-off).
  • If many interviews score low in coverage, your guide is too long for the available time. Trim core questions.

Layer 2: structured answers per question

Open the interview detail view. Each question from your brief has a StructuredAnswer with:

  • The structured value — a number for scale questions, a string for single-choice and yes/no, a string array for multiple-choice and ranking, null for open-ended
  • The qualitative answer — the AI's extracted summary of what the respondent said
  • A confidence rating (high / medium / low) on the extraction accuracy
  • The message indices linking back to the exact transcript turns that justify the answer
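Assembled into one record, a per-question result might look like the sketch below. The field names follow the bullets above, but the exact schema is an assumption; the triage helper reflects the "check low-confidence answers" advice:

```typescript
// Sketch of a per-question structured answer. Names mirror the bullets
// above but are assumptions about the exact schema, not Koji's API.

type StructuredValue = number | string | string[] | null;

interface StructuredAnswer {
  questionId: string;
  value: StructuredValue;            // number / string / string[] / null by question type
  qualitativeAnswer: string;         // AI-extracted summary of what was said
  confidence: "high" | "medium" | "low";
  messageIndices: number[];          // transcript turns that justify the answer
}

// Triage helper: surface the answers worth checking against the transcript.
function needsReview(all: StructuredAnswer[]): StructuredAnswer[] {
  return all.filter((a) => a.confidence === "low");
}

const answers: StructuredAnswer[] = [
  { questionId: "q1", value: 4, qualitativeAnswer: "Rates ease of use 4/5",
    confidence: "high", messageIndices: [3] },
  { questionId: "q2", value: null, qualitativeAnswer: "Frustrated by onboarding emails",
    confidence: "low", messageIndices: [7, 8] },
];
// needsReview(answers) flags only q2 for a transcript check.
```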

What to do at this layer:

  • Scan confidence ratings. Anything marked "low" is worth checking against the transcript.
  • For scale and choice questions, the structured value flows straight into the cross-interview report — no extra work needed.
  • For open-ended questions, the qualitative answer is the AI's extracted summary; the theme codes (layer 3) are the analytical labels you'll cluster across interviews.

This per-question structure is only possible because Koji uses the 6 structured question types. Traditional transcript analysis can't separate "what they said about pricing" from "what they said about onboarding" without manual coding — Koji does it automatically because every question carries a stable ID that flows from brief → AI prompt → conversation → analysis → report.
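Conceptually, that stable ID turns topic separation into a group-by rather than a manual coding pass. A sketch with hypothetical data (names and IDs are made up for illustration):

```typescript
// Because every answer carries the brief's question ID, "what they said
// about pricing" across interviews is a lookup, not a coding exercise.
// Illustrative only; not Koji's internal code.

interface AnswerRow {
  interviewId: string;
  questionId: string;
  qualitativeAnswer: string;
}

function byQuestion(answerRows: AnswerRow[]): Map<string, AnswerRow[]> {
  const groups = new Map<string, AnswerRow[]>();
  for (const row of answerRows) {
    const bucket = groups.get(row.questionId) ?? [];
    bucket.push(row);
    groups.set(row.questionId, bucket);
  }
  return groups;
}

const rows: AnswerRow[] = [
  { interviewId: "i1", questionId: "pricing", qualitativeAnswer: "Too expensive for solo use" },
  { interviewId: "i2", questionId: "onboarding", qualitativeAnswer: "Setup was confusing" },
  { interviewId: "i2", questionId: "pricing", qualitativeAnswer: "Fair at the team tier" },
];
// byQuestion(rows).get("pricing") returns two rows; "onboarding" returns one.
```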

Layer 3: read auto-generated theme codes

For every open-ended question, Koji generates cycle-1 theme codes: short 2-5 word labels in the study language, grounded in verbatim respondent quotes. Each theme has:

  • A label like "Convenience preference" or "Wants relaxation" (the analyst-paraphrased topic)
  • A kind: descriptive (most codes) or in_vivo (preserves the participant's specific framing)
  • A supportingQuote — verbatim respondent words in their original language, preserved character-by-character including filler words
  • A messageIndex pointing to the exact transcript message that grounds the code
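A single theme code, shaped per the bullets above (field names are illustrative, and the sample values are invented):

```typescript
// Sketch of one auto-generated theme code. The schema is an assumption
// that mirrors the bullets above, not Koji's published API.

interface ThemeCode {
  label: string;                     // 2-5 word analyst-style label
  kind: "descriptive" | "in_vivo";   // in_vivo preserves the participant's framing
  supportingQuote: string;           // verbatim, original language, fillers kept
  messageIndex: number;              // transcript message that grounds the code
}

const theme: ThemeCode = {
  label: "Onboarding friction",
  kind: "descriptive",
  supportingQuote: "honestly I, uh, gave up halfway through the setup emails",
  messageIndex: 12,
};
```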

What to do at this layer:

  • Skim the themes for each open-ended question. They're the qualitative DNA of the interview.
  • Click any theme to see the supporting quote and the message in transcript context. This is your "read the original" escape hatch.
  • If a theme feels off, you can edit or remove it — but in practice, server-side validation drops hallucinations before they reach you (themes whose supportingQuote can't be matched against a real message are filtered out).
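That grounding check can be pictured as a filter that keeps only themes whose quote is actually found in the transcript. A simplified sketch (the real matching is presumably more tolerant than exact substring):

```typescript
// Sketch of the hallucination filter: a theme survives only if its
// supporting quote appears in the message it points at. Exact-substring
// matching is a simplifying assumption for illustration.

interface Theme { label: string; supportingQuote: string; messageIndex: number; }

function dropUngrounded(candidates: Theme[], transcript: string[]): Theme[] {
  return candidates.filter((t) =>
    transcript[t.messageIndex]?.includes(t.supportingQuote)
  );
}

const messages = ["Hi!", "I mostly use it to unwind after work"];
const themes: Theme[] = [
  { label: "Wants relaxation", supportingQuote: "unwind after work", messageIndex: 1 },
  { label: "Price sensitivity", supportingQuote: "way too pricey", messageIndex: 1 }, // no match: dropped
];
// dropUngrounded(themes, messages) keeps only "Wants relaxation".
```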

The next analytical step — clustering similar themes into a canonical codebook across all respondents — happens automatically in the report (layer 4). You don't have to do it.

Layer 4: the cross-interview research report

Once you have a few completed interviews, open the research report for the study. This is where analysis becomes synthesis:

  • Per-question aggregation — scale distributions, choice frequencies, ranking averages, all visualized
  • Theme clustering (cycle-2 axial coding) — near-duplicate themes from layer 3 are merged into a canonical codebook per question, with prevalence percentages
  • Strongest quotes — Koji surfaces the most representative verbatim quotes for each cluster
  • Segment cuts — if your study has segments (plan tier, persona, recruitment source), you can slice every theme by segment
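Prevalence in the clustered codebook is simple to picture: the share of respondents whose interview contains at least one theme in a cluster. A sketch with made-up data:

```typescript
// Sketch of cycle-2 prevalence: what fraction of respondents hit each
// canonical theme cluster. Data, names, and shapes are illustrative.

type ClusteredTheme = { respondentId: string; cluster: string };

function prevalence(
  clustered: ClusteredTheme[],
  totalRespondents: number
): Map<string, number> {
  // Count each respondent at most once per cluster.
  const seen = new Map<string, Set<string>>();
  for (const { respondentId, cluster } of clustered) {
    const s = seen.get(cluster) ?? new Set<string>();
    s.add(respondentId);
    seen.set(cluster, s);
  }
  const out = new Map<string, number>();
  seen.forEach((ids, cluster) => out.set(cluster, ids.size / totalRespondents));
  return out;
}

const rows: ClusteredTheme[] = [
  { respondentId: "r1", cluster: "Onboarding friction" },
  { respondentId: "r2", cluster: "Onboarding friction" },
  { respondentId: "r2", cluster: "Trust in platform" },
  { respondentId: "r3", cluster: "Onboarding friction" },
];
// With 4 respondents: Onboarding friction = 0.75, Trust in platform = 0.25.
```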

This is the layer most stakeholders should see. It's also the one Koji is best at, because aggregation is where machine analysis most clearly outperforms manual work: a human researcher analyzing 30 interviews makes inconsistent coding decisions across the set, while Koji applies the same grounded logic to every respondent.

See generating research reports and reading your research report for the full report walkthrough.

When to dig into the raw transcript

The auto-analysis covers ~80% of typical research work. You should escalate to the raw transcript when:

  • A confidence rating is low on a strategic question
  • The quality score rationale flags a surprise (e.g., "respondent contradicted earlier answer")
  • A cluster in the report has only 2-3 respondents — small clusters need verification before they become a finding
  • You're writing a stakeholder narrative and want to find the strongest possible quote rather than the auto-selected one

The transcript view highlights the exact messages that ground each theme and structured answer, so you're jumping straight to the relevant passage, not skimming the whole conversation.

Real-world workflow: from finished interview to insight

A typical analysis session in Koji looks like:

  1. Glance at the dashboard — see new completed interviews and their quality scores
  2. Skim low scores (rare) — drop the bad ones, note any guide-level issues
  3. Open the report — see how the new data shifted distributions and themes
  4. Click 2-3 surprising themes — read the supporting quotes in transcript context
  5. Tag key quotes for stakeholder share-out
  6. Decide next steps — saturate, change guide, or move to a new segment

Total time per batch of 10-20 interviews: typically 30-45 minutes. Compare that to a manual qualitative analyst spending 2-3 days on the same volume.

This 10x time advantage is the main reason research teams move to AI-native platforms like Koji — it's not that AI replaces analytical judgment, it's that it removes the mechanical work that crowded judgment out.

Common analysis mistakes

  • Skipping the quality score. Treating all interviews as equally valid pollutes your themes. Trust the gate.
  • Over-trusting low-confidence answers. Always check transcripts for "low" confidence on strategic questions.
  • Ignoring the report and reading raw transcripts. You're re-doing work the system already did. Start at the report; drill down only when needed.
  • Acting on single-interview themes. Wait for 2+ respondents in a cluster before treating it as a finding.
  • Editing themes too aggressively. The auto-coding is grounded; manual edits often unintentionally lose the verbatim anchor.

Related Articles

How to Read Your Koji Research Report: A Section-by-Section Guide

A complete walkthrough of every section in a Koji research report — from the overview and themes to quantitative charts, key quotes, and Insights Chat — so you can extract maximum value from your findings.

Generating Research Reports

Create comprehensive aggregate reports across all your interviews — including summaries, themes, recommendations, and statistics.

Understanding Themes & Patterns

Learn how Koji identifies recurring themes across interviews and how to use them for decision-making.

How to Analyze Interview Transcripts with AI: From Raw Conversations to Actionable Insights

A complete guide to AI-powered interview transcript analysis — how it works, where it outperforms manual methods, and how Koji automates the entire pipeline from conversation to published report.

Turning Interviews Into Insights: From Raw Data to Action

A complete guide to transforming raw interview transcripts into structured, actionable insights — covering manual analysis, AI-assisted workflows, and frameworks for prioritizing findings.

How to Analyze Qualitative Data: From Raw Interviews to Actionable Insights

A step-by-step guide to qualitative data analysis — from reviewing raw transcripts to synthesizing themes, generating insights, and presenting findings that teams act on.

Sentiment Analysis in Qualitative Research: Understanding Emotional Patterns

Learn how to identify and interpret emotional patterns in qualitative interview data — and why emotional insights predict behavior better than stated opinions.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.