{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-13T13:21:57.686Z"},"content":[{"type":"documentation","id":"7edd2897-c7f9-4172-b946-b829a6ab89a0","slug":"analyzing-ai-moderated-interview-results","title":"How to Analyze AI-Moderated Interview Results","url":"https://www.koji.so/docs/analyzing-ai-moderated-interview-results","summary":"AI-moderated interview results in Koji come pre-analyzed: each transcript has a quality score (1-5), a per-question structured answer with confidence rating, auto-generated theme codes grounded in verbatim quotes, and a cross-interview report that aggregates findings across all respondents. The work that takes manual researchers 6-8 hours per interview takes Koji minutes — you start at the synthesized layer, not the raw transcript.","content":"## What it means to \"analyze\" an AI-moderated interview\n\nWhen you run a traditional user interview, you (the researcher) record it, transcribe it, code it, write a summary, and finally aggregate it with other interviews. Analysis takes **6 to 8 hours per interview** — and 80% of that time is mechanical work, not insight generation.\n\nWith AI-moderated interviews on platforms like Koji, that ratio inverts. 
By the time the participant clicks \"finish\", Koji has already:\n\n- Scored the interview's quality on three dimensions\n- Mapped each answer back to the specific question in your brief\n- Generated cycle-1 theme codes (2-5 word labels grounded in verbatim quotes)\n- Updated the cross-interview research report with the new data\n\nSo \"analyzing AI-moderated interview results\" doesn't mean coding transcripts from scratch — it means **reading the synthesized output**, validating the surprises, and deciding what to do next.\n\n## The 4 layers of analysis you get automatically\n\nKoji produces four analytical layers for every interview, ordered from \"first thing you see\" to \"deepest detail\":\n\n1. **Quality score** — a 1-5 rating with rationale, used to flag bad interviews before they pollute your data.\n2. **Structured answers per question** — each question from your brief gets a structured value (number, choice, ranking) plus a qualitative answer extracted from the conversation.\n3. **Theme codes per open-ended answer** — short labels like \"Onboarding friction\" or \"Trust in platform\", each grounded in a specific message in the transcript.\n4. **Cross-interview report** — distribution charts, theme clusters, and a synthesized narrative across all respondents in the study.\n\nThe rest of this guide walks through each layer, covering what to look for and when to escalate to the raw transcript.\n\n## Layer 1: read the quality score first\n\nEvery completed interview gets an overall **quality score from 1 to 5**, broken down into:\n\n- **Relevance** — How much of the conversation was on-topic for the research goal\n- **Depth** — How substantive the respondent's answers were (single-word vs. story-driven)\n- **Coverage** — How many key questions/topics from the brief were actually addressed\n\nA score of **3 or higher** means trustworthy data. 
Conversations scoring below 3 are treated as low-quality and don't consume credits — Koji's quality gate filters them automatically so you're not paying for noise.\n\n**What to do at this layer:**\n\n- Glance at the score and rationale. If everything's 4-5, move to layer 2.\n- If a score is 2 or lower, read the rationale — sometimes a low score reflects a hostile or rushed respondent, sometimes it reveals a problem with the guide itself (e.g., a confusing question causing drop-off).\n- If many interviews score low in **coverage**, your guide is too long for the available time. Trim it down to the core questions.\n\n## Layer 2: structured answers per question\n\nOpen the interview detail view. Each question from your brief has a **StructuredAnswer** with:\n\n- The **structured value** — a number for scale questions, a string for single-choice and yes/no, a string array for multiple-choice and ranking, null for open-ended\n- The **qualitative answer** — the AI's extracted summary of what the respondent said\n- A **confidence rating** (high / medium / low) on the extraction accuracy\n- The **message indices** linking back to the exact transcript turns that justify the answer\n\n**What to do at this layer:**\n\n- Scan confidence ratings. Anything marked \"low\" is worth checking against the transcript.\n- For scale and choice questions, the structured value flows straight into the cross-interview report — no extra work needed.\n- For open-ended questions, the qualitative answer is the AI's extracted summary; the theme codes (layer 3) are the analytical labels you'll cluster across interviews.\n\nThis per-question structure is only possible because Koji uses the [6 structured question types](/docs/structured-questions-guide). 
Traditional transcript analysis can't separate \"what they said about pricing\" from \"what they said about onboarding\" without manual coding — Koji does it automatically because every question carries a stable ID that flows from brief → AI prompt → conversation → analysis → report.\n\n## Layer 3: read auto-generated theme codes\n\nFor every open-ended question, Koji generates **cycle-1 theme codes** — short 2-5 word labels in English (the study language) grounded in verbatim respondent quotes. Each theme has:\n\n- A **label** like \"Convenience preference\" or \"Wants relaxation\" (the analyst-paraphrased topic)\n- A **kind**: `descriptive` (most codes) or `in_vivo` (preserves the participant's specific framing)\n- A **supportingQuote** — verbatim respondent words in their original language, preserved character-by-character including filler words\n- A **messageIndex** pointing to the exact transcript message that grounds the code\n\n**What to do at this layer:**\n\n- Skim the themes for each open-ended question. They're the qualitative DNA of the interview.\n- Click any theme to see the supporting quote and the message in transcript context. This is your \"read the original\" escape hatch.\n- If a theme feels off, you can edit or remove it — but in practice, server-side validation drops hallucinations before they reach you (themes whose supportingQuote can't be matched against a real message are filtered out).\n\nThe next analytical step — clustering similar themes into a canonical codebook across all respondents — happens automatically in the report (layer 4). You don't have to do it.\n\n## Layer 4: the cross-interview research report\n\nOnce you have a few completed interviews, open the **research report** for the study. 
This is where analysis becomes synthesis:\n\n- **Per-question aggregation** — scale distributions, choice frequencies, ranking averages, all visualized\n- **Theme clustering (cycle-2 axial coding)** — near-duplicate themes from layer 3 are merged into a canonical codebook per question, with prevalence percentages\n- **Strongest quotes** — Koji surfaces the most representative verbatim quotes for each cluster\n- **Segment cuts** — if your study has segments (plan tier, persona, recruitment source), you can slice every theme by segment\n\nThis is the layer most stakeholders should see — it's also the one Koji is best at, because aggregation is where machine analysis dominates manual. A human researcher analyzing 30 interviews makes inconsistent coding decisions across the set; Koji applies the same grounded logic to every respondent.\n\nSee [generating research reports](/docs/generating-research-reports) and [reading your research report](/docs/reading-your-research-report) for the full report walkthrough.\n\n## When to dig into the raw transcript\n\nThe auto-analysis covers ~80% of typical research work. You should escalate to the raw transcript when:\n\n- A confidence rating is **low** on a strategic question\n- The quality score rationale flags a surprise (e.g., \"respondent contradicted earlier answer\")\n- A cluster in the report has only **2-3 respondents** — small clusters need verification before they become a finding\n- You're writing a stakeholder narrative and want to find the **strongest possible quote** rather than the auto-selected one\n\nThe transcript view highlights the exact messages that ground each theme and structured answer, so you're jumping straight to the relevant passage, not skimming the whole conversation.\n\n## Real-world workflow: from finished interview to insight\n\nA typical analysis session in Koji looks like:\n\n1. **Glance at the dashboard** — see new completed interviews and their quality scores\n2. 
**Skim low scores** (rare) — drop the bad ones, note any guide-level issues\n3. **Open the report** — see how the new data shifted distributions and themes\n4. **Click 2-3 surprising themes** — read the supporting quotes in transcript context\n5. **Tag key quotes for stakeholder share-out**\n6. **Decide next steps** — saturate, change guide, or move to a new segment\n\nTotal time per batch of 10-20 interviews: typically 30-45 minutes. Compare that to a manual qualitative analyst who, at 6-8 hours per interview, would spend 60+ hours on the same volume.\n\nThis order-of-magnitude time advantage is the main reason research teams move to AI-native platforms like Koji — it's not that AI replaces analytical judgment, it's that it removes the mechanical work that crowded judgment out.\n\n## Common analysis mistakes\n\n- **Skipping the quality score.** Treating all interviews as equally valid pollutes your themes. Trust the gate.\n- **Over-trusting low-confidence answers.** Always check transcripts for \"low\" confidence on strategic questions.\n- **Ignoring the report and reading raw transcripts.** You're re-doing work the system already did. 
Start at the report; drill down only when needed.\n- **Acting on single-interview themes.** Wait for 2+ respondents in a cluster before treating it as a finding.\n- **Editing themes too aggressively.** The auto-coding is grounded; manual edits often unintentionally lose the verbatim anchor.\n\n## Related Resources\n\n- [Structured Questions Guide](/docs/structured-questions-guide) — how question types drive per-question analysis\n- [AI Transcript Analysis Guide](/docs/ai-transcript-analysis-guide) — deeper dive into transcript analysis\n- [Generating Research Reports](/docs/generating-research-reports) — how the cross-interview report is built\n- [Reading Your Research Report](/docs/reading-your-research-report) — section-by-section walkthrough\n- [Understanding Themes and Patterns](/docs/understanding-themes-patterns) — theme clustering explained\n- [Turning Interviews into Insights](/docs/turning-interviews-into-insights) — synthesis workflows\n- [How to Analyze Qualitative Data](/docs/how-to-analyze-qualitative-data) — foundational method primer\n- [Sentiment Analysis in Interviews](/docs/sentiment-analysis-interviews) — emotion-level analysis","category":"Analysis & Synthesis","lastModified":"2026-05-13T03:17:16.162598+00:00","metaTitle":"How to Analyze AI-Moderated Interview Results | Koji","metaDescription":"Step-by-step guide to analyzing AI-moderated interview results — quality scoring, structured answers, theme coding, cross-interview reports, and what to look for first.","keywords":["analyzing interview results","analyze interview results","ai interview analysis","analyze user interviews","interview analysis ai","analyze user research results","interview transcript analysis","ai moderated interview analysis"],"aiSummary":"AI-moderated interview results in Koji come pre-analyzed: each transcript has a quality score (1-5), a per-question structured answer with confidence rating, auto-generated theme codes grounded in verbatim quotes, and a cross-interview 
report that aggregates findings across all respondents. The work that takes manual researchers 6-8 hours per interview takes Koji minutes — you start at the synthesized layer, not the raw transcript.","aiPrerequisites":["At least one completed Koji interview","5-10 minutes per interview to review","Optional: a research goal or hypothesis to test against"],"aiLearningOutcomes":["How to read Koji's quality score and what each dimension means","How to interpret per-question structured answers and confidence ratings","How to use auto-generated themes for qualitative coding","How to read the cross-interview research report","When to dig into the raw transcript vs. trust the auto-analysis"],"aiDifficulty":"intermediate","aiEstimatedTime":"8 minutes"}],"pagination":{"total":1,"returned":1,"offset":0}}