{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-13T13:27:27.348Z"},"content":[{"type":"documentation","id":"e7a5a17d-567c-418a-8652-2b220bbf5c94","slug":"analyzing-interview-results","title":"How to Analyze Interview Results: From AI-Moderated Sessions to Decisions","url":"https://www.koji.so/docs/analyzing-interview-results","summary":"Koji collapses traditional interview analysis (2-4 weeks of transcribing, coding, clustering, validating, and writing up) into a continuous workflow where themes emerge as responses arrive. Read the report in four layers: executive summary first, then structured question charts, then themes with quote provenance checks, then quality scores to filter signal from noise. Each interview is scored 1-5 on response depth, coherence, and engagement — use score 4-5 as primary evidence and exclude 1-2 from thematic analysis. Insights Chat lets you query the dataset in plain English for follow-ups you did not think to ask. The biggest analysis failure is stopping at theme extraction — every theme needs an action, a what-if-wrong size, and a next test before it becomes a decision.","content":"# How to Analyze Interview Results: From AI-Moderated Sessions to Decisions\n\n**Bottom line:** Traditional interview analysis is a 2–4 week ritual — transcripts, color-coded spreadsheets, affinity diagrams, and a slide deck shared with stakeholders who skim the first three pages. Koji collapses this into a continuous workflow: themes emerge as responses arrive, structured questions produce charts in real time, and an Insights Chat lets you ask follow-up questions of the entire dataset in plain English. Analyzing 30 interviews in Koji takes about 90 minutes — the same work historically took 30+ hours.\n\nThe shift matters more than the time savings. 
As an [MIT Sloan Management Review article on generative AI in consumer insight](https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/) put it: *\"LLMs enable continuous, real-time insight generation.\"* The boundary between collecting and analyzing qualitative data is disappearing — and the researchers who win are the ones who learn to read AI-moderated reports critically and translate them into decisions, not the ones who try to recreate the old workflow with new tools.\n\nThis guide covers how to read every layer of a Koji interview report, how to filter for quality, how to move from themes to product decisions, and the five most common analysis mistakes.\n\n## What Analyzing Interview Results Used To Mean\n\nThe traditional analysis workflow had five steps, each of which took days:\n\n1. **Transcribe** — manually or via a transcription service, $1–2/minute\n2. **Code line by line** — assign labels to segments of text, usually in NVivo, Dovetail, or a spreadsheet\n3. **Cluster codes into themes** — affinity diagramming on a Miro board\n4. **Validate themes** — re-read transcripts checking for examples\n5. **Write up** — slides or a report, then a stakeholder readout meeting\n\nA 2025 industry survey by Lumivero found that traditional transcription and coding \"could take weeks for a large research project.\" ([Lumivero, State of AI in qualitative research](https://lumivero.com/resources/blog/state-of-ai-in-qualitative-research/)) The Harvard Business Review reports that AI-moderated platforms compress that timeline by 60% or more. ([HBR, How AI Helps Scale Qualitative Customer Research](https://hbr.org/2026/04/how-ai-helps-scale-qualitative-customer-research))\n\nThe cost wasn't just time. 
It was that by the time the analysis landed, the product team had already made the decision the research was supposed to inform.\n\n## What Koji Changes\n\nKoji's AI-moderated interviews produce four layers of structured output that arrive as the data is collected, not weeks later:\n\n1. **Executive summary** — auto-generated synthesis updated after every new completion\n2. **Structured question charts** — quantitative charts for scale, choice, ranking, and yes_no questions\n3. **Themes + verbatim quotes** — automatically extracted from open-ended responses, with citations to the source interview\n4. **Quality scores (1–5)** — every interview is scored on response depth, coherence, and follow-up engagement\n\nEach layer answers a different question. Reading them in the right order is the meta-skill of modern research analysis.\n\n## Layer 1: The Executive Summary\n\nOpen the report. Read the executive summary first.\n\nThis is a 200–400 word synthesis that names the top 3–5 findings, each backed by frequency data (\"23 of 30 participants mentioned...\"), the direction of the signal (positive, negative, mixed), and a representative quote.\n\nUse the executive summary to answer one question: *Does this match what I expected?*\n\n- **If yes** — your hypothesis is validated. Skim the rest to find the supporting evidence and any nuance.\n- **If no** — slow down. The disconfirming evidence is usually the most valuable part of any study. 
Drop into Layer 3 (themes) to figure out *why* the signal contradicts your prior.\n\nNN/G's research lead notes that *\"the most useful interviews are the ones that change your mind.\"* The executive summary is the fastest way to learn whether yours did.\n\n## Layer 2: Structured Question Charts\n\nIf you customized your guide with [structured questions](/docs/structured-questions-guide), each one produces a chart in the report:\n\n| Question type | Chart | What to read |\n|---|---|---|\n| **scale** (NPS, CSAT) | Distribution + mean/median | Look at the shape, not just the average. A bimodal distribution (lots of 9s and 2s) is a different story than a 5.5 average that looks neutral. |\n| **single_choice** | Frequency bar chart | The top option is the headline. The second and third options are the nuance. |\n| **multiple_choice** | Stacked frequency | Calculate co-occurrence — which options were chosen together? That's where the segmentation lives. |\n| **ranking** | Avg position + ranked list | The top 2 by average ranking are the priorities. The bottom 2 are what to deprioritize, not just \"lower priority.\" |\n| **yes_no** | Pie/donut | The 80% threshold matters. If 80%+ say yes to *would you stop using this if X were missing?*, treat X as a must-have. |\n\nThe big leverage point here: structured charts let you make claims like *\"73% of respondents ranked pricing transparency as their top concern\"* — which lands in a stakeholder room very differently than *\"some participants mentioned pricing.\"*\n\n## Layer 3: Themes and Verbatim Quotes\n\nFor every open-ended question, Koji extracts themes and quotes them with citations. A theme has three components:\n\n1. **Name** — the AI's label for the pattern (\"Onboarding friction at step 3\")\n2. **Frequency** — how many participants mentioned it (e.g., 18 of 30)\n3. **Representative quotes** — 3–5 verbatim citations from the source interviews\n\nRead themes critically. 
The AI is good at clustering, but it doesn't know what *matters*. Three things to check on every theme:\n\n**Check 1: Frequency vs intensity.** A theme mentioned by 22 of 30 participants but only in passing is weaker than a theme mentioned by 8 participants with strong emotional language. Both matter, but for different decisions.\n\n**Check 2: Quote provenance.** Click into 2–3 quotes per theme. Read the surrounding context. Sometimes the AI extracts a quote that *sounds* like the theme but means something different in context.\n\n**Check 3: Disconfirming evidence.** Ask: *\"Who in the study would disagree with this theme?\"* If you can't name anyone, the AI may have over-clustered. Use Insights Chat to ask: *\"Which participants pushed back on the onboarding friction theme?\"*\n\n## Layer 4: Quality Scores\n\nEvery Koji interview is scored 1–5 on response quality based on:\n\n- Average response length\n- Coherence (does the response actually answer the question?)\n- Engagement with follow-up probing (did the participant elaborate when asked?)\n- Specificity (concrete examples vs vague generalizations)\n\nThe score matters because not all responses deserve equal weight. A score-1 interview with two-word answers shouldn't carry the same analytic weight as a score-5 interview with rich, specific stories.\n\nThe recommended filter for analysis:\n\n- **Score 4–5:** Primary evidence. Quote in your readout.\n- **Score 3:** Supporting evidence. Include in frequency counts but verify before quoting.\n- **Score 1–2:** Use only for aggregate completion and drop-off metrics. Exclude from thematic analysis.\n\nA typical study has 60–70% of interviews scoring 4 or 5, 20–30% scoring 3, and 5–10% scoring 1–2. 
If your distribution is worse than that, the issue is usually a question that's hard to answer (rewrite it) or a screening problem (you recruited the wrong audience).\n\n## Layer 5: Insights Chat (The New Move)\n\nThe biggest workflow change in AI-moderated research is the ability to *ask the dataset questions* in plain English after the study ends. Koji's [Insights Chat](/docs/insights-chat-guide) is a conversational interface that searches across all transcripts and structured responses to answer follow-ups you didn't think to ask in the original guide.\n\nCommon Insights Chat prompts that produce decision-grade output:\n\n- *\"Which 5 participants gave the most useful feedback on pricing?\"* — generates a shortlist for follow-up interviews\n- *\"Show me every quote where a participant said they almost churned\"* — surfaces churn signal across the dataset\n- *\"What did Enterprise customers say differently from SMB?\"* — segments responses without setting up filter groups\n- *\"Did anyone mention competitor X?\"* — finds competitive intel buried in open responses\n- *\"Summarize the top 3 reasons participants chose us\"* — quick win-loss synthesis\n\nInsights Chat is the fastest way to test hypotheses on completed data. Use it freely — it costs nothing and the queries don't affect the dataset.\n\n## From Themes to Decisions\n\nThemes are not decisions. The most common analysis failure is to stop at theme extraction and present the report to stakeholders as if the work is done.\n\nA theme becomes a decision when you answer three follow-up questions:\n\n1. **What action does this theme imply?** (\"Rewrite onboarding step 3\" is an action. \"Customers feel friction in onboarding\" is not.)\n2. **What would change if we were wrong?** (If the theme is wrong, what's the cost of the action you would have taken? This sizes the bet.)\n3. 
**What's the next test?** (Even validated themes deserve a follow-up — a usability test, a quant survey, an A/B test in production.)\n\nThe Koji report's \"Insights\" section is designed to answer #1 — every theme should have at least one action item next to it. If your report has rich themes but no actions, push back on yourself before sharing with stakeholders.\n\n## The Five Most Common Analysis Mistakes\n\n1. **Reading the report and stopping.** The report is the input to analysis, not the output. The output is a decision.\n2. **Treating all responses equally.** A score-2 interview should not have the same weight as a score-5 interview in your synthesis.\n3. **Ignoring disconfirming evidence.** The participants who *disagree* with your dominant theme are usually the most valuable. Quote them.\n4. **Confusing frequency with importance.** A theme mentioned by 22 people in passing may matter less than a theme mentioned by 6 people with high emotional intensity.\n5. **Skipping Insights Chat.** Static reports answer the questions you asked. Insights Chat answers the questions you didn't think to ask. Spend 15 minutes querying the dataset before you finalize your readout.\n\n## How Koji Compresses the Analysis Timeline\n\nThe data point that matters most for product teams: a 30-participant study traditionally produces a stakeholder-ready synthesis in 2–4 weeks. The same study in Koji produces a continuously updated synthesis as responses arrive, with the final readout ready 90 minutes after the last interview completes.\n\nThat compression unlocks two product behaviors:\n\n1. **Faster iteration loops.** A team that can analyze in 90 minutes instead of 4 weeks can run 10 studies per quarter instead of 2.\n2. **Earlier disconfirmation.** When themes emerge as responses arrive, you can spot a flawed assumption after interview 8 and rewrite a question before interview 9. 
The traditional workflow forced you to wait until the end.\n\nBoth behaviors are visible in Koji's [real-time research insights](/docs/real-time-research-insights) view, which streams themes as new completions land.\n\n## When to Slow Down\n\nSpeed isn't always a virtue. There are three cases where you should deliberately slow down the analysis:\n\n1. **High-stakes decisions** — pricing changes, M&A research, strategic pivots. Spend the extra week.\n2. **Disconfirming evidence is dominant** — if the report surprises you, audit the data before acting on the surprise.\n3. **The study is contradictory** — different segments giving opposite signals usually means you've found a real segmentation, but the analysis to confirm it is slow work.\n\nFor everything else — feature prioritization, copy testing, onboarding optimization, churn diagnostics — the 90-minute analysis is the right move.\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the six chart-ready question types that produce quantitative analysis\n- [Insights Chat Guide](/docs/insights-chat-guide) — how to query your research dataset in plain English\n- [Thematic Analysis Guide](/docs/thematic-analysis-guide) — the six-step thematic analysis method\n- [Turning Interviews Into Insights](/docs/turning-interviews-into-insights) — from raw data to decision-grade output\n- [Coding Qualitative Data](/docs/coding-qualitative-data) — the open, axial, and selective coding techniques\n- [Reading Your Research Report](/docs/reading-your-research-report) — a walkthrough of every section of a Koji report\n","category":"Analysis & Synthesis","lastModified":"2026-05-13T03:16:56.386556+00:00","metaTitle":"How to Analyze Interview Results from AI-Moderated Sessions (2026 Guide)","metaDescription":"Analyze interview results in 90 minutes instead of 2-4 weeks. 
Read the four-layer Koji report — executive summary, structured charts, themes & quotes, quality scores — then use Insights Chat to query the dataset. Includes the from-themes-to-decisions framework and the five most common analysis mistakes.","keywords":["analyzing interview results","interview analysis","how to analyze interviews","interview data analysis","AI interview analysis","qualitative interview analysis","interview synthesis","research analysis workflow","reading research reports","interview themes"],"aiSummary":"Koji collapses traditional interview analysis (2-4 weeks of transcribing, coding, clustering, validating, and writing up) into a continuous workflow where themes emerge as responses arrive. Read the report in four layers: executive summary first, then structured question charts, then themes with quote provenance checks, then quality scores to filter signal from noise. Each interview is scored 1-5 on response depth, coherence, and engagement — use score 4-5 as primary evidence and exclude 1-2 from thematic analysis. Insights Chat lets you query the dataset in plain English for follow-ups you did not think to ask. The biggest analysis failure is stopping at theme extraction — every theme needs an action, a what-if-wrong size, and a next test before it becomes a decision.","aiPrerequisites":["Completed at least one Koji study with responses","Basic understanding of qualitative analysis concepts"],"aiLearningOutcomes":["Read all four layers of a Koji interview report in the right order","Filter responses by quality score for cleaner analysis","Use Insights Chat to query a dataset for follow-up questions","Translate themes into decisions using the action/what-if-wrong/next-test framework","Avoid the five most common interview analysis mistakes"],"aiDifficulty":"intermediate","aiEstimatedTime":"13 minutes"}],"pagination":{"total":1,"returned":1,"offset":0}}