How to Read Your Koji Research Report: A Section-by-Section Guide
A complete walkthrough of every section in a Koji research report — from the overview and themes to quantitative charts, key quotes, and Insights Chat — so you can extract maximum value from your findings.
Your Koji research report is the final output of your study — a synthesized, AI-generated document that transforms raw interview data into structured insights. This guide walks through every section of the report, explains what each element means, and shows you how to extract maximum value from your findings.
What Is the Research Report?
After your study collects enough responses (typically 5 or more completed interviews), Koji automatically generates a research report. The report synthesizes all interview data — qualitative themes, quantitative structured answers, sentiment, and key quotes — into a readable document you can review, share, and publish.
You don''t need to read every transcript. The report surfaces the patterns, highlights the most illuminating quotes, and presents quantitative data in chart form. Then you can dive into specific transcripts to explore anything that needs more context.
To generate or refresh a report, see Generating Research Reports. To share your report with stakeholders, see Publishing and Sharing Reports.
Report Overview Section
The first section of every report is the Overview, which provides context for everything that follows.
Research Goals
A restatement of your research brief — what you were trying to learn and what decisions this research was meant to inform. This anchors the entire report in the original question you set out to answer.
Interview Summary
Key stats at a glance:
- Total interviews completed
- Average interview duration
- Interview mode (voice, text, or mixed)
- Quality distribution — how many interviews scored 3, 4, or 5 on Koji's quality scale
Quality scores matter because Koji only includes interviews that meet a minimum quality threshold (score of 3 or higher on a 1–5 scale) in your report analysis. Low-quality responses — incomplete interviews, very short sessions, or off-topic conversations — are filtered out automatically, so your findings reflect genuine research conversations.
See Understanding Quality Scores for how quality is calculated.
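The quality filter described above is simple to reason about if you work with exported data yourself. The sketch below shows the same inclusion rule applied to a list of interviews; the field names (`quality_score`, `duration_min`) are illustrative placeholders, not Koji's actual export schema.

```python
# Sketch of the quality-filtering rule: only interviews scoring 3 or higher
# on the 1-5 scale are included in report analysis. Field names here are
# hypothetical, not Koji's real export schema.

interviews = [
    {"id": "a1", "quality_score": 5, "duration_min": 22},
    {"id": "a2", "quality_score": 2, "duration_min": 3},   # very short session
    {"id": "a3", "quality_score": 4, "duration_min": 18},
    {"id": "a4", "quality_score": 1, "duration_min": 1},   # off-topic
    {"id": "a5", "quality_score": 3, "duration_min": 15},
]

MIN_QUALITY = 3  # the documented inclusion threshold

included = [i for i in interviews if i["quality_score"] >= MIN_QUALITY]
filtered_out = [i for i in interviews if i["quality_score"] < MIN_QUALITY]

print(f"Included {len(included)} of {len(interviews)} interviews")
```

With this sample data, three of the five interviews clear the threshold, which is exactly the split the quality distribution in the report overview would show you.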
Key Takeaways
Three to five bullet-point findings that represent the most important patterns across all interviews. These are the top-line insights for an executive summary — generated by AI from the full dataset. They're designed to answer: "What did we learn, and what should we do about it?"
Themes Section
The Themes section is the heart of qualitative synthesis. Koji automatically groups insights from all interviews into thematic clusters based on what participants expressed.
How Themes Are Created
The AI reads all interview transcripts and identifies recurring ideas, concerns, opinions, and experiences. Themes emerge when multiple participants independently express similar things — not just using the same words, but expressing the same underlying idea.
For example, five participants might say: "The setup took too long," "The onboarding confused me," "I almost gave up during configuration," "Getting started was painful," and "The first hour was frustrating." These surface as a single theme: Onboarding friction.
Each theme card includes:
- Theme name — A concise label
- Theme description — What the theme represents across participants
- Evidence count — How many participants this theme appears in
- Representative quotes — Direct quotes from transcripts that illustrate the theme
Reading the Evidence Count
A theme that appears in 14 out of 15 interviews is a different signal than one that appears in 3 out of 15. The evidence count tells you how prevalent each theme is across your participant group.
Themes with high evidence counts (more than 60% of participants) are structural — they represent shared experiences that your whole user base likely has. Themes with lower counts may represent minority experiences that are still worth investigating.
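If you export your data, the prevalence reading above reduces to a one-line calculation: evidence count divided by total participants, with the 60% threshold from this guide separating structural themes from minority ones. The theme names and counts below are invented for illustration.

```python
# Sketch: classifying themes by prevalence (evidence_count / total interviews),
# using the ~60% "structural" threshold described in this guide.
# Theme names and counts are made-up sample data.

TOTAL_INTERVIEWS = 15

themes = {
    "Onboarding friction": 14,
    "Pricing confusion": 9,
    "Mobile bugs": 3,
}

def classify(evidence_count, total=TOTAL_INTERVIEWS):
    """Return (prevalence, label) for a theme's evidence count."""
    prevalence = evidence_count / total
    label = "structural" if prevalence > 0.60 else "minority"
    return prevalence, label

for name, count in themes.items():
    prevalence, label = classify(count)
    print(f"{name}: {count}/{TOTAL_INTERVIEWS} ({prevalence:.0%}) -> {label}")
```

A theme in 14 of 15 interviews (93%) comes out structural; one in 3 of 15 (20%) is flagged as a minority experience worth investigating rather than acting on immediately.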
Navigating from Themes to Transcripts
From any theme in the report, you can click through to the specific interview excerpts that contribute to it. This lets you read the full conversational context around any pattern you want to understand more deeply — going from the aggregate view to the individual voice.
Questions Section
If your study includes specific interview questions — which all structured studies do — the Questions section shows a breakdown of results for each one.
Open-Ended Questions
For open-ended questions, the report shows:
- Summary — A 2–3 sentence synthesis of how participants answered this question overall
- Key insights — The most common or significant responses
- Representative quotes — The most illuminating direct quotes from participants
Scale Questions
For scale questions (NPS, CSAT, satisfaction ratings), the report shows:
- Distribution chart — A bar chart showing how responses spread across the scale values
- Mean and median — Summary statistics across all participants
- Qualitative context — Patterns from probing follow-ups that explain the score distribution
A score distribution that clusters around 6–7/10 tells you something. The qualitative context tells you what — why participants aren't at 9 or 10. See the Scale Questions Guide for details on how scale questions work.
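The three elements of a scale-question breakdown (distribution, mean, median) are standard descriptive statistics. This sketch computes them over an invented set of 1–10 ratings, mirroring what the report's bar chart and summary line show.

```python
# Sketch of the summary statistics behind a scale-question section:
# response distribution, mean, and median. Scores are invented sample data.

from collections import Counter
from statistics import mean, median

scores = [6, 7, 6, 8, 7, 6, 9, 7, 5, 6, 7, 10, 6, 7, 4]

distribution = Counter(scores)  # maps score value -> response count
avg = mean(scores)
mid = median(scores)

# A tiny text bar chart, like the distribution chart in the report
for value in sorted(distribution):
    print(f"{value:>2}: {'#' * distribution[value]}")
print(f"mean={avg:.2f}  median={mid}")
```

Here the responses cluster at 6–7, which is precisely the kind of "good but not great" distribution where the qualitative probing context earns its keep.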
Choice Questions
For single choice and multiple choice questions, the report shows:
- Frequency bar chart (single choice) or stacked frequency chart (multiple choice)
- Response counts and percentages for each option
- Probing insights — Qualitative context from follow-up questions about the selection
Ranking Questions
For ranking questions, the report shows:
- Ranked list with average position — Each option's mean rank across all participants
- Distribution — How often each item was ranked 1st, 2nd, 3rd, etc., across all participants
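Average position is just the mean of each option's rank across participants. The sketch below computes it from a handful of made-up per-participant rankings (each list ordered from 1st to last); option names are illustrative.

```python
# Sketch: computing each option's average rank position from per-participant
# rankings. Rankings and option names are invented sample data.

rankings = [
    ["Speed", "Price", "Support"],   # this participant ranked Speed 1st
    ["Price", "Speed", "Support"],
    ["Speed", "Support", "Price"],
    ["Speed", "Price", "Support"],
]

positions = {}  # option -> list of 1-based rank positions
for ranking in rankings:
    for rank, option in enumerate(ranking, start=1):
        positions.setdefault(option, []).append(rank)

avg_rank = {opt: sum(p) / len(p) for opt, p in positions.items()}

# Sort ascending: lower average position means ranked higher overall
for option in sorted(avg_rank, key=avg_rank.get):
    print(f"{option}: average position {avg_rank[option]:.2f}")
```

The per-position counts in `positions` are also what the distribution view summarizes: how often each item landed 1st, 2nd, or 3rd.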
Yes/No Questions
For yes/no questions, the report shows:
- Pie/donut chart showing the percentage split between yes and no
- Probing insights — What participants in each answer direction said when probed further
See the Choice and Ranking Questions Guide for a full explanation of how these question types work.
Sentiment Analysis
Every interview is analyzed for overall sentiment: positive, negative, neutral, or mixed. The report shows a sentiment distribution across all interviews.
Sentiment isn't just about whether participants were happy or unhappy — it's a signal about what kind of data you collected. A churn research study with 60% negative sentiment is expected by design. A customer satisfaction study with 60% negative sentiment is a very different finding that demands attention.
You can filter the themes and quotes sections by sentiment to see how the picture changes between satisfied and dissatisfied participants — revealing whether problems are universal or segment-specific.
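The report does this filtering for you, but the underlying operations are a frequency count over sentiment labels plus a filter on them. This sketch uses invented interviews and quotes; the sentiment labels match the four categories above.

```python
# Sketch: sentiment distribution across interviews, plus filtering quotes by
# sentiment to compare satisfied and dissatisfied participants.
# All data here is invented sample data.

from collections import Counter

interviews = [
    {"id": "a1", "sentiment": "positive", "quote": "Setup was painless."},
    {"id": "a2", "sentiment": "negative", "quote": "I almost gave up."},
    {"id": "a3", "sentiment": "mixed",    "quote": "Great product, rough docs."},
    {"id": "a4", "sentiment": "negative", "quote": "Pricing felt unclear."},
    {"id": "a5", "sentiment": "positive", "quote": "Support answered fast."},
]

distribution = Counter(i["sentiment"] for i in interviews)

def quotes_by_sentiment(sentiment):
    """Return quotes only from interviews with the given sentiment label."""
    return [i["quote"] for i in interviews if i["sentiment"] == sentiment]

print(distribution)
print(quotes_by_sentiment("negative"))
```

Comparing `quotes_by_sentiment("positive")` against `quotes_by_sentiment("negative")` is the manual equivalent of the sentiment filter in the report: it shows whether a problem appears in both groups or only among dissatisfied participants.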
Key Quotes Section
The Key Quotes section surfaces the most memorable, specific, and illuminating quotes from across all interviews. These are selected by the AI for their research value — not for being positive or negative, but for being genuinely informative.
The best research quotes share three qualities:
- Specific — They describe a concrete experience, not a vague opinion
- Surprising — They challenge assumptions or reveal something unexpected
- Representative — They capture something multiple participants expressed differently
These quotes are what you'll use in design briefs, product discussions, roadmap presentations, and stakeholder meetings. They turn abstract findings into human voices.
Insights Chat
After your report generates, you can use Insights Chat to ask natural language questions about your data. Think of it as a research analyst who has read every single transcript.
Examples of questions you can ask:
- "What is the most common reason participants gave for switching from a competitor?"
- "Did any participants mention pricing as a concern?"
- "What did participants say about the mobile experience?"
- "Which themes appear most often in interviews with negative sentiment?"
- "What did participants who gave low NPS scores have in common?"
Insights Chat draws on both the structured data (charts, ratings, selections) and the full qualitative data (all transcripts) to answer your questions. It's especially useful for testing hypotheses that didn't surface in the main report — a specific competitor mention, a pattern you noticed in one transcript, or a question a stakeholder raised after seeing the top-line findings.
See Insights Chat Guide for full documentation.
Sharing Your Report
Once you're satisfied with your report, you can:
- Share a link — Generate a public or password-protected link to share with stakeholders who don''t have Koji accounts
- Publish to your docs — Make the report available at a permanent URL
- Export the data — Download interview data in CSV or JSON format for further analysis in external tools
See Publishing and Sharing Reports for step-by-step instructions, and Exporting Research Data for all available export formats.
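Once exported, JSON data can be reshaped for whatever tool you analyze it in. This sketch parses a JSON payload and flattens it into CSV rows; the schema shown (`interviews`, `nps`, `sentiment`) is a guess for illustration, not Koji's actual export format.

```python
# Sketch: loading a JSON export and flattening structured answers into CSV
# rows for a spreadsheet or stats tool. The schema is hypothetical, not
# Koji's real export format.

import csv
import io
import json

# Stand-in for the contents of a downloaded JSON export file
raw = json.dumps({
    "interviews": [
        {"id": "a1", "nps": 9, "sentiment": "positive"},
        {"id": "a2", "nps": 4, "sentiment": "negative"},
    ]
})

data = json.loads(raw)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "nps", "sentiment"])
writer.writeheader()
writer.writerows(data["interviews"])

print(buf.getvalue())
```

In practice you would replace `raw` with the contents of your downloaded export file and write `buf.getvalue()` to disk, or load the rows straight into your analysis tool of choice.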
Refreshing the Report
As more interviews complete, you can refresh the report to incorporate the new data. Each refresh re-analyzes all interviews and updates all themes, charts, and insights. A report refresh costs 5 credits. The refreshed report fully replaces the previous version.
Reading Reports as a Team
Research reports are most valuable when the right people read them before decisions are made — not after. Some ways to make reports work harder for your team:
Share early. Use the share link to get findings in front of decision-makers as soon as the report generates, not after a polished presentation is prepared.
Use Insights Chat for stakeholder questions. When a stakeholder asks "but did any participants mention [X]?", use Insights Chat to answer on the spot rather than manually searching transcripts.
Cross-reference quantitative and qualitative data. A distribution chart shows you the what (40% rated satisfaction 3/5). The probing insights show you the why ("the product is good but the documentation is impossible to find"). Always read both together.
Prioritize by evidence count. A theme that appears in 12 out of 15 interviews demands action. A theme that appears in 2 out of 15 is worth noting but not necessarily acting on immediately. Evidence count is your prioritization signal.
Frequently Asked Questions
How many interviews do I need before a report is useful? A report becomes meaningful around 5 interviews. With fewer than that, patterns are hard to distinguish from noise. Most researchers aim for 8–15 interviews for qualitative studies, and 15 or more for studies with quantitative structured questions where chart distributions matter.
Can I edit the AI-generated report content? You cannot directly edit the AI-generated text sections, but you can use Insights Chat to get a different framing or angle. The export formats (CSV, JSON) let you work with the raw data in your own analysis tools.
What is the difference between themes and key takeaways? Key takeaways are curated top-line findings for executive audiences — brief, actionable, and high-level. Themes are the full thematic synthesis — more numerous, more nuanced, and with supporting evidence and quotes. Key takeaways are good for presentations; themes are good for research.
How does quality filtering affect the report? Only interviews scoring 3 or higher on Koji's 1–5 quality scale are included in report analysis. This filters out incomplete interviews, very short sessions, and off-topic conversations. The quality distribution is shown in the report overview so you can see how many interviews were included vs. filtered.
Can I download the full transcripts? Yes. You can view all transcripts in the Recruit tab and export them via the export options. See Exporting Research Data for all available formats including CSV and JSON.
How long does report generation take? Typically 2–5 minutes for studies with fewer than 20 interviews. Larger studies may take longer. You will receive a notification when the report is ready.
Related Resources
- Generating Research Reports — How to generate and refresh your report
- Structured Questions in AI Interviews — How structured questions produce charts in your report
- Insights Chat Guide — Ask natural language questions about your research data
- Understanding Quality Scores — How interview quality is calculated and why it matters
- Publishing and Sharing Reports — How to share your findings with stakeholders
- Exporting Research Data — CSV, JSON, and transcript access