Understanding Quality Scores
Learn how Koji evaluates interview quality on a 0-5 scale and why it matters for your research and billing.
Every completed interview in Koji receives a quality score from 0 to 5. This score tells you how useful the interview is for your research — and it directly affects your billing, because only interviews scoring 3 or above count toward your monthly limit.
How Quality Scoring Works
After each interview concludes, Koji's AI evaluates the conversation across multiple dimensions to produce a single quality score. The evaluation happens automatically and takes just a few seconds.
The score reflects the overall research value of the interview — not the participant's intelligence or effort, but how much actionable data the conversation produced. A high-quality interview is one that gives you clear, detailed, and relevant information you can use to make decisions.
The 0-5 Scale Explained
Here's what each score range means in practice:
| Score | Rating | What It Means |
|---|---|---|
| 0-1 | Poor | The interview produced very little usable data. The participant may have given one-word answers, gone off-topic, or disengaged early. |
| 2 | Below Average | Some relevant information was shared, but responses lacked depth. There may be a few useful data points, but not enough for confident analysis. |
| 3 | Good | A solid interview with meaningful responses. The participant engaged with most questions and provided enough detail to identify themes and insights. |
| 4 | Very Good | A strong interview with detailed, thoughtful responses. The participant shared personal experiences, examples, and reasoning behind their opinions. |
| 5 | Excellent | An exceptional interview with rich, nuanced data. The participant went deep on multiple topics, provided vivid examples, and offered perspectives you didn't anticipate. |
What Makes a High-Quality Interview
Several factors contribute to a higher quality score:
- Response depth: Detailed, multi-sentence answers score higher than brief or one-word responses. When a participant explains why they feel a certain way or how they approached a problem, that's high-quality data.
- Relevance: Responses that directly address the research questions in your study brief contribute more to the score. Off-topic tangents, while sometimes interesting, don't add to the research value as measured by the score.
- Engagement level: Participants who stay engaged throughout the entire interview — answering follow-up questions, elaborating when asked, and maintaining focus — produce higher-quality conversations.
- Specificity: Concrete examples, real-life stories, and specific details are more valuable than vague generalizations. "I switched to a competitor last March because their onboarding was faster" is more useful than "I sometimes use other products."
- Completeness: Interviews where the participant completes the full conversation tend to score higher than those where the participant drops off early.
Research methodology experts at the Interaction Design Foundation emphasize that the richest qualitative data comes from interviews where participants feel comfortable sharing specific personal experiences rather than offering abstract opinions.
Why Quality Scores Matter
Quality scores serve two important purposes:
1. Research Data Quality
Not all interviews are equally valuable. A five-minute conversation where a participant gave one-word answers tells you far less than a twenty-minute deep dive. The quality score helps you quickly identify which interviews deserve your attention first and which ones might need to be supplemented with additional data collection.
When you're reviewing results, sorting by quality score helps you prioritize your time. Start with the highest-scoring interviews to get the strongest signals, then work your way down.
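If you export your results and work with them outside Koji, the same "highest score first" triage is easy to reproduce. This is an illustrative sketch only — the record shape and field names here are hypothetical, not Koji's actual export format:

```python
# Hypothetical exported interview records; "quality_score" is the 0-5 score.
interviews = [
    {"id": "a", "quality_score": 2},
    {"id": "b", "quality_score": 5},
    {"id": "c", "quality_score": 4},
]

# Review the highest-scoring interviews first to get the strongest signals.
by_priority = sorted(interviews, key=lambda i: i["quality_score"], reverse=True)
print([i["id"] for i in by_priority])  # ['b', 'c', 'a']
```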
2. Fair Billing Through the Quality Gate
Here's something that works strongly in your favor: interviews scoring below 3 don't count toward your monthly interview limit. This is Koji's quality gate, and it exists to protect you.
If a participant rushes through your interview giving minimal answers, or if someone provides off-topic responses, you shouldn't have to pay for that. The quality gate ensures you're only billed for interviews that actually deliver research value.
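The gate itself is a simple threshold rule. As a minimal sketch (the constant and helper below are illustrative, not part of any Koji API):

```python
QUALITY_GATE = 3  # interviews scoring below 3 don't count toward your limit


def counts_toward_limit(score: int) -> bool:
    """Return True if an interview is billable under the quality gate."""
    return score >= QUALITY_GATE


# Example: of five interviews, only the three scoring 3 or above are billed.
scores = [1, 3, 4, 2, 5]
billable = sum(counts_toward_limit(s) for s in scores)
print(billable)  # 3
```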
Learn more about how this works in How the Quality Gate Works.
Improving Your Quality Scores
While you can't control how individual participants respond, you can set the stage for better interviews:
- Write a clear study brief: The better your research objectives and target questions are defined, the more relevant the AI interviewer's questions will be. Clear briefs lead to focused conversations.
- Target the right participants: People with direct experience in your research topic naturally provide richer, more detailed responses. A product user will give you better feedback about your product than someone who's never used it.
- Set expectations upfront: Your study description (what participants see before starting) should explain what the interview is about and roughly how long it takes. Prepared participants tend to give more thoughtful answers.
- Design for engagement: Studies with interesting, relevant topics naturally produce better interviews. If your research questions connect to things participants genuinely care about, they'll be more willing to share detailed responses.
A study published in the International Journal of Qualitative Methods found that participant preparation and clear research framing can improve interview data quality by a meaningful margin compared to unstructured approaches.
Viewing Quality Scores
You can see quality scores in several places:
- Results page: Each interview card displays its quality score prominently, making it easy to scan across all interviews at a glance.
- Individual transcript view: The score appears at the top of each transcript alongside other metadata.
- Insights dashboard: Shows aggregate quality statistics for your study, including the average score and the score distribution.
Key Things to Know
- Scores are final: Quality scores are calculated once after the interview completes and don't change. This ensures consistency in your billing and analysis.
- Scores are not participant ratings: A low score doesn't mean the participant was "bad." It means the conversation didn't produce enough usable research data for various possible reasons.
- You can't manually override scores: The scoring is automated to ensure objectivity and consistency across all interviews.
Related Articles
- How the Quality Gate Works — Understanding why low-quality interviews don't count toward your limits
- AI-Generated Insights — What analysis Koji produces from your interviews
- Viewing Interview Transcripts — How to read and navigate interview conversations
Frequently Asked Questions
Q: Can a participant retake an interview to improve the quality score? A: Each interview submission is scored independently. If you share the link again with the same participant, they could complete a new interview, which would receive its own quality score.
Q: Do low-quality interviews still generate insights? A: Yes, Koji generates AI insights for every completed interview regardless of its quality score. However, the insights from lower-quality interviews will naturally be less detailed and less reliable.
Q: What if I think a quality score is wrong? A: The scoring evaluates objective factors like response depth, relevance, and completeness. If an interview has a lower score than expected, review the transcript — you may find that while the participant said some useful things, overall response depth or completeness was limited.
Q: Does the quality score affect report generation? A: All completed interviews are included when generating research reports. However, higher-quality interviews naturally contribute more themes, quotes, and insights to the aggregate analysis.
Q: What's the average quality score across all Koji interviews? A: Quality scores vary significantly by study topic, participant recruitment, and study design. Well-designed studies with targeted participants typically see average scores above 3.