{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-13T13:20:59.607Z"},"content":[{"type":"documentation","id":"ec529af0-6376-486a-b6e1-765cf79694ba","slug":"understanding-analytics-dashboard","title":"Understanding the Koji Analytics Dashboard: How to Read Your Research Data at a Glance","url":"https://www.koji.so/docs/understanding-analytics-dashboard","summary":"The Koji analytics dashboard is a real-time operational cockpit for a live study. Five core metrics matter: completion rate (60-75% is healthy), the drop-off curve (cliffs at specific questions are diagnostic and fixable), time-to-completion (within 30% of your promised duration), quality score distribution (60-70% should be score 4 or 5), and response velocity (forecasts time-to-target-sample and signals recruiting health). The early window matters most — patterns are visible at 10-20 completions and operational fixes work best before 30. The five most common misreads are reading completion in isolation, ignoring the drop-off curve, acting too late, editing mid-study without flagging the discontinuity, and treating velocity as informational rather than actionable.","content":"# Understanding the Koji Analytics Dashboard: How to Read Your Research Data at a Glance\n\n**Bottom line:** The Koji analytics dashboard is a real-time operations cockpit for your research study. It surfaces five core metrics — completion rate, drop-off curve, time-to-completion, response quality distribution, and response velocity — that tell you within the first 10 completions whether your study is healthy or whether something needs to be fixed before you waste recruiting budget. Reading the dashboard well is the difference between a study that delivers clean signal and one that produces noisy data you have to filter out later.\n\nMost research platforms present analytics as a post-study report. Koji's dashboard is live. The data flows in as participants complete the study, which means you can spot a broken question, a misaligned audience, or a fatigue issue *while the study is still running* — and fix it before the next batch of participants takes it.\n\nThis guide covers what each metric on the dashboard means, what good looks like, when to act on a worrying signal, and the five most common dashboard misreads.\n\n## What the Dashboard Shows\n\nOpen any Koji study and the dashboard tab presents five layers of data:\n\n1. **Headline metrics** — total invites, started, completed, completion rate, average time-to-complete\n2. **Drop-off curve** — a chart showing where in the question sequence participants abandon the interview\n3. **Response velocity** — completions per hour/day, used to forecast time-to-target-sample\n4. **Quality distribution** — histogram of the 1–5 quality scores across completed interviews\n5. **Funnel metrics** — invite → start → completion conversion at each stage\n\nEach of these answers a different operational question. Read them as a system, not in isolation.\n\n## Metric 1: Completion Rate\n\n**Definition:** Percentage of participants who started the interview and completed it through the final question.\n\n**Industry benchmark:** Verint's 2024 analysis of 1.4M survey responses pegs the average completion rate for a 10-question study at 60–70%. 
\n\n## Metric 1: Completion Rate\n\n**Definition:** Percentage of participants who started the interview and completed it through the final question.\n\n**Industry benchmark:** Verint's 2024 analysis of 1.4M survey responses pegs the average completion rate for a 10-question study at 60–70%. ([Verint State of Digital Customer Experience](https://www.verint.com/))\n\n**What good looks like in Koji:**\n- **75%+** is excellent — your guide is well-paced and the audience is engaged\n- **60–75%** is healthy — typical for a focused 10-question study\n- **45–60%** suggests a fatigue or relevance issue — look at the drop-off curve\n- **Below 45%** is a problem — either the audience is wrong, the guide is too long, or there's a broken question\n\nA common rookie mistake: treating an 80% completion rate as a self-evident win. If you optimized for completion by removing all the hard questions, you've also optimized away the value of the study. Completion rate is a necessary but not sufficient health metric.\n\n## Metric 2: The Drop-Off Curve\n\n**Definition:** A chart showing the percentage of participants still in the interview at each question, so you can see exactly where people abandon.\n\n**What to read:**\n\nA healthy drop-off curve looks like a gentle slope from 100% at question 1 to 65–75% at the final question, with small dips at each question.\n\nA *broken* curve has one or two visible cliffs — a sharp drop at a specific question. These cliffs are diagnostic:\n\n- **Cliff at question 1–2:** Your intro page or first question is the issue. Often a misalignment between what you promised participants and what the interview actually feels like.\n- **Cliff at question 4–6:** A specific question is too hard, too long, or feels too personal. Look at that question. Rewrite or remove it.\n- **Cliff at the final 1–2 questions:** Fatigue. The guide is too long. Cut to 8–10 questions and the cliff usually disappears.\n- **No cliff but a steep general slope:** The audience is wrong. Re-examine your screener.\n\nThe fix loop is simple: identify the cliff, rewrite or remove the offending question, and the next batch of participants completes at the higher rate. This is the single most operationally valuable feature of a live dashboard — without it, you'd ship the study and discover the problem in post-analysis.
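\n\nCliff-spotting is simple enough to reproduce outside the dashboard if you ever export the raw curve. A rough sketch, assuming a per-question list of the share of participants still active; the input values and the 8-percentage-point threshold are illustrative assumptions, not Koji constants:\n\n```python\n# Rough cliff detector for a drop-off curve.\n# Input: share of participants still active at each question (1.0 = 100%).\n# The 0.08 threshold is an illustrative assumption, not a Koji-defined constant.\n\ndef find_cliffs(remaining: list[float], threshold: float = 0.08) -> list[tuple[int, float]]:\n    cliffs = []\n    for q in range(1, len(remaining)):\n        drop = remaining[q - 1] - remaining[q]\n        if drop >= threshold:\n            cliffs.append((q + 1, round(drop, 2)))  # question numbers are 1-based\n    return cliffs\n\n# Healthy curve: gentle slope, no cliffs\nprint(find_cliffs([1.0, 0.97, 0.94, 0.91, 0.88, 0.85, 0.82, 0.79, 0.76, 0.73]))  # []\n\n# Broken curve: sharp drop at question 5\nprint(find_cliffs([1.0, 0.97, 0.94, 0.91, 0.72, 0.70, 0.68, 0.66, 0.64, 0.62]))  # [(5, 0.19)]\n```\n\nThat one-pass scan is all a 'cliff' is: a single-question drop that is large relative to the gentle slope around it.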
\n\n## Metric 3: Time-to-Completion\n\n**Definition:** The average elapsed time from start to completion across all finished interviews.\n\n**Why it matters:** The promised duration on your intake page sets participant expectations. A study advertised as \"10 minutes\" that actually takes 22 minutes will produce abandonment, a poor participant experience, and worse incentive economics.\n\n**What good looks like in Koji:**\n- **Voice interviews:** 8–14 minutes for a 10-question study (faster because spoken responses are denser)\n- **Text interviews:** 12–20 minutes for a 10-question study\n- **Mixed (structured + open-ended):** 10–16 minutes — the structured questions are fast, the open-ended questions are where time accumulates\n\nIf your actual time-to-completion is more than 30% above your promised duration, update the intake page and shorten the guide. Honesty about duration is one of the strongest levers on completion rate.\n\n## Metric 4: Quality Score Distribution\n\n**Definition:** Histogram of the 1–5 quality scores across all completed interviews ([see Quality Scoring](/docs/analyzing-interview-results)).\n\n**What good looks like:**\n- 60–70% of interviews at score 4 or 5\n- 20–30% at score 3\n- 5–10% at score 1 or 2\n\n**What's wrong if the distribution is worse:**\n\n- **Lots of 1s and 2s:** Either the screener is failing (recruiting the wrong people) or a key question is too hard. Drop into the low-score transcripts to identify the cause.\n- **No 5s at all:** The guide may be too constrained — participants don't have room to give rich, detailed answers. Look at whether AI probing is enabled on open-ended questions.\n- **Bimodal (lots of 5s and lots of 1s):** Real segmentation in your audience — half are highly engaged, half disengaged. Worth investigating whether your study should be split into two segments.\n\nThe quality distribution is the single best leading indicator of how much signal your final analysis will yield. A study with 70% high-quality interviews produces a sharper readout than a study with 70% completion but mid-range quality.\n\n## Metric 5: Response Velocity\n\n**Definition:** Completions per hour (for fast studies) or per day (for week-long studies), often charted across the study's lifetime.\n\n**Why it matters:** Velocity forecasts when you'll hit your target sample. It also signals whether your recruiting channel is healthy.\n\n**What to read:**\n\n- **Steep ramp at launch, then plateau:** Healthy — most respondents come in the first 48 hours of any recruiting wave\n- **Flat from the start:** Recruiting issue — your invite isn't landing or the audience is too narrow\n- **Steep ramp, then sudden stop:** Channel saturation — you've exhausted the eligible audience in that channel\n- **Velocity rising day over day:** Word of mouth is working (common in B2B with referral incentives)\n\nNN/G's research repository guidance emphasizes that *\"successful research operations require visibility into the pipeline — not just the outputs.\"* ([NN/G, Research Repositories](https://www.nngroup.com/articles/research-repositories/)) Response velocity is the clearest pipeline metric for a live study.\n\n## When to Act on Early Signals\n\nThe dashboard is most valuable in the first 10–20 completions. After that, patterns stabilize and the dataset largely tells you what it's going to tell you. The early-window playbook:\n\n**At 10 completions:**\n- If the quality distribution shows >30% scores of 1–2, pause and investigate\n- If the drop-off curve has a visible cliff, fix the broken question and relaunch\n- If time-to-completion is 50%+ over the promised duration, shorten the guide\n\n**At 20 completions:**\n- Check whether structured question responses are clustering (early signal of a real theme)\n- Confirm response velocity is on track to hit target sample size in time\n- Use [Insights Chat](/docs/insights-chat-guide) to test early hypotheses against the partial dataset\n\n**At 30+ completions:**\n- The data is stable. Trust the trends. Stop tinkering with the guide.\n\nThe temptation to keep editing the guide mid-study is the most common analytics dashboard mistake. Every edit you make after launch creates a discontinuity in your dataset that you'll have to explain to stakeholders later. Edit aggressively in the first 10 completions, then freeze the guide.
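\n\nThe 10-completion checks above are mechanical enough to read as a checklist function. Here is a sketch using the thresholds from this section; every name, input shape, and the velocity forecast formula are illustrative assumptions, not Koji's API:\n\n```python\n# The 10-completion playbook as a checklist function.\n# Thresholds come from this guide; all names and input shapes are hypothetical.\n\ndef early_window_check(quality_scores: list[int],\n                       actual_minutes: float,\n                       promised_minutes: float,\n                       cliff_found: bool,\n                       completions_per_day: float,\n                       target_sample: int,\n                       completed: int) -> list[str]:\n    actions = []\n    low_share = sum(1 for s in quality_scores if s <= 2) / len(quality_scores)\n    if low_share > 0.30:\n        actions.append('pause and investigate: >30% of interviews score 1-2')\n    if cliff_found:\n        actions.append('fix the broken question and relaunch')\n    if actual_minutes > 1.5 * promised_minutes:\n        actions.append('shorten the guide: 50%+ over promised duration')\n    if completions_per_day > 0:\n        days_left = (target_sample - completed) / completions_per_day\n        actions.append(f'at current velocity, target sample in ~{days_left:.1f} days')\n    return actions\n\nprint(early_window_check(\n    quality_scores=[5, 4, 2, 1, 4, 5, 3, 2, 1, 4],\n    actual_minutes=16, promised_minutes=10,\n    cliff_found=False, completions_per_day=6.0,\n    target_sample=60, completed=10,\n))\n```\n\nThe last check is the velocity metric doing real work: remaining sample divided by completions per day is the entire time-to-target forecast.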
\n\n## Study-Level vs Portfolio-Level Views\n\nFor teams running multiple concurrent studies, Koji offers a portfolio-level dashboard that rolls up metrics across all active studies. This is where research operations leaders live.\n\n**Portfolio metrics to watch:**\n\n- **Total studies in field** — how much research is currently running\n- **Aggregate completion rate** — health signal across the research program\n- **Time-to-insight median** — how fast studies are reaching decision-grade output\n- **Cost per completed interview** — for teams running paid recruiting\n\nA 2024 industry study found that teams using consolidated research dashboards see 40% better cross-functional adoption of research insights. ([Dovetail, Research Data Centralization](https://dovetail.com/research/research-data-centralization/)) The portfolio dashboard is one of the most underused parts of Koji — most teams default to the study-level view and miss the program-level signal.
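\n\nThe rollup itself is plain aggregation over per-study records. A minimal sketch; the record fields are hypothetical stand-ins, not Koji's actual data model:\n\n```python\n# Portfolio rollup across active studies.\n# Per-study record fields are hypothetical, not Koji's data model.\nfrom statistics import median\n\nstudies = [\n    {'started': 180, 'completed': 126, 'spend': 1890.0, 'hours_to_insight': 36},\n    {'started': 90,  'completed': 58,  'spend': 1160.0, 'hours_to_insight': 52},\n    {'started': 240, 'completed': 170, 'spend': 2550.0, 'hours_to_insight': 28},\n]\n\ntotal_started = sum(s['started'] for s in studies)\ntotal_completed = sum(s['completed'] for s in studies)\n\nprint('studies in field:', len(studies))\nprint('aggregate completion rate:', round(total_completed / total_started, 3))\nprint('median time-to-insight (h):', median(s['hours_to_insight'] for s in studies))\nprint('cost per completed interview:', round(sum(s['spend'] for s in studies) / total_completed, 2))\n```\n\nNote that the aggregate completion rate is weighted by study size, so one small struggling study won't mask an otherwise healthy program.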
\n\n## The Five Most Common Dashboard Misreads\n\n1. **Reading completion rate in isolation.** A 90% completion rate on a soft, leading guide is worse than a 65% completion rate on a sharp guide that pushed participants. Cross-check with quality distribution.\n2. **Ignoring the drop-off curve.** The curve is the highest-leverage operational tool on the dashboard. If you're not opening it within the first 10 completions, you're flying blind.\n3. **Acting too late.** Patterns are visible at 10–20 completions. By 30, the data is locked in. Operational fixes work best in the first window.\n4. **Editing the guide mid-study without flagging the discontinuity.** Every edit creates a \"before/after\" in your dataset. Always note the edit time in your study notes so stakeholders can interpret the data correctly.\n5. **Treating velocity as informational rather than actionable.** Slow velocity is a recruiting fix — change the channel, raise the incentive, or widen the screener. Don't just watch it.\n\n## How Koji's Dashboard Compares to Legacy Platforms\n\nTraditional survey tools like SurveyMonkey or Qualtrics show response counts and basic completion metrics, but the drop-off curve, quality score distribution, and Insights Chat are unique to AI-moderated platforms. Dovetail, a popular research repository, surfaces post-study analytics but doesn't operate in real time on a live study.\n\nThe compression Koji enables — going from \"we'll review the data after the study closes\" to \"we adjust the study within the first 10 completions\" — is the single biggest workflow change in modern research operations. It's also why teams using AI-assisted research tools report 60% faster time-to-insight than teams on legacy survey platforms. ([HBR, How AI Helps Scale Qualitative Customer Research](https://hbr.org/2026/04/how-ai-helps-scale-qualitative-customer-research))\n\n## From Dashboard to Decision\n\nThe dashboard's job is operational health. The report's job is analytical synthesis. Once your study is healthy and complete:\n\n1. **Confirm the dashboard health metrics** — completion rate, drop-off shape, quality distribution all look good\n2. **Move to the report tab** — read the executive summary, then structured charts, then themes (see [How to Analyze Interview Results](/docs/analyzing-interview-results))\n3. **Use Insights Chat for follow-ups** — query the dataset for the questions you didn't think to ask\n4. **Translate themes into decisions** — every theme needs a paired action item before you share with stakeholders\n\nA well-operated dashboard upstream is what makes the downstream report trustworthy. Skipping it is one of the most common reasons research findings get pushed back on in stakeholder meetings.\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the six chart-ready question types that drive analytics\n- [How to Analyze Interview Results](/docs/analyzing-interview-results) — the four-layer report and from-themes-to-decisions framework\n- [Insights Chat Guide](/docs/insights-chat-guide) — query your dataset in plain English\n- [Real-Time Research Insights](/docs/real-time-research-insights) — themes that stream as new completions land\n- [Customer Interview Cadence](/docs/customer-interview-cadence) — how often to run studies and how to staff a continuous research program\n- [Time to Insight](/docs/time-to-insight) — the metric that defines modern research productivity\n","category":"Getting Started","lastModified":"2026-05-13T03:18:19.677538+00:00","metaTitle":"Understanding the Koji Analytics Dashboard: A Practical Reading Guide (2026)","metaDescription":"Read the Koji analytics dashboard with confidence. Five core metrics — completion rate, drop-off curve, time-to-completion, quality distribution, response velocity — explained with industry benchmarks, the early-signal playbook, and the five most common dashboard misreads.","keywords":["analytics dashboard","research dashboard","study analytics","interview completion rate","drop-off curve","response velocity","research operations metrics","quality score distribution","research analytics","live study monitoring"],"aiSummary":"The Koji analytics dashboard is a real-time operational cockpit for a live study. Five core metrics matter: completion rate (60-75% is healthy), the drop-off curve (cliffs at specific questions are diagnostic and fixable), time-to-completion (within 30% of your promised duration), quality score distribution (60-70% should be score 4 or 5), and response velocity (forecasts time-to-target-sample and signals recruiting health). The early window matters most — patterns are visible at 10-20 completions and operational fixes work best before 30. The five most common misreads are reading completion in isolation, ignoring the drop-off curve, acting too late, editing mid-study without flagging the discontinuity, and treating velocity as informational rather than actionable.","aiPrerequisites":["A live or completed Koji study to reference","Basic familiarity with research operations"],"aiLearningOutcomes":["Read the five core dashboard metrics and what each one signals","Diagnose a broken question using the drop-off curve","Interpret the quality score distribution for a healthy study","Act on early dashboard signals in the first 10-20 completions","Avoid the five most common dashboard misreads","Use portfolio-level views for cross-study research operations"],"aiDifficulty":"beginner","aiEstimatedTime":"12 minutes"}],"pagination":{"total":1,"returned":1,"offset":0}}