
Understanding the Koji Analytics Dashboard: How to Read Your Research Data at a Glance

Bottom line: The Koji analytics dashboard is a real-time operations cockpit for your research study. It surfaces five core metrics — completion rate, drop-off curve, time-to-completion, response quality distribution, and response velocity — that tell you within the first 10 completions whether your study is healthy or whether something needs to be fixed before you waste recruiting budget. Reading the dashboard well is the difference between a study that delivers clean signal and one that produces noisy data you have to filter out later.

Most research platforms present analytics as a post-study report. Koji's dashboard is live. The data flows in as participants complete the study, which means you can spot a broken question, a misaligned audience, or a fatigue issue while the study is still running — and fix it before the next batch of participants takes it.

This guide covers what each metric on the dashboard means, what good looks like, when to act on a worrying signal, and the five most common dashboard misreads.

What the Dashboard Shows

Open any Koji study and the dashboard tab presents five layers of data:

  1. Headline metrics — total invites, started, completed, completion rate, average time-to-complete
  2. Drop-off curve — a chart showing where in the question sequence participants abandon the interview
  3. Response velocity — completions per hour/day, used to forecast time-to-target-sample
  4. Quality distribution — histogram of the 1–5 quality scores across completed interviews
  5. Funnel metrics — invite → start → completion conversion at each stage

Each of these answers a different operational question. Read them as a system, not in isolation.
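
If it helps to see how the layers relate, here is a minimal sketch of the headline and funnel math, assuming a hypothetical per-participant export (the record shape and field names are illustrative, not Koji's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-participant record; not Koji's actual export format.
@dataclass
class Participant:
    started: bool
    completed: bool
    minutes: Optional[float] = None  # time-to-completion, if completed

def headline_metrics(invited: int, participants: list[Participant]) -> dict:
    started = sum(p.started for p in participants)
    completed = sum(p.completed for p in participants)
    times = [p.minutes for p in participants if p.completed and p.minutes]
    return {
        "invites": invited,
        "started": started,
        "completed": completed,
        # Funnel conversions: invite -> start -> completion
        "start_rate": started / invited if invited else 0.0,
        "completion_rate": completed / started if started else 0.0,
        "avg_minutes": sum(times) / len(times) if times else None,
    }
```

Note that completion rate is computed over starters, not invites. Mixing the two denominators is a common source of apples-to-oranges comparisons between studies.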

Metric 1: Completion Rate

Definition: Percentage of participants who started the interview and completed it through the final question.

Industry benchmark: Verint's 2024 analysis of 1.4M survey responses pegs the average completion rate for a 10-question study at 60–70%. (Verint State of Digital Customer Experience)

What good looks like in Koji:

  • 75%+ is excellent — your guide is well-paced and the audience is engaged
  • 60–75% is healthy — typical for a focused 10-question study
  • 45–60% suggests a fatigue or relevance issue — look at the drop-off curve
  • Below 45% is a problem — either the audience is wrong, the guide is too long, or there's a broken question

A common rookie mistake: treating an 80% completion rate as a self-evident win. If you optimized for completion by removing all the hard questions, you've also optimized away the value of the study. Completion rate is a necessary but not sufficient health metric.
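
For quick reference, the bands above encode directly (the thresholds are this guide's heuristics, not a Koji API):

```python
def completion_rate_band(rate: float) -> str:
    """Map a completion rate (0-1) onto the health bands above."""
    if rate >= 0.75:
        return "excellent"
    if rate >= 0.60:
        return "healthy"
    if rate >= 0.45:
        return "fatigue or relevance issue: check the drop-off curve"
    return "problem: wrong audience, overlong guide, or broken question"
```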

Metric 2: The Drop-Off Curve

Definition: A chart showing, at each question, the percentage of participants still in the interview, so you can see exactly where in the sequence people abandon.

What to read:

A healthy drop-off curve looks like a gentle slope from 100% at question 1 to 65–75% at the final question, with small dips at each question.

A broken curve has one or two visible cliffs — a sharp drop at a specific question. These cliffs are diagnostic:

  • Cliff at question 1–2: Your intro page or first question is the issue. Often a misalignment between what you promised participants and what the interview actually feels like.
  • Cliff at question 4–6: A specific question is too hard, too long, or feels too personal. Look at that question. Rewrite or remove it.
  • Cliff at the final 1–2 questions: Fatigue. The guide is too long. Cut to 8–10 questions and the cliff usually disappears.
  • No cliff but a steep general slope: The audience is wrong. Re-examine your screener.

The fix loop is simple: identify the cliff, rewrite or remove the offending question, and the next batch of participants completes at the higher rate. This is the single most operationally valuable feature of a live dashboard — without it, you'd ship the study and discover the problem in post-analysis.
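
If you want to replicate the cliff diagnosis outside the dashboard, say on an exported dataset, here is a sketch of the logic, assuming you can derive per-question survivor counts (the data format is an assumption, not a Koji export spec):

```python
def dropoff_curve(survivors: list[int]) -> list[float]:
    """Fraction of starters still active at each question."""
    start = survivors[0]
    return [s / start for s in survivors]

def find_cliffs(curve: list[float], threshold: float = 0.10) -> list[int]:
    """1-indexed question numbers where more than `threshold` of starters drop."""
    return [i + 1 for i in range(1, len(curve))
            if curve[i - 1] - curve[i] > threshold]

# 100 starters; a sharp drop between questions 4 and 5:
survivors = [100, 97, 95, 93, 74, 72, 70, 69, 68, 67]
print(find_cliffs(dropoff_curve(survivors)))  # -> [5]
```

The 10% threshold is illustrative; a healthy curve loses a few points per question, so anything sharply above the study's typical per-question dip is worth inspecting.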

Metric 3: Time-to-Completion

Definition: The average elapsed time from start to completion across all finished interviews.

Why it matters: The promised duration on your intake page sets participant expectations. A study advertised as "10 minutes" that actually takes 22 minutes will produce abandonment, a poor participant experience, and worse incentive economics.

What good looks like in Koji:

  • Voice interviews: 8–14 minutes for a 10-question study (faster because spoken responses are denser)
  • Text interviews: 12–20 minutes for a 10-question study
  • Mixed (structured + open-ended): 10–16 minutes — the structured questions are fast, the open-ended questions are where time accumulates

If your actual time-to-completion is more than 30% above your promised duration, update the intake page and shorten the guide. Honesty about duration is one of the strongest levers on completion rate.
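
The 30% rule is easy to automate as a launch-day check (the function and threshold are this guide's heuristic, not a built-in Koji feature):

```python
def needs_shortening(promised_min: float, actual_min: float,
                     tolerance: float = 0.30) -> bool:
    """True if actual time-to-completion is more than 30% over the promise."""
    return actual_min > promised_min * (1 + tolerance)

print(needs_shortening(promised_min=10, actual_min=22))  # True
```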

Metric 4: Quality Score Distribution

Definition: Histogram of the 1–5 quality scores across all completed interviews (see Quality Scoring).

What good looks like:

  • 60–70% of interviews at score 4 or 5
  • 20–30% at score 3
  • 5–10% at score 1 or 2

What's wrong if the distribution is worse:

  • Lots of 1s and 2s: Either the screener is failing (recruiting the wrong people) or a key question is too hard. Drop into the low-score transcripts to identify the cause.
  • No 5s at all: The guide may be too constrained — participants don't have room to give rich, detailed answers. Look at whether AI probing is enabled on open-ended questions.
  • Bimodal (lots of 5s and lots of 1s): Real segmentation in your audience — half are highly engaged, half disengaged. Worth investigating whether your study should be split into two segments.

The quality distribution is the single best leading indicator of how much signal your final analysis will yield. A study with 70% high-quality interviews produces a sharper readout than a study with 70% completion but mid-range quality.
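
Here is a sketch of auditing a score distribution against the bands above, assuming a plain list of 1–5 scores from completed interviews (the bimodal cutoffs are illustrative, not dashboard logic):

```python
from collections import Counter

def quality_audit(scores: list[int]) -> str:
    """Compare a 1-5 score distribution against the bands above."""
    n = len(scores)  # assumes at least a few completed interviews
    c = Counter(scores)
    # Bimodal thresholds below are illustrative, not from the dashboard:
    if c[5] / n > 0.30 and c[1] / n > 0.20:
        return "bimodal: consider splitting the study into two segments"
    if (c[1] + c[2]) / n > 0.30:
        return "investigate: screener failure or a too-hard question"
    if c[5] == 0:
        return "too constrained: check AI probing on open-ended questions"
    if (c[4] + c[5]) / n >= 0.60:
        return "healthy"
    return "mid-range: read the low-score transcripts"
```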

Metric 5: Response Velocity

Definition: Completions per hour (for fast studies) or per day (for week-long studies), often charted across the study's lifetime.

Why it matters: Velocity forecasts when you'll hit your target sample. It also signals whether your recruiting channel is healthy.

What to read:

  • Steep ramp at launch, then plateau: Healthy — most respondents come in the first 48 hours of any recruiting wave
  • Flat from the start: Recruiting issue — your invite isn't landing or the audience is too narrow
  • Steep ramp, then sudden stop: Channel saturation — you've exhausted the eligible audience in that channel
  • Velocity rising day over day: Word of mouth is working (common in B2B with referral incentives)

NN/G's research repository guidance emphasizes that "successful research operations require visibility into the pipeline — not just the outputs." (NN/G, Research Repositories) Response velocity is the single best pipeline metric for a live study.
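
Velocity's main operational use is the forecast. A naive linear projection is enough for a sanity check (the dashboard's actual forecasting may be more sophisticated):

```python
def days_to_target(completed: int, target: int, per_day: float) -> float:
    """Naive linear forecast of days remaining to hit the target sample."""
    if per_day <= 0:
        return float("inf")  # flat velocity is a recruiting problem, not a forecast
    return max(target - completed, 0) / per_day

print(days_to_target(completed=34, target=100, per_day=11))  # 6.0 days
```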

When to Act on Early Signals

The dashboard is most valuable in the first 10–20 completions. After that, patterns stabilize and the dataset largely tells you what it's going to tell you. The early-window playbook:

At 10 completions:

  • If quality distribution shows >30% scores of 1–2, pause and investigate
  • If drop-off curve has a visible cliff, fix the broken question and relaunch
  • If time-to-completion is 50%+ over the promised duration, shorten the guide (these three checks are sketched in code at the end of this section)

At 20 completions:

  • Check whether structured question responses are clustering (early signal of a real theme)
  • Confirm response velocity is on track to hit target sample size in time
  • Use Insights Chat to test early hypotheses against the partial dataset

At 30+ completions:

  • The data is stable. Trust the trends. Stop tinkering with the guide.

The temptation to keep editing the guide mid-study is the most common analytics dashboard mistake. Every edit you make after launch creates a discontinuity in your dataset that you'll have to explain to stakeholders later. Edit aggressively in the first 10 completions, then freeze the guide.
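
To make the 10-completion checks concrete, here is a sketch that combines them, assuming hypothetical exports of quality scores, completion times, and per-question survivor counts (formats are illustrative; thresholds are the ones from this guide):

```python
def early_warnings(scores: list[int], minutes: list[float],
                   survivors: list[int], promised_min: float) -> list[str]:
    """Apply the 10-completion checks; all thresholds are from this guide."""
    warnings = []
    if sum(1 for s in scores if s <= 2) / len(scores) > 0.30:
        warnings.append("pause: more than 30% of interviews scored 1-2")
    curve = [s / survivors[0] for s in survivors]
    if any(curve[i - 1] - curve[i] > 0.10 for i in range(1, len(curve))):
        warnings.append("fix the cliff question, then relaunch")
    if sum(minutes) / len(minutes) > promised_min * 1.5:
        warnings.append("shorten: time-to-completion is 50%+ over promise")
    return warnings
```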

Study-Level vs Portfolio-Level Views

For teams running multiple concurrent studies, Koji offers a portfolio-level dashboard that rolls up metrics across all active studies. This is where research operations leaders live.

Portfolio metrics to watch:

  • Total studies in field — how much research is currently running
  • Aggregate completion rate — health signal across the research program
  • Time-to-insight median — how fast studies are reaching decision-grade output
  • Cost per completed interview — for teams running paid recruiting

A 2024 industry study found that teams using consolidated research dashboards see 40% better cross-functional adoption of research insights. (Dovetail, Research Data Centralization) The portfolio dashboard is one of the most underused parts of Koji — most teams default to the study-level view and miss the program-level signal.

The Five Most Common Dashboard Misreads

  1. Reading completion rate in isolation. A 90% completion rate on a soft, leading guide is worse than a 65% completion rate on a sharp guide that pushed participants. Cross-check with quality distribution.
  2. Ignoring the drop-off curve. The curve is the highest-leverage operational tool on the dashboard. If you're not opening it within the first 10 completions, you're flying blind.
  3. Acting too late. Patterns are visible at 10–20 completions. By 30, the data is locked in. Operational fixes work best in the first window.
  4. Editing the guide mid-study without flagging the discontinuity. Every edit creates a "before/after" in your dataset. Always note the edit time in your study notes so stakeholders can interpret the data correctly.
  5. Treating velocity as informational rather than actionable. Slow velocity is a recruiting fix — change the channel, raise the incentive, or widen the screener. Don't just watch it.

How Koji's Dashboard Compares to Legacy Platforms

Traditional survey tools like SurveyMonkey or Qualtrics show response counts and basic completion metrics, but the drop-off curve, quality score distribution, and Insights Chat are unique to AI-moderated platforms. Dovetail, a popular research repository, surfaces post-study analytics but doesn't operate in real time on a live study.

The compression Koji enables — going from "we'll review the data after the study closes" to "we adjust the study within the first 10 completions" — is the single biggest workflow change in modern research operations. It's also why teams using AI-assisted research tools report 60% faster time-to-insight than teams on legacy survey platforms. (HBR, How AI Helps Scale Qualitative Customer Research)

From Dashboard to Decision

The dashboard's job is operational health. The report's job is analytical synthesis. Once your study is healthy and complete:

  1. Confirm the dashboard health metrics — completion rate, drop-off shape, quality distribution all look good
  2. Move to the report tab — read the executive summary, then structured charts, then themes (see How to Analyze Interview Results)
  3. Use Insights Chat for follow-ups — query the dataset for the questions you didn't think to ask
  4. Translate themes into decisions — every theme needs a paired action item before you share with stakeholders

A well-operated dashboard upstream is what makes the downstream report trustworthy. Skipping it is one of the most common reasons research findings get pushed back on in stakeholder meetings.
