
User Onboarding Research: How to Interview New Users to Improve Activation

Learn how to design and run user onboarding research interviews that reveal why new users activate or drop off — and how to use those insights to improve your first-run experience.

User activation is one of the highest-leverage improvements a product team can make. Improving activation by 10% means 10% more paying customers from the same acquisition spend: no new marketing, no new sales effort, just more of the users you already acquired reaching value.

Yet most teams treat onboarding as a design problem (better UI, clearer copy, fewer clicks) without first treating it as a research problem. They optimise the experience without understanding why users drop off. The result is a well-polished onboarding flow that still fails at the same conceptual points.

User onboarding research fixes this. By interviewing users in the first days and weeks of their experience, you learn not what they clicked or where they dropped off (you already have that from analytics), but why they felt confused, what expectations were violated, and what would have made them more likely to succeed.

Why Onboarding Research Is Different from Other User Research

Onboarding research has unique characteristics that require a tailored approach:

Recency matters enormously. Interview users 24–72 hours after signup while the experience is fresh. Wait two weeks and they will not remember which step confused them. Koji's async format solves this — you can trigger an interview invite automatically after signup without any scheduling overhead.

You are studying two things at once. First: what happened during onboarding (the actual experience). Second: what the user expected to happen (their mental model). Discrepancies between the two are where friction lives.

Survivors are a biased sample. If you only interview users who completed onboarding, you are studying people who overcame whatever friction existed. The insights you most need come from users who dropped off — which requires an additional recruitment strategy (email non-activating signups before they churn).

Activation is not always a single moment. For some products, activation is binary (they connected the integration or they didn't). For others, it is a sequence (setup → first output → aha moment → habit). Your research design needs to match your activation model.

Types of Onboarding Research

Activation interviews (7-day cohort)

Recruit users who signed up in the last 7 days — a mix of those who completed setup and those who did not. Run async interviews 24–72 hours after their last product action. Questions focus on: what they expected, what surprised them, where they felt confused, and what would have helped.

Best for: Understanding the initial setup experience, identifying conceptual confusion, and finding gaps between your marketing promise and product reality.

First-value interviews (activation milestone)

Trigger an interview when a user reaches their first meaningful outcome in your product — first report generated, first integration connected, first task completed. The question set explores: what they were trying to accomplish, what obstacles they overcame, and what made it click.

Best for: Understanding what the "aha moment" actually is (it is often different from what the team assumed), and what paved the way for it.

Drop-off interviews (non-activating users)

Email users who signed up 3–7 days ago but have not returned or completed a key action. Subject line: "Did something go wrong?" Keep the study short (5–6 questions, 8 minutes max). The questions focus on: what they were hoping to accomplish, what stopped them, and what would bring them back.

Best for: Understanding the friction that the rest of your research misses, because these users never made it far enough to appear in activation analytics.

30-day retention check-in

For users who activated but have not yet converted to paid (or have not returned in 2 weeks), run a brief interview focused on engagement barriers. What did they come back for? What made them not return? What would make the product indispensable?

Best for: Understanding the gap between activation and habit formation.

Designing the Onboarding Interview in Koji

Here is a recommended question structure for an activation interview targeting 7-day cohort users:

Warm-up: Context setting

  • "What were you hoping to accomplish when you signed up for [product]? What problem were you trying to solve?" (open_ended — establishes the job-to-be-done and expected outcome)
  • "Had you used any similar tools before? If so, what were they?" (open_ended, AI probes: "How was [product] different from what you expected based on that experience?")

Setup experience

  • "Walk me through what happened when you first logged in. What did you do first?" (open_ended — reconstructs the experience chronologically; AI probes at any point of confusion or hesitation)
  • "Was there any step in setup that you found confusing or had to stop and figure out?" (yes_no, AI probes on "yes": "Tell me about that moment — what were you expecting and what did you see instead?")
  • "How clear was it what you were supposed to do next at each step?" (scale: 1 = completely unclear, 10 = always obvious)

First value

  • "Did you get to a point where [product] did something useful for you? Describe that moment." (open_ended, AI probes: "How long did that take from signup? What made it valuable?")
  • "If you hit a moment where the product clicked for you — where you thought 'okay, I get it now' — what was that?" (open_ended)

Barriers and expectations

  • "Was there anything that slowed you down or made you less confident during setup?" (open_ended, AI probes for root cause)
  • "If you could change one thing about the experience in the first 10 minutes, what would it be?" (open_ended)

Forward intent (segmentation signal)

  • "How likely are you to keep using [product] over the next month?" (scale: 1 = very unlikely, 10 = very likely)
  • "What would need to be true for you to make [product] a regular part of your workflow?" (open_ended)

This structure produces both qualitative insight (the specific friction points and mental model mismatches) and quantitative signals (confusion scale, forward intent) that help you prioritise which friction to fix first.
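For teams that want to version or review their study definitions, the question structure above can be sketched as data. The schema here (the `id`, `type`, `prompt`, and `probe` fields) is illustrative only, not Koji's actual study format:

```python
# Activation interview (7-day cohort) expressed as data.
# Field names are an illustrative schema, not Koji's real study format.
ACTIVATION_INTERVIEW = [
    # Warm-up: context setting
    {"id": "goal", "type": "open_ended",
     "prompt": "What were you hoping to accomplish when you signed up?"},
    {"id": "prior_tools", "type": "open_ended",
     "prompt": "Had you used any similar tools before? If so, what were they?",
     "probe": "How was the product different from what you expected?"},
    # Setup experience
    {"id": "first_login", "type": "open_ended",
     "prompt": "Walk me through what happened when you first logged in."},
    {"id": "confusing_step", "type": "yes_no",
     "prompt": "Was there any step in setup you found confusing?",
     "probe_on": "yes"},
    {"id": "clarity", "type": "scale", "min": 1, "max": 10,
     "prompt": "How clear was it what you were supposed to do next?"},
    # First value
    {"id": "first_value", "type": "open_ended",
     "prompt": "Did the product do something useful for you? Describe that moment."},
    {"id": "aha", "type": "open_ended",
     "prompt": "Was there a moment where the product clicked for you?"},
    # Barriers and expectations
    {"id": "friction", "type": "open_ended",
     "prompt": "Did anything slow you down or reduce your confidence during setup?"},
    {"id": "one_change", "type": "open_ended",
     "prompt": "What one thing would you change about the first 10 minutes?"},
    # Forward intent (segmentation signal)
    {"id": "intent", "type": "scale", "min": 1, "max": 10,
     "prompt": "How likely are you to keep using the product next month?"},
    {"id": "habit", "type": "open_ended",
     "prompt": "What would need to be true for this to become a regular part of your workflow?"},
]

quantitative = [q["id"] for q in ACTIVATION_INTERVIEW if q["type"] == "scale"]
print(f"{len(ACTIVATION_INTERVIEW)} questions, quantitative signals: {quantitative}")
```

Keeping the study as reviewable data makes it easy to diff question changes between cohorts, which matters when you later compare results across waves.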

How to Trigger Onboarding Research Automatically

The biggest challenge in onboarding research is timing. You need to catch users while the experience is fresh. Scheduling live interviews within 72 hours of signup is operationally impractical at scale.

Koji solves this through three approaches:

Automated email invites. Connect your email automation (Customer.io, Klaviyo, HubSpot) to send a Koji study link 24 hours after signup. The email subject line "Quick question about your setup experience" typically achieves 20–35% open rates from recent signups who are still engaged.

In-product embed. Use Koji's embed widget to surface the interview directly inside your product — after a user completes setup, at the end of the first session, or at the moment they hit (or fail to hit) a key milestone. The embed triggers in-context, when motivation to share feedback is highest.

API integration. For product teams with engineering resources, Koji's API lets you programmatically start an interview for a specific user at a precisely defined trigger point. This is the most accurate targeting — firing at the exact moment after the event you care about.
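As a rough sketch of the API approach, the snippet below builds a trigger request for one user at a defined event. The endpoint URL, field names, and auth header are assumptions for illustration; consult Koji's actual API reference for the real contract:

```python
import json
from datetime import datetime, timezone

# Placeholder endpoint — not Koji's real API URL.
KOJI_API_URL = "https://api.example.com/v1/interviews"

def build_trigger_request(user_id: str, study_id: str, event: str) -> dict:
    """Build a request body to start an interview for one user at a
    specific trigger point (e.g. right after 'setup_completed')."""
    return {
        "study_id": study_id,
        "user_id": user_id,
        "trigger_event": event,
        "triggered_at": datetime.now(timezone.utc).isoformat(),
    }

payload = build_trigger_request("u_123", "study_activation_v1", "setup_completed")
body = json.dumps(payload)
# To actually send it (auth header assumed):
# requests.post(KOJI_API_URL, data=body,
#               headers={"Authorization": "Bearer <token>"})
```

Firing this from your event pipeline at the exact qualifying moment is what makes the timing precise: the invite lands minutes after the event, not days.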

Analysing Onboarding Research Findings

The mental model gap analysis

For each friction point surfaced in qualitative responses, note:

  • What the user expected to happen
  • What actually happened
  • The gap between the two

Most onboarding friction is not a UX problem; it is a mental model problem. Users come in expecting one workflow, and the product works differently. The fix might be better copy, a different tutorial framing, or a product redesign, but you cannot know which until you understand the mental model.
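One lightweight way to keep the gap analysis honest is to record each friction point as a structured row. This structure is an illustrative convention, not a Koji feature:

```python
from dataclasses import dataclass

@dataclass
class MentalModelGap:
    """One row of the mental model gap analysis: what the user
    expected, what actually happened, and the gap between them."""
    friction_point: str
    expected: str   # what the user expected to happen
    actual: str     # what actually happened

    @property
    def gap(self) -> str:
        return f"expected '{self.expected}' but got '{self.actual}'"

# Example row (contents are illustrative):
row = MentalModelGap(
    friction_point="integration setup",
    expected="one-click OAuth connect",
    actual="manual API key copied from a settings page",
)
print(row.gap)
```

A table of such rows makes it obvious when several friction points share one underlying expectation, which is usually the signal that the fix belongs in positioning or tutorial framing rather than the UI.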

Confusion clustering

Use Koji's theme analysis to find which steps or concepts generated the most confusion. If 12 out of 20 interviews mention confusion at the same step, that is your highest-impact fix. The qualitative quotes tell you exactly what was confusing and what language would have helped.
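The mechanics of confusion clustering reduce to a frequency count over coded themes. In practice Koji's theme analysis extracts themes from free-text responses; in this sketch the themes are pre-coded by hand for illustration:

```python
from collections import Counter

# Each inner list = confusion themes coded from one interview.
# (Illustrative data; an empty list means no confusion reported.)
interview_themes = [
    ["connect_integration"],
    ["connect_integration", "invite_team"],
    ["first_report"],
    ["connect_integration"],
    [],
    ["invite_team"],
    ["connect_integration"],
]

# Count how many interviews mention each theme, then rank.
counts = Counter(theme for interview in interview_themes for theme in interview)
top_step, top_count = counts.most_common(1)[0]
print(f"Highest-impact fix: {top_step} "
      f"({top_count}/{len(interview_themes)} interviews)")
```

Ranking by interview count (rather than by total mentions) keeps one very vocal respondent from skewing your prioritisation.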

Intent vs. outcome matching

Compare what users said they came to accomplish (from the warm-up question) with what they actually experienced. Mismatches reveal positioning problems — users who signed up with the wrong expectation, driven by marketing language that does not match product reality.

Forward intent distribution

The scale question ("how likely are you to keep using?") produces a distribution that acts as a leading indicator for 30-day retention. If the distribution skews low, onboarding is not creating sufficient confidence in the product's value. High intent plus low activation (from analytics) suggests the friction is in setup, not motivation.
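Summarising the forward-intent distribution takes only a few lines. The scores below are illustrative; the two numbers worth tracking are the mean and the share of low scorers:

```python
from statistics import mean

# Forward-intent scores (1-10) from one cohort — illustrative data.
intent_scores = [8, 3, 9, 6, 2, 7, 8, 4, 9, 5]

avg = mean(intent_scores)
low_share = sum(s <= 4 for s in intent_scores) / len(intent_scores)

print(f"mean intent: {avg:.1f}, share scoring 4 or below: {low_share:.0%}")
```

Read the two numbers together with activation analytics: a healthy mean with low activation points at setup friction, while a low mean with smooth setup points at a motivation or positioning problem.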

Common Onboarding Research Mistakes

Only studying successful users. If you only interview users who activated, you have no visibility into why the majority dropped off. Always include a non-activating cohort in your research design.

Waiting too long. Every day between the onboarding experience and the interview erodes recall quality. Automate recruitment to trigger within 24–48 hours of the qualifying event.

Asking about features instead of experience. "What features did you find most useful?" produces feature feedback. "Walk me through what happened when you first logged in" produces onboarding insight. Focus on the experience, not the product catalogue.

Ignoring the expectation gap. Users do not evaluate your onboarding in absolute terms — they evaluate it against what they expected based on your marketing, previous tools, and mental models of similar products. Always ask what they expected before asking what happened.

Fixing symptoms instead of causes. Analytics show you where users drop off. Research shows you why. Never redesign an onboarding step based on drop-off data alone — interview first to understand the reason, then design the fix.

Onboarding Research Across the Funnel

Onboarding research integrates with your broader research program:

Feed back to acquisition: If onboarding interviews reveal a systematic mismatch between user expectations and product reality, the root cause is often in marketing. The job-to-be-done users arrive with tells you which positioning is driving signups — and whether it is the right positioning.

Connect to retention: What users experience in onboarding determines whether they build the habits that lead to retention. Onboarding interviews that track forward intent (scale question) are a leading indicator for 30-day and 90-day retention curves.

Inform feature roadmap: Onboarding confusion often reveals missing capabilities. "I expected to be able to do X in step 2, but I couldn't find it" — is that a UX problem (it exists but is hidden) or a product gap (it does not exist yet)? Onboarding research surfaces feature gaps that activation analytics cannot.

Validate after changes: After shipping onboarding improvements, run a fresh cohort through the same interview structure. Compare scale scores and theme distribution to verify that the friction you addressed is actually reduced.
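The before/after comparison can start as simply as comparing mean scale scores across cohorts. The numbers here are illustrative; with real data, apply a proper statistical test (e.g. Mann-Whitney U) before claiming the change worked:

```python
from statistics import mean

# Clarity-scale scores (1-10) from cohorts before and after shipping
# onboarding changes — illustrative data.
before = [4, 5, 3, 6, 5, 4, 5, 3]
after = [6, 7, 5, 8, 6, 7, 6, 5]

delta = mean(after) - mean(before)
print(f"clarity shift: {mean(before):.2f} -> {mean(after):.2f} ({delta:+.2f})")
```

Pair the score shift with the theme distribution: if the mean improved but the same confusion theme still dominates, you likely softened the symptom without fixing the mental model gap behind it.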

Related Articles

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

NPS Follow-Up Interviews: How to Turn Your Score Into Actionable Insights

NPS tells you the score. Follow-up interviews tell you what to do about it. Learn how to run qualitative interviews with Promoters, Passives, and Detractors to unlock the real story behind your Net Promoter Score.

How to Identify and Validate Customer Pain Points Through Research

A complete guide to discovering the real problems customers face — using AI interviews, structured questions, and proven frameworks to surface pain points that drive product decisions.

Power User Interviews: How to Learn from Your Best Customers to Drive Growth

Learn how to identify and interview your power users to understand what drives product mastery, advocacy, and expansion — and how AI interviews make this research scalable.

How to Validate Product-Market Fit Through Qualitative Interviews

Learn how to design and run customer interviews specifically focused on measuring and moving your product-market fit score.

How to Automate User Research: Build a Pipeline That Runs 24/7

A step-by-step guide to automating user research — from setting up AI-moderated interviews to continuous discovery pipelines that generate insights every week without manual effort.

Continuous Discovery: How to Run Weekly Customer Interviews Without Burning Out

Continuous discovery is the practice of conducting customer interviews every week as part of your normal workflow. This guide explains how to build an always-on research practice that actually scales.