Feature Adoption Research: How to Interview Users Who Aren't Using Your Product
A complete guide to understanding why users ignore, avoid, or misuse features — and how to use AI-powered interviews to get honest answers at scale.
The bottom line: When users don't adopt a feature, the product team's instinct is to make it more discoverable, write better tooltips, or add an onboarding prompt. But more often than not, low adoption signals a deeper mismatch — between the feature's design assumptions and users' actual mental models, workflows, or motivations. The only way to know which is true is to ask. Feature adoption research uses structured qualitative interviews to diagnose the real barrier before committing to a solution.
Why Features Fail to Get Adopted
Industry research consistently shows that most features in mature software products are used by fewer than 20% of users. The reasons cluster into four categories:
1. Discovery failure — Users don't know the feature exists.
2. Comprehension failure — Users see the feature but don't understand what it does or why it would help them.
3. Motivation failure — Users understand the feature but don't believe it's worth their time to try.
4. Workflow mismatch — Users try the feature but it doesn't fit their actual workflow well enough to stick.
The mistake most teams make is diagnosing their feature as a "discovery problem" without evidence, then investing in tooltips and onboarding prompts that don't move the needle — because the real problem was motivation or workflow mismatch all along.
Feature adoption research gets to the right diagnosis.
What Is Feature Adoption Research?
Feature adoption research is a qualitative research method focused on understanding why users are or aren't engaging with a specific feature. It combines:
- Behavioral observation — What does usage data actually show?
- Qualitative interviews — Why are users behaving this way?
- Mental model mapping — How do users conceptualize the problem this feature solves?
The output is a clear diagnosis of the adoption barrier — with enough qualitative context to recommend a specific intervention (redesign, reposition, retrain, or retire).
When to Run Feature Adoption Research
Run feature adoption research when:
- A feature launched 60+ days ago and adoption is below target
- Usage dropped after an initial spike (the feature got tried but not retained)
- A critical feature has high adoption variance across user segments
- You're deciding whether to invest further in a feature vs. deprioritize it
- Users report confusion or friction around a specific capability
Don't run feature adoption research when the feature has been live for less than 2–4 weeks (insufficient time to form meaningful usage patterns) or when you already have clear behavioral evidence of a discovery issue (just fix the UI).
The 3-Segment Interview Approach
The most diagnostic approach to feature adoption research is to interview users across three segments simultaneously (a sketch for deriving these segments from usage data follows the descriptions below):
Segment 1: Active Users (the feature works for them)
Goal: Understand the aha moment — what clicked, how they use it, what workflow it fits into
Interviewing active users first gives you the benchmark. You'll learn what the feature is supposed to feel like when it's working — and that context makes the non-adopter interviews much more revealing.
Segment 2: Tried-But-Stopped Users
Goal: Understand what made them try it and then abandon it
This is the richest segment. These users had enough motivation to start but something blocked them. Their abandonment reasons are your most actionable signal.
Segment 3: Never-Tried Users
Goal: Understand whether they're aware of the feature, and if so, why they haven't engaged
This segment reveals discovery and messaging gaps. Often, users in this segment articulate a pain point that your feature solves — they just don't know the feature exists or understand how it relates to their problem.
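To ground this, here is a minimal sketch of how the three cohorts might be pulled from a product analytics export. The data shape, user IDs, and 30-day activity window are hypothetical assumptions; adapt them to what your analytics tool actually provides.

```python
from datetime import datetime, timedelta

# Hypothetical analytics export: for each user, the dates they used the feature.
usage = {
    "user_a": ["2024-03-01", "2024-03-08", "2024-03-15"],  # steady, recent use
    "user_b": ["2024-01-05"],                               # tried once, then stopped
    "user_c": [],                                           # never touched it
}

def segment(events, today, active_window=timedelta(days=30)):
    """Classify one user into active / tried-but-stopped / never-tried."""
    if not events:
        return "never-tried"
    last_use = max(datetime.fromisoformat(d) for d in events)
    return "active" if today - last_use <= active_window else "tried-but-stopped"

today = datetime(2024, 3, 20)
cohorts = {"active": [], "tried-but-stopped": [], "never-tried": []}
for user, events in usage.items():
    cohorts[segment(events, today)].append(user)

print(cohorts)
# {'active': ['user_a'], 'tried-but-stopped': ['user_b'], 'never-tried': ['user_c']}
```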
Feature Adoption Interview Questions
The questions below are organized by segment. In Koji, each set can be deployed as a separate study targeting the relevant user cohort, or combined into a single adaptive study that branches based on an initial usage question; a minimal sketch of such a branching guide follows the question lists below.
For Active Users
- "Walk me through the last time you used [feature]. What were you trying to accomplish?"
- "What made you start using [feature] in the first place?"
- "How has using [feature] changed how you work?"
- "If [feature] disappeared tomorrow, what would you do instead?"
- "What do you wish [feature] could do that it can't do now?"
For Tried-But-Stopped Users
- "Tell me about when you first tried [feature]. What were you hoping it would do?"
- "What happened? Walk me through what you experienced."
- "At what point did you decide not to continue using it?"
- "What would have needed to be different for it to become part of your regular workflow?"
- "Are you solving the problem [feature] was meant to solve a different way? How?"
For Never-Tried Users
- "Have you noticed [feature] in [product]? What do you think it does?"
- "When you [relevant workflow], what does that process look like for you today?"
- "What does [feature] need to do for it to be worth trying? What would make you give it a shot?"
Using Structured Questions for Quantitative Signals
Qualitative interviews give you the why — but pairing them with structured quantitative questions gives you the how many. Koji supports structured question types that work alongside conversational questions, including:
| Question Type | Feature Adoption Use Case |
|---|---|
| Scale (1–10) | "How aware were you that [feature] existed?" / "How useful is [feature] to your workflow?" |
| Yes/No | "Have you ever tried [feature]?" (screening/segmentation) |
| Single Choice | "What's the main reason you haven't tried [feature] yet?" |
| Multiple Choice | "Which of these have been part of your experience with [feature]? (select all that apply)" |
| Ranking | "Rank these barriers from most to least relevant to your experience" |
For feature adoption research, a typical Koji study might look like:
- Yes/No: "Have you tried [feature]?" → segments the participant
- Scale: "Rate your awareness of [feature] before today (1–5)"
- Open-ended: "Walk me through [relevant workflow]"
- Open-ended: "What happened when you tried [feature]?" (if tried) OR "What would need to be true for you to try it?" (if not tried)
- Single Choice: "What best describes your current relationship with [feature]?"
This mix produces both the quantitative distribution you need for stakeholder reporting and the qualitative depth you need to diagnose the real barrier.
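One way to picture that five-question mix is as an ordered list of typed questions with simple show-if conditions. The schema below is an illustrative sketch only; the field names are assumptions, not Koji's real configuration format.

```python
# Hypothetical study definition; field names are illustrative, not Koji's schema.
study = [
    {"id": "tried", "type": "yes_no",
     "text": "Have you tried [feature]?"},
    {"id": "awareness", "type": "scale", "min": 1, "max": 5,
     "text": "Rate your awareness of [feature] before today"},
    {"id": "workflow", "type": "open_ended",
     "text": "Walk me through [relevant workflow]"},
    {"id": "tried_story", "type": "open_ended", "show_if": {"tried": True},
     "text": "What happened when you tried [feature]?"},
    {"id": "blocker", "type": "open_ended", "show_if": {"tried": False},
     "text": "What would need to be true for you to try it?"},
    {"id": "relationship", "type": "single_choice",
     "text": "What best describes your current relationship with [feature]?",
     "options": ["Use it regularly", "Tried it, stopped",
                 "Aware but never tried", "Didn't know it existed"]},
]
```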
Diagnosing the Adoption Barrier
After collecting interviews across all three segments, Koji's analysis engine auto-clusters responses by theme. Common themes to look for (a simplified sketch of the signal-to-barrier mapping appears after these four lists):
Discovery Gap Signals
- "I didn't know that existed"
- "I've never seen that before"
- "Where is that?"
Recommended intervention: UI placement, onboarding spotlight, in-product tooltip
Comprehension Gap Signals
- "I'm not sure what it does"
- "I tried it but I didn't know what was supposed to happen"
- "The name is confusing"
Recommended intervention: Microcopy rewrite, rename the feature, add examples, improve empty states
Motivation Gap Signals
- "It seems like it's for power users, not me"
- "I don't think I have that problem"
- "It seems like a lot of work for what I get"
Recommended intervention: Reposition the feature, find a simpler entry point, or target a different segment
Workflow Mismatch Signals
- "I tried it but it doesn't fit how we work"
- "I'd have to change too much to use it"
- "It solves a problem I don't have but misses the one I do"
Recommended intervention: Redesign the feature, create workflow integrations, or reconsider scope
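Koji's theme clustering surfaces these groupings automatically, but the underlying mapping is easy to picture. The keyword matcher below is a deliberately simplified stand-in (real clustering is semantic, not string matching), with phrase lists condensed from the signals above.

```python
# Simplified stand-in for theme clustering: map quotes to barrier types by
# signal phrases, then to the matching intervention.
BARRIERS = {
    "discovery": {
        "signals": ["didn't know that existed", "never seen that", "where is that"],
        "intervention": "UI placement, onboarding spotlight, in-product tooltip",
    },
    "comprehension": {
        "signals": ["not sure what it does", "name is confusing"],
        "intervention": "Microcopy rewrite, rename, examples, better empty states",
    },
    "motivation": {
        "signals": ["for power users", "a lot of work for what i get"],
        "intervention": "Reposition, simpler entry point, different segment",
    },
    "workflow": {
        "signals": ["doesn't fit how we work", "change too much"],
        "intervention": "Redesign, workflow integrations, reconsider scope",
    },
}

def diagnose(quote):
    """Return (barrier, intervention) for the first matching signal phrase."""
    q = quote.lower()
    for barrier, info in BARRIERS.items():
        if any(sig in q for sig in info["signals"]):
            return barrier, info["intervention"]
    return "unclassified", None

print(diagnose("Honestly, I didn't know that existed until this call."))
# ('discovery', 'UI placement, onboarding spotlight, in-product tooltip')
```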
Feature Adoption Research at Scale
For large products with many segments and multiple low-adoption features, running interviews manually across every combination is prohibitively expensive. AI-powered interviewing changes the economics:
- Segment separately — Create a Koji study for each segment (active, tried-stopped, never-tried) and target specific user cohorts from your product analytics tool
- Run concurrently — All three studies can run simultaneously, collecting responses 24/7
- Compare themes across segments — Koji's theme clustering lets you see patterns within each segment and differences across them
- Quantitative distributions — Structured questions generate charts you can drop directly into stakeholder presentations
A feature adoption research program that would traditionally take 3–4 weeks (recruit, schedule, interview, transcribe, code, synthesize) can be completed in 5–7 days with Koji.
From Research to Recommendation
Feature adoption research should end with a clear recommendation, not just findings. Use this framework:
1. Diagnose first: Which adoption barrier type dominates the data?
2. Quantify the opportunity: What's the potential impact if adoption increases? (see the sizing sketch below)
3. Recommend an intervention: Specific design, content, or product changes
4. Define a success metric: How will you know the intervention worked?
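For step 2, a back-of-the-envelope sizing is usually enough for stakeholders. All numbers below are hypothetical placeholders:

```python
# Hypothetical opportunity sizing; replace every number with your own data.
eligible_users = 10_000              # users the feature is relevant to
current_adoption = 0.12              # 12% adoption today
target_adoption = 0.30               # post-intervention goal
retention_lift_per_adopter = 0.05    # assumed retention gain per new adopter

new_adopters = eligible_users * (target_adoption - current_adoption)
retained_users_gained = new_adopters * retention_lift_per_adopter

print(f"{new_adopters:.0f} new adopters")             # 1800
print(f"{retained_users_gained:.0f} extra retained")  # 90
```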
A Koji report lets you share this directly — themes, quotes, and your recommendation in a single shareable link that doesn't require a meeting to consume.
Feature Adoption Research Checklist
- Pulled usage data — identified the three user segments (active, tried-stopped, never-tried)
- Written separate question guides for each segment
- Added structured questions (scale + choice) alongside open-ended questions
- Deployed studies targeting each cohort
- Collected 10–15 interviews per segment
- Reviewed Koji's theme clusters for each study
- Diagnosed adoption barrier type (discovery / comprehension / motivation / workflow)
- Written recommendation with specific intervention and success metric
- Shared report with product, design, and growth teams
Related Resources
- Structured Questions in AI Interviews — Use scale, choice, and yes/no questions to segment users and quantify adoption barriers
- How to Analyze Qualitative Data: From Raw Interviews to Actionable Insights — Techniques for synthesizing findings across your three adoption segments
- Generative vs. Evaluative Research: When to Use Each Method — Decide whether your adoption problem needs discovery research or validation research
- Understanding Themes & Patterns — How Koji auto-clusters adoption barrier signals across interview responses
- Product Discovery Research: How to Validate Ideas Before Building — Apply similar research methods before features are built
- How to Build a Continuous Product Feedback Loop — Turn one-off feature adoption research into a systematic feedback system