Research-Driven Roadmap Prioritization: How to Use Customer Interviews to Build Better Roadmaps

Learn how to combine qualitative customer interviews with structured ranking and scale questions to make roadmap decisions backed by real user evidence — not internal opinions.

Roadmap prioritization is where product intuition meets political pressure. Engineering wants to tackle tech debt. Sales wants the feature that will close the next deal. Leadership wants the strategic bet. And somewhere in that conversation, the customer's actual priorities get lost.

Research-driven prioritization changes that dynamic. By running structured customer interviews before roadmap planning, product teams shift the conversation from "what do we think customers want?" to "here is what 25 customers told us, with distribution charts." That is a very different starting position.

This guide covers how to design and run roadmap prioritization research using qualitative interviews, structured question types, and AI-powered analysis — so your next planning cycle is grounded in evidence, not guesswork.

Why Traditional Prioritization Methods Fall Short

Most teams prioritize using some combination of:

  • RICE scoring ((Reach × Impact × Confidence) ÷ Effort; see the sketch after this list) — but "Impact" and "Confidence" are often internal estimates, not customer data
  • Feature voting tools (like Productboard or Canny) — good for signal, but voting attracts your most engaged users, not your target segment
  • Sales-driven lists — weighted toward deals, not customer outcomes
  • Executive intuition — valuable, but prone to recency bias and HiPPO dynamics (Highest Paid Person's Opinion)
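
To see how fragile the inputs are, here is a minimal RICE sketch in Python with illustrative numbers (the function name and values are hypothetical, not from any specific tool). The confidence input is exactly the internal estimate this list critiques:

```python
# Minimal RICE sketch with illustrative numbers; function and inputs
# are hypothetical, not from any specific tool.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Both confidence values below are internal guesses - the weak link.
print(rice_score(reach=500, impact=2.0, confidence=0.8, effort=4))  # 200.0
print(rice_score(reach=120, impact=3.0, confidence=0.5, effort=2))  # 90.0
```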

None of these methods are wrong on their own. But they all share the same problem: they tell you what features customers request, not what outcomes they are trying to achieve. When you understand the outcome, you can often find a better solution than the feature requested.

Customer interviews fix this by revealing the why behind requests.

What Research-Driven Prioritization Looks Like

Here is the core approach:

  1. Define the decision. What planning horizon are you researching? A quarterly roadmap? An annual strategy? A specific product area?
  2. Recruit the right participants. Prioritize your target ICP (ideal customer profile), not just your loudest users. Include a mix of happy customers, struggling customers, and churned users.
  3. Run structured interviews that reveal pain magnitude, outcome importance, and current workaround costs.
  4. Aggregate the data. Ranking and scale questions give you distribution charts across respondents. Open-ended questions give you the why behind the numbers.
  5. Bring the report to planning. The research report becomes the anchor for the prioritization conversation — not a slide of feature requests, but evidence of outcome importance and pain severity.

The process takes 5–7 days with async research. With synchronous interviews, the same study would take 3–4 weeks.

Designing the Prioritization Interview

The key to prioritization research is understanding that customers do not think in terms of your roadmap. They think in terms of their problems, their workflows, and the outcomes they need. Your job is to map from their language to your roadmap categories.

Step 1: Define your outcome areas

Before building the study, list the 4–6 major outcome areas you are considering for your roadmap. These are not features — they are the results customers are trying to achieve.

For example, a project management tool might define:

  • "Reduce time spent on status updates and reporting"
  • "Improve visibility into what is blocked and why"
  • "Make it easier to onboard new team members"
  • "Better integration with the tools we already use"
  • "Reduce time spent in planning meetings"

These become the options in your ranking and single_choice questions.

Step 2: Structure the interview

Opening (qualitative context)

Start with open-ended questions to understand current pain before asking about priorities. Never lead with ranking — context first.

  • "Walk me through how your team uses [product] in a typical week. What parts work well and what parts are frustrating?" (open_ended, AI follows up on frustrations)
  • "What is the most time-consuming or painful thing you do in [product area] right now?" (open_ended, AI probes for magnitude: "How often does this happen? What does it cost you?")
  • "If you could change one thing about how [product] works, what would it be?" (open_ended, AI probes: "What outcome would that enable for you?")

Outcome ranking (structured)

Now that the participant has articulated their pain, introduce the outcome areas:

  • "Here are the main improvements we are considering. Please rank them from most to least important for your team:" (ranking question with your 4–6 outcome areas)
  • "Which of these would have the biggest positive impact on your team's work?" (single_choice — forces a definitive top priority)

Pain severity (scale)

  • "How painful is [their top-ranked outcome area] for you today? (1 = minor inconvenience, 10 = blocking work)" (scale)
  • "How satisfied are you with how [product] currently handles [their top-ranked area]?" (scale: 1 = very dissatisfied, 10 = very satisfied)

The gap between pain severity and satisfaction score is your opportunity signal. High pain + low satisfaction = strong case for prioritization.
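
If you export the raw scale answers, the gap is simple arithmetic. A minimal sketch, assuming a hypothetical per-respondent export (area names and scores are illustrative):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-respondent export of the two scale answers (1-10).
responses = [
    {"area": "status updates", "pain": 8, "satisfaction": 3},
    {"area": "status updates", "pain": 7, "satisfaction": 4},
    {"area": "integrations",   "pain": 5, "satisfaction": 6},
    {"area": "integrations",   "pain": 6, "satisfaction": 5},
]

gaps = defaultdict(list)
for r in responses:
    gaps[r["area"]].append(r["pain"] - r["satisfaction"])

# Larger average gap = high pain plus low satisfaction = stronger case.
for area, values in sorted(gaps.items(), key=lambda kv: -mean(kv[1])):
    print(f"{area}: mean opportunity gap {mean(values):+.1f}")
```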

Current workarounds (qualitative)

  • "How are you handling [top pain area] today? What workarounds or external tools are you using?" (open_ended, AI probes for cost: "How much time does that take? What does it cost you?")

Context for confidence scoring

  • "How long has this been a problem for your team?" (single_choice: just started / 3–6 months / 6–12 months / over a year)
  • "Has this affected any decisions about staying with or leaving [product]?" (yes_no, AI probes on "yes")

Step 3: Segment your participants

Not all customers have the same priorities. Build segmentation into your study by collecting:

  • Company size (single_choice)
  • Role/use case (single_choice)
  • Plan tier (single_choice)
  • Industry (single_choice, if relevant)

This lets you slice the report: "Enterprise customers rank onboarding as the top priority; SMB customers rank integrations first."
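
As a sketch of that slicing, assuming a hypothetical export of each respondent's segment and top-ranked outcome (all names illustrative):

```python
from collections import Counter, defaultdict

# Hypothetical export: each respondent's segment and #1-ranked outcome.
respondents = [
    {"segment": "Enterprise", "top_outcome": "Easier onboarding"},
    {"segment": "Enterprise", "top_outcome": "Easier onboarding"},
    {"segment": "SMB",        "top_outcome": "Better integrations"},
    {"segment": "SMB",        "top_outcome": "Less time on status updates"},
    {"segment": "SMB",        "top_outcome": "Better integrations"},
]

top_by_segment = defaultdict(Counter)
for r in respondents:
    top_by_segment[r["segment"]][r["top_outcome"]] += 1

for segment, counts in top_by_segment.items():
    outcome, n = counts.most_common(1)[0]
    print(f"{segment}: '{outcome}' is #1 for {n} of {sum(counts.values())}")
```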

Reading the Results in Your Report

Koji's research report aggregates structured answers automatically:

Ranking question output: A ranked list with average position per outcome area, and a distribution showing how spread or concentrated the rankings are. Consensus on a top priority (most respondents rank the same item #1) is a stronger signal than a fractured distribution.
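
To make the aggregation concrete, here is a sketch of how average position and consensus fall out of raw ranking answers, using hypothetical data (the report computes this for you automatically):

```python
from statistics import mean

# Hypothetical ranking answers: one ordered list per respondent,
# position 1 = most important.
rankings = [
    ["status updates", "integrations", "onboarding"],
    ["status updates", "onboarding", "integrations"],
    ["integrations", "status updates", "onboarding"],
]

for area in rankings[0]:
    positions = [r.index(area) + 1 for r in rankings]
    ranked_first = sum(p == 1 for p in positions)  # consensus signal
    top_two = sum(p <= 2 for p in positions)       # feeds confidence scoring
    print(f"{area}: avg position {mean(positions):.2f}, "
          f"#1 for {ranked_first}/{len(rankings)}, "
          f"top-2 for {top_two}/{len(rankings)}")
```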

Scale question output: A distribution chart showing how pain severity and satisfaction cluster. Look for bimodal distributions — they often indicate two distinct user segments with different needs.

Open-ended synthesis: AI-generated themes from the qualitative probing, with representative quotes attached to each theme. These are the "whys" that explain the ranking data.

Quote library: Verbatim customer quotes, searchable by question and theme. These are the compelling evidence you bring to planning — stakeholders are moved by real customer voices in ways that spreadsheets are not.

Using Research Findings in Roadmap Planning

Bring the Koji report into your planning session, not a slide deck that summarizes it. The interactive report lets stakeholders explore the data themselves — drilling into quotes, checking segment filters, and seeing the distribution behind any aggregate number.

Anchoring the conversation: Share the top-level ranking results first. "Across 25 customers, reducing status update time was the #1 outcome. Integration improvements were #2. Here is why..." then share the supporting quotes.

Handling requests that did not rank highly: When a sales-driven feature request ranks low in the research, you have evidence to redirect: "We tested this with 25 customers and it ranked 5th out of 6. However, 3 customers did mention it as their top need — all enterprise, all in financial services. Should we treat it as a niche play for that segment?"

Communicating confidence: Not all RICE confidence scores are equal. For items grounded in research, confidence can be set objectively rather than guessed: "We interviewed 25 customers and 19 ranked this in their top 2. That justifies a high confidence score."
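
One possible mapping from research evidence to a RICE confidence input, using the commonly cited 100%/80%/50% confidence buckets (the share thresholds here are illustrative, not a standard):

```python
def research_confidence(top_two_count: int, respondents: int) -> float:
    """Map top-2 ranking share onto common RICE confidence buckets."""
    share = top_two_count / respondents
    if share >= 0.7:
        return 1.0   # high confidence
    if share >= 0.4:
        return 0.8   # medium confidence
    return 0.5       # low confidence - gather more evidence first

print(research_confidence(19, 25))  # 19/25 = 0.76 -> 1.0
```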

Finding the hidden opportunity: The gap analysis (high pain severity + low satisfaction) often reveals opportunities that were not on the team's radar. "Nobody requested a fix for [X], but 15 out of 25 customers rated their current experience 3/10 for satisfaction and 8/10 for pain severity. That gap is the biggest opportunity we found."

Continuous Prioritization Research

One planning-cycle study is better than nothing. But the teams that benefit most from research-driven prioritization make it a continuous practice:

Quarterly roadmap research: Run a 20–30 participant study 4–6 weeks before each planning cycle. Use the same outcome areas each quarter to track shifts in priority over time.
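
Keeping the outcome areas stable makes the quarter-over-quarter comparison trivial. A sketch with hypothetical average ranks (lower = higher priority):

```python
# Hypothetical average ranks (lower = higher priority) from two
# consecutive quarterly studies over the same outcome areas.
q1 = {"status updates": 1.4, "integrations": 2.1, "onboarding": 2.5}
q2 = {"status updates": 1.9, "integrations": 1.6, "onboarding": 2.5}

for area in q1:
    delta = q2[area] - q1[area]
    trend = "rising" if delta < 0 else "falling" if delta > 0 else "flat"
    print(f"{area}: {q1[area]:.1f} -> {q2[area]:.1f} ({trend})")
```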

Ongoing feedback channel: Keep a standing Koji study open for customers to complete after key in-product events (feature usage milestones, support ticket resolution). Incoming responses accumulate into a rolling view of priorities.

Post-release validation: After shipping a high-priority item, run a shorter study (10–15 participants) to verify the outcome landed as expected. "Did shipping [feature] actually reduce your time on status updates?" closes the loop on your prioritization decisions.

Combining Research with Quantitative Product Data

Interview data is most powerful when it explains what quantitative data shows:

  • Feature adoption analytics reveal what customers are using. Interviews reveal why underused features are underused — a discovery that often leads to positioning or UX changes rather than feature removal.
  • Support ticket patterns show what is breaking. Interviews reveal whether those tickets reflect workarounds for a deeper unmet need or genuine bugs.
  • Churn data shows who is leaving. Interviews with churned customers (or customers at risk) reveal whether the root cause is something your roadmap can address.

Koji's structured question types let you collect the quantitative distribution data (ranking, scale, choice) alongside qualitative interviews — so you do not need a separate survey for the numbers.

Template: Prioritization Research Study

Here is a complete study structure you can replicate in Koji:

| # | Question | Type | Purpose |
|---|----------|------|---------|
| 1 | Describe how your team uses [product] in a typical week. What works and what frustrates you? | open_ended | Context, friction discovery |
| 2 | What is the most painful thing you experience in [product area]? | open_ended | Pain identification |
| 3 | Rank these outcomes from most to least important: [list] | ranking | Priority signal |
| 4 | Which of these would your team benefit from most right now? | single_choice | Top priority |
| 5 | How painful is [top area] for you today? (1–10) | scale | Pain magnitude |
| 6 | How satisfied are you with how we currently handle [top area]? (1–10) | scale | Opportunity gap |
| 7 | How are you working around [top area] today? | open_ended | Workaround cost |
| 8 | Has this affected your decision to stay with [product]? | yes_no | Churn signal |
| 9 | Company size | single_choice | Segmentation |
| 10 | Role | single_choice | Segmentation |

This study runs in 12–15 minutes and produces everything you need for a well-grounded planning cycle.
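
If you keep study definitions under version control so the outcome areas stay identical from quarter to quarter, the template is easy to capture as plain data. A hypothetical sketch — the field names are illustrative, not Koji's actual schema:

```python
# The template expressed as data; field names are illustrative,
# not Koji's actual schema.
STUDY = [
    {"q": "Describe how your team uses [product] in a typical week. "
          "What works and what frustrates you?",        "type": "open_ended"},
    {"q": "What is the most painful thing you experience in [product area]?",
     "type": "open_ended"},
    {"q": "Rank these outcomes from most to least important: [list]",
     "type": "ranking"},
    {"q": "Which of these would your team benefit from most right now?",
     "type": "single_choice"},
    {"q": "How painful is [top area] for you today? (1-10)",
     "type": "scale"},
    {"q": "How satisfied are you with how we currently handle [top area]? (1-10)",
     "type": "scale"},
    {"q": "How are you working around [top area] today?",
     "type": "open_ended"},
    {"q": "Has this affected your decision to stay with [product]?",
     "type": "yes_no"},
    {"q": "Company size", "type": "single_choice"},
    {"q": "Role",         "type": "single_choice"},
]
```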
