How to Run Feature Prioritization Surveys That Build Products Users Actually Want

Learn how to run feature prioritization surveys using RICE, Kano, MoSCoW, and opportunity scoring frameworks. Combine quantitative ranking with AI-driven qualitative depth to build what users truly need.

Every product team has more ideas than engineering capacity. The backlog is overflowing. Stakeholders are lobbying for their pet features. Sales is forwarding customer requests. And somewhere in the noise, there are a handful of features that would genuinely move the needle for your users and your business.

Feature prioritization surveys cut through the politics and gut feelings. They give you a structured, data-driven way to understand what your users actually need, how badly they need it, and what they would trade off to get it. But traditional surveys only capture surface-level rankings. Users check boxes and drag items into order, but they never explain the reasoning behind their choices.

Koji transforms feature prioritization from a ranking exercise into a strategic research program. The AI interviewer presents structured prioritization questions, then follows up conversationally to understand the why behind every ranking, the workflows that would change, and the pain points that would disappear.

Why Feature Prioritization Surveys Matter

Building the wrong features is the most expensive mistake a product team can make. According to the Standish Group, 64% of software features are rarely or never used. That is wasted engineering time, wasted design effort, and missed opportunities to build what actually matters.

Feature prioritization surveys help you:

  • Validate assumptions before committing engineering resources
  • Quantify demand across your user base, not just the loudest voices
  • Understand context behind feature requests so you can design better solutions
  • Align stakeholders around user data rather than opinions
  • Identify segments that value different capabilities differently

The Major Prioritization Frameworks

RICE Scoring

RICE (Reach, Impact, Confidence, Effort) is a quantitative scoring framework developed at Intercom. Each feature candidate gets scored across four dimensions:

  • Reach: How many users will this feature affect in a given time period?
  • Impact: How much will this feature move the needle for each user? (Scored 0.25 to 3)
  • Confidence: How confident are you in your estimates? (Percentage)
  • Effort: How many person-months will this take?

RICE Score = (Reach x Impact x Confidence) / Effort
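
To make the arithmetic concrete, here is a minimal Python sketch of the RICE calculation. The feature names and input numbers are hypothetical illustrations, not benchmarks:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort.

    reach:      users affected per time period (e.g., per quarter)
    impact:     0.25 (minimal) to 3 (massive)
    confidence: 0.0-1.0 (e.g., 0.8 for 80%)
    effort:     person-months
    """
    return (reach * impact * confidence) / effort

# Hypothetical candidates: (reach, impact, confidence, effort)
candidates = {
    "Bulk export":    (1200, 1.0, 0.8, 2.0),
    "AI suggestions": (400,  2.0, 0.5, 6.0),
}
for name, args in sorted(candidates.items(), key=lambda kv: -rice_score(*kv[1])):
    print(f"{name}: {rice_score(*args):.0f}")
# Bulk export: 480, AI suggestions: 67
```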

Your survey can directly feed the Reach and Impact components:

Reach Question (Single Choice): "How often would you use a [feature name] capability?"

  • Daily
  • Weekly
  • Monthly
  • Rarely
  • Never

Impact Question (Scale 1-5): "If [feature name] existed today, how much would it improve your workflow?"

  • 1: No improvement
  • 2: Slight improvement
  • 3: Moderate improvement
  • 4: Significant improvement
  • 5: Transformative improvement

With Koji, the AI interviewer asks these structured questions and then probes deeper: "You said this would be a significant improvement. Can you walk me through your current workflow and where this feature would fit in?" This qualitative layer turns a RICE score from a guess into an informed estimate.

The Kano Model

The Kano model classifies features into five categories based on how their presence or absence affects satisfaction:

  • Must-Be (Basic): Users expect these. Their presence does not delight, but their absence causes frustration. Example: a login page loading in under 3 seconds.
  • One-Dimensional (Performance): Satisfaction scales linearly with how well these are implemented. Example: search speed and relevance.
  • Attractive (Delighters): Users do not expect these, but they create disproportionate satisfaction. Example: AI-powered suggestions.
  • Indifferent: Users do not care either way.
  • Reverse: Some users actively do not want this feature.

Kano Survey Design:

For each feature, ask two questions:

Functional Question (Single Choice): "If [product] had [feature], how would you feel?"

  • I would love it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I would dislike it

Dysfunctional Question (Single Choice): "If [product] did NOT have [feature], how would you feel?"

  • I would love it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I would dislike it

Cross-reference the two answers on the Kano evaluation table to classify each feature. The magic of running this on Koji is the AI follow-up: when a user says they would "love" a feature, the AI asks what specific problem it solves. When they say they "expect" it, the AI explores whether competitors already offer it.
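If you want to automate the classification step, here is a minimal Python sketch using one widely published version of the Kano evaluation table. The table encoding, answer labels, and example responses are assumptions; check them against the version of the table your team uses:

```python
from collections import Counter

ANSWERS = ["love", "expect", "neutral", "tolerate", "dislike"]

# Rows: functional answer; columns: dysfunctional answer (same order as ANSWERS).
# A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent,
# R=Reverse, Q=Questionable (contradictory answer pair)
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],  # "I would love it"
    ["R", "I", "I", "I", "M"],  # "I expect it"
    ["R", "I", "I", "I", "M"],  # "I am neutral"
    ["R", "I", "I", "I", "M"],  # "I can tolerate it"
    ["R", "R", "R", "R", "Q"],  # "I would dislike it"
]

def classify(functional: str, dysfunctional: str) -> str:
    return KANO_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# Hypothetical responses for one feature: (functional, dysfunctional) per user
responses = [("love", "dislike"), ("love", "dislike"),
             ("expect", "dislike"), ("love", "tolerate")]
category, count = Counter(classify(f, d) for f, d in responses).most_common(1)[0]
print(category, count)  # modal category across respondents -> O 2 (one-dimensional)
```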

Opportunity Scoring (Outcome-Driven Innovation)

Developed by Tony Ulwick, opportunity scoring identifies underserved outcomes. For each user outcome (job-to-be-done), you measure:

Importance Question (Scale 1-10): "How important is it to you to [outcome]?"

Satisfaction Question (Scale 1-10): "How satisfied are you with your current ability to [outcome]?"

Opportunity Score = Importance + max(Importance - Satisfaction, 0)

Scores above 12 indicate underserved outcomes ripe for innovation. Scores below 6 indicate over-served areas where you should not invest further.
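A minimal Python sketch of the scoring and the thresholds above; the outcome names and ratings are hypothetical:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Importance + max(Importance - Satisfaction, 0), both on 1-10 scales."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical outcomes: (mean importance, mean satisfaction) from the survey
outcomes = {
    "track project status at a glance": (8.6, 3.1),
    "export reports quickly":           (6.2, 7.5),
}
for outcome, (imp, sat) in outcomes.items():
    score = opportunity_score(imp, sat)
    label = ("underserved" if score > 12
             else "over-served" if score < 6
             else "appropriately served")
    print(f"{outcome}: {score:.1f} ({label})")
# track project status at a glance: 14.1 (underserved)
# export reports quickly: 6.2 (appropriately served)
```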

This framework is powerful because it focuses on outcomes rather than features. Users are not always good at imagining solutions, but they are excellent at describing their struggles. Koji amplifies this by following up on high-importance, low-satisfaction outcomes: "You rated your satisfaction with [outcome] as a 3 out of 10. What specifically makes this difficult today?"

MoSCoW Prioritization

MoSCoW categorizes features into:

  • Must Have: Non-negotiable for the next release
  • Should Have: Important but not critical
  • Could Have: Nice to have if time permits
  • Won't Have: Explicitly out of scope for now

Survey Implementation (Single Choice per feature): "For the next version of [product], how would you categorize [feature]?"

  • Must have - I cannot use the product effectively without this
  • Should have - This is important but I can work around it
  • Could have - This would be nice but is not essential
  • Won't need - This is not relevant to my use case

MoSCoW is simple and stakeholder-friendly. The survey data lets you compare internal MoSCoW rankings (from your team) against external rankings (from users) to identify misalignment.
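One lightweight way to surface that misalignment is to take each feature's modal user tier and compare it with the internal tier. A minimal Python sketch, with hypothetical features, votes, and internal rankings:

```python
from collections import Counter

# Hypothetical user votes per feature (tiers from the survey question above)
user_votes = {
    "SSO":       ["must", "must", "should", "must", "could"],
    "Dark mode": ["could", "wont", "could", "should", "could"],
}
# Hypothetical internal team tiers for the same features
internal_tiers = {"SSO": "should", "Dark mode": "must"}

for feature, votes in user_votes.items():
    user_tier = Counter(votes).most_common(1)[0][0]  # modal tier among users
    flag = "MISALIGNED" if user_tier != internal_tiers[feature] else "aligned"
    print(f"{feature}: users={user_tier}, internal={internal_tiers[feature]} ({flag})")
```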

Weighted Scoring with Custom Criteria

For teams that want maximum flexibility, create a weighted scoring matrix:

  1. Define 4-6 criteria (strategic alignment, user demand, revenue impact, technical feasibility, competitive advantage)
  2. Assign weights to each criterion (must sum to 100%)
  3. Score each feature 1-5 on each criterion
  4. Calculate weighted totals

Your survey captures the user-facing criteria (demand and satisfaction), while internal teams score technical and strategic criteria. The combined matrix gives you a holistic view.
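A minimal Python sketch of the matrix arithmetic; the criteria weights and feature scores are hypothetical placeholders:

```python
# Hypothetical criteria weights (must sum to 1.0, i.e., 100%)
weights = {
    "user_demand": 0.30,        # from survey data
    "revenue_impact": 0.25,
    "strategic_alignment": 0.20,
    "technical_feasibility": 0.15,
    "competitive_advantage": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Each feature scored 1-5 per criterion (survey-derived plus internal scores)
features = {
    "Bulk export":    {"user_demand": 4, "revenue_impact": 3, "strategic_alignment": 2,
                       "technical_feasibility": 5, "competitive_advantage": 2},
    "AI suggestions": {"user_demand": 3, "revenue_impact": 4, "strategic_alignment": 5,
                       "technical_feasibility": 2, "competitive_advantage": 4},
}

def weighted_total(scores: dict) -> float:
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(features.items(), key=lambda kv: -weighted_total(kv[1])):
    print(f"{name}: {weighted_total(scores):.2f}")
# AI suggestions: 3.60, Bulk export: 3.30
```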

Designing Your Feature Prioritization Survey

Step 1: Define the Feature Candidates

Do not dump 50 features into a survey. Users will disengage after evaluating 8-12 items. Pre-filter your list:

  • Remove features with clear technical blockers
  • Group similar requests into themes
  • Prioritize features where you genuinely have a decision to make

Step 2: Choose Your Framework

Framework | Best For | Complexity | Output
RICE | Engineering-driven teams | Medium | Numerical scores
Kano | Understanding feature categories | Medium | Feature classification
Opportunity Scoring | Jobs-to-be-done focused teams | Low | Opportunity gaps
MoSCoW | Stakeholder alignment | Low | Priority tiers
Weighted Scoring | Custom decision criteria | High | Composite scores

Step 3: Structure the Koji Study

A well-designed Koji feature prioritization study combines:

  1. Screener questions to segment users by role, tenure, and usage patterns
  2. Framework-specific structured questions (scales, rankings, single choice)
  3. AI-driven qualitative follow-up on every structured response
  4. MaxDiff or forced-rank questions to identify relative priorities

Example Koji Study Configuration:

Opening (Scale 1-10): "Overall, how well does [product] meet your needs today?"

For each feature (3-5 features max per session):

Ranking: "Please rank these features from most to least important to your workflow: [Feature A], [Feature B], [Feature C], [Feature D]"

Scale (1-5) per feature: "How much would [Feature A] improve your daily workflow?"

Open-ended (AI follows up): "What problem would [Feature A] solve for you specifically?"

The AI interviewer will naturally probe: "You ranked Feature C first but rated Feature A as a 5 for workflow improvement. Can you help me understand that? Is there a reason Feature C is more important despite Feature A having more daily impact?"

Step 4: Segment Your Analysis

Never report feature prioritization as a single aggregate. Different user segments have different needs:

  • By role: Power users vs. casual users vs. administrators
  • By tenure: New users vs. established users
  • By plan/tier: Free vs. paid vs. enterprise
  • By use case: Different jobs-to-be-done
  • By satisfaction: Promoters vs. detractors

A feature that ranks #1 for enterprise users but #8 for SMB users tells a very different story than one that ranks #3 across all segments.
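If your responses live in a table, a few lines of pandas can produce the per-segment rankings behind that comparison. A minimal sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical long-format survey data: one row per respondent x feature
df = pd.DataFrame({
    "segment": ["enterprise", "enterprise", "enterprise", "smb", "smb", "smb"],
    "feature": ["SSO", "Dark mode", "SSO", "SSO", "Dark mode", "Dark mode"],
    "impact":  [5, 2, 5, 2, 4, 4],  # 1-5 workflow-improvement rating
})

# Mean impact per feature within each segment, then rank within each segment
mean_impact = df.groupby(["segment", "feature"], as_index=False)["impact"].mean()
mean_impact["rank"] = (
    mean_impact.groupby("segment")["impact"].rank(ascending=False, method="min")
)
print(mean_impact.sort_values(["segment", "rank"]))
```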

Common Mistakes in Feature Prioritization Surveys

Mistake 1: Asking Users to Design Solutions

Users are experts on their problems, not on solutions. Instead of "Would you want a Kanban board view?", ask "How difficult is it to track the status of your projects?" The former biases toward a specific solution. The latter reveals the underlying need.

Mistake 2: Leading with Your Preferred Feature

Randomize feature order. Do not put your CEO's pet project first. Present features with neutral descriptions that focus on the user benefit, not the technical implementation.

Mistake 3: Ignoring the "Why"

A ranking without reasoning is dangerous. Feature A might rank first because users associate it with a specific pain point that could actually be solved differently. Koji's conversational follow-up eliminates this risk by capturing the reasoning behind every ranking.

Mistake 4: Surveying the Wrong Users

Feature prioritization should target active users who have enough context to evaluate proposals. New signups or churned users have different (and equally valid) perspectives, but they should be surveyed separately with different framing.

Mistake 5: Analysis Paralysis

Do not run all five frameworks on the same feature set. Pick one primary framework and supplement with qualitative depth. Koji's AI follow-ups give you the qualitative richness that usually requires a separate interview study.

Analyzing Feature Prioritization Data

Quantitative Analysis

  1. Calculate framework scores (RICE, opportunity scores, or weighted totals)
  2. Rank features within each segment
  3. Identify consensus features that rank highly across all segments
  4. Identify polarizing features that rank very differently across segments (a simple heuristic for this and the previous step is sketched after this list)
  5. Create a 2x2 matrix: User demand (survey data) vs. Strategic value (internal assessment)
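For steps 3 and 4, one simple heuristic is rank spread across segments: a stable rank signals consensus, a wide gap signals polarization. A minimal Python sketch, with hypothetical ranks and an assumed spread cutoff:

```python
# Hypothetical per-segment ranks for each feature (1 = highest priority)
ranks = {
    "SSO":       {"enterprise": 1, "smb": 8},
    "Dark mode": {"enterprise": 3, "smb": 3},
}

SPREAD_THRESHOLD = 3  # assumed cutoff; tune to your number of features

for feature, by_segment in ranks.items():
    spread = max(by_segment.values()) - min(by_segment.values())
    label = "polarizing" if spread >= SPREAD_THRESHOLD else "consensus"
    print(f"{feature}: spread={spread} -> {label}")
# SSO: spread=7 -> polarizing
# Dark mode: spread=0 -> consensus
```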

Qualitative Analysis from Koji AI Follow-ups

Koji automatically generates themes from the conversational follow-ups. Look for:

  • Recurring workflows mentioned across multiple users
  • Emotional language indicating strong pain points
  • Workaround descriptions showing users actively compensating for missing features
  • Competitive mentions revealing features users get elsewhere
  • Conditional statements like "I would use this if..." that reveal adoption requirements

Building the Business Case

For each high-priority feature, your Koji report gives you:

  • Quantitative demand data (percentage of users, segment breakdown)
  • Qualitative evidence (direct quotes, workflow descriptions)
  • Impact estimates (satisfaction improvement predictions)
  • Risk factors (adoption barriers mentioned by users)

This is dramatically more compelling than a spreadsheet of rankings. Stakeholders can read actual user stories alongside the numbers.

Feature Prioritization with Koji: Step-by-Step

  1. Create your Koji study with 3-5 feature concepts, each with a one-sentence neutral description
  2. Configure structured questions: Use ranking for relative priority, scales for individual impact, and single-choice for framework-specific questions (Kano, MoSCoW)
  3. Set the AI follow-up context: Tell Koji to probe for current workflows, pain points, and workarounds
  4. Recruit from your active user base: Send the Koji interview link via in-app notification or email
  5. Collect 50-100 responses: Enough for segment-level analysis
  6. Review the Koji report: Quantitative rankings + AI-synthesized qualitative themes
  7. Present to stakeholders: Use the combined quant/qual evidence to make decisions

When to Run Feature Prioritization Surveys

  • Quarterly planning: Before each planning cycle, validate the proposed roadmap against user priorities
  • After major launches: Recalibrate what users want next now that the landscape has changed
  • When entering new markets: New segments may have entirely different priority hierarchies
  • During resource constraints: When you can only build 2 of 10 proposed features
  • After competitive moves: When a competitor launches something, re-evaluate your priorities

The Bottom Line

Feature prioritization surveys transform product decisions from political battles into evidence-based discussions. The best teams combine quantitative frameworks (RICE, Kano, opportunity scoring) with qualitative depth to understand not just what users want, but why they want it and how they would use it.

Koji makes this possible at scale. Instead of choosing between a survey of 500 users (quantitative but shallow) or interviews with 15 users (deep but narrow), you get both: structured prioritization data from every participant, enriched by AI-driven conversational follow-up that captures the context behind every ranking.

The result is a feature roadmap built on evidence, not opinions.

A comprehensive guide to designing, deploying, and acting on Net Promoter Score surveys. Learn the best practices that separate vanity metrics from actionable insights, and how Koji's conversational approach unlocks the "why" behind every score.