RICE Prioritization Framework: How to Score and Rank Product Ideas

Master the RICE scoring framework (Reach, Impact, Confidence, Effort) for product prioritization. Includes the formula, worked examples, free template, and how customer research transforms Confidence scores.

Bottom line: RICE is a product prioritization framework that scores ideas using four factors — Reach, Impact, Confidence, and Effort — to produce a single comparable number. The formula is (Reach × Impact × Confidence) ÷ Effort. RICE forces teams to expose hidden assumptions, especially in the Confidence variable, which is where customer research delivers the biggest scoring upgrade.

Product teams ship features that nobody uses for one reason: they prioritized based on opinion instead of evidence. Pendo's 2019 Feature Adoption Report found that 80% of features in the average software product are rarely or never used, a staggering misallocation of engineering investment and exactly the waste the RICE framework exists to prevent.

RICE was created by Sean McBride at Intercom in 2016 and published on the Intercom blog, where it became one of the most widely adopted prioritization scoring models in product management. Unlike its predecessor ICE (Impact, Confidence, Ease), RICE adds Reach — forcing teams to quantify how many users will actually be affected before they invest engineering time.

The RICE Formula Explained

RICE produces a comparable score across very different initiatives. A new onboarding flow, a developer-facing API change, and a redesign of the billing pages can all be scored on the same scale, and the framework will surface which one returns the most value per unit of engineering effort.

RICE Score = (Reach × Impact × Confidence) ÷ Effort

The score is unitless. It is meaningful only relative to other RICE scores in the same backlog. A RICE score of 40 is twice as good as a score of 20 inside the same roadmap exercise — it has no meaning outside that comparison.
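
In code, the whole framework is one line of arithmetic. Here is a minimal sketch in Python (the function and argument names are ours, not part of any standard):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """(Reach x Impact x Confidence) / Effort.

    reach      -- users or events affected per agreed time window (e.g. per quarter)
    impact     -- multiplier from the standard scale: 0.25, 0.5, 1.0, 2.0, or 3.0
    confidence -- fraction between 0 and 1 (80% -> 0.8)
    effort     -- person-months; must be positive
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort
```

Because Confidence enters as a fraction, an idea scored at 50% confidence keeps only half of its raw Reach × Impact value.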

Reach — How Many People Will This Affect?

Reach is the number of users, sessions, or events the initiative will touch in a defined time period. Pick a consistent unit and a consistent window — usually "users per quarter" or "transactions per month" — and use it across every initiative being scored.

For example: a checkout-page improvement might reach 30,000 customers per quarter. A power-user feature in the admin dashboard might reach 400 customers per quarter.

The most common mistake is conflating "active users" with "users who will encounter this feature." A push-notification opt-in flow only reaches the slice of users who already enable notifications. Always estimate reach for the realistic addressable audience, not your total MAU.

Impact — How Much Will It Move the Needle Per User?

Impact uses a standardized multiple-choice scale rather than a free-form number, which prevents teams from gaming the math:

  • 3.0 — Massive impact
  • 2.0 — High impact
  • 1.0 — Medium impact
  • 0.5 — Low impact
  • 0.25 — Minimal impact

Anchor each level to a measurable outcome. "Massive" should mean something like "doubles conversion on this step" or "removes the top support ticket category." If every team member scores impact differently, the framework fails — the scoring scale must be calibrated together before any RICE session.
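
One way to keep scorers honest is to encode the calibrated scale as a shared constant, so nobody types in a flattering 2.5. A sketch (the anchor comments are examples to calibrate, not a standard):

```python
# Calibrated Impact scale: agree on the anchors as a team before any RICE session.
IMPACT_SCALE = {
    "massive": 3.0,   # e.g. doubles conversion on this step
    "high":    2.0,   # e.g. removes a top support ticket category
    "medium":  1.0,
    "low":     0.5,
    "minimal": 0.25,  # barely noticeable to most users
}
```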

Confidence — How Sure Are You?

Confidence is expressed as a percentage and is multiplied into the score as a penalty for uncertainty:

  • 100% — High confidence. Backed by quantitative data, recent user research, or a tested prototype.
  • 80% — Medium confidence. Some research, but key assumptions remain.
  • 50% — Low confidence. Mostly intuition or anecdotal evidence.
  • Below 50% — McBride recommends treating these as "moonshots" — capture them, but de-prioritize them until you can validate.

Confidence is the single most powerful lever in the RICE formula, because it punishes unvalidated assumptions. A massive-impact, broad-reach idea with 50% confidence will score lower than a more modest idea with 100% confidence — which is exactly the bias RICE is designed to create.

This is where customer research becomes non-negotiable. Confidence scores above 80% require evidence, not opinions. Teams that run continuous customer discovery can routinely justify 30–50% higher RICE scores on validated ideas, because their evidence supports a higher Confidence input.

Effort — What Will This Cost?

Effort is measured in person-months: one person-month is the work a single team member completes in a month. Estimate it collaboratively with engineering, design, and QA. A two-person team working for three months is 6 person-months.

Person-months are deliberately concrete. Story points, T-shirt sizes, and Fibonacci estimates encourage hand-waving, which is exactly the behavior RICE tries to eliminate.

A Worked Example

| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
| --- | --- | --- | --- | --- | --- |
| New mobile onboarding flow | 25,000 | 2.0 | 80% | 6 | 6,667 |
| Self-serve refund automation | 8,000 | 3.0 | 100% | 2 | 12,000 |
| Power-user keyboard shortcuts | 1,200 | 1.0 | 100% | 1 | 1,200 |
| AI-powered dashboard recommendations | 40,000 | 2.0 | 50% | 8 | 5,000 |

Self-serve refund automation wins despite smaller reach. Why? Tight scope, validated impact, and high confidence — the small, certain bet beats the big, uncertain one. This is the kind of insight RICE forces into the open.
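
The ranking is easy to reproduce with the rice_score sketch from earlier; the tuples below mirror the table:

```python
initiatives = [
    ("New mobile onboarding flow",           25_000, 2.0, 0.80, 6),
    ("Self-serve refund automation",          8_000, 3.0, 1.00, 2),
    ("Power-user keyboard shortcuts",         1_200, 1.0, 1.00, 1),
    ("AI-powered dashboard recommendations", 40_000, 2.0, 0.50, 8),
]

# Score each initiative and sort highest first.
ranked = sorted(
    ((name, rice_score(reach, impact, conf, effort))
     for name, reach, impact, conf, effort in initiatives),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{score:>8,.0f}  {name}")
#  12,000  Self-serve refund automation
#   6,667  New mobile onboarding flow
#   5,000  AI-powered dashboard recommendations
#   1,200  Power-user keyboard shortcuts
```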

Where Most Teams Get RICE Wrong

Across teams applying RICE, three failure modes account for nearly every scoring dispute:

1. Inconsistent time windows for Reach. One PM scores per quarter, another scores per year. Standardize before scoring.

2. Confidence inflation. Teams routinely assign 80%+ confidence to ideas with zero research evidence. Adopt the rule: "If you cannot name the data source, your confidence ceiling is 50%" (enforced mechanically in the sketch after this list).

3. Effort estimates from a single person. Engineering must co-own the Effort estimate. Designer-only or PM-only estimates skew low and crash sprints.
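
That ceiling rule from point 2 is easy to enforce in code. A minimal sketch (the rule comes from this article; the function name and signature are ours):

```python
def capped_confidence(confidence: float, evidence_source: str | None) -> float:
    """Enforce "if you cannot name the data source, your ceiling is 50%".

    confidence      -- proposed value as a fraction (0.8 for 80%)
    evidence_source -- a named source such as a study or dashboard; None if unnamed
    """
    if not evidence_source:
        return min(confidence, 0.5)  # unnamed evidence caps at the gut-feel ceiling
    return confidence
```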

How Customer Research Drives RICE Confidence

The Confidence variable is RICE's biggest leverage point — and the only way to move it above 50% honestly is through evidence. Teams that conduct ongoing customer research have a permanent scoring advantage.

"The PMs that get the most out of RICE are the ones who pair it with continuous discovery. Without that habit, every confidence score is a guess." — Teresa Torres, Continuous Discovery Habits (2021)

Traditional research approaches make this difficult. Recruiting 20 customer interviews, scheduling them, conducting them, and synthesizing themes can take 4–6 weeks per RICE cycle. For roadmaps that re-prioritize quarterly, this means evidence is always one cycle behind decisions.

The Modern Approach: AI-Native Research for RICE Confidence

AI-native research platforms like Koji collapse the timeline. Instead of scheduling traditional moderated interviews, teams launch AI-moderated customer interviews that run continuously in the background. Recent industry data shows teams using AI-assisted research tools report 60% faster time-to-insight compared to traditional moderated methods.

Here is how each RICE variable benefits when research is automated:

Reach validation through structured questions. Koji's six structured question types (open_ended, scale, single_choice, multiple_choice, ranking, yes_no) let you quantify how many customers face the problem you are scoring. A ranking question across 200 customer interviews tells you exactly which feature lands in the top three for which segments — directly feeding your Reach estimate.

Impact validation through scale questions. Pair an open_ended interview prompt ("walk me through the last time this slowed you down") with a scale rating ("how disruptive is this to your weekly workflow, 1–10?"). The quantitative anchor calibrates Impact scores; the qualitative depth explains why.

Confidence elevation through real evidence. Instead of a 50% gut score, you cite "47 of 60 interviews surfaced this as a top-three frustration; 38 said they would pay extra to solve it." Confidence moves to 90%+ with defensible data — and your roadmap survives executive scrutiny.

Effort validation through prototype testing. Use AI-moderated interviews to test prototypes before committing engineering. A two-day Koji study can save eight weeks of mis-scoped development.
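
To make the Confidence elevation concrete, here is a sketch that translates interview tallies into a Confidence input. The thresholds are our illustrative assumptions, not anything Koji prescribes:

```python
def confidence_from_evidence(n_interviews: int, n_confirming: int) -> float:
    """Translate interview evidence into a RICE Confidence fraction.

    Thresholds here are illustrative, not a standard: calibrate them
    with your own team before relying on the output.
    """
    if n_interviews == 0:
        return 0.5                       # no evidence: gut-feel ceiling
    rate = n_confirming / n_interviews
    if n_interviews >= 30 and rate >= 0.6:
        return 0.9                       # broad, consistent signal
    if n_interviews >= 10 and rate >= 0.5:
        return 0.8                       # some research, assumptions remain
    return 0.5                           # not enough signal yet

# The example from above: 47 of 60 interviews confirmed the problem.
print(confidence_from_evidence(60, 47))  # -> 0.9
```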

The pattern is simple: AI-moderated research turns Confidence from a guess into a number you can defend with quotes, distributions, and ranked preferences. That is the difference between a RICE roadmap that ships winners and one that ships waste.

When to Use RICE — And When Not To

RICE works well when:

  • You have a backlog of comparable initiatives competing for the same engineering capacity.
  • You can estimate Reach and Effort with reasonable accuracy.
  • Stakeholders need a transparent, defensible ranking.

RICE breaks down when:

  • Initiatives are non-comparable (e.g., compliance work vs. growth experiments — use a separate track).
  • Strategy is the question, not prioritization. RICE ranks ideas within a strategy; it does not generate strategy.
  • Confidence is uniformly low across the backlog. Run customer discovery first; come back to RICE afterward.

The McKinsey 2024 State of Product Management report found that teams using structured prioritization frameworks like RICE were 2.3× more likely to hit roadmap commitments than teams using opinion-driven roadmaps. But the same study noted that teams using frameworks without research inputs underperformed those that paired frameworks with continuous discovery.

How to Run Your First RICE Session

  1. Pre-work: Standardize the Reach time window (e.g., "users per quarter") and the Impact scale anchors. Distribute to participants 48 hours before.
  2. Workshop (90 minutes): Score 10–15 initiatives as a cross-functional team — PM, engineering, design, customer-facing rep. One person owns the scoring template; everyone votes silently on each variable, then discusses gaps.
  3. Confidence audit: For any score above 80% confidence, the proposer must name the evidence source — research study, analytics dashboard, customer ticket cluster. If no source exists, drop to 50%.
  4. Effort sign-off: Engineering lead signs off on every Effort estimate before the score is final.
  5. Rank and review: Sort by RICE score. Review the top 5 with leadership. Top items go to next quarter's roadmap.
  6. Re-score quarterly: RICE is not one-and-done. Reach changes, confidence changes, effort changes. Re-score every cycle.

RICE vs. Other Prioritization Frameworks

  • RICE vs. ICE: ICE drops Reach. Use ICE for early-stage startups with unclear addressable markets; use RICE once you can quantify customer cohorts.
  • RICE vs. MoSCoW: MoSCoW categorizes (Must/Should/Could/Won't); RICE ranks numerically. Use MoSCoW for release scoping inside a timebox; use RICE for cross-quarter backlog ranking.
  • RICE vs. Kano: Kano asks "should we build this at all?" by classifying features by their emotional impact on customers. RICE asks which validated ideas have the best return. Run Kano upstream; run RICE downstream.
  • RICE vs. Opportunity Solution Tree: OST is a discovery framework for mapping opportunities to solutions; RICE is a scoring framework for ranking solutions. They complement each other.

Free RICE Scoring Template

A simple spreadsheet works. Columns:

| Initiative | Hypothesis | Reach | Impact | Confidence (%) | Effort (person-months) | Evidence | RICE Score | Owner |

Note the "Evidence" column. It is the single most useful change you can make to a stock RICE template: it forces teams to name the source behind every Confidence score, which is where prioritization theatre normally hides.
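
If you would rather keep the template in code than in a spreadsheet, the same columns fit in a short, self-contained Python sketch (the initiative data below is illustrative):

```python
# A RICE backlog as a list of rows; one dict per initiative.
backlog = [
    {
        "initiative": "Self-serve refund automation",
        "hypothesis": "Automating refunds cuts repeat support tickets",
        "reach": 8_000, "impact": 3.0, "confidence": 1.00, "effort": 2,
        "evidence": "Support-ticket cluster analysis",
        "owner": "PM, billing",
    },
    # ... one dict per initiative in the backlog
]

for row in backlog:
    # No named evidence source? Confidence is capped at 50%.
    conf = row["confidence"] if row["evidence"] else min(row["confidence"], 0.5)
    row["rice_score"] = (row["reach"] * row["impact"] * conf) / row["effort"]

# Highest score first: the top rows are roadmap candidates.
backlog.sort(key=lambda row: row["rice_score"], reverse=True)
```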

Related Articles

How to Prioritize Customer Feedback: A Framework for Product Teams

A complete guide to triaging, scoring, and acting on customer feedback. Compare RICE, MoSCoW, Kano, and the Opportunity Solution Tree — and learn how AI-native research turns raw feedback into prioritized opportunities in minutes.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Kano Model: How to Prioritize Features Using Customer Research

A complete guide to the Kano Model — the feature prioritization framework that maps customer emotions to product decisions. Learn how to run Kano surveys, classify features, and build products customers love.

Opportunity Solution Tree: The Complete Guide to Continuous Product Discovery

Learn how to build and use the Opportunity Solution Tree (OST) framework — Teresa Torres' visual map for connecting business outcomes to validated customer solutions through continuous discovery. Includes step-by-step instructions, templates, and how Koji automates the evidence-collection process.

How to Run Feature Prioritization Surveys That Build Products Users Actually Want

Learn how to run feature prioritization surveys using RICE, Kano, MoSCoW, and opportunity scoring frameworks. Combine quantitative ranking with AI-driven qualitative depth to build what users truly need.

Research-Driven Roadmap Prioritization: How to Use Customer Interviews to Build Better Roadmaps

Learn how to combine qualitative customer interviews with structured ranking and scale questions to make roadmap decisions backed by real user evidence — not internal opinions.