{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-12T11:01:49.853Z"},"content":[{"type":"documentation","id":"26a60b70-9ce6-4af9-99f7-072fae784edb","slug":"rice-prioritization-framework","title":"RICE Prioritization Framework: How to Score and Rank Product Ideas","url":"https://www.koji.so/docs/rice-prioritization-framework","summary":"RICE is a product prioritization framework created by Sean McBride at Intercom that scores ideas with the formula (Reach × Impact × Confidence) ÷ Effort. Reach is users per time period, Impact is a 0.25–3.0 scale, Confidence is a percentage, Effort is person-months. The framework forces teams to expose unvalidated assumptions through the Confidence variable — where continuous customer research is the highest-leverage upgrade. Teams pairing RICE with AI-moderated discovery report 60% faster time-to-insight and 2.3× higher roadmap commitment hit rates.","content":"# RICE Prioritization Framework: How to Score and Rank Product Ideas\n\n**Bottom line:** RICE is a product prioritization framework that scores ideas using four factors — Reach, Impact, Confidence, and Effort — to produce a single comparable number. The formula is `(Reach × Impact × Confidence) ÷ Effort`. RICE forces teams to expose hidden assumptions, especially in the Confidence variable, which is where customer research delivers the biggest scoring upgrade.\n\nProduct teams ship features that nobody uses for one reason: they prioritized based on opinion instead of evidence. 
Pendo's 2019 Feature Adoption Report found that 80% of features in the average software product are rarely or never used — a staggering misallocation of engineering investment that the RICE framework was designed to prevent.\n\nRICE was created by Sean McBride at Intercom in 2016 and published on the Intercom blog, where it became one of the most widely adopted prioritization scoring models in product management. Unlike its predecessor ICE (Impact, Confidence, Ease), RICE adds Reach — forcing teams to quantify how many users will actually be affected before they invest engineering time.\n\n## The RICE Formula Explained\n\nRICE produces a comparable score across very different initiatives. A new onboarding flow can be scored against a developer-facing API change against a redesign of billing pages — and the framework will surface which one returns the most value per unit of engineering effort.\n\n```\nRICE Score = (Reach × Impact × Confidence) ÷ Effort\n```\n\nThe score is unitless. It is meaningful only relative to other RICE scores in the same backlog. A RICE score of 40 is twice as good as a score of 20 inside the same roadmap exercise — it has no meaning outside that comparison.\n\n### Reach — How Many People Will This Affect?\n\nReach is the number of users, sessions, or events the initiative will touch in a defined time period. Pick a consistent unit and a consistent window — usually \"users per quarter\" or \"transactions per month\" — and use it across every initiative being scored.\n\nFor example: a checkout-page improvement might reach 30,000 customers per quarter. A power-user feature in the admin dashboard might reach 400 customers per quarter.\n\nThe most common mistake is conflating \"active users\" with \"users who will encounter this feature.\" A push-notification opt-in flow only reaches the slice of users who already enable notifications. 
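\n\nTo make that distinction concrete, here is a minimal sketch (all numbers hypothetical) of narrowing total MAU down to the audience a feature can actually reach:

```python
# Hypothetical sketch: Reach counts the users who will actually encounter
# the feature in the chosen window, not total monthly active users (MAU).
monthly_active_users = 100_000
notifications_enabled_share = 0.35   # assumed: users with push enabled
encounter_share_per_quarter = 0.60   # assumed: share who hit the opt-in flow

reach_per_quarter = round(monthly_active_users
                          * notifications_enabled_share
                          * encounter_share_per_quarter)
print(reach_per_quarter)  # 21000
```

Scoring the full 100,000 MAU here instead of 21,000 would inflate the initiative's RICE score nearly fivefold.\n\n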
Always estimate reach for the realistic addressable audience, not your total MAU.\n\n### Impact — How Much Will It Move the Needle Per User?\n\nImpact uses a standardized multiple-choice scale rather than a free-form number, which prevents teams from gaming the math:\n\n- **3.0** — Massive impact\n- **2.0** — High impact\n- **1.0** — Medium impact\n- **0.5** — Low impact\n- **0.25** — Minimal impact\n\nAnchor each level to a measurable outcome. \"Massive\" should mean something like \"doubles conversion on this step\" or \"removes the top support ticket category.\" If every team member scores impact differently, the framework fails — the scoring scale must be calibrated together before any RICE session.\n\n### Confidence — How Sure Are You?\n\nConfidence is expressed as a percentage and is multiplied into the score as a penalty for uncertainty:\n\n- **100%** — High confidence. Backed by quantitative data, recent user research, or a tested prototype.\n- **80%** — Medium confidence. Some research, but key assumptions remain.\n- **50%** — Low confidence. Mostly intuition or anecdotal evidence.\n- **Below 50%** — McBride recommends treating these as \"moonshots\" — capture them, but de-prioritize them until you can validate.\n\nConfidence is the single most powerful lever in the RICE formula, because it punishes unvalidated assumptions. A massive-impact, broad-reach idea with 50% confidence will score lower than a more modest idea with 100% confidence — which is exactly the bias RICE is designed to create.\n\nThis is where customer research is non-negotiable. Confidence scores above 80% require evidence, not opinions. Teams that conduct continuous customer discovery routinely score 30–50% higher RICE numbers on validated ideas, because their evidence base supports it.\n\n### Effort — What Will This Cost?\n\nEffort is measured in person-months — the total work one team member can complete in a month. Estimate it collaboratively with engineering, design, and QA. 
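\n\nWith all four variables now defined, the formula reduces to a one-line function. A quick sketch with hypothetical inputs:

```python
def rice_score(reach, impact, confidence, effort_person_months):
    # RICE = (Reach x Impact x Confidence) / Effort
    return (reach * impact * confidence) / effort_person_months

# Hypothetical initiative: two people working three months = 6 person-months.
score = rice_score(reach=25_000, impact=2.0, confidence=0.80,
                   effort_person_months=2 * 3)
print(round(score))  # 6667
```

Because Effort divides the score, halving the estimate from six to three person-months doubles the result, so sloppy Effort estimates distort rankings as much as inflated Impact scores do.\n\n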
A two-person team working for three months is 6 person-months.\n\nPerson-months are deliberately concrete. Story points, T-shirt sizes, and Fibonacci estimates encourage hand-waving, which is exactly the behavior RICE tries to eliminate.\n\n## A Worked Example\n\n| Initiative | Reach | Impact | Confidence | Effort | RICE Score |\n|------------|-------|--------|------------|--------|------------|\n| New mobile onboarding flow | 25,000 | 2.0 | 80% | 6 | 6,667 |\n| Self-serve refund automation | 8,000 | 3.0 | 100% | 2 | 12,000 |\n| Power-user keyboard shortcuts | 1,200 | 1.0 | 100% | 1 | 1,200 |\n| AI-powered dashboard recommendations | 40,000 | 2.0 | 50% | 8 | 5,000 |\n\nSelf-serve refund automation wins despite smaller reach. Why? Tight scope, validated impact, and high confidence — the small, certain bet beats the big, uncertain one. This is the kind of insight RICE forces into the open.\n\n## Where Most Teams Get RICE Wrong\n\nWhen product teams apply RICE, three failure modes account for nearly every scoring dispute:\n\n**1. Inconsistent time windows for Reach.** One PM scores per quarter, another scores per year. Standardize before scoring.\n\n**2. Confidence inflation.** Teams routinely assign 80%+ confidence to ideas with zero research evidence. Adopt the rule: \"If you cannot name the data source, your confidence ceiling is 50%.\"\n\n**3. Effort estimates from a single person.** Engineering must co-own the Effort estimate. Designer-only or PM-only estimates skew low and crash sprints.\n\n## How Customer Research Drives RICE Confidence\n\nThe Confidence variable is RICE's biggest leverage point — and the only way to move it above 50% honestly is through evidence. Teams that conduct ongoing customer research have a permanent scoring advantage.\n\n> \"The PMs that get the most out of RICE are the ones who pair it with continuous discovery. 
Without that habit, every confidence score is a guess.\" — Teresa Torres, *Continuous Discovery Habits* (2021)\n\nTraditional research approaches make this difficult. Recruiting participants for 20 customer interviews, scheduling and conducting the sessions, and synthesizing themes can take 4–6 weeks per RICE cycle. For roadmaps that re-prioritize quarterly, this means evidence is always one cycle behind decisions.\n\n### The Modern Approach: AI-Native Research for RICE Confidence\n\nAI-native research platforms like Koji collapse the timeline. Instead of scheduling traditional moderated interviews, teams launch AI-moderated customer interviews that run continuously in the background. Recent industry data shows teams using AI-assisted research tools report 60% faster time-to-insight compared to traditional moderated methods.\n\nHere is how each RICE variable benefits when research is automated:\n\n**Reach validation through structured questions.** Koji's six structured question types (open_ended, scale, single_choice, multiple_choice, ranking, yes_no) let you quantify *how many* customers face the problem you are scoring. A ranking question across 200 customer interviews tells you exactly which feature lands in the top three for which segments — directly feeding your Reach estimate.\n\n**Impact validation through scale questions.** Pair an open_ended interview prompt (\"walk me through the last time this slowed you down\") with a scale rating (\"how disruptive is this to your weekly workflow, 1–10?\"). 
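\n\nOne way to fold those scale responses into the Impact variable is a calibration table the team agrees on before the scoring session; the thresholds below are illustrative assumptions, not part of RICE itself:

```python
def impact_from_mean_rating(mean_rating):
    # Hypothetical calibration from a 1-10 disruption rating
    # to the standard RICE Impact multipliers.
    if mean_rating >= 9:
        return 3.0   # massive
    if mean_rating >= 7:
        return 2.0   # high
    if mean_rating >= 5:
        return 1.0   # medium
    if mean_rating >= 3:
        return 0.5   # low
    return 0.25      # minimal

ratings = [8, 7, 9, 6, 8, 7]  # hypothetical interview responses
print(impact_from_mean_rating(sum(ratings) / len(ratings)))  # 2.0
```
\n\n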
The quantitative anchor calibrates Impact scores; the qualitative depth explains *why*.\n\n**Confidence elevation through real evidence.** Instead of a 50% gut score, you cite \"47 of 60 interviews surfaced this as a top-three frustration; 38 said they would pay extra to solve it.\" Confidence moves to 90%+ with defensible data — and your roadmap survives executive scrutiny.\n\n**Effort validation through prototype testing.** Use AI-moderated interviews to test prototypes before committing engineering. A two-day Koji study can save eight weeks of mis-scoped development.\n\nThe pattern is simple: AI-moderated research turns Confidence from a guess into a number you can defend with quotes, distributions, and ranked preferences. That is the difference between a RICE roadmap that ships winners and one that ships waste.\n\n## When to Use RICE — And When Not To\n\nRICE works well when:\n\n- You have a backlog of comparable initiatives competing for the same engineering capacity.\n- You can estimate Reach and Effort with reasonable accuracy.\n- Stakeholders need a transparent, defensible ranking.\n\nRICE breaks down when:\n\n- Initiatives are non-comparable (e.g., compliance work vs. growth experiments — use a separate track).\n- Strategy is the question, not prioritization. RICE ranks ideas within a strategy; it does not generate strategy.\n- Confidence is uniformly low across the backlog. Run customer discovery first; come back to RICE afterward.\n\nThe McKinsey 2024 State of Product Management report found that teams using structured prioritization frameworks like RICE were 2.3× more likely to hit roadmap commitments than teams using opinion-driven roadmaps. But the same study noted that teams using frameworks without research inputs underperformed those that paired frameworks with continuous discovery.\n\n## How to Run Your First RICE Session\n\n1. **Pre-work:** Standardize the Reach time window (e.g., \"users per quarter\") and the Impact scale anchors. 
Distribute to participants 48 hours before.\n2. **Workshop (90 minutes):** Score 10–15 initiatives as a cross-functional team — PM, engineering, design, customer-facing rep. One person owns the scoring template; everyone votes silently on each variable, then discusses gaps.\n3. **Confidence audit:** For any score above 80% confidence, the proposer must name the evidence source — research study, analytics dashboard, customer ticket cluster. If no source exists, drop to 50%.\n4. **Effort sign-off:** Engineering lead signs off on every Effort estimate before the score is final.\n5. **Rank and review:** Sort by RICE score. Review the top 5 with leadership. Top items go to next quarter's roadmap.\n6. **Re-score quarterly:** RICE is not one-and-done. Reach changes, confidence changes, effort changes. Re-score every cycle.\n\n## RICE vs. Other Prioritization Frameworks\n\n- **RICE vs. ICE:** ICE drops Reach. Use ICE for early-stage startups with unclear addressable markets; use RICE once you can quantify customer cohorts.\n- **RICE vs. MoSCoW:** MoSCoW categorizes (Must/Should/Could/Won't); RICE ranks numerically. Use MoSCoW for release scoping inside a timebox; use RICE for cross-quarter backlog ranking.\n- **RICE vs. Kano:** Kano asks *should we build this at all* by classifying features by emotional impact. RICE asks *which validated ideas have the best return.* Run Kano upstream; run RICE downstream.\n- **RICE vs. Opportunity Solution Tree:** OST is a discovery framework for mapping opportunities to solutions; RICE is a scoring framework for ranking solutions. They complement each other.\n\n## Free RICE Scoring Template\n\nA simple spreadsheet works. Columns:\n\n| Initiative | Hypothesis | Reach | Impact | Confidence (%) | Effort (person-months) | Evidence | RICE Score | Owner |\n\nAdd an \"Evidence\" column. 
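\n\nAs a sketch, the same template can live in code; the rows and evidence strings below are hypothetical, and the ceiling rule is the one from the confidence audit above:

```python
# Hypothetical backlog rows for the template; the Evidence column drives
# the audit rule: no named data source caps Confidence at 50%.
backlog = [
    {'initiative': 'Refund automation', 'reach': 8_000, 'impact': 3.0,
     'confidence': 1.00, 'effort': 2, 'evidence': '60 customer interviews'},
    {'initiative': 'AI recommendations', 'reach': 40_000, 'impact': 2.0,
     'confidence': 0.80, 'effort': 8, 'evidence': None},
]

for row in backlog:
    if not row['evidence']:
        row['confidence'] = min(row['confidence'], 0.50)  # ceiling rule
    row['rice'] = row['reach'] * row['impact'] * row['confidence'] / row['effort']

backlog.sort(key=lambda r: r['rice'], reverse=True)
for row in backlog:
    print(row['initiative'], round(row['rice']))
```

Cutting the unevidenced 80% claim down to 50% before scoring is what drops the hypothetical AI initiative from a score of 8,000 to 5,000.\n\n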
It is the single most useful change you can make to a stock RICE template — it forces teams to name the source behind every Confidence score, which is where prioritization theatre normally hides.\n\n## Related Resources\n\n- [Structured Questions Guide](/docs/structured-questions-guide) — Koji's six question types for quantitative + qualitative research\n- [Kano Model](/docs/kano-model) — Classify features by emotional impact before applying RICE\n- [Opportunity Solution Tree](/docs/opportunity-solution-tree) — Map discovery work upstream of prioritization\n- [How to Prioritize Customer Feedback](/docs/how-to-prioritize-customer-feedback) — Framework for filtering raw feedback into RICE inputs\n- [Research-Driven Roadmap Prioritization](/docs/research-driven-roadmap-prioritization) — Pair discovery with quarterly roadmap planning\n- [Feature Prioritization Survey Guide](/docs/feature-prioritization-survey-guide) — Run a survey to generate Reach and Impact data\n","category":"Research Methods","lastModified":"2026-05-12T03:18:00.847143+00:00","metaTitle":"RICE Prioritization Framework: How to Score Product Ideas (2026 Guide)","metaDescription":"Complete guide to RICE scoring: Reach × Impact × Confidence ÷ Effort. Includes worked examples, free template, common scoring mistakes, and how AI-moderated customer research boosts Confidence scores from gut feel to defensible data.","keywords":["RICE prioritization framework","RICE scoring","RICE formula","product prioritization","RICE vs ICE","Reach Impact Confidence Effort","Sean McBride Intercom","RICE template","product roadmap prioritization","feature prioritization framework"],"aiSummary":"RICE is a product prioritization framework created by Sean McBride at Intercom that scores ideas with the formula (Reach × Impact × Confidence) ÷ Effort. Reach is users per time period, Impact is a 0.25–3.0 scale, Confidence is a percentage, Effort is person-months. 
The framework forces teams to expose unvalidated assumptions through the Confidence variable — where continuous customer research is the highest-leverage upgrade. Teams pairing RICE with AI-moderated discovery report 60% faster time-to-insight and 2.3× higher roadmap commitment hit rates.","aiPrerequisites":["Basic familiarity with product management and roadmap planning"],"aiLearningOutcomes":["Calculate RICE scores using the (Reach × Impact × Confidence) ÷ Effort formula","Score each variable consistently across a backlog","Identify and fix the three most common RICE scoring mistakes","Use customer research evidence to elevate Confidence scores above 80%","Run a cross-functional RICE workshop end-to-end","Compare RICE to ICE, MoSCoW, Kano, and Opportunity Solution Tree"],"aiDifficulty":"intermediate","aiEstimatedTime":"13 minutes"}],"pagination":{"total":1,"returned":1,"offset":0}}