The Quick Answer
RICE is a prioritization framework that scores ideas by four factors: Reach, Impact, Confidence, and Effort. The formula is:
RICE Score = (Reach × Impact × Confidence%) / Effort
A higher score means greater expected value per unit of work. RICE was developed at Intercom to replace subjective prioritization with a repeatable, quantitative method.
It works well because it forces you to think about each dimension separately instead of relying on gut instinct or whoever argues loudest.
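The formula is simple enough to sketch in a few lines. Below is a minimal, illustrative implementation — the function name and example inputs are ours, not from any specific tool:

```python
def rice_score(reach, impact, confidence, effort):
    """Expected value per person-month of work.

    reach:      people affected per time period (e.g. per month)
    impact:     one of 0.25, 0.5, 1, 2, 3
    confidence: fraction between 0 and 1 (e.g. 0.8 for 80%)
    effort:     person-months; must be positive
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort

# 8,000 users/month, High impact (2), 80% confidence, 1.5 person-months
print(round(rice_score(8000, 2, 0.80, 1.5)))  # 8533
```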
Why Prioritization Is Hard
Every product team has more ideas than capacity. The backlog grows faster than the team can ship. Without a system, prioritization defaults to:
- Loudest voice wins — the most persuasive stakeholder gets their feature built
- Recency bias — whatever was discussed last feels most urgent
- Pet projects — leadership favorites get priority regardless of value
- Squeaky wheel — the angriest customer drives the roadmap
These patterns produce roadmaps that feel busy but don't move metrics. A scoring framework like RICE replaces opinion with estimation. It's not perfect — no prioritization method is — but it makes the reasoning explicit and comparable.
The Four Factors
Reach
What it measures: How many people will this affect in a given time period?
Reach is the number of users, customers, or transactions that the feature will touch within a defined window — typically per month or per quarter.
Examples:
- A homepage redesign reaches every visitor — say 50,000 per month
- An admin dashboard improvement reaches 200 internal users per month
- A checkout flow optimization reaches 8,000 paying customers per month
The key rule: use the same time period for every feature you're comparing. Mixing monthly and quarterly reach numbers produces meaningless comparisons.
Where to find reach estimates:
- Analytics tools (page views, unique users per feature)
- Customer segments (how many users match the criteria?)
- Sales data (transaction counts, active users)
- Support tickets (how many people report this problem?)
When you don't have data, make your best estimate and lower your Confidence score.
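The same-time-period rule is easy to enforce mechanically. Here is a small sketch that normalizes every Reach estimate to a monthly figure before comparison — the conversion table uses rough calendar factors, and the example numbers are made up:

```python
# Rough conversion factors; "week" uses the average 4.33 weeks/month
PERIODS_PER_MONTH = {"day": 30, "week": 4.33, "month": 1, "quarter": 1 / 3, "year": 1 / 12}

def monthly_reach(count, period):
    """Convert a Reach estimate expressed per `period` into a per-month figure."""
    return count * PERIODS_PER_MONTH[period]

# 24,000 users per quarter and 8,000 per month describe the same reach
print(round(monthly_reach(24000, "quarter")))  # 8000
```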
Impact
What it measures: How much will this improve the experience for each person it reaches?
Impact uses a fixed scale to standardize estimation across features:
| Score | Label | Meaning |
|---|---|---|
| 3 | Massive | Transforms the experience or removes a critical blocker |
| 2 | High | Significant improvement that users will clearly notice |
| 1 | Medium | Noticeable improvement, solid quality-of-life upgrade |
| 0.5 | Low | Minor improvement, slightly better than before |
| 0.25 | Minimal | Barely noticeable, cosmetic or marginal |
The scale is intentionally coarse. Trying to distinguish between 1.3 and 1.7 creates false precision. Stick to the five defined values.
Impact estimation tips:
- Think about what changes for the user, not what changes in the code
- A feature that saves 30 seconds on a daily task has high impact
- A feature that fixes a rare edge case has low impact, even if the fix is elegant
- If you can't articulate the user benefit in one sentence, the impact might be lower than you think
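One way to avoid false precision is to store the scale as a fixed lookup, so only the five defined values can enter a score. A sketch, with labels mirroring the table above:

```python
IMPACT_SCALE = {
    "massive": 3,
    "high": 2,
    "medium": 1,
    "low": 0.5,
    "minimal": 0.25,
}

def impact_value(label):
    """Map a label to its numeric Impact; anything else is rejected."""
    try:
        return IMPACT_SCALE[label.lower()]
    except KeyError:
        raise ValueError(f"unknown impact label {label!r}; use one of {sorted(IMPACT_SCALE)}")

print(impact_value("High"))  # 2
```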
Confidence
What it measures: How sure are you about your Reach and Impact estimates?
Confidence is expressed as a percentage:
| Score | Meaning |
|---|---|
| 100% | You have solid data — analytics, research, experiments |
| 80% | You have supporting evidence but some assumptions |
| 50% | You're mostly guessing — intuition with limited data |
Confidence acts as a penalty for uncertainty. A high-Reach, high-Impact idea at 50% Confidence scores the same as an idea with half that raw Reach × Impact at 100% Confidence. This prevents speculative moonshots from automatically dominating the roadmap.
Honesty matters here. Inflating Confidence undermines the entire framework. If you're guessing, say so.
Effort
What it measures: How much work is required? Measured in person-months.
Effort includes everything needed to ship the feature:
- Design time
- Development time
- Testing and QA
- Documentation
- Deployment and monitoring setup
One person-month means one person working full-time for one month. A feature requiring a designer for 2 weeks and a developer for 6 weeks is approximately 2 person-months of effort.
Important: Effort is the denominator in the RICE formula. Doubling the effort halves the score. This naturally favors smaller, focused improvements over large, risky projects — which is often the right instinct.
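The person-month arithmetic from the paragraph above can be sketched as a small roll-up helper. This assumes roughly 4 working weeks per month; the role names are illustrative:

```python
WEEKS_PER_MONTH = 4  # rough working-weeks-per-month assumption

def effort_person_months(weeks_by_role):
    """Total person-months from a {role: person-weeks} mapping."""
    return sum(weeks_by_role.values()) / WEEKS_PER_MONTH

# The example from the text: a designer for 2 weeks plus a developer for 6 weeks
print(effort_person_months({"design": 2, "development": 6}))  # 2.0
```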
The Formula in Practice
Example 1: Checkout Optimization
A/B test data suggests the current checkout flow loses 12% of users at the payment step. You want to simplify it.
| Factor | Estimate | Reasoning |
|---|---|---|
| Reach | 8,000/month | Monthly paying customers who reach checkout |
| Impact | 2 (High) | Removing a conversion blocker for paying customers |
| Confidence | 80% | Have A/B test data, but not certain about the fix |
| Effort | 1.5 person-months | Designer + developer + QA |
RICE = (8,000 × 2 × 0.80) / 1.5 ≈ 8,533
Example 2: Dark Mode
Users have been requesting dark mode for a year. It's the most-upvoted feature request.
| Factor | Estimate | Reasoning |
|---|---|---|
| Reach | 15,000/month | All active users benefit |
| Impact | 0.5 (Low) | Nice to have, but doesn't change core functionality |
| Confidence | 100% | Well-understood scope, standard implementation |
| Effort | 3 person-months | Full UI audit, all screens, testing across platforms |
RICE = (15,000 × 0.5 × 1.0) / 3 = 2,500
Despite high demand, dark mode scores lower because the per-user impact is low relative to the effort involved.
Example 3: API Rate Limiting
Engineering wants to add rate limiting to prevent abuse. Currently, a few heavy users occasionally degrade performance for everyone.
| Factor | Estimate | Reasoning |
|---|---|---|
| Reach | 25,000/month | All API users are affected when performance degrades |
| Impact | 1 (Medium) | Prevents intermittent slowdowns, not a daily pain point |
| Confidence | 80% | Know the problem exists, solution is standard |
| Effort | 0.5 person-months | Infrastructure change, well-scoped |
RICE = (25,000 × 1 × 0.80) / 0.5 = 40,000
Low effort + broad reach produces a high score. This is exactly the kind of high-leverage improvement that RICE is designed to surface.
Example 4: Mobile App
Building a native mobile app from scratch.
| Factor | Estimate | Reasoning |
|---|---|---|
| Reach | 10,000/month | Estimated from mobile web traffic |
| Impact | 2 (High) | Native experience is significantly better |
| Confidence | 50% | No validation, large assumption about adoption |
| Effort | 12 person-months | Full app build, two platforms, backend changes |
RICE = (10,000 × 2 × 0.50) / 12 ≈ 833
Despite potentially high value, the low confidence and massive effort produce a modest score. RICE says: get more data before committing 12 person-months.
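Putting the four worked examples through the formula in one place makes the ranking explicit. This is just a sanity check on the arithmetic above:

```python
def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# (reach/month, impact, confidence, effort in person-months) from the examples
features = {
    "Checkout optimization": (8000, 2, 0.80, 1.5),
    "Dark mode": (15000, 0.5, 1.0, 3),
    "API rate limiting": (25000, 1, 0.80, 0.5),
    "Mobile app": (10000, 2, 0.50, 12),
}

ranked = sorted(features.items(), key=lambda kv: rice(*kv[1]), reverse=True)
for name, args in ranked:
    print(f"{name}: {rice(*args):,.0f}")
# API rate limiting: 40,000
# Checkout optimization: 8,533
# Dark mode: 2,500
# Mobile app: 833
```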
How to Run a RICE Scoring Session
Step 1: Gather the Candidates
Collect 10–30 features, improvements, or ideas. They should be roughly at the same level of granularity — don't compare "fix a typo" with "rebuild the entire platform."
Step 2: Define the Time Period
Choose one time period for all Reach estimates. Monthly is most common. Stick with it.
Step 3: Score Independently First
Have each team member score the features individually before discussing. This prevents anchoring, where the first number mentioned influences everyone else.
Step 4: Discuss and Converge
Compare scores. Large disagreements often reveal different assumptions about user behavior or technical scope. Use the discussion to surface these — that's where RICE adds the most value, even beyond the final number.
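A simple way to operationalize this step is to flag features whose independent estimates diverge by more than some ratio. The threshold and data below are illustrative, not a standard:

```python
def needs_discussion(scores, ratio=3.0):
    """True if the widest individual estimate is at least `ratio`× the narrowest."""
    return max(scores) / min(scores) >= ratio

# Three team members scored Reach for the same feature independently
print(needs_discussion([8000, 9000, 7500]))  # False: close enough to average
print(needs_discussion([8000, 2000, 9000]))  # True: surface the hidden assumptions
```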
Step 5: Sort and Review
Rank by RICE score. Look at the top 5 and the bottom 5. Do they feel right? If something seems clearly wrong, check the inputs — a wrong Reach number by an order of magnitude will distort everything.
Step 6: Use It as Input, Not Law
RICE scores inform decisions. They don't make them. Strategic priorities, dependencies, and team expertise all matter. If your top-scoring feature requires skills your team doesn't have, that's relevant context the score doesn't capture.
Common Mistakes
Inflating Confidence
The most common error. Everyone is more certain than they should be. A rule of thumb: if you have no user data, your Confidence is 50%. If you have anecdotal evidence (support tickets, sales calls), it's 80%. Reserve 100% for data you've actually measured.
Inconsistent Reach Estimates
Comparing a feature that reaches "all users" (using total registered accounts) against one that reaches "active daily users" (a much smaller number) skews the comparison. Define what counts as "reached" and apply it consistently.
Underestimating Effort
Effort estimates are almost always too low. Include design, testing, documentation, and the context-switching cost of starting something new. A common correction: take your initial estimate and multiply by 1.5.
Scoring at Different Granularities
"Improve onboarding" is not the same granularity as "change button color on signup page." Break large initiatives into shippable increments and score those instead. This also produces more accurate effort estimates.
Using RICE for Everything
RICE works best for comparing features and improvements within a similar category. It's less useful for:
- Choosing between fundamentally different business strategies
- Evaluating technical debt (where "Reach" is hard to define)
- Urgent bug fixes (which bypass prioritization entirely)
RICE Compared to Other Frameworks
RICE vs. ICE
ICE uses three factors: Impact, Confidence, and Ease (the inverse of Effort). ICE drops Reach entirely, which means it can't distinguish between a feature that helps 100 people and one that helps 100,000 — if impact per person is the same, they score equally.
RICE's inclusion of Reach makes it more useful for products with varied audience sizes per feature.
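A quick numeric sketch makes the difference concrete. With the same per-user Impact, Confidence, and Effort/Ease, ICE ties two features with audiences 1,000× apart, while RICE separates them (the Ease scale here is an illustrative 1–10):

```python
def ice(impact, confidence, ease):
    return impact * confidence * ease

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

small = {"reach": 100, "impact": 2, "confidence": 0.8, "effort": 2}
large = {"reach": 100_000, "impact": 2, "confidence": 0.8, "effort": 2}

print(ice(2, 0.8, 5) == ice(2, 0.8, 5))  # True: ICE cannot tell them apart
print(rice(**small), rice(**large))      # RICE separates them by 1,000x
```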
RICE vs. Weighted Scoring
Weighted scoring lets you define custom criteria (strategic alignment, revenue potential, customer satisfaction) and assign weights. It's more flexible than RICE but also more complex and more susceptible to weight manipulation.
RICE's fixed structure is both its strength and limitation — it's simpler to use but less customizable.
RICE vs. MoSCoW
MoSCoW (Must have, Should have, Could have, Won't have) is a categorization method, not a scoring model. It works for broad prioritization tiers but doesn't help you rank items within a tier. RICE complements MoSCoW — use MoSCoW to filter, then RICE to rank.
RICE vs. Value/Effort Matrix
The 2×2 matrix (value vs. effort) is a simpler version of the same idea. RICE refines it by splitting "value" into Reach, Impact, and Confidence. If you find the matrix too imprecise but full weighted scoring too complex, RICE sits in the sweet spot.
When to Re-Score
RICE scores are snapshots, not permanent truths. Re-evaluate when:
- New data arrives — user research, analytics, experiment results
- Scope changes — "just a small API change" becomes a larger project
- The market shifts — a competitor launches something similar, changing urgency
- Time passes — scores from 6 months ago reflect different priorities
A quarterly review of your top 20 features is a practical cadence for most teams.
Limitations
RICE is a useful tool, not a universal answer. Its limitations include:
- Reach is hard to estimate for new markets — if you're building something for users you don't have yet, Reach is speculative
- Impact is subjective — two people can reasonably disagree on whether something is "High" or "Medium" impact
- It favors incremental improvements — large, transformative bets often score poorly because of high effort and low confidence
- It doesn't capture strategic value — a feature that positions you in a new market may be strategically vital even with a low RICE score
- Dependencies aren't modeled — Feature B might be useless without Feature A, but RICE scores them independently
Acknowledge these gaps. Use RICE alongside strategic judgment, not instead of it.
FAQ
What is the RICE framework?
RICE is a prioritization scoring model that evaluates features by four factors: Reach (how many users it affects), Impact (how much it improves their experience), Confidence (how certain you are about your estimates), and Effort (how much work it takes). The score is calculated as (Reach × Impact × Confidence%) / Effort. Higher scores indicate higher expected value per unit of work.
Who created the RICE framework?
RICE was developed by the product team at Intercom and published in 2016. It was designed to bring consistency and objectivity to feature prioritization decisions. The framework has since been adopted widely across product management teams at companies of all sizes.
What is a good RICE score?
There is no universal "good" RICE score because scores depend on your specific inputs — a company with 1 million monthly users will produce much larger scores than a startup with 1,000 users. The value of RICE is in the relative ranking: compare features against each other, not against an absolute benchmark.
How do I estimate Reach if I don't have analytics data?
Use the best proxy available. If you know your total active users but not feature-level usage, estimate the percentage who would use the feature based on the target persona. Support ticket volume, survey responses, and feature request counts can also inform Reach estimates. Set Confidence to 50% to reflect the uncertainty.
Should I include internal users in Reach?
Yes, if the feature improves their workflow. An internal tool that saves 50 operations staff 30 minutes each per day has real, quantifiable reach. Use the same time period as your customer-facing features for consistency.
Can I customize the Impact scale?
The standard 0.25–3 scale works well for most teams. You can adjust it, but consistency matters more than the exact values. If you change the scale, apply the same scale to all features — otherwise scores aren't comparable.
How do I handle features that are prerequisites for other features?
Score the dependent features separately. If Feature A is required for Feature B, note the dependency in your discussion, but don't inflate Feature A's score to account for B's value. Some teams add a "strategic dependency" flag alongside the RICE score to capture this.
What is the difference between Effort and complexity?
Effort measures the total work in person-months. Complexity describes how difficult the work is. A feature can be simple but time-consuming (lots of straightforward pages to build) or complex but quick (a clever algorithm change). RICE uses Effort, not complexity, because Effort directly determines resource allocation.
How often should I re-run RICE scoring?
Score new features as they're proposed. Re-evaluate the full backlog quarterly, or whenever significant new data becomes available (user research findings, experiment results, major scope changes). Scores more than 6 months old should be considered stale.
Does RICE work for small teams or solo founders?
Yes. In fact, RICE may be even more valuable when resources are extremely limited. A solo founder who can only build one thing this month benefits from a clear, structured way to compare options. Simplify the process — you don't need a committee meeting. A spreadsheet with honest estimates takes 15 minutes and can prevent weeks of wasted effort.
What should I do when two features have nearly identical RICE scores?
If scores are within 10–15% of each other, treat them as effectively tied. Use additional context to decide: team expertise, strategic alignment, technical dependencies, or which one reduces risk faster. RICE gets you to a shortlist — judgment takes it from there.
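The "effectively tied" rule can be expressed as a relative-margin check. A sketch using the 15% upper bound mentioned above:

```python
def effectively_tied(score_a, score_b, margin=0.15):
    """True if the two scores are within `margin` of the larger one."""
    return abs(score_a - score_b) / max(score_a, score_b) <= margin

print(effectively_tied(8533, 8000))  # True: break the tie on other context
print(effectively_tied(8533, 2500))  # False: a clear ranking
```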
Can RICE be used for bug prioritization?
RICE works less well for bugs because Impact is harder to quantify (a bug either exists or doesn't), and urgent bugs bypass prioritization entirely. For non-critical bugs, RICE can help rank them alongside features, but most teams handle critical bugs through severity-based triage instead.
What tools can I use to calculate RICE scores?
A spreadsheet works for most teams. For quick individual calculations, an online RICE score calculator lets you enter features and sort them by score instantly. Project management tools like Productboard and Airfocus also include built-in RICE scoring.
How do I prevent gaming the RICE system?
The main defense is honest Confidence scores. If someone inflates Reach or Impact, their Confidence should be lower to reflect the speculation. Calibration sessions — where the team reviews and challenges each other's estimates — also help. Transparency is key: when scores are visible to the whole team, unreasonable estimates get questioned.