Simple value-risk-speed-cost scoring model
Time to read: 10 minutes
Time to apply: 1-2 hours
Prioritisation models are often over-engineered. Leaders end up with weighted spreadsheets that no one remembers or trusts. The simplest models are usually the strongest: clear criteria, consistent scoring, and trade-offs visible to all.
What You'll Learn
- How to score initiatives using four consistent factors
- Detailed scoring criteria for value, risk, speed, and cost
- How to compare initiatives fairly and transparently
- Common pitfalls and how to avoid them
- Real examples with actual scoring decisions
Why Most Scoring Models Fail
Complex scoring models collapse under their own weight. Here's why:
Too Many Factors
A model with 15 weighted criteria becomes a black box. No one understands why Initiative A scored 73.4 while Initiative B scored 71.8. The numbers feel arbitrary, trust evaporates, and leaders ignore the model.
Inconsistent Application
When criteria are vague ("strategic alignment," "innovation potential"), different people score the same initiative differently. Without clear definitions, scoring becomes subjective debate instead of objective comparison.
Hidden Assumptions
Weighting the factors (e.g., "value is 40%, risk is 30%") obscures trade-offs. Leaders can't see whether they're choosing high-value-high-risk or low-value-low-risk. The math hides the decision instead of revealing it.
The 4-Factor Model
Clarity.'s OnePlan uses a four-factor scoring framework. Each initiative is scored 1-5 on each factor. No weighting. No complex math. Just transparent trade-offs.
Factor 1: Value
Question: What measurable outcome does this deliver, and how important is it to our goals?
Value Scoring Criteria (1-5)
5 - Critical Impact: Directly delivers on top strategic priority. Measurable revenue impact £1M+ or cost savings 20%+. Affects entire customer base or major market segment.
4 - High Impact: Supports strategic priority. Measurable impact £500k-£1M or 10-20% improvement. Affects significant customer segment.
3 - Moderate Impact: Contributes to goals. Measurable impact £100k-£500k or 5-10% improvement. Affects specific customer segment.
2 - Low Impact: Minor contribution. Measurable impact £10k-£100k or 1-5% improvement. Affects small customer group.
1 - Minimal Impact: Negligible contribution. No clear measurable outcome or impact below £10k. Nice-to-have feature.
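The monetary bands above translate naturally into a simple lookup. Here's a minimal sketch in Python, banding on the estimated monetary impact alone; the function name is illustrative, and the qualitative criteria (customer reach, strategic fit) still need human judgement:

```python
def value_score(impact_gbp: float) -> int:
    """Band an estimated monetary impact (GBP) into a 1-5 value score,
    following the monetary thresholds above. Qualitative criteria
    (customer reach, strategic fit) still need judgement."""
    if impact_gbp >= 1_000_000:
        return 5  # Critical impact
    if impact_gbp >= 500_000:
        return 4  # High impact
    if impact_gbp >= 100_000:
        return 3  # Moderate impact
    if impact_gbp >= 10_000:
        return 2  # Low impact
    return 1      # Minimal impact

print(value_score(2_000_000))  # 5
```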
Factor 2: Risk
Question: What is the likelihood we can deliver this successfully, and how confident are we in the estimates?
Risk Scoring Criteria (1-5, where 5 is lowest risk)
5 - Very Low Risk: Proven approach. Team has done this before. Clear requirements. No dependencies. High confidence in delivery.
4 - Low Risk: Mostly proven approach. Some team experience. Requirements mostly clear. Few external dependencies. Good confidence.
3 - Moderate Risk: Mix of proven and new. Some team gaps. Requirements need refinement. Some external dependencies. Moderate confidence.
2 - High Risk: Mostly new approach. Significant team gaps. Unclear requirements. Multiple dependencies. Low confidence.
1 - Very High Risk: Unproven approach. No team experience. Vague requirements. Critical dependencies uncertain. Very low confidence.
Factor 3: Speed
Question: How quickly will benefits be realised once work begins?
Speed Scoring Criteria (1-5, where 5 is fastest)
5 - Immediate: Benefits realised within 1 month of starting work. Quick win with fast feedback.
4 - Fast: Benefits realised within 1-3 months. Short cycle with early value delivery.
3 - Moderate: Benefits realised within 3-6 months. Standard project timeline.
2 - Slow: Benefits realised within 6-12 months. Long project with delayed value.
1 - Very Slow: Benefits realised after 12+ months. Multi-year effort before value delivery.
Factor 4: Cost
Question: What resources, money, or effort are required?
Cost Scoring Criteria (1-5, where 5 is lowest cost)
5 - Very Low Cost: Less than £50k total or 1-2 person-months. Minimal resource commitment.
4 - Low Cost: £50k-£200k or 2-4 person-months. Light resource commitment.
3 - Moderate Cost: £200k-£500k or 4-8 person-months. Standard resource commitment.
2 - High Cost: £500k-£1M or 8-16 person-months. Significant resource commitment.
1 - Very High Cost: Over £1M or 16+ person-months. Major resource commitment.
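Because there is no weighting, the whole model reduces to four integers and a sum. Here's a minimal sketch in Python, assuming you track initiatives in a script or spreadsheet export; the Initiative class and field names are illustrative, not part of OnePlan:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """One initiative: four 1-5 scores, no weighting, no hidden math."""
    name: str
    value: int  # 1-5: measurable outcome and importance
    risk: int   # 1-5: 5 is lowest risk
    speed: int  # 1-5: 5 is fastest to value
    cost: int   # 1-5: 5 is lowest cost

    def __post_init__(self) -> None:
        for score in (self.value, self.risk, self.speed, self.cost):
            if not 1 <= score <= 5:
                raise ValueError(f"{self.name}: every factor must be scored 1-5")

    @property
    def total(self) -> int:
        """Plain sum out of 20; the trade-offs stay visible in the parts."""
        return self.value + self.risk + self.speed + self.cost
```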
Why This Model Works
- Defensible: Clear criteria anyone can follow. No hidden assumptions or black-box calculations.
- Repeatable: Same framework works across projects, portfolios, and decision cycles.
- Transparent: Trade-offs are visible. You can see whether you're choosing high-value-high-risk or low-value-safe.
- Fast: Simple scoring means faster decisions without endless debate over decimal points.
- Comparable: Initiatives scored consistently can be ranked and compared fairly.
Linking to the Value Map Framework
The Value Map Framework structures decisions into WHY, WHAT, HOW, and NOW. The 4-factor model strengthens the "HOW" by quantifying trade-offs across risk, speed, and cost — while keeping the focus on the outcome (the WHY).
This ensures prioritisation is not just consistent, but also defensible and simple to explain. When you present your scoring alongside the Value Map, stakeholders can see both the numbers and the narrative.
Example in Practice: Three Competing Initiatives
Context: Imagine a £15M ARR SaaS company with capacity for one major initiative this quarter. Three options compete for funding:
Initiative A: Enterprise SSO Integration
Value: 5 - Unlocks £2M in enterprise deals currently blocked. Affects 30% of pipeline.
Risk: 4 - Proven technology. Team has some SSO experience. Clear requirements from sales.
Speed: 4 - 6-8 weeks to delivery. Fast feedback from enterprise customers.
Cost: 4 - £80k (2 engineers, 2 months). Minimal ongoing maintenance.
Total Score: 17/20 - High value, low risk, fast, affordable. Strong candidate.
Initiative B: AI-Powered Analytics Platform
Value: 5 - Revolutionary feature. Could differentiate from competitors. Estimated £5M revenue impact if successful.
Risk: 1 - Unproven ML approach. No team experience. Unclear customer demand. Multiple technical unknowns.
Speed: 1 - 18+ months to beta. Additional 12 months to realise value.
Cost: 1 - £1.5M (full team, 18 months). Ongoing ML infrastructure costs.
Total Score: 8/20 - High potential but massive risk, slow delivery, expensive. Weak candidate for now.
Initiative C: Mobile App Performance Optimisation
Value: 2 - Reduces app crashes by 50%. Affects 5% of users who reported issues. No direct revenue impact.
Risk: 5 - Well-understood problem. Team expertise. Clear scope. No dependencies.
Speed: 5 - 3-4 weeks to completion. Immediate user feedback.
Cost: 5 - £30k (1 engineer, 1 month). No ongoing costs.
Total Score: 17/20 - Low value but very low risk, fast, cheap. Safe choice but limited impact.
The Decision:
Both A and C score 17/20, but the trade-offs are clear:
- Initiative A (SSO): High value, slightly higher risk, unlocks major revenue
- Initiative C (Performance): Low value, very low risk, quick win
- Initiative B (AI): High potential but too risky/slow/expensive for current capacity
In this scenario, leadership would choose Initiative A because £2M in pipeline justifies the slightly higher risk. Initiative C could be added as a quick win in parallel, since it requires only one engineer. Initiative B would be deferred to an R&D phase for proof-of-concept validation.
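Expressed with the Initiative sketch from earlier, the comparison takes a few lines, and sorting by total immediately surfaces the A/C tie that the factor breakdown then has to break:

```python
initiatives = [
    Initiative("A: Enterprise SSO",     value=5, risk=4, speed=4, cost=4),
    Initiative("B: AI Analytics",       value=5, risk=1, speed=1, cost=1),
    Initiative("C: Mobile Performance", value=2, risk=5, speed=5, cost=5),
]

# Rank by total, but print the breakdown: the A/C tie is decided on
# trade-offs (value vs safety), not on the headline number.
for i in sorted(initiatives, key=lambda x: x.total, reverse=True):
    print(f"{i.total:>2}/20  {i.name}  V{i.value} R{i.risk} S{i.speed} C{i.cost}")
# 17/20  A: Enterprise SSO  V5 R4 S4 C4
# 17/20  C: Mobile Performance  V2 R5 S5 C5
#  8/20  B: AI Analytics  V5 R1 S1 C1
```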
Common Pitfalls and How to Avoid Them
Pitfall 1: Scoring Without Evidence
Problem: Teams score initiatives based on gut feel or wishful thinking instead of data.
Solution: Require evidence for each score. Value scores need user research or market data. Risk scores need technical assessment. Speed and cost need estimates from delivery teams.
Pitfall 2: Gaming the Scores
Problem: Teams inflate scores to get their projects prioritised (everyone scores their initiative 5/5/5/5).
Solution: Use relative scoring in group sessions. Force rank initiatives within each factor. Make trade-offs explicit ("if this is a 5, what does that make the other one?").
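One way to make force ranking mechanical is to collect raw estimates per factor first and only then convert them to relative 1-5 scores. A sketch of that idea follows; the spread-evenly banding is one possible mechanic, not the only one:

```python
def force_rank(raw_estimates: dict[str, float]) -> dict[str, int]:
    """Force-rank initiatives within a single factor: the weakest raw
    estimate gets 1, the strongest gets 5, the rest spread evenly.
    Nobody gets to hand in a wall of 5s."""
    ordered = sorted(raw_estimates, key=raw_estimates.get)  # weakest first
    if len(ordered) == 1:
        return {ordered[0]: 3}  # nothing to rank against: park it mid-scale
    step = 4 / (len(ordered) - 1)
    return {name: 1 + round(i * step) for i, name in enumerate(ordered)}

# Raw revenue-impact estimates (GBP) force-ranked into value scores:
print(force_rank({"SSO": 2_000_000, "AI": 5_000_000, "Perf": 50_000}))
# {'Perf': 1, 'SSO': 3, 'AI': 5}
```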
Pitfall 3: Ignoring the Total Score
Problem: Teams only look at total scores (17 vs 16) without understanding the trade-offs underneath.
Solution: Always review the factor breakdown. Two initiatives that both score 16 might have completely different profiles (5/5/3/3 vs 3/3/5/5). Choose based on trade-offs, not just totals.
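The same Initiative sketch makes the point concrete. Two illustrative profiles with identical totals but opposite shapes:

```python
value_bet = Initiative("High value, slower", value=5, risk=5, speed=3, cost=3)
safe_bet  = Initiative("Cheap and fast",     value=3, risk=3, speed=5, cost=5)

assert value_bet.total == safe_bet.total == 16  # identical headline number...

# ...completely different bets, so surface the profile, not the sum:
for i in (value_bet, safe_bet):
    print(f"{i.name}: V{i.value} R{i.risk} S{i.speed} C{i.cost} = {i.total}/20")
```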
Pitfall 4: Scoring Once and Forgetting
Problem: Initiatives get scored at inception and never revisited as circumstances change.
Solution: Re-score quarterly or when major assumptions change. Market shifts, team changes, and new evidence should update scores.
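A lightweight way to support re-scoring is to keep dated snapshots rather than overwriting scores, so drift is visible at the quarterly review. A sketch, with illustrative dates and scores:

```python
from datetime import date

# Dated snapshots instead of overwrites (structure is illustrative):
history = [
    (date(2025, 1, 10), Initiative("AI Analytics", value=5, risk=1, speed=1, cost=1)),
    (date(2025, 4, 10), Initiative("AI Analytics", value=5, risk=2, speed=2, cost=1)),
]

(first_date, first), (last_date, last) = history[0], history[-1]
print(f"{first.name}: {first.total}/20 ({first_date}) -> {last.total}/20 ({last_date})")
# AI Analytics: 8/20 (2025-01-10) -> 10/20 (2025-04-10)
```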
How to Apply This in Your Organisation
Step 1: Gather Your Initiatives
List all initiatives competing for resources. Include projects, features, tech debt, and capability building.
Step 2: Score in a Group Session
Bring together stakeholders from delivery, product, and leadership. Score each initiative using the criteria above. Debate the scores but require evidence to support each rating.
Step 3: Review Trade-Offs
Don't just sort by total score. Look at the factor breakdown. Identify high-value-high-risk vs low-value-safe options. Discuss which trade-offs align with your current strategy and risk appetite.
Step 4: Document and Communicate
Use the Value Map Framework to present the decision on one page. Show the WHY (strategic context), WHAT (chosen initiative), HOW (the scoring trade-offs), and NOW (next steps).
Quick Scoring Template
Initiative: _________________________________
Value (1-5): ___ Evidence: _________________________________
Risk (1-5): ___ Evidence: _________________________________
Speed (1-5): ___ Evidence: _________________________________
Cost (1-5): ___ Evidence: _________________________________
Total Score: ___ / 20
Summary
By applying the same 4-factor scoring model every time, leaders build consistency, reduce debate, and make faster decisions. This is how Clarity. ensures confident choices are simple, defensible, and easy to explain across delivery, finance, and leadership.
The key is keeping it simple: value, risk, speed, cost. No weighting. No black boxes. Just transparent trade-offs that everyone can see and discuss.
Ready to Apply This?
Gather your current initiatives. Score them using the 4-factor model. Review the trade-offs. Make the call based on what matters most to your strategy right now.
Need Help Applying This to Your Situation?
We use this framework with our clients every month to make priority decisions in 30 days (or less).