{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-12T11:01:53.108Z"},"content":[{"type":"documentation","id":"093208bf-72cc-4762-9caf-10f79ed0a54e","slug":"moscow-prioritization-method","title":"MoSCoW Method: How to Prioritize Features with Must, Should, Could, and Won't Have","url":"https://www.koji.so/docs/moscow-prioritization-method","summary":"MoSCoW is a prioritization framework that sorts requirements into Must Have, Should Have, Could Have, and Won't Have within a fixed timebox. Developed by Dai Clegg at Oracle in 1994 and donated to DSDM, MoSCoW's core rule is the 60-20-20 effort allocation: Must Have effort capped at 60% of capacity, with Should and Could splitting the remaining 40% as contingency. The framework fails when Must Have inflates beyond 60%, when effort is not estimated, or when timeboxes are missing. Customer research validates Must Have classifications — Koji's structured questions and AI-moderated interviews turn stakeholder opinion into customer-backed evidence within 48 hours.","content":"# MoSCoW Method: How to Prioritize Features with Must, Should, Could, and Won't Have\n\n**Bottom line:** MoSCoW is a prioritization framework that sorts requirements into four categories — Must Have, Should Have, Could Have, and Won't Have — within a fixed delivery timebox. Developed by Dai Clegg at Oracle in 1994 and adopted as a core practice of the Dynamic Systems Development Method (DSDM), MoSCoW is built for release planning, not strategic prioritization. Its most important rule — and the one most teams skip — is the 60-20-20 effort allocation that protects delivery against scope creep.\n\nThe Standish Group's Chaos Report (2023) found that 64% of features in delivered software are rarely or never used by end users. MoSCoW is one of the oldest and most enduring frameworks designed to prevent that waste — by forcing teams to admit, before delivery starts, which features they will *not* deliver if time runs out.\n\n## Where MoSCoW Came From\n\nDai Clegg created MoSCoW in 1994 while working at Oracle, originally for use in rapid application development (RAD) projects. The method was donated to the Dynamic Systems Development Method (DSDM) Consortium and became one of the foundational techniques of agile project management from 2002 onward.\n\nThe lowercase \"o\"s in MoSCoW are silent — they exist only to make the acronym pronounceable. The four categories are:\n\n- **M** — Must Have\n- **S** — Should Have\n- **C** — Could Have\n- **W** — Won't Have (this time)\n\n## The Four Categories Defined\n\n### Must Have (M)\n\nMust Haves are non-negotiable. If a Must Have is missing at delivery, the release is a failure — legally, contractually, or functionally. They form the Minimum Usable Subset (MUS): the smallest collection of requirements that produces a usable, valuable outcome.\n\nThe DSDM test: \"What happens if this requirement is not delivered?\" If the answer is \"we cancel the release\" or \"the product is unusable,\" it is a Must Have. Anything less rigorous than that does not qualify.\n\n**Examples:** Authentication on a banking app. PCI compliance on a checkout flow. The single primary workflow the product was built to enable.\n\n### Should Have (S)\n\nShould Haves are important but not vital. 
The release will succeed without them, but the team will work hard to deliver them inside the timebox. Workarounds may exist, even if inconvenient.\n\nThe test: \"Would the product still launch if this slipped?\" If yes — and there is a workaround — it is a Should Have.\n\n**Examples:** Password recovery flow if SSO exists as fallback. Bulk-import feature if single-record import works. Performance optimizations that improve, but do not enable, the main flow.\n\n### Could Have (C)\n\nCould Haves are desirable but disposable. They are contingency. When the timeline tightens, Could Haves are dropped first to protect Must and Should. Many never ship; that is by design.\n\nThe test: \"If we never deliver this, is anyone seriously upset?\" If no — and the gap is acceptable — it is a Could Have.\n\n**Examples:** Quality-of-life UI polish. Secondary integrations. \"Nice to have\" features that emerged from internal stakeholder requests.\n\n### Won't Have (W) — *This Time*\n\nWon't Haves are explicitly out of scope for the current timebox — but recorded so the team is clear about what is *not* being built. The \"this time\" qualifier matters: a Won't Have can move into Should or Must in a future release.\n\nWon't Have is the most misunderstood category. Many teams skip it. That is a mistake — explicit Won't Have prevents scope-creep arguments mid-sprint and gives stakeholders a place to \"park\" requests without rejection.\n\n**Examples:** Mobile app version (web-only this release). Multi-language support (English first). Enterprise SSO (deferred to v2).\n\n## The 60-20-20 Rule (The One Most Teams Skip)\n\nThis is the heart of MoSCoW and the part most \"guide to MoSCoW\" articles leave out.\n\nDSDM's guidance is unambiguous: in any given timebox, Must Have effort should not exceed 60% of total team capacity. Should Have effort gets approximately 20%. Could Have effort gets approximately 20% — and acts as the deliberate contingency buffer.\n\n```\nMust Have effort:    ≤ 60% of capacity\nShould Have effort:    ~ 20% of capacity\nCould Have effort:    ~ 20% of capacity (contingency)\n```\n\nThe percentages are measured in *effort*, not feature count. Twenty small Must Haves can still violate the 60% rule if they consume engineering time disproportionately.\n\n### Why 60%?\n\nIf 100% of your timebox is consumed by Must Haves and reality intrudes — sickness, scope discovery, technical blockers — you ship late or ship broken. Capping Must at 60% gives the team 40% of buffer to absorb the unknowns that every project produces. Could Haves are dropped first; the timebox holds.\n\nA project where Must Haves consume 90% of effort is not a MoSCoW project. It is a fixed-scope project pretending to be agile.\n\n## Running a MoSCoW Workshop\n\n### Step 1: Define the Timebox First\n\nMoSCoW only works inside a fixed time horizon. \"Next quarter,\" \"the August release,\" \"before our Series B pitch.\" Without a timebox, every requirement drifts to Must.\n\n### Step 2: List All Candidate Requirements\n\nGather every feature, fix, and improvement under consideration. Aim for 20–60 items. Smaller backlogs do not benefit from MoSCoW; larger ones overwhelm the workshop.\n\n### Step 3: Score Initial Categories Silently\n\nEach participant scores every item silently. Silent scoring prevents the loudest voice from anchoring the room.\n\n### Step 4: Discuss Disagreements\n\nDisagreements are where MoSCoW earns its keep. 
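When engineering says \"Must\" and marketing says \"Could,\" the discussion surfaces hidden assumptions — usually customer-impact assumptions.\n\nTo see which items actually need that discussion, some teams tally the silent scores from Step 3 before the conversation starts. The sketch below is a minimal illustration, not a Koji feature: the participant names, requirement names, and data shapes are assumptions.\n\n```python\n# Illustrative only: tally each participant's silent MoSCoW scores (Step 3)\n# and flag items whose category is contested, so Step 4 discussion time\n# goes where it is needed. Data shapes are assumptions, not a Koji API.\nfrom collections import Counter\n\n# participant -> {requirement: category}; categories are M, S, C, W\nsilent_scores = {\n    'eng_lead':  {'SSO': 'M', 'Bulk import': 'S', 'Dark mode': 'C'},\n    'pm':        {'SSO': 'M', 'Bulk import': 'M', 'Dark mode': 'C'},\n    'marketing': {'SSO': 'S', 'Bulk import': 'M', 'Dark mode': 'C'},\n}\n\ndef items_to_discuss(scores: dict) -> list:\n    # Return requirements that did not receive a unanimous category.\n    votes = {}\n    for ballot in scores.values():\n        for item, category in ballot.items():\n            votes.setdefault(item, Counter())[category] += 1\n    return [(item, tally) for item, tally in votes.items() if len(tally) > 1]\n\nfor item, tally in items_to_discuss(silent_scores):\n    print(f'{item}: contested -> {dict(tally)}')\n# SSO: contested -> {'M': 2, 'S': 1}\n# Bulk import: contested -> {'S': 1, 'M': 2}\n```\n\nIn this sketch only SSO and Bulk import are contested; Dark mode is unanimous and needs no discussion time.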
\n\n### Step 5: Validate Effort Allocation\n\nSum effort estimates for each category. If Must Have effort exceeds 60% of timebox capacity, demote items to Should or Could until it does. Refusing to demote is refusing to do MoSCoW.\n\n### Step 6: Document Won't Haves\n\nExplicitly list deferred items. Share them with stakeholders. The Won't Have list is the most underused conflict-prevention artifact in product management.\n\n## Where MoSCoW Goes Wrong\n\nFour failure modes account for nearly every botched MoSCoW exercise:\n\n**1. Must-Have inflation.** Every stakeholder wants their feature in Must. Without the 60% effort cap as a circuit breaker, Must Have becomes a wishlist and the framework fails.\n\n**2. No effort estimation.** MoSCoW without effort estimates is fiction. The 60-20-20 rule cannot apply if nobody knows how long anything takes.\n\n**3. No timebox.** Open-ended MoSCoW is meaningless. The whole framework presupposes a fixed delivery window.\n\n**4. Customer impact treated as opinion.** Stakeholders argue about Must vs. Should based on their personal beliefs about what users want. The fix is research-driven Must categorization (see below).\n\n## How Customer Research Calibrates MoSCoW Categories\n\nThe single most common dispute in a MoSCoW workshop is \"is this a Must Have or a Should Have?\" — and almost always, the disagreement is rooted in unvalidated assumptions about customer needs. Without customer evidence, the loudest opinion wins. With customer evidence, the data wins.\n\n> \"The most important question in agile prioritization is not what stakeholders want — it is what users will reject if absent. MoSCoW only works when that question is answered with evidence, not assertion.\" — Marty Cagan, Silicon Valley Product Group\n\nTraditional research approaches are too slow to keep up. A typical pre-release research cycle (recruit, interview, synthesize, present) takes 4–6 weeks — long after MoSCoW decisions are locked. By the time evidence arrives, scope is committed.\n\n### Modern Approach: AI-Native Research for MoSCoW Calibration\n\nAI-native research platforms like Koji change the math. Customer research can run *during* the MoSCoW workshop week, not after release. Recent industry data shows teams using AI-assisted research tools report 60% faster time-to-insight compared to traditional moderated methods — fast enough to feed prioritization decisions in real time.\n\nHere is how each MoSCoW category benefits when customer research runs continuously:\n\n**Must Have validation via Kano-style critical-feature interviews.** Use Koji's scale and yes_no question types to test \"would you stop using this product if X were missing?\" across 60+ customers in 48 hours. Anything below an 80% \"yes\" threshold is not a Must — full stop. Move it to Should.\n\n**Should Have ranking via ranking questions.** Koji's ranking question type forces customers to order features by importance. Aggregated rankings across cohorts surface the true Should-Have priority order — no more arguing in the room.\n\n**Could Have testing via concept validation.** Run a quick AI-moderated concept test on Could Haves before committing engineering. 
If a Could Have lands flat with customers, demote to Won't Have without burning a sprint to discover it.\n\n**Won't Have documentation via open_ended interviews.** Use AI-moderated interviews to capture *why* a feature is being deferred — and use the same nugget to communicate the decision to customers who request it later. Closing the loop with evidence prevents stakeholder relitigation.\n\nKoji's six structured question types (open_ended, scale, single_choice, multiple_choice, ranking, yes_no) cover every input MoSCoW needs: binary Must-Have validation, ranked Should-Have ordering, multi-select feature interest, and qualitative reasoning behind every preference.\n\n## MoSCoW vs. Other Prioritization Frameworks\n\n- **MoSCoW vs. RICE:** RICE produces a numeric ranking; MoSCoW produces categorical buckets within a timebox. Use RICE for cross-quarter backlog ranking; use MoSCoW for inside-the-timebox release scoping. They are complementary, not competing.\n- **MoSCoW vs. Kano:** Kano classifies features by emotional impact on customers (Must-Be, Performance, Delighter). MoSCoW classifies by delivery priority within a release. Kano feeds MoSCoW: Must-Be features in Kano almost always become Must Have in MoSCoW.\n- **MoSCoW vs. Eisenhower Matrix:** Eisenhower sorts by urgency × importance, useful for personal task management. MoSCoW is built for team-scale release planning with effort constraints. Different problem, different tool.\n- **MoSCoW vs. Weighted Scoring:** Weighted scoring multiplies dimensions for a single score. MoSCoW is faster and more legible to non-quantitative stakeholders. Use weighted scoring for high-stakes single decisions; use MoSCoW for release planning across many features.\n\n## When to Use MoSCoW (And When Not To)\n\n**Use MoSCoW when:**\n\n- You have a fixed release timebox (date, demo, contract).\n- The team is delivery-focused, not strategy-focused.\n- Stakeholders need a transparent, categorical decision they can sign off on.\n- 20–60 candidate requirements need triaging.\n\n**Avoid MoSCoW when:**\n\n- You are doing discovery or strategic prioritization. MoSCoW is for execution, not exploration.\n- The backlog is fewer than 10 items. Just rank them.\n- There is no timebox. MoSCoW degenerates without one.\n- The team cannot estimate effort. 
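Without effort estimates, 60-20-20 is impossible.\n\nTo make the Step 5 check concrete, here is a minimal sketch that sums estimated effort per category and tests it against the 60-20-20 guidance. The function name, field names, and numbers are illustrative assumptions, not part of Koji or DSDM.\n\n```python\n# Illustrative only: compare a candidate scope against DSDM's 60-20-20 guidance.\n# Effort is summed per MoSCoW category and compared with timebox capacity.\n\ndef check_allocation(requirements: list, capacity_days: float) -> dict:\n    # Sum estimated effort (in days) for each delivered category.\n    effort = {'M': 0.0, 'S': 0.0, 'C': 0.0}\n    for req in requirements:\n        if req['category'] in effort:  # Won't Haves consume no effort this timebox\n            effort[req['category']] += req['effort_days']\n    must_share = effort['M'] / capacity_days\n    return {\n        'must_share': round(must_share, 2),\n        'contingency_share': round((effort['S'] + effort['C']) / capacity_days, 2),\n        'must_within_60_percent': must_share <= 0.60,\n    }\n\nbacklog = [\n    {'name': 'Checkout flow', 'category': 'M', 'effort_days': 30},\n    {'name': 'PCI compliance', 'category': 'M', 'effort_days': 26},\n    {'name': 'Bulk import', 'category': 'S', 'effort_days': 12},\n    {'name': 'UI polish', 'category': 'C', 'effort_days': 8},\n    {'name': 'Mobile app', 'category': 'W', 'effort_days': 40},\n]\n\nprint(check_allocation(backlog, capacity_days=80))\n# {'must_share': 0.7, 'contingency_share': 0.25, 'must_within_60_percent': False}\n```\n\nIn this hypothetical backlog, Must Have effort lands at 70% of an 80-day timebox, so items would be demoted to Should or Could before delivery starts.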
\n\n## MoSCoW Checklist for Your Next Release\n\n- [ ] Timebox is defined and committed.\n- [ ] 20–60 candidate requirements are listed.\n- [ ] Each requirement has an effort estimate signed off by engineering.\n- [ ] Initial scoring done silently by each participant.\n- [ ] Must Have effort ≤ 60% of capacity.\n- [ ] Should and Could combined ≈ 40% of capacity.\n- [ ] Won't Have list is explicit and shared with stakeholders.\n- [ ] Customer evidence supports every Must Have classification.\n- [ ] Re-score at sprint reviews if scope shifts.\n\n## Related Resources\n\n- [Structured Questions Guide](/docs/structured-questions-guide) — Koji's six question types feed MoSCoW with binary, ranked, and scale evidence\n- [Kano Model](/docs/kano-model) — Classify features by emotional impact before applying MoSCoW\n- [RICE Prioritization Framework](/docs/rice-prioritization-framework) — Numeric scoring for cross-quarter backlog ranking\n- [Opportunity Solution Tree](/docs/opportunity-solution-tree) — Discovery framework upstream of MoSCoW\n- [How to Prioritize Customer Feedback](/docs/how-to-prioritize-customer-feedback) — Turn raw feedback into MoSCoW inputs\n- [Feature Prioritization Survey Guide](/docs/feature-prioritization-survey-guide) — Run a survey to validate Must Have classifications\n","category":"Research Methods","lastModified":"2026-05-12T03:20:50.168223+00:00","metaTitle":"MoSCoW Method: The Complete Guide to Must/Should/Could/Won't Prioritization (2026)","metaDescription":"Master MoSCoW prioritization with the 60-20-20 effort rule from DSDM. Includes workshop steps, common failure modes, MoSCoW vs RICE vs Kano comparison, and how AI-moderated customer research validates Must Have classifications in real time.","keywords":["MoSCoW method","MoSCoW prioritization","Must Should Could Won't Have","DSDM prioritization","agile prioritization","release planning","60-20-20 rule","Dai Clegg MoSCoW","MoSCoW vs RICE","MoSCoW workshop"],"aiSummary":"MoSCoW is a prioritization framework that sorts requirements into Must Have, Should Have, Could Have, and Won't Have within a fixed timebox. Developed by Dai Clegg at Oracle in 1994 and donated to DSDM, MoSCoW's core rule is the 60-20-20 effort allocation: Must Have effort capped at 60% of capacity, with Should and Could splitting the remaining 40% as contingency. The framework fails when Must Have inflates beyond 60%, when effort is not estimated, or when timeboxes are missing. Customer research validates Must Have classifications — Koji's structured questions and AI-moderated interviews turn stakeholder opinion into customer-backed evidence within 48 hours.","aiPrerequisites":["Basic understanding of agile project management or product roadmaps"],"aiLearningOutcomes":["Define and apply the four MoSCoW categories rigorously","Apply the 60-20-20 effort allocation rule","Run a MoSCoW workshop end-to-end","Identify the four most common MoSCoW failure modes","Use customer research to validate Must Have classifications","Compare MoSCoW with RICE, Kano, and weighted scoring frameworks"],"aiDifficulty":"intermediate","aiEstimatedTime":"14 minutes"}],"pagination":{"total":1,"returned":1,"offset":0}}