{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-09T13:03:25.750Z"},"content":[{"type":"documentation","id":"60a30058-cd14-465c-af80-988f8dc44748","slug":"outcome-driven-innovation-odi","title":"Outcome-Driven Innovation (ODI): The Ulwick Method for Identifying Unmet Customer Needs","url":"https://www.koji.so/docs/outcome-driven-innovation-odi","summary":"Outcome-Driven Innovation (ODI) is Anthony Ulwick's methodology for treating customer needs as measurable metrics tied to a job-to-be-done. The five-step process — define the job, capture desired outcomes, quantify importance and satisfaction, compute opportunity scores, and target solutions — produces a ranked list of unmet needs that doubles as a roadmap brief. AI interview platforms like Koji collapse the traditional 6–8 week timeline into a 2-week sprint by parallelizing outcome discovery and embedding importance/satisfaction scoring in the same conversational flow.","content":"**Outcome-Driven Innovation (ODI) is Anthony Ulwick's framework for treating customer needs as measurable metrics — the criteria customers use to evaluate how well a job-to-be-done gets done.** Instead of asking \"what do you want?\", ODI asks customers to evaluate dozens of \"desired outcome statements\" on importance and current satisfaction, then mathematically scores each one to find the unmet needs with the largest opportunity gaps. 
Ulwick reports that products built this way succeed at roughly five times the industry baseline, because the team is finally optimizing for what customers actually use to judge a solution rather than what they say they prefer.\n\nIn this guide we'll cover the ODI process end-to-end, the canonical structure of an outcome statement, how to compute opportunity scores, and how a modern AI interview platform like Koji collapses what used to be a 6–8 week research project into a 2-week sprint by automating the interview, transcription, and scoring stages.\n\n## What is Outcome-Driven Innovation?\n\nODI was developed by Anthony Ulwick beginning in 1991 and codified in his book *What Customers Want*. It sits inside the broader Jobs-to-Be-Done (JTBD) family of methodologies, but where most JTBD work focuses on uncovering the job, ODI focuses on quantifying the metrics by which customers measure that job's execution.\n\nThe core thesis: customers do not buy products — they hire products to get a job done, and they evaluate the result against a stable, knowable list of \"desired outcomes.\" A typical job has 50 to 150 desired outcomes attached to it. Most innovation effort fails because teams build features that improve outcomes customers do not care about, while leaving the high-importance, low-satisfaction outcomes untouched.\n\nFive things distinguish ODI from older voice-of-customer work:\n\n1. **Outcomes, not requirements.** ODI deliberately avoids product language and captures customer-defined success metrics.\n2. **Importance × Satisfaction scoring.** Every outcome is rated on two scales, then combined into a single opportunity score.\n3. **Quantified market opportunities.** The output is a ranked list, not a deck of themes.\n4. **Stability over time.** Job-level outcomes change slowly even as technology changes rapidly, so the opportunity map ages well.\n5. 
**Direct link to roadmap.** High-opportunity outcomes become the success criteria for new features.\n\n## The structure of a desired outcome statement\n\nODI outcome statements follow a strict grammar so they remain solution-agnostic and measurable:\n\n> **[Direction of improvement] + [Unit of measure] + [Object of control] + [Optional clarifier]**\n\nFor a financial-planning job, a well-formed outcome looks like:\n\n- *Minimize the time it takes to reconcile a transaction across accounts.*\n- *Increase the likelihood of identifying a duplicate charge before it posts.*\n- *Minimize the time required to forecast next month's cash position.*\n\nNotice what is missing: the word \"app,\" the word \"dashboard,\" the word \"AI.\" Outcome statements describe the metric the customer cares about, not the solution that might address it. Two competing products can both score against the same outcome, which is exactly what makes the framework durable.\n\n## The 5-step ODI process\n\n### Step 1 — Define the job\n\nStart with the functional job-to-be-done at the right level of abstraction. \"File my taxes\" is too narrow; \"manage my financial life\" is too broad. The right job is the one a customer would hire any one of several products to accomplish. Ulwick recommends framing the job as a verb + object + clarifier.\n\n### Step 2 — Capture desired outcomes through customer interviews\n\nThis is the longest step in classical ODI — typically 30–40 in-depth interviews with target customers, each lasting 60–90 minutes. The interviewer walks the customer through every stage of the job and asks \"what makes this step go well?\" and \"what gets in the way?\" until 100+ outcome statements have been collected and de-duplicated.\n\nThis is also where ODI projects historically stalled. Recruiting and moderating that many calls cost months. 
With Koji, you publish a single discussion guide configured for outcome capture, send a personalized link to each participant, and let the AI moderator run all 30+ conversations in parallel. The AI's probing layer is purpose-built for this: every \"what gets in the way\" answer is followed by \"tell me more about the last time that happened\" until the participant produces a concrete, measurable outcome rather than a vague preference.\n\n### Step 3 — Quantify importance and satisfaction\n\nOnce you have a clean list of outcomes (typically 50–150 after de-duplication), survey 200+ customers from the target segment and ask two questions about each outcome:\n\n- **Importance:** \"How important is it that you can [outcome]?\" on a 1–5 scale.\n- **Satisfaction:** \"How satisfied are you with your current ability to [outcome]?\" on the same scale.\n\nKoji handles this stage natively because it supports six [structured question types](/docs/structured-questions-guide), including scale questions designed for exactly this dual-rating pattern. You can attach an importance scale and a satisfaction scale to each outcome statement and the report aggregates the distributions automatically — no manual cross-tab work in a spreadsheet.\n\n### Step 4 — Compute the opportunity score\n\nUlwick's formula is famously simple:\n\n> **Opportunity = Importance + max(Importance − Satisfaction, 0)**\n\nThe formula rewards outcomes that are both important and underserved. An outcome rated 4.5 importance and 2.0 satisfaction has an opportunity score of 4.5 + (4.5 − 2.0) = **7.0**. An outcome rated 4.5 importance and 4.0 satisfaction has a score of 4.5 + (4.5 − 4.0) = **5.0** — important but already addressed.\n\nNote that Ulwick's published thresholds assume importance and satisfaction converted to a 0–10 scale (the share of respondents rating an outcome 4 or 5, multiplied by 10): on that scale, anything above 10 is a solid opportunity and above 12 is an extreme, once-in-a-decade gap. On raw 1–5 means like the examples above, the maximum possible score is 9, so calibrate the cut-offs accordingly. 
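The scoring arithmetic above can be sketched in a few lines of Python; the outcome names and mean ratings below are invented for illustration, not pulled from any real study:

```python
# Ulwick's opportunity formula: Opportunity = Importance + max(Importance - Satisfaction, 0).
# Illustrative only: outcome names and mean ratings below are made up.

def opportunity_score(importance: float, satisfaction: float) -> float:
    """Reward outcomes that are both important and underserved."""
    return importance + max(importance - satisfaction, 0)

# Mean importance and satisfaction ratings (1-5 scale) per outcome.
outcomes = {
    "minimize time to reconcile a transaction": (4.5, 2.0),
    "increase likelihood of catching a duplicate charge": (4.5, 4.0),
    "minimize time to forecast next month's cash position": (3.0, 3.5),
}

# Rank from most to least underserved.
ranked = sorted(
    ((name, opportunity_score(imp, sat)) for name, (imp, sat) in outcomes.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{score:.1f}  {name}")
# 7.0  minimize time to reconcile a transaction
# 5.0  increase likelihood of catching a duplicate charge
# 3.0  minimize time to forecast next month's cash position
```

Koji performs this aggregation automatically in the research report; the sketch is only to make the formula concrete.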
In practice, the top 10–15 outcomes by score become your innovation targets, and the bottom outcomes confirm what you should *stop* investing in.\n\n### Step 5 — Build solutions targeted at the top opportunities\n\nThe opportunity-ranked list now functions as a brief for ideation. Instead of \"build a new dashboard,\" the brief becomes \"design something that reduces the time to reconcile a transaction by 50%.\" The success metric is built in. ODI shops then test concepts back against the same outcome ratings to confirm the new solution actually moves the opportunity score.\n\n## ODI vs. classic Jobs-to-Be-Done interviews\n\nODI and the more narrative \"switch interviews\" popularized by Bob Moesta solve different problems. Switch interviews uncover *why* people buy or change products — the forces of progress. ODI quantifies *which* outcome metrics matter most. Most modern teams use both: switch interviews to validate the job and discover the moments of struggle, then ODI to rank the outcomes inside that job.\n\n| | ODI | Switch Interviews |\n|---|---|---|\n| **Output** | Ranked list of unmet needs | Narrative of why a customer switched |\n| **Sample size** | 30 qualitative + 200 quantitative | 12–20 qualitative |\n| **Best for** | Roadmap prioritization | Understanding the buying decision |\n| **Time to insight** | 6–8 weeks classical / ~2 weeks with Koji | 1–2 weeks |\n\n## How Koji compresses an ODI study\n\nThe historical knock against ODI is that it is heavyweight. Done manually, it requires moderating 30+ interviews, hand-coding outcome statements, fielding a 200-respondent quantitative survey, and computing scores in a spreadsheet. Koji collapses each of those stages:\n\n- **Outcome discovery interviews** run in parallel via the AI moderator. The probing engine is configured to push toward measurable outcome language (minimize / maximize the time / likelihood / number of …). 
See the [AI Probing Guide](/docs/ai-probing-guide) for how the three-layer follow-up works.\n- **Quantitative scoring** uses scale questions inside the same conversational flow — there is no separate \"now please take this survey\" step that kills response rates.\n- **Aggregation** is automatic in the research report. Each outcome appears with its mean importance, mean satisfaction, computed opportunity score, and the verbatim quotes that produced it.\n- **Synthesis-as-you-go** means the top opportunities surface in real time as the sample grows; you don't wait until fielding closes to know where the gaps are.\n\nA team that previously spent 8 weeks on an ODI project can now ship a fully scored opportunity map in under two weeks, which finally makes the methodology viable for the quarterly planning cycle most product teams operate on.\n\n## When to use ODI\n\nODI is the right tool when you need a ranked, defensible list of innovation priorities for a known market. It is not the right tool for early-stage discovery (the job isn't stable yet), brand work (no functional job), or rapid usability tests (wrong question). 
For early-stage teams, run [customer discovery interviews](/docs/customer-discovery-interviews) and [switch interviews](/docs/switch-interviews-jtbd-method) first; once the job is clear and you have an installed base to survey, ODI becomes the rigorous next step.\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the six question types Koji uses to capture importance and satisfaction in one flow\n- [Jobs to Be Done Framework](/docs/jobs-to-be-done-framework) — the parent methodology ODI sits inside\n- [Switch Interviews: The JTBD Method](/docs/switch-interviews-jtbd-method) — narrative complement to ODI's quantitative scoring\n- [Customer Needs Analysis](/docs/customer-needs-analysis) — uncover the needs that ODI then ranks\n- [Opportunity Solution Tree](/docs/opportunity-solution-tree) — Teresa Torres's discovery framework, often paired with ODI outputs\n- [Kano Model](/docs/kano-model) — alternative prioritization framework worth knowing alongside ODI","category":"Research Methods","lastModified":"2026-05-09T03:19:51.860178+00:00","metaTitle":"Outcome-Driven Innovation (ODI): The Ulwick Method (2026 Guide)","metaDescription":"How to run Anthony Ulwick's Outcome-Driven Innovation process — capture desired outcome statements, score importance vs satisfaction, and rank unmet customer needs in 2 weeks instead of 8.","keywords":["outcome-driven innovation","ODI","Anthony Ulwick","jobs to be done","unmet customer needs","desired outcomes","opportunity score","customer outcome statements","Strategyn","jtbd methodology"],"aiSummary":"Outcome-Driven Innovation (ODI) is Anthony Ulwick's methodology for treating customer needs as measurable metrics tied to a job-to-be-done. The five-step process — define the job, capture desired outcomes, quantify importance and satisfaction, compute opportunity scores, and target solutions — produces a ranked list of unmet needs that doubles as a roadmap brief. 
AI interview platforms like Koji collapse the traditional 6–8 week timeline into a 2-week sprint by parallelizing outcome discovery and embedding importance/satisfaction scoring in the same conversational flow.","aiPrerequisites":["jobs-to-be-done-framework","customer-needs-analysis"],"aiLearningOutcomes":["Define a job-to-be-done at the right level of abstraction","Write desired outcome statements using the canonical ODI grammar","Compute opportunity scores from importance and satisfaction ratings","Rank unmet needs into a defensible innovation roadmap","Compress the classical 8-week ODI study into a 2-week sprint with AI interviews"],"aiDifficulty":"intermediate","aiEstimatedTime":"12 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}