Outcome-Driven Innovation (ODI): The Ulwick Method for Identifying Unmet Customer Needs
A practical guide to Anthony Ulwick's Outcome-Driven Innovation methodology — how to capture desired outcome statements, prioritize unmet needs, and turn JTBD theory into a measurable roadmap.
Outcome-Driven Innovation (ODI) is Anthony Ulwick's framework for treating customer needs as measurable metrics — the criteria customers use to evaluate how well a job-to-be-done gets done. Instead of asking "what do you want?", ODI asks customers to evaluate dozens of "desired outcome statements" on importance and current satisfaction, then mathematically scores each one to find the unmet needs with the largest opportunity gaps. Ulwick reports that products built this way succeed at roughly five times the industry baseline, because the team is finally optimizing for what customers actually use to judge a solution rather than what they say they prefer.
In this guide we'll cover the ODI process end-to-end, the canonical structure of an outcome statement, how to compute opportunity scores, and how a modern AI interview platform like Koji collapses what used to be a 6–8 week research project into a 2-week sprint by automating the interview, transcription, and scoring stages.
What is Outcome-Driven Innovation?
ODI was developed by Anthony Ulwick beginning in 1991 and codified in his book What Customers Want. It sits inside the broader Jobs-to-Be-Done (JTBD) family of methodologies, but where most JTBD work focuses on uncovering the job, ODI focuses on quantifying the metrics by which customers measure that job's execution.
The core thesis: customers do not buy products — they hire products to get a job done, and they evaluate the result against a stable, knowable list of "desired outcomes." A typical job has 50 to 150 desired outcomes attached to it. Most innovation effort fails because teams build features that improve outcomes customers do not care about, while leaving the high-importance, low-satisfaction outcomes untouched.
Five things distinguish ODI from older voice-of-customer work:
- Outcomes, not requirements. ODI deliberately avoids product language and captures customer-defined success metrics.
- Importance × Satisfaction scoring. Every outcome is rated on two scales, then combined into a single opportunity score.
- Quantified market opportunities. The output is a ranked list, not a deck of themes.
- Stability over time. Job-level outcomes change slowly even as technology changes rapidly, so the opportunity map ages well.
- Direct link to roadmap. High-opportunity outcomes become the success criteria for new features.
The structure of a desired outcome statement
ODI outcome statements follow a strict grammar so they remain solution-agnostic and measurable:
[Direction of improvement] + [Unit of measure] + [Object of control] + [Optional clarifier]
For a financial-planning job, a well-formed outcome looks like:
- Minimize the time it takes to reconcile a transaction across accounts.
- Increase the likelihood of identifying a duplicate charge before it posts.
- Minimize the time required to forecast next month's cash position.
Notice what is missing: the word "app," the word "dashboard," the word "AI." Outcome statements describe the metric the customer cares about, not the solution that might address it. Two competing products can both score against the same outcome, which is exactly what makes the framework durable.
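The grammar above can be sketched as a simple template. This is an illustrative helper (not part of Ulwick's materials); the class and field names are assumptions made for the example:

```python
from dataclasses import dataclass

# Illustrative sketch: composing an outcome statement from the four
# grammar components so every statement stays solution-agnostic and
# measurable. Names and structure are this example's own, not Ulwick's.
@dataclass
class OutcomeStatement:
    direction: str       # direction of improvement, e.g. "Minimize"
    unit: str            # unit of measure, e.g. "the time it takes"
    obj: str             # object of control, e.g. "to reconcile a transaction"
    clarifier: str = ""  # optional context, e.g. "across accounts"

    def text(self) -> str:
        parts = [self.direction, self.unit, self.obj]
        if self.clarifier:
            parts.append(self.clarifier)
        return " ".join(parts) + "."

o = OutcomeStatement("Minimize", "the time it takes",
                     "to reconcile a transaction", "across accounts")
print(o.text())
# Minimize the time it takes to reconcile a transaction across accounts.
```

Forcing every candidate statement through the same template is a cheap way to catch solution language ("dashboard", "AI") sneaking into the unit or object fields during de-duplication.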
The 5-step ODI process
Step 1 — Define the job
Start with the functional job-to-be-done at the right level of abstraction. "File my taxes" is too narrow; "manage my financial life" is too broad. The right job is the one a customer would hire any one of several products to accomplish. Ulwick recommends framing the job as a verb + object + clarifier.
Step 2 — Capture desired outcomes through customer interviews
This is the longest step in classical ODI — typically 30–40 in-depth interviews with target customers, each lasting 60–90 minutes. The interviewer walks the customer through every stage of the job and asks "what makes this step go well?" and "what gets in the way?" until 100+ outcome statements have been collected and de-duplicated.
This is also where ODI projects historically stalled. Recruiting and moderating that many calls cost months. With Koji, you publish a single discussion guide configured for outcome capture, send a personalized link to each participant, and let the AI moderator run all 30+ conversations in parallel. The AI's probing layer is purpose-built for this: every "what gets in the way" answer is followed by "tell me more about the last time that happened" until the participant produces a concrete, measurable outcome rather than a vague preference.
Step 3 — Quantify importance and satisfaction
Once you have a clean list of outcomes (typically 50–150 after de-duplication), survey 200+ customers from the target segment and ask two questions about each outcome:
- Importance: "How important is it that you can [outcome]?" on a 1–5 scale.
- Satisfaction: "How satisfied are you with your current ability to [outcome]?" on the same scale.
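The dual-rating data reduces to two means per outcome before scoring. A minimal sketch of that aggregation, using made-up responses (each respondent contributes an importance and a satisfaction rating on the 1–5 scales above):

```python
from statistics import mean

# Illustrative raw survey data: for each outcome, a list of
# (importance, satisfaction) pairs, one per respondent. Numbers
# are invented for the example, not real field results.
responses = {
    "reconcile a transaction across accounts": [(5, 2), (4, 2), (5, 3), (4, 1)],
    "forecast next month's cash position":     [(3, 4), (4, 3), (3, 4), (2, 3)],
}

for outcome, pairs in responses.items():
    imp = mean(p[0] for p in pairs)
    sat = mean(p[1] for p in pairs)
    print(f"{outcome}: importance={imp:.1f}, satisfaction={sat:.1f}")
```

These per-outcome means are the inputs to the opportunity formula in the next step.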
Koji handles this stage natively because it supports six structured question types, including scale questions designed for exactly this dual-rating pattern. You can attach an importance scale and a satisfaction scale to each outcome statement and the report aggregates the distributions automatically — no manual cross-tab work in a spreadsheet.
Step 4 — Compute the opportunity score
Ulwick's formula is famously simple:
Opportunity = Importance + max(Importance − Satisfaction, 0)
The formula rewards outcomes that are both important and underserved. An outcome rated 4.5 importance and 2.0 satisfaction has an opportunity score of 4.5 + (4.5 − 2.0) = 7.0. An outcome rated 4.5 importance and 4.0 satisfaction has a score of 4.5 + (4.5 − 4.0) = 5.0 — important but already addressed.
One caveat on thresholds: Ulwick's published cutoffs assume importance and satisfaction expressed on a 0–10 scale (derived from the share of respondents choosing the top ratings, times ten), so opportunity scores can reach 20 — there, scores above 10 mark seriously underserved outcomes and scores above 12 are extreme, rare opportunities. On raw 1–5 means, as in the examples above, the ceiling is 9 (5 + (5 − 1)), so judge scores relative to that range. In practice, the top 10–15 outcomes by score become your innovation targets, and the bottom of the list confirms what you should stop investing in.
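The scoring step is a one-liner once you have the means. A sketch with illustrative outcome labels and ratings (the numbers are this example's, not survey data):

```python
# Ulwick's opportunity formula applied to mean 1-5 ratings.
def opportunity(importance: float, satisfaction: float) -> float:
    # Underserved gap is floored at zero: overserved outcomes
    # score no higher than their importance alone.
    return importance + max(importance - satisfaction, 0)

# Illustrative (importance, satisfaction) means per outcome.
outcomes = {
    "reconcile a transaction across accounts":     (4.5, 2.0),
    "identify a duplicate charge before it posts": (4.5, 4.0),
    "forecast next month's cash position":         (3.0, 3.5),
}

ranked = sorted(
    ((opportunity(i, s), name) for name, (i, s) in outcomes.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{score:.1f}  {name}")
# 7.0  reconcile a transaction across accounts
# 5.0  identify a duplicate charge before it posts
# 3.0  forecast next month's cash position
```

Note how the third outcome, where satisfaction exceeds importance, collapses to its importance score alone — the formula never rewards overserving.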
Step 5 — Build solutions targeted at the top opportunities
The opportunity-ranked list now functions as a brief for ideation. Instead of "build a new dashboard," the brief becomes "design something that reduces the time to reconcile a transaction by 50%." The success metric is built in. ODI shops then test concepts back against the same outcome ratings to confirm the new solution actually moves the unmet score.
ODI vs. classic Jobs-to-Be-Done interviews
ODI and the more narrative "switch interviews" popularized by Bob Moesta solve different problems. Switch interviews uncover why people buy or change products — the forces of progress. ODI quantifies which outcome metrics matter most. Most modern teams use both: switch interviews to validate the job and discover the moments of struggle, then ODI to rank the outcomes inside that job.
| | ODI | Switch Interviews |
|---|---|---|
| Output | Ranked list of unmet needs | Narrative of why a customer switched |
| Sample size | 30 qualitative + 200 quantitative | 12–20 qualitative |
| Best for | Roadmap prioritization | Understanding the buying decision |
| Time to insight | 6–8 weeks classical / ~2 weeks with Koji | 1–2 weeks |
How Koji compresses an ODI study
The historical knock against ODI is that it is heavyweight. Done manually it requires moderating 30+ interviews, hand-coding outcome statements, fielding a 200-respondent quantitative survey, and computing scores in a spreadsheet. Koji collapses each of those stages:
- Outcome discovery interviews run in parallel via the AI moderator. The probing engine is configured to push toward measurable outcome language (minimize / maximize the time / likelihood / number of …). See the AI Probing Guide for how the three-layer follow-up works.
- Quantitative scoring uses scale questions inside the same conversational flow — there is no separate "now please take this survey" step that kills response rates.
- Aggregation is automatic in the research report. Each outcome appears with its mean importance, mean satisfaction, computed opportunity score, and the verbatim quotes that produced it.
- Synthesis-as-you-go means the top opportunities surface in real time as the sample grows; you don't wait until fielding closes to know where the gaps are.
A team that previously spent 8 weeks on an ODI project can now ship a fully scored opportunity map in under two weeks, which finally makes the methodology viable for the quarterly planning cycle most product teams operate on.
When to use ODI
ODI is the right tool when you need a ranked, defensible list of innovation priorities for a known market. It is not the right tool for early-stage discovery (the job isn't stable yet), brand work (no functional job), or rapid usability tests (wrong question). For early-stage teams, run customer discovery interviews and switch interviews first; once the job is clear and you have an installed base to survey, ODI becomes the rigorous next step.
Related Resources
- Structured Questions in AI Interviews — the six question types Koji uses to capture importance and satisfaction in one flow
- Jobs to Be Done Framework — the parent methodology ODI sits inside
- Switch Interviews: The JTBD Method — narrative complement to ODI's quantitative scoring
- Customer Needs Analysis — uncover the needs that ODI then ranks
- Opportunity Solution Tree — Teresa Torres's discovery framework, often paired with ODI outputs
- Kano Model — alternative prioritization framework worth knowing alongside ODI
Related Articles
How Koji's AI Follow-Up Probing Works: Going Deeper Than Any Survey
Understand how Koji's AI interviewer automatically asks follow-up questions to go deeper on every answer — and how to configure probing depth, custom instructions, and anchor behavior for scale questions.
Structured Questions in AI Interviews
Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.
Generative Research: How to Uncover User Needs You Didn't Know Existed
A complete guide to generative (exploratory) user research — what it is, when to use it, which methods work best, and how AI-powered platforms like Koji make it faster and more scalable than ever.
Switch Interviews: The JTBD Method for Understanding Why Customers Buy (and Leave)
Switch interviews uncover the four forces of progress that cause customers to switch from one product to another. Learn the Bob Moesta playbook and how to run switch interviews with AI at scale.
Customer Needs Analysis: How to Uncover What Customers Actually Want
A practical guide to customer needs analysis — how to identify, prioritize, and act on what customers genuinely need, with frameworks, research methods, and real-world examples.
Jobs-to-Be-Done Interview Guide
Learn the JTBD interview methodology to uncover why customers switch products and what progress they're trying to make.
Kano Model: How to Prioritize Features Using Customer Research
A complete guide to the Kano Model — the feature prioritization framework that maps customer emotions to product decisions. Learn how to run Kano surveys, classify features, and build products customers love.
Jobs to Be Done Framework: The Complete Guide
The definitive guide to the Jobs to Be Done (JTBD) framework — its history, two schools of thought, how to write JTBD statements, famous examples, how to conduct JTBD research, and how AI interviews enable JTBD at scale.
Customer Discovery Interviews: The Complete Guide
Learn how to conduct customer discovery interviews to validate your product ideas before building. Covers Steve Blank methodology, question frameworks, sample sizes, and common mistakes.
Opportunity Solution Tree: The Complete Guide to Continuous Product Discovery
Learn how to build and use the Opportunity Solution Tree (OST) framework — Teresa Torres' visual map for connecting business outcomes to validated customer solutions through continuous discovery. Includes step-by-step instructions, templates, and how Koji automates the evidence-collection process.