Research Methods

Kano Model: How to Prioritize Features Using Customer Research

A complete guide to the Kano Model — the feature prioritization framework that maps customer emotions to product decisions. Learn how to run Kano surveys, classify features, and build products customers love.

Bottom line: The Kano Model classifies product features by their emotional impact on customers — revealing which features prevent dissatisfaction, which drive delight, and which waste engineering resources entirely. Teams that skip Kano analysis consistently over-invest in features customers do not want.

80% of features in the average software product are rarely or never used (Pendo, 2019). The Kano Model exists to prevent that waste.

Developed by Japanese professor Noriaki Kano in his landmark 1984 paper "Attractive Quality and Must-Be Quality," the Kano Model is one of the most cited frameworks in product development history — with over 3,600 academic citations. Its central insight: customer satisfaction is not linear. Building more of a feature does not proportionally increase satisfaction. Different types of features have fundamentally different relationships with how customers feel.

The 5 Kano Categories

The model maps features on two axes: how well a feature is implemented (absent to fully present) and how customers feel as a result (dissatisfied to delighted). The resulting curves are non-linear — which is what makes the framework powerful.

Must-Be (Basic) Features

These are non-negotiable baseline requirements. When present, they produce no positive emotional reaction — satisfaction stays neutral. When absent, customers are severely dissatisfied. They are the price of entry to a market.

Investment in Must-Be features never creates delight; it only prevents dissatisfaction.

Examples: A car that starts reliably. A mobile app that does not crash. Secure login on a banking platform.

Performance (One-Dimensional) Features

These have a direct, linear relationship with satisfaction. More equals better, less equals worse. Every incremental improvement produces a measurable, proportional increase in customer satisfaction.

Examples: Laptop battery life. Camera resolution. E-commerce delivery speed. Page load time.

Attractive (Delighter) Features

These are unexpected features that produce disproportionate jumps in delight when present. When absent, customers do not miss them because they do not expect them. When present and well-executed, they generate excitement and brand loyalty — this is where differentiation lives.

Examples: The pinch-to-zoom gesture on the original iPhone (2007). Netflix personalized recommendations. A hotel leaving handwritten notes for guests.

As Nielsen Norman Group notes in their prioritization methods guide: "The Attractive category shows a disproportionate increase in satisfaction to functionality, and users may not even notice their absence — but with good-enough implementation, user excitement can grow exponentially."

Indifferent Features

Customers genuinely do not care whether these are present or absent. They produce no movement in satisfaction either way. These are a significant source of wasted engineering investment — research suggests 30-40% of the average SaaS backlog maps to this category.

Reverse Features

These actively decrease satisfaction when present. Some users are harmed by, confused by, or resentful of features built for other user segments — especially common when teams fail to segment respondents by user type.

Examples: Overaggressive push notifications. Mandatory onboarding tutorials for power users.

How to Run a Kano Survey

The Kano survey uses paired question sets — one functional (feature present) and one dysfunctional (feature absent) — for each feature being evaluated.

The Question Structure

For every feature, ask two questions:

Functional question (feature present): "How would you feel if this product had [Feature X]?"

Dysfunctional question (feature absent): "How would you feel if this product did NOT have [Feature X]?"
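The paired-question structure can be sketched in Python; the helper name is illustrative, and the wording follows the templates above:

```python
# Generate the functional/dysfunctional Kano question pair for one feature.
def kano_question_pair(feature: str) -> tuple[str, str]:
    functional = f"How would you feel if this product had {feature}?"
    dysfunctional = f"How would you feel if this product did NOT have {feature}?"
    return functional, dysfunctional

# One pair per feature under evaluation.
for question in kano_question_pair("offline mode"):
    print(question)
```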

Response Scale

Both questions use the same 5-point scale:

  1. I like it that way
  2. I expect it that way (it is a given)
  3. I am neutral
  4. I can tolerate it
  5. I dislike it that way

Kano Classification Matrix

Pair the functional and dysfunctional answers to classify each response (rows are the dysfunctional answer, columns the functional answer):

| Dysfunctional \ Functional | Like         | Expect      | Neutral     | Tolerate    | Dislike      |
| Dislike                    | Performance  | Must-Be     | Must-Be     | Must-Be     | Questionable |
| Tolerate                   | Attractive   | Indifferent | Indifferent | Indifferent | Reverse      |
| Neutral                    | Attractive   | Indifferent | Indifferent | Indifferent | Reverse      |
| Expect                     | Attractive   | Indifferent | Indifferent | Indifferent | Reverse      |
| Like                       | Questionable | Reverse     | Reverse     | Reverse     | Reverse      |
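The classification step is a simple lookup. A minimal Python sketch, encoding the standard Kano evaluation table (variable and function names are illustrative):

```python
# MATRIX[dysfunctional_answer][functional_answer] -> Kano category,
# following the standard Kano evaluation table.
MATRIX = {
    "like":     {"like": "Questionable", "expect": "Reverse",     "neutral": "Reverse",
                 "tolerate": "Reverse",     "dislike": "Reverse"},
    "expect":   {"like": "Attractive",   "expect": "Indifferent", "neutral": "Indifferent",
                 "tolerate": "Indifferent", "dislike": "Reverse"},
    "neutral":  {"like": "Attractive",   "expect": "Indifferent", "neutral": "Indifferent",
                 "tolerate": "Indifferent", "dislike": "Reverse"},
    "tolerate": {"like": "Attractive",   "expect": "Indifferent", "neutral": "Indifferent",
                 "tolerate": "Indifferent", "dislike": "Reverse"},
    "dislike":  {"like": "Performance",  "expect": "Must-Be",     "neutral": "Must-Be",
                 "tolerate": "Must-Be",     "dislike": "Questionable"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Classify one respondent's answer pair for one feature."""
    return MATRIX[dysfunctional][functional]

# "I like having it" + "I dislike not having it" -> Performance
print(classify("like", "dislike"))
```

In a real study, each feature is then assigned the modal category across all respondents in a segment.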

Survey Design Best Practices

  • Sample size: Minimum 30 responses per customer segment
  • Feature count: 15-25 features per survey; beyond 30 causes fatigue
  • Framing: Use "How would you feel if...?" — not "Would you use...?" Feelings, not utility
  • Order randomization: Randomize whether functional or dysfunctional question appears first
  • Add importance: After each pair, ask "How important is this feature to you?" on a 1-9 scale
  • Segment separately: Different user types must be analyzed as distinct cohorts

Calculating Satisfaction Coefficients

After classifying features, calculate two coefficients for each:

  • Satisfaction (CS): (Attractive + Performance) / (Attractive + Performance + Must-Be + Indifferent)
  • Dissatisfaction (DS): -1 x (Must-Be + Performance) / (Attractive + Performance + Must-Be + Indifferent)

Plot features on a CS/DS grid: high CS + low DS signals Delighters worth investing in; high DS signals Must-Be baseline requirements to protect.
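The two coefficients can be computed directly from per-feature category counts. A short sketch, with made-up illustration data:

```python
from collections import Counter

def coefficients(categories: list[str]) -> tuple[float, float]:
    """Berger-style CS/DS coefficients from one feature's category assignments."""
    c = Counter(categories)
    a, p, m, i = c["Attractive"], c["Performance"], c["Must-Be"], c["Indifferent"]
    total = a + p + m + i  # Questionable and Reverse responses are excluded
    cs = (a + p) / total          # satisfaction coefficient, 0..1
    ds = -(m + p) / total         # dissatisfaction coefficient, -1..0
    return cs, ds

# Illustrative: 30 classified responses for one feature.
cs, ds = coefficients(["Attractive"] * 12 + ["Performance"] * 8 +
                      ["Must-Be"] * 6 + ["Indifferent"] * 4)
print(round(cs, 2), round(ds, 2))  # 0.67 -0.47
```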

The Kano Category Lifecycle

One of Kano's most important predictions: categories shift over time as market expectations evolve.

The iPhone example: When Apple launched the iPhone in 2007, pinch-to-zoom and visual voicemail were Attractive Delighters — unexpected features customers had not known to ask for. The audience reaction was textbook Kano: disproportionate delight from an unanticipated capability. Today, those same gestures are Must-Be features. Any smartphone missing them would be perceived as broken.

Dark Mode: An Attractive feature for developers in 2016 became a Performance attribute by 2019 and a Must-Be expectation by 2022.

This lifecycle migration means Kano surveys need to be re-run regularly — annually, or after major competitor releases.

Kano vs. Other Prioritization Frameworks

| Framework | Core Question                               | Best For                                                |
| Kano      | Should we build this at all?                | Discovery — eliminating features customers do not want  |
| RICE      | Which validated ideas have the best return? | Backlog scoring — comparing impact vs. effort           |
| MoSCoW    | What fits into this release?                | Scope finalization — aligning teams on what ships now   |

Recommended combined workflow:

  1. Use Kano during discovery to eliminate Indifferent features and identify Delighters
  2. Use RICE to score the shortlisted Kano output against business impact and effort
  3. Use MoSCoW to finalize what fits given capacity constraints
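Step 2 of the workflow is mechanical once the Kano shortlist exists. A sketch of the standard RICE formula (Reach × Impact × Confidence ÷ Effort); the feature names and numbers are made up for illustration:

```python
# Standard RICE score: reach (users/quarter) x impact (multiplier) x
# confidence (0..1), divided by effort (person-months).
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    return reach * impact * confidence / effort

# Score the features that survived the Kano discovery pass.
shortlist = {
    "bulk export": rice(reach=400, impact=2, confidence=0.8, effort=4),
    "dark mode":   rice(reach=900, impact=1, confidence=1.0, effort=3),
}
for name, score in sorted(shortlist.items(), key=lambda kv: -kv[1]):
    print(name, score)
```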

Each framework answers a different question. Combining them produces a more complete prioritization system than any single framework alone.

The Scale Problem: Why Traditional Kano Studies Are Too Slow

Running a rigorous Kano study with traditional human-moderated interviews is expensive and slow:

  • A 200-interview qualitative Kano study through a research agency costs approximately $100,000 and takes 8-12 weeks
  • Static Kano surveys scale easily but cannot probe the "why" behind a classification
  • Product cycles move faster than traditional research timelines allow

This is the gap that AI-powered research platforms are closing.

How Koji Accelerates Kano Research

Koji's AI-moderated interviews enable product teams to run Kano-quality research at quantitative scale — in days, not months.

Scale with depth: Conduct Kano functional/dysfunctional question pairs across 200+ participants while the AI dynamically follows up on unexpected responses. A feature classified as "Reverse" by 40 participants? Koji asks each one why — capturing the qualitative reasoning that static surveys permanently miss.

Structured question types for Kano surveys: Koji's structured question framework supports six question types — including scale questions (perfect for the 1-5 Kano response scale), single-choice questions (for classification), and open-ended follow-ups — enabling a complete Kano survey with integrated qualitative probing in a single study.

Speed: What previously took 8-12 weeks now completes in 72 hours. Teams can run quarterly Kano re-surveys to track category migration as the market evolves.

AI-generated themes: After classification, Koji's analysis automatically surfaces the most common reasons behind each category assignment — translating raw Kano data into actionable design direction.

Marty Cagan, author of Inspired, frames the problem Kano solves: "The purpose of product discovery is to quickly separate the good ideas from the bad. The output of discovery is a validated product backlog." Kano is the discovery instrument that operationalizes exactly this. AI-moderated interviews make it economically viable at the speed modern product teams require.

Common Kano Mistakes to Avoid

Not segmenting by user type. A feature classified as "Attractive" for power users may be "Indifferent" or "Reverse" for casual users. Always analyze cohorts separately.

Treating category assignments as permanent. Re-run annually, or after major competitor releases. Categories shift as market expectations evolve.

Asking about utility instead of feelings. "Would you use X?" captures rational assessment. "How would you feel if X were missing?" captures the emotional reaction the Kano model is calibrated to measure.

Surveying too many features at once. Beyond 25-30 features, respondents experience fatigue. This compresses variance and pushes too many features toward "Indifferent."

Ignoring the Questionable category. A respondent who says they like a feature both when present and absent is giving a logically inconsistent answer. Filter these responses out — do not reclassify them.

Using Kano in isolation without the "why." Kano gives quantitative category assignments but cannot explain why a feature lands where it does. Pair the survey with qualitative follow-up interviews to get the reasoning behind the data.

Key Statistics

  • 80% of features in the average software product are rarely or never used (Pendo 2019 Feature Adoption Report)
  • 64% of features are "rarely or never used" — 45% never used at all (Standish Group, via Mountain Goat Software)
  • 35% of companies add features primarily to close deals rather than improve customer value (ITONICS)
  • Teams with comprehensive customer-insight testing protocols report failure rate reductions of 30-50% (We Are Tenet)

Related Resources