
Conjoint Analysis: The Complete Guide to Trade-Off Research (2026)

A complete guide to choice-based conjoint analysis (CBC) for pricing, feature bundling, and competitive simulation — plus how AI-native research platforms make conjoint accessible without specialist consultants.

Conjoint analysis is a quantitative research method that reveals how customers value different attributes of a product or service by forcing them to choose between realistic bundles. Instead of asking "how important is price?", conjoint shows respondents complete product configurations — Brand A at $49 with Feature X vs Brand B at $39 with Feature Y — and uses their choices to calculate the relative weight of every attribute. The result is a model that can predict market share for product configurations you have not even built yet.

If you have ever debated whether to add a feature, raise a price, or change a tier, you have needed conjoint analysis. It is the gold standard for pricing decisions, feature bundling, and competitive positioning — and it has informed major consumer product launches for more than 30 years.

This guide explains the four main types of conjoint analysis, when each one fits, how to design a study that produces reliable utilities, and how AI-native research platforms like Koji are making conjoint accessible to product and marketing teams without dedicated research budgets.


What Is Conjoint Analysis?

Conjoint analysis (sometimes called "trade-off analysis") is built on a simple premise from microeconomics: people reveal their true preferences through choices, not stated importance. When you ask "how important is price?", everyone says "very." When you force a choice between $39 and $49 with a feature added, the trade-off shows what the feature is actually worth.

The method, developed by Paul Green and Vithala Rao at Wharton in 1971, decomposes complex multi-attribute decisions into the part-worth utilities of each attribute level. From those utilities you can:

  • Calculate willingness-to-pay for any feature
  • Simulate market share for any product configuration
  • Identify the optimal feature bundle for a price point
  • Quantify the price elasticity of every attribute

The four types of conjoint

Type                                  | Best for                                   | Complexity
--------------------------------------|--------------------------------------------|---------------------------
Choice-Based Conjoint (CBC)           | Pricing, packaging, competitive scenarios  | Standard for most studies
Adaptive Choice-Based Conjoint (ACBC) | 6+ attributes, screening + prioritization  | High
Menu-Based Conjoint (MBC)             | "Build your own" configurators             | High
Adaptive Conjoint Analysis (ACA)      | Many attributes (10+), B2B research        | Legacy — replaced by ACBC

For 80% of product and pricing decisions, Choice-Based Conjoint (CBC) is the right tool. It is the method most agencies and platforms default to, and it is the focus of this guide.


When to Use Conjoint Analysis

Use conjoint when you need to:

  • Set the price of a new product or feature
  • Decide which features to bundle into Pro / Enterprise tiers
  • Predict market share against named competitors
  • Quantify willingness-to-pay for a roadmap item
  • Optimize a product configuration for a specific segment

Skip conjoint when:

  • You only need to prioritize a flat list of features (use MaxDiff analysis instead)
  • You are testing fewer than 3 attributes (a simple A/B test is enough)
  • You do not have at least 200-300 respondents
  • The decision is qualitative — understanding why the trade-off matters is more important than measuring what it is (start with generative research)

How to Run a Choice-Based Conjoint Study

Step 1 — Identify attributes and levels

Attributes are the dimensions of the decision (price, brand, feature set). Levels are the discrete values within each attribute ($29 / $49 / $69; Slack / Teams / Discord; Monthly / Annual / Lifetime).

Best practice:

  • Limit attributes to 4-6. Above 7, respondents start using shortcuts and utilities become unreliable.
  • Use 2-5 levels per attribute. Price typically gets 4-5 levels; binary features get 2.
  • Make levels realistic. A "free" level alongside "$99" creates extreme dominance that distorts the model.
  • Include brand as an attribute when measuring competitive share — it captures all the unmeasured equity in one place.
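As a minimal sketch, a study design is just a mapping of attributes to levels; the attribute names and values below are illustrative, reusing the SaaS pricing example from later in this guide. Multiplying the level counts gives the size of the full design space the choice tasks will sample from:

```python
# Illustrative attribute/level definition for a SaaS pricing study.
# All names and values are hypothetical examples, not a fixed schema.
from math import prod

attributes = {
    "Price": ["$29", "$49", "$69"],
    "Plan": ["Solo", "Team", "Enterprise"],
    "AI included": ["No", "Yes"],
}

# Total number of distinct product profiles in the design space
n_profiles = prod(len(levels) for levels in attributes.values())
print(n_profiles)  # 3 * 3 * 2 = 18
```

Even this small three-attribute study spans 18 distinct profiles — which is why respondents see a sampled subset of tasks rather than every possible comparison.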

Step 2 — Generate the choice tasks

A standard CBC design shows 8-15 choice tasks per respondent, each containing 3-4 product profiles plus an optional "None" option. The "None" option is critical — without it, you cannot model demand elasticity.

Choice tasks are generated via an efficient experimental design (D-optimal or random with overlap) that ensures every attribute level appears across the design and the parameters are estimable. Modern research platforms automate this — you specify attributes and levels, the software generates the tasks.
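To make the task-generation step concrete, here is a deliberately naive random-design sketch: it draws profiles at random and only guards against duplicate profiles within a task. Production platforms use efficient designs (D-optimal, balanced overlap) with level-balance guarantees; this toy version just shows the shape of the output. All attribute names are illustrative:

```python
# Naive random CBC design generator (illustrative only).
# Real tools use D-optimal or balanced-overlap designs instead.
import random

random.seed(7)  # fixed seed so the sketch is reproducible

attributes = {
    "Price": ["$29", "$49", "$69"],
    "Plan": ["Solo", "Team", "Enterprise"],
    "AI included": ["No", "Yes"],
}

def make_tasks(n_tasks=10, profiles_per_task=3):
    """Draw random profiles per task, avoiding duplicates within a task."""
    tasks = []
    for _ in range(n_tasks):
        task, seen = [], set()
        while len(task) < profiles_per_task:
            profile = tuple(random.choice(lv) for lv in attributes.values())
            if profile not in seen:  # no identical profiles in one task
                seen.add(profile)
                task.append(dict(zip(attributes, profile)))
        tasks.append(task)
    return tasks

tasks = make_tasks()
print(len(tasks), len(tasks[0]))  # → 10 3
```

Each respondent would see these 10 tasks (plus a "None" option per task) and pick one profile per task.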

Step 3 — Field the survey

Conjoint requires:

  • Minimum 200 respondents for aggregate utilities
  • 300-400 respondents for segment-level analysis
  • 500+ respondents for individual-level (HB) utilities

Critically, your sample must be representative of your target market — conjoint utilities are projectable only to the population sampled. If you are testing pricing for SMBs, recruit SMBs.

Step 4 — Estimate utilities

Standard estimation uses Hierarchical Bayes (HB) to produce individual-level part-worth utilities. The output is a utility score for every level of every attribute, on an arbitrary scale where higher = more preferred.

Example output for a SaaS pricing study:

Price:        $29 → +1.4    $49 → +0.2    $69 → -1.6
Plan:         Solo → -0.8   Team → +0.3   Enterprise → +0.5
AI included:  No  → -1.1    Yes → +1.1

From these utilities, you derive importances (how much each attribute drives choice as a percentage of total choice variance) and willingness-to-pay (how much money equals one utility point).
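The derivation of importances and willingness-to-pay can be sketched directly from the sample output above. Importances here use the standard range method (each attribute's utility range as a share of the total); the WTP conversion assumes utility is roughly linear in price, which is a common simplification rather than the only approach:

```python
# Derive attribute importances and a rough WTP from the example part-worths.
# Utilities are copied from the sample output above.
utilities = {
    "Price":       {"$29": 1.4, "$49": 0.2, "$69": -1.6},
    "Plan":        {"Solo": -0.8, "Team": 0.3, "Enterprise": 0.5},
    "AI included": {"No": -1.1, "Yes": 1.1},
}

# Importance = each attribute's utility range as a share of the total range
ranges = {a: max(lv.values()) - min(lv.values()) for a, lv in utilities.items()}
total = sum(ranges.values())
importances = {a: round(100 * r / total, 1) for a, r in ranges.items()}
print(importances)  # {'Price': 46.2, 'Plan': 20.0, 'AI included': 33.8}

# WTP: dollars per utility point, assuming utility is linear in price
price_span = 69 - 29                         # $40 between the extreme prices
utils_per_dollar = ranges["Price"] / price_span
wtp_ai = ranges["AI included"] / utils_per_dollar
print(round(wtp_ai, 2))  # 29.33 — the AI feature is worth about $29
```

In this example price drives roughly 46% of choice, and the AI feature carries a willingness-to-pay of about $29 — numbers a simulator can then stress-test against competitors.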

Step 5 — Run market simulations

The output of conjoint is a market simulator — a tool that predicts share-of-preference for any product configuration. You can ask:

  • "If we launch at $49 with the AI feature, what share do we win against Competitor X at $39 without AI?"
  • "What price would maximize revenue while keeping share above 30%?"
  • "Which feature, if added, would shift the most share away from Competitor Y?"

This simulator is the actual deliverable of a conjoint study. Reports without a simulator leave most of the value on the table.
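The core of such a simulator is small: sum the part-worths for each profile, then convert total utilities to shares with the logit rule. The sketch below reuses the example utilities from Step 4; both product configurations are hypothetical:

```python
# Minimal share-of-preference simulator using the logit rule.
# Part-worths reuse the Step 4 example; the two profiles are hypothetical.
from math import exp

utilities = {
    "Price":       {"$29": 1.4, "$49": 0.2, "$69": -1.6},
    "Plan":        {"Solo": -0.8, "Team": 0.3, "Enterprise": 0.5},
    "AI included": {"No": -1.1, "Yes": 1.1},
}

def total_utility(profile):
    """Sum the part-worth of each level in the profile."""
    return sum(utilities[attr][level] for attr, level in profile.items())

def shares(profiles):
    """Logit share-of-preference: exp(U_i) / sum_j exp(U_j)."""
    weights = [exp(total_utility(p)) for p in profiles]
    return [w / sum(weights) for w in weights]

ours = {"Price": "$49", "Plan": "Team", "AI included": "Yes"}
rival = {"Price": "$29", "Plan": "Solo", "AI included": "No"}
print([round(s, 2) for s in shares([ours, rival])])  # [0.89, 0.11]
```

A real simulator applies this rule per respondent (using individual-level HB utilities) and averages the shares, which is what lets you run the "what if we launch at $49" scenarios above.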


Conjoint vs. MaxDiff vs. Van Westendorp

These three methods are often confused. Here is the cleanest distinction:

  • Conjoint analysis = trade-offs between bundles of attributes. Best when you need to set prices or design tiers.
  • MaxDiff = relative importance of single items in isolation. Best when you need to prioritize a flat list.
  • Van Westendorp PSM = acceptable price range for a single product. Best for early pricing exploration without competitor context.

For a complete pricing study, you might use Van Westendorp early to set a range, conjoint to optimize within that range against competitors, and MaxDiff to prioritize which features to bundle. See our pricing research guide for the full workflow.


How AI-Native Platforms Are Changing Conjoint

Traditional conjoint requires three things startups and product teams rarely have: a specialist research vendor (typical engagement: $40K-$120K), 6-10 weeks of fieldwork, and a sample large enough to support HB estimation.

Modern AI-native platforms collapse this. With Koji:

  • Choice tasks render natively in chat — voice or text — instead of via clunky survey grids
  • The AI follows up on every choice with a probing question, capturing the qualitative why behind each trade-off
  • Quality scoring automatically excludes low-effort responses (Koji's built-in quality gate scores each interview 1-5; only conversations scoring 3+ count toward your credits)
  • Time to insight collapses from 6 weeks to 72 hours

The qualitative layer matters more than most teams realize. Standard conjoint tells you that 60% would choose Plan B at $49. Koji's AI follow-up reveals that 40% of those choosing Plan B did so because they misread the AI feature description — a finding that changes the launch messaging entirely.

This combination — projectable quantitative utilities + segment-level qualitative reasoning — is something AI-native conversational research enables and traditional conjoint cannot deliver.


Conjoint Best Practices

  • Pretest your attribute and level wording. Use 5-10 qualitative interviews to make sure respondents understand each level the way you intend.
  • Include "None of these" as an option. Without it you cannot model demand elasticity or screen out indifferent respondents.
  • Avoid prohibited combinations. If $29 + Enterprise tier does not exist in reality, mark it prohibited so the design does not waste tasks on impossible scenarios.
  • Validate with a holdout. Reserve one or two choice tasks for validation — your model should predict them accurately.
  • Report importances and willingness-to-pay. Importances tell you where to focus; WTP tells you what to charge.
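Holdout validation from the list above reduces to a simple hit-rate check: for each respondent's held-out task, does the option with the highest predicted utility match the option they actually chose? The data below is illustrative; there is no universal threshold, though well-fit CBC models typically predict holdout choices well above chance:

```python
# Sketch of holdout validation via hit rate. Sample data is illustrative.
def hit_rate(holdouts):
    """holdouts: list of (predicted utilities per option, chosen index)."""
    hits = sum(
        1 for utils, chosen in holdouts
        if max(range(len(utils)), key=utils.__getitem__) == chosen
    )
    return hits / len(holdouts)

# Three respondents, one holdout task each
sample = [([1.2, 0.4, -0.3], 0), ([0.1, 0.9, 0.2], 1), ([0.5, 0.6, 0.4], 2)]
print(round(hit_rate(sample), 2))  # 0.67 — two of three choices predicted
```

A hit rate near chance (1/number of options) means the utilities are not capturing real preferences and the study design should be revisited.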

Common Mistakes to Avoid

  1. Too many attributes. Above 7, cognitive overload sets in and utilities become noise.
  2. Unrealistic price ranges. A $99 vs $999 contrast produces extreme dominance and inflates price importance artificially.
  3. No "None" option. You cannot measure category demand without it.
  4. Aggregate-only analysis. Sub-group simulations almost always reveal that "the market" hides two distinct segments with opposite preferences.
  5. Ignoring the qualitative. Numbers without reasoning lead to features that win simulations and lose markets.

What Conjoint Cannot Tell You

Conjoint is excellent at quantifying preferences within a defined choice space. It cannot:

  • Reveal attributes you did not include (run generative research first)
  • Predict adoption of a category that does not exist yet
  • Measure attributes that are not communicable in a short product description
  • Capture emotional or social drivers — these need qualitative AI interviews to surface

The best conjoint studies are bookended by qualitative research: discovery interviews to identify the right attributes and levels, then conversational follow-ups inside the conjoint study to understand the reasoning behind the choices.


Related Articles

AI-Moderated Interviews: How Automated Research Works (And Why It Works Better)

Understand how AI-moderated interviews work, when to use them over human-moderated sessions, and how to get the most from automated qualitative research.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Van Westendorp Price Sensitivity Meter: The Four-Question Pricing Research Method

The Van Westendorp Price Sensitivity Meter uses four questions to identify the optimal price for any product. Learn how to run the PSM with AI interviews at scale and combine the four numbers with qualitative reasoning.

Generative Research: How to Uncover User Needs You Didn't Know Existed

A complete guide to generative (exploratory) user research — what it is, when to use it, which methods work best, and how AI-powered platforms like Koji make it faster and more scalable than ever.

MaxDiff Analysis: The Complete Guide to Maximum Difference Scaling (2026)

Learn how MaxDiff (Maximum Difference Scaling) produces sharper feature and message prioritization than rating scales — and how to pair it with conversational AI interviews to capture the why behind every score.

Kano Model: How to Prioritize Features Using Customer Research

A complete guide to the Kano Model — the feature prioritization framework that maps customer emotions to product decisions. Learn how to run Kano surveys, classify features, and build products customers love.

How to Run Pricing Research Surveys: Van Westendorp, Gabor-Granger, and Conjoint Analysis

The complete guide to pricing research methodologies. Learn how to determine optimal price points using Van Westendorp, test price sensitivity with Gabor-Granger, and combine quantitative pricing data with qualitative value perception using Koji.

How to Run Feature Prioritization Surveys That Build Products Users Actually Want

Learn how to run feature prioritization surveys using RICE, Kano, MoSCoW, and opportunity scoring frameworks. Combine quantitative ranking with AI-driven qualitative depth to build what users truly need.