
Mixed Methods Research: How to Combine Qualitative and Quantitative Data

Learn how to design and run mixed methods research that combines the statistical power of quantitative data with the depth of qualitative insight — including how AI interview platforms like Koji make mixed methods accessible to every research team.


The bottom line: Mixed methods research combines quantitative data (numbers, ratings, frequencies) with qualitative data (stories, themes, explanations) to produce findings that are both statistically credible and deeply human. It answers not just what is happening but why — and modern AI interview platforms make it accessible without a PhD in research methodology.

Quantitative research tells you that 42% of users abandon checkout at step 3. Qualitative research tells you they abandon because the shipping cost estimate appeared unexpectedly at the last moment, triggering a trust breakdown they haven't consciously articulated. You need both to understand your users — and to fix real problems.

Mixed methods research is the formal framework for combining these two types of data in principled, reproducible ways. It's practiced by academic researchers, used by leading product teams at companies like Google and Airbnb, and increasingly automated by AI tools that embed quantitative structure into conversational interviews.


Why Mixed Methods?

The Limits of Quantitative Research Alone

Quantitative research measures at scale. Surveys, analytics, A/B tests, and usage data tell you what users do and how many of them do it. But numbers are silent on causation.

When conversion drops 15% after a redesign, your analytics tell you where in the funnel users are leaving. They don't tell you what cognitive or emotional event triggered the exit. Acting on quantitative data alone leads to interventions that address symptoms rather than root causes.

The Limits of Qualitative Research Alone

Qualitative research generates deep understanding from small samples. Interviews, ethnographic observation, and open-ended feedback reveal the why, the nuance, and the unexpected. But qualitative research can't tell you how widespread a finding is. A pattern you observe in 8 interviews might represent 80% of your users or 8%.

Acting on qualitative data alone leads to changes optimized for vocal minorities. The insight is rich; the statistical confidence is absent.

Mixed Methods Combines the Strengths

Mixed methods research systematically combines both, using each type of data to compensate for the other's weaknesses:

  • Quantitative data establishes what is happening and at what scale
  • Qualitative data explains why it is happening and what it means
  • Together, they support action: confident, specific, and grounded in real user experience

The Four Core Mixed Methods Designs

Research methodology offers several established patterns for combining qualitative and quantitative data. Understanding these gives you a toolkit for structuring any complex research project.

1. Explanatory Sequential Design

Run quantitative first → then qualitative to explain

This is the most common pattern in product research. You start with quantitative data (survey ratings, analytics, A/B results) that surface a pattern, then run qualitative interviews to understand why the pattern exists.

Example: Your CSAT scores drop after a new feature launch. Survey data shows the drop is concentrated among power users. You then run AI-powered depth interviews with power users to understand exactly how the new feature disrupted their workflow.

When to use: When you have quantitative signals you need to explain. When stakeholders need both scale ("this affects 40% of users") and depth ("here's what they experience") to act.

2. Exploratory Sequential Design

Run qualitative first → then quantitative to validate

You start with qualitative research to generate hypotheses — themes, concepts, candidate explanations — then design quantitative instruments to test whether those hypotheses hold at scale.

Example: You run discovery interviews to understand how managers experience performance review season. The interviews surface five distinct approaches to feedback delivery. You then design a survey to quantify which approach is most common and how it correlates with team satisfaction scores.

When to use: When you don't know enough about the problem to design a valid survey. When you need qualitative research to identify the right questions before measuring at scale.

3. Convergent Parallel Design

Run quantitative and qualitative simultaneously → then merge findings

You collect both types of data at the same time, analyzing them independently before comparing and integrating. Convergence between the datasets builds confidence; divergence flags areas needing further investigation.

Example: While running a large NPS survey, you simultaneously run AI interviews with a subset of survey respondents. You analyze survey ratings and interview themes separately, then look for where they converge (interviews confirm what the scores suggest) and diverge (interviews reveal nuance the scores missed).

When to use: When you have the resources to run both simultaneously. When you want to validate quantitative findings with qualitative depth, or vice versa.

4. Embedded Design

One method nested inside the other

A qualitative component is embedded within a larger quantitative study (or vice versa). In product research this most often appears as open-ended questions embedded within surveys, or as quantitative screening data collected before qualitative interviews.

Example: A 20-question satisfaction survey includes three open-ended questions. The quantitative items provide scale measurements; the open-ended items capture the explanatory stories behind the ratings.

When to use: When you want the efficiency of a single data collection effort that produces both types of insight. This is the most practical design for most product teams.


Koji's Structured Questions: Mixed Methods Built In

Here's something that often surprises researchers: Koji's structured questions make every AI interview inherently mixed methods.

When you configure a Koji study, you have access to six question types:

  • Open-ended — qualitative, conversational, captures narrative and nuance
  • Scale — quantitative, measures intensity or agreement (1-7 ratings, NPS, etc.)
  • Single-choice — quantitative, mutually exclusive categories
  • Multiple-choice — quantitative, multi-select from a set
  • Ranking — quantitative, relative prioritization
  • Yes/No — quantitative, binary confirmation

In a single Koji interview, you might:

  1. Ask a scale question: "On a scale of 1-7, how satisfied are you with our reporting features?"
  2. Immediately follow with an open-ended probe: "Tell me what specifically made you give that rating."
  3. Follow with a multiple-choice: "Which of these would most improve your reporting experience?"
  4. Then an open-ended follow-up: "Tell me more about why that stands out."

The result is an embedded mixed methods design in every interview — scale data for statistical analysis, open-ended data for thematic understanding, all in the same conversation. Koji's AI automatically generates both quantitative summaries (average ratings, response distributions) and qualitative themes (synthesized from open-ended responses) in the same report.

This means teams that previously needed separate survey and interview phases — with the analysis overhead of two datasets — can now run a single AI interview study and get both.
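To make the embedded pattern concrete, an interview script mixing question types can be modeled as a plain data structure. This is a purely illustrative sketch — the `Question` class and the question-kind strings are hypothetical, not Koji's actual configuration API:

```python
from dataclasses import dataclass, field

# Hypothetical representation of a mixed methods interview script.
# Illustrative only -- this is not Koji's actual configuration format.
@dataclass
class Question:
    kind: str                  # "scale", "open", "single_choice",
                               # "multi_choice", "ranking", or "yes_no"
    prompt: str
    options: list = field(default_factory=list)

QUANT_KINDS = {"scale", "single_choice", "multi_choice", "ranking", "yes_no"}

def is_mixed_methods(script):
    """A script is mixed methods if it has both quantitative and qualitative items."""
    kinds = {q.kind for q in script}
    return "open" in kinds and bool(kinds & QUANT_KINDS)

script = [
    Question("scale", "On a scale of 1-7, how satisfied are you with our reporting features?"),
    Question("open", "Tell me what specifically made you give that rating."),
    Question("multi_choice", "Which of these would most improve your reporting experience?",
             options=["Faster exports", "Custom dashboards", "Scheduled reports"]),
    Question("open", "Tell me more about why that stands out."),
]

print(is_mixed_methods(script))  # True
```

The design choice worth noting: each quantitative item is paired with an open-ended probe, which is what turns a questionnaire into an embedded mixed methods conversation.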


Designing a Mixed Methods Study

Step 1: Define Your Research Questions

Mixed methods works best when you have questions that genuinely require both types of data. Good mixed methods questions have a quantitative component ("how many," "how often," "to what degree") and a qualitative component ("why," "what does it mean," "what is the experience of").

Example: "How common is account abandonment during onboarding (quantitative), and what are the emotional and functional drivers that trigger it (qualitative)?"

Step 2: Choose Your Design

Based on your timeline, resources, and existing data:

  • Have quantitative data already? → Explanatory sequential
  • Building from scratch in a new domain? → Exploratory sequential
  • Have resources to run both simultaneously? → Convergent parallel
  • Want efficiency in a single data collection? → Embedded design

Step 3: Design Your Instruments

For quantitative components:

  • Ensure scales are validated and consistent (see our Survey Design Best Practices guide)
  • Sample size: aim for 100+ responses for reliable quantitative conclusions
  • Pre-test for bias and clarity
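As a rough sanity check on the 100-response rule of thumb, the worst-case margin of error for a proportion under simple random sampling can be computed directly (z = 1.96 assumes a 95% confidence level; real panels rarely meet the random-sampling assumption, so treat this as a lower bound on uncertainty):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% CI half-width for a proportion; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 400):
    print(n, round(margin_of_error(n), 3))
# n=100 gives ~0.098, i.e. about +/-10 percentage points --
# enough for directional conclusions, not for fine-grained segment comparisons.
```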

For qualitative components:

  • Design open-ended questions that follow up on quantitative items
  • Prepare probing questions for expected themes
  • Keep samples smaller but purposefully selected (8-20 participants typically)

Step 4: Collect and Analyze Separately First

The most common mixed methods mistake is contaminating qualitative analysis with quantitative preconceptions. Analyze each dataset independently before integrating:

Quantitative analysis first:

  • Run descriptive statistics on scale items
  • Create frequency distributions for choice items
  • Identify segments showing different patterns
  • Form hypotheses about what qualitative data might explain
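The quantitative pass above needs nothing more than descriptive statistics. A minimal sketch using Python's standard library — the response records, choice labels, and segment names are made up for illustration:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical responses: a 1-7 rating (scale item), a chosen improvement
# (choice item), and a user segment from screening data.
responses = [
    {"rating": 2, "choice": "pricing",    "segment": "power_user"},
    {"rating": 3, "choice": "pricing",    "segment": "power_user"},
    {"rating": 6, "choice": "onboarding", "segment": "new_user"},
    {"rating": 5, "choice": "pricing",    "segment": "new_user"},
    {"rating": 7, "choice": "support",    "segment": "new_user"},
    {"rating": 2, "choice": "pricing",    "segment": "power_user"},
]

# Descriptive statistics on the scale item
ratings = [r["rating"] for r in responses]
print(f"mean={mean(ratings):.2f} sd={stdev(ratings):.2f}")

# Frequency distribution for the choice item
print(Counter(r["choice"] for r in responses))

# Segment comparison: do power users rate lower than new users?
for seg in ("power_user", "new_user"):
    seg_ratings = [r["rating"] for r in responses if r["segment"] == seg]
    print(seg, round(mean(seg_ratings), 2))
```

The segment gap surfaced in the last step is exactly the kind of pattern the qualitative pass should then explain.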

Qualitative analysis second:

  • Code open-ended responses thematically
  • Identify dominant and marginal themes
  • Note where participants diverge from expected patterns
  • Map themes to quantitative segments where possible

Integration last:

  • Where do findings converge? This builds confidence
  • Where do they diverge? This flags complexity or nuance
  • What do the qualitative themes explain about quantitative patterns?
  • What quantitative data would test the qualitative themes further?
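Integration can be as simple as joining coded themes back to quantitative segments and checking where they converge. A sketch with hypothetical data (the theme codes and segments are invented for illustration):

```python
from statistics import mean
from collections import Counter

# Hypothetical integrated records from an embedded design: each participant
# has a scale rating plus themes coded from their open-ended answers.
records = [
    {"segment": "first_time", "rating": 3, "themes": ["shipping_cost_surprise", "trust"]},
    {"segment": "first_time", "rating": 2, "themes": ["shipping_cost_surprise"]},
    {"segment": "returning",  "rating": 6, "themes": ["speed"]},
    {"segment": "returning",  "rating": 4, "themes": ["shipping_cost_surprise", "speed"]},
]

# For each segment: average rating alongside the dominant themes.
for seg in ("first_time", "returning"):
    subset = [r for r in records if r["segment"] == seg]
    avg = mean(r["rating"] for r in subset)
    themes = Counter(t for r in subset for t in r["themes"])
    print(seg, round(avg, 2), themes.most_common(2))
```

Convergence here would look like the lowest-rated segment also being the one dominated by a single explanatory theme; if the themes and the ratings point in different directions, that divergence is itself a finding to investigate.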

Step 5: Report Findings Together

Mixed methods findings are most powerful when reported together. Don't deliver a quantitative slide deck and a separate qualitative summary — weave them into integrated narratives:

"42% of users rate the checkout experience below 5/7 (quantitative). Interviews reveal that the primary driver is unexpected shipping costs appearing at the final step — a moment participants describe as 'bait and switch' (qualitative). This pattern is consistent across all segments but is most acute among first-time buyers (quantitative + qualitative convergence)."

This format gives stakeholders the statistical credibility to act ("42% of users") and the human understanding to act wisely ("here's what they're experiencing and why it matters to them").


Practical Mixed Methods Templates

Template 1: Feature Evaluation

  1. Scale: "How would you rate your current experience with [feature]? (1-7)"
  2. Open: "What's one thing about [feature] that works well for you?"
  3. Open: "What's the single biggest frustration with [feature]?"
  4. Multiple-choice: "Which improvement would make the biggest difference to you?"
  5. Open: "Tell me more about why you chose that."

Template 2: Onboarding Audit

  1. Scale: "How easy was it to get started with the product? (1-7)"
  2. Open: "Walk me through the moment you first realized how to use the product."
  3. Yes/No: "Did you feel like you needed additional help during setup?"
  4. Open: "What would have made the setup process easier?"
  5. Ranking: "Rank these potential improvements in order of impact for you."

Template 3: Competitive Research

  1. Single-choice: "Which alternative did you consider most seriously before choosing us?"
  2. Open: "What was the deciding factor in your choice?"
  3. Scale: "How satisfied are you with that decision now? (1-7)"
  4. Open: "Is there anything about the alternative you passed over that you still miss?"
  5. Multiple-choice: "Which of these would most strengthen your commitment to staying?"

Common Mixed Methods Mistakes

Treating qualitative as the "explanatory" afterthought. Both methods have equal standing. Sometimes qualitative themes will challenge quantitative conclusions. Take divergence seriously rather than forcing qualitative data to confirm quantitative patterns.

Under-sampling qualitative components. Eight interviews is not enough to establish patterns; it's enough to generate hypotheses. If your qualitative component is producing high variance with no clear themes, increase your sample.

Over-claiming statistical significance from mixed methods surveys. An embedded survey within an interview study is rarely a representative sample. Report quantitative results descriptively, flag the sampling limitations, and avoid inferential claims the sample can't support.

Siloed analysis teams. When the person who analyzes the survey is different from the person who analyzes the interviews, integration rarely happens. Assign the same researcher (or AI synthesis tool) to both.


Key Takeaways

Mixed methods research closes the gap between what users do and why they do it — producing findings that are both statistically credible and deeply human. The right design depends on your research questions, timeline, and existing data, but the embedded design (qualitative and quantitative in a single data collection) is the most practical for most teams.

Koji's structured questions embed this mixed methods logic into every AI interview — combining scale ratings, choice questions, and open-ended conversation in a single study. The result is richer data collected faster, with AI-powered synthesis producing both quantitative summaries and qualitative themes in one report.


Related Resources

Related Articles

How to Analyze Qualitative Data: From Raw Interviews to Actionable Insights

A step-by-step guide to qualitative data analysis — from reviewing raw transcripts to synthesizing themes, generating insights, and presenting findings that teams act on.

AI Interviews vs. Surveys — Why Conversations Beat Forms

Traditional surveys give you data. AI-powered interviews give you understanding. Compare response quality, completion rates, insight depth, and cost-effectiveness between survey tools and AI interview platforms like Koji.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Qualitative vs. Quantitative Research: When to Use Each Method

A clear breakdown of qualitative and quantitative research — what each method reveals, when to use each, and how to combine them for the most complete picture of your users.

UX Research Plan Template: How to Structure Any Research Project

A UX research plan aligns your team on what you are studying, why it matters, and what you will do with the findings. This guide provides a complete template and instructions for writing a research plan that stakeholders will actually read and act on.

Survey Design Best Practices: From Question Writing to Data Collection

Learn how to design effective surveys with proven best practices for question writing, flow, bias reduction, and data collection — including when to go beyond surveys to AI-powered interviews.