User Research Maturity Model: 5 Stages from Ad-Hoc to Strategic (2026 Framework)

A practical 5-stage user research maturity model — from ad-hoc to strategic — with assessment criteria, common roadblocks at each stage, and the playbook for advancing your team's research practice. Modeled on the Nielsen Norman Group framework with a 2026-era AI-native lens.

A user research maturity model is a framework for diagnosing how systematically your organization uses customer research to drive decisions, and what specifically needs to change to advance. The most influential version, Nielsen Norman Group's six-stage model, ranges from "Absent" (UX is invisible) to "User-Driven" (research shapes strategy). For most product organizations, the practical 5-stage version below — Ad-Hoc, Reactive, Operational, Embedded, Strategic — is more actionable, because it maps to changes you can actually make this quarter.

This guide gives you the assessment rubric to place your team on the curve, the symptoms of being stuck at each stage, and the specific moves that advance you to the next level. AI-native research platforms like Koji compress the time it takes to climb — most teams can move two stages in a year when the operational friction (recruiting, scheduling, analysis) collapses to near zero.

TL;DR — the 5-stage framework

| Stage | One-line description | Research cadence | Who runs research |
|---|---|---|---|
| 1. Ad-Hoc | Research happens when someone insists | <1 study / quarter | Whoever has time |
| 2. Reactive | Research validates decisions already made | 1–2 / quarter | A part-time PM or designer |
| 3. Operational | Research has its own process and people | Monthly+ | Dedicated researcher(s) |
| 4. Embedded | Every product decision has research input | Weekly continuous | Researchers + democratized teams |
| 5. Strategic | Research shapes roadmap and strategy | Always-on | Whole org, with research leadership |

Most teams are stuck at Stage 2 or 3. The leap from Stage 3 to Stage 4 is the one most worth making — it is where research starts changing the product instead of describing it.

Why a maturity model matters

Research budgets are easy to defend after they have produced clear wins. They are hard to defend before. A maturity model gives leadership a vocabulary for two things that are otherwise hard to articulate:

  1. Where we are. A specific stage with specific symptoms ("we ship and then go ask if users like it") is harder to argue with than "we should do more research."
  2. What we should invest in next. Climbing the model is a sequence — you cannot skip stages. Knowing where you are tells you what to fix first.

According to Nielsen Norman Group's research on UX maturity, organizations advance through stages "Absent, Limited, Emergent, Structured, Integrated, and User-Driven," and the six factors that move them up are strategy, culture, process, outcomes, leadership support, and longevity. The model below collapses those into five stages with sharper operational criteria, because most product teams find the 6-stage version too granular for self-assessment.

The five stages

Stage 1: Ad-Hoc

Symptoms: Research happens when an executive demands it or a launch goes badly. There is no research backlog, no participant pipeline, and no synthesis discipline. Insights live in the head of whoever did the study and disappear when they leave.

What's missing: A standing assumption that decisions deserve evidence. Most Stage-1 organizations are not against research — they have just never built the muscle.

Telltale quote: "We should probably talk to some users about this before launch."

The path out: Pick a single recurring research question (e.g., "why do new signups churn in week one?") and commit to a small monthly study answering it. Create one reusable artifact — a Slack channel, a Notion page — where every insight is filed. The win is consistency, not volume.

Stage 2: Reactive

Symptoms: Research is run, but mostly to validate decisions that have already been made. The output is justification, not direction. Studies are typically usability tests on near-final designs and quick surveys after launch.

What's missing: Research that is upstream of design decisions. At Stage 2, the team uses research to confirm what it already planned to do — so research can never disagree with the plan, and therefore never changes the product.

Telltale quote: "Can you run a quick test to make sure this is fine?"

The path out: Move at least one study per cycle to before the design phase. Discovery interviews, Mom Test conversations, problem-space exploration. The criterion: if the study cannot change the design direction, it is not real research.

Stage 3: Operational

Symptoms: Research has its own roster, recruiting flow, study templates, and quarterly cadence. There is at least one full-time researcher (or a dedicated PM-researcher hybrid). Stakeholders submit research requests through an intake system.

What's missing: Speed. Stage-3 organizations have rigor but not velocity — every study takes 4–8 weeks from request to insight, which means the questions move faster than the answers. Research becomes a bottleneck the rest of the org learns to route around.

Telltale quote: "Can we get this added to next quarter's research roadmap?"

The path out: Two parallel investments. First, research democratization: give PMs, designers, and CSMs the tooling and templates to run their own routine studies, with researchers reviewing for quality. Second, AI-native tooling: replace the 6-week study cycle with a 6-day one by automating recruiting, moderation, and synthesis.

Stage 4: Embedded

Symptoms: Every product squad runs at least one customer interview per week. Continuous discovery is the default, not the exception. Research insights flow directly into roadmap discussions. Stakeholders bring questions to research instead of being chased for them.

What's missing: Strategic influence. At Stage 4, research informs every decision but rarely initiates one. The roadmap is still set by leadership and merely informed by research, instead of being driven by it.

Telltale quote: "What did this week's interviews tell us about the upcoming release?"

The path out: Invest in research synthesis and storytelling at the executive level. The research function needs to be in roadmap and strategy meetings, not as a service provider but as a contributor. This requires a research lead with the seniority to shape strategy, and infrastructure (a research repository, a clear synthesis cadence, recurring leadership briefings) that makes findings legible to non-researchers.

Stage 5: Strategic

Symptoms: Research is upstream of the roadmap. New product bets are sized using customer evidence, not just market data. Senior leadership cites specific customer interviews in board meetings. The research function reports to the CEO or CPO and has a seat at strategic planning.

What's missing: Nothing structural — Stage 5 is the steady-state goal. The risk at Stage 5 is complacency; mature research orgs need to keep questioning their own methods, expanding into adjacent jobs to be done, and refreshing their participant panels.

Telltale quote: "We're not committing to that bet until we run discovery interviews against our top three customer segments."

Sustaining behavior: Annual customer research strategy review. Researcher career ladders that retain senior talent. Leadership evangelism — every executive can name a recent insight and how it changed a decision.

Self-assessment rubric

For each dimension, score 1 (Stage 1 behavior) to 5 (Stage 5 behavior). The lowest score is your true stage — climbing requires advancing the weakest dimension, not the average.

| Dimension | Stage 1 | Stage 3 | Stage 5 |
|---|---|---|---|
| Cadence | <1 study/qtr | Monthly | Continuous, weekly+ |
| Timing | After launch | Before design | Before strategy |
| Ownership | Whoever has time | Dedicated researcher | Senior research leader |
| Synthesis | Lives in one head | Documented per study | Living repository |
| Influence on roadmap | None | Informs design | Shapes strategy |
| Stakeholder buy-in | "Why bother?" | "Add to backlog" | "We can't decide without this" |

A team scoring 1, 1, 2, 3, 2, 2 across these dimensions is at Stage 1, regardless of the higher scores. Climb the lowest.
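The lowest-score rule is simple enough to express in a few lines of Python. This is a sketch, not part of the model itself — the dimension names come from the rubric above, but the function and its validation are an assumed way to automate the self-assessment:

```python
# Self-assessment helper: your stage is the MINIMUM dimension score,
# not the average -- one weak dimension caps the whole practice.
DIMENSIONS = [
    "cadence", "timing", "ownership",
    "synthesis", "roadmap_influence", "stakeholder_buy_in",
]

def maturity_stage(scores: dict[str, int]) -> int:
    """Return the team's maturity stage (1-5) from per-dimension scores."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    if not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("each score must be between 1 and 5")
    return min(scores.values())

# The example team from the text: scores 1, 1, 2, 3, 2, 2 -> Stage 1.
team = dict(zip(DIMENSIONS, [1, 1, 2, 3, 2, 2]))
print(maturity_stage(team))  # -> 1
```

Using `min` rather than a mean is the whole point of the rubric: averaging would let strong synthesis mask absent leadership buy-in, and the model says the weakest dimension is the one to fix first.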

What stalls progression?

The most common reasons teams plateau, in rough order of frequency:

Operational friction. When recruiting takes 2 weeks and analysis takes another week, the cadence required for Stage 4 (weekly continuous discovery) is mathematically impossible. According to Maze's 2023 Continuous Research Report, 64% of companies now have a democratized research culture to cope with increasing demand — and the bottleneck for the rest is operational, not philosophical.
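The arithmetic behind "mathematically impossible" is easy to make concrete. The recruiting and analysis figures below come from the paragraph above; the one-week fieldwork window and the no-overlap assumption are illustrative simplifications:

```python
# Back-of-envelope throughput check: can a legacy study pipeline
# sustain Stage-4 cadence (~one completed study per week)?
RECRUITING_WEEKS = 2   # from the text: recruiting takes ~2 weeks
ANALYSIS_WEEKS = 1     # ...and analysis takes another week
FIELDWORK_WEEKS = 1    # assumption: running the sessions takes ~1 week

cycle_weeks = RECRUITING_WEEKS + FIELDWORK_WEEKS + ANALYSIS_WEEKS  # 4

# If studies run strictly back to back (no pipelining), a quarter
# (~13 weeks) yields only:
studies_per_quarter = 13 / cycle_weeks
print(f"{studies_per_quarter:.2f} studies/quarter vs. 13 needed for weekly cadence")
```

Even generous pipelining only triples that figure; closing the remaining gap is why the cycle time itself, not researcher effort, is the lever.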

Single point of failure. The team has one researcher who is fully booked on intake. They cannot do strategic work because they cannot turn down tactical requests. Solution: democratize the tactical work to free up the researcher for strategic studies.

No repository. Insights from past studies are not findable, so every new question starts from zero. This is the single biggest preventable waste in research operations.

No leadership champion. Without an executive who articulates why research matters, every budget cycle becomes a fight to justify what is already there. Stage 4+ requires a Chief Customer Officer, CPO, or similar who treats customer evidence as a strategic input.

How AI-native platforms accelerate the climb

The maturity model assumes a 2010s research stack: panels recruited by hand, studies run synchronously, transcripts coded by analysts, reports written manually. In that stack, the operational cost of running research scales nearly linearly with research volume — which means Stage 4 (weekly continuous discovery) is genuinely expensive.

AI-native platforms like Koji change the unit economics. Specifically:

  • Recruiting moves from days to minutes via in-product intercepts and conversational invitations.
  • Moderation runs 24/7 with no human in the loop — interviews complete asynchronously without scheduling.
  • Analysis runs the moment the last interview ends — themes, quotes, and quality scores are pre-aggregated.
  • Synthesis is the human-in-the-loop step, on top of pre-organized inputs instead of raw transcripts.

The practical implication: Stage 4 cadence (3–10 customer interviews per week, per product team) becomes affordable for organizations that are currently at Stage 2. Teams using AI-assisted research tools report 60% faster time-to-insight, and that compression is precisely what makes the leap to embedded research feasible.

For most organizations the biggest bottleneck is not insight quality — it is insight throughput. AI-native tooling does not replace researcher judgment (it cannot, and shouldn't); it replaces the operational drag that keeps researchers stuck on tactical intake instead of strategic studies.

A 12-month progression playbook

If you are at Stage 2 today, here is the sequence to reach Stage 4 in a year:

Month 0–2: Pick three recurring research questions that come up every quarter. Build research interview templates for each. Set up a research repository — even a structured Notion page is enough.

Month 2–4: Move one study per cycle from "validation after design" to "discovery before design." This is the single highest-leverage change in the entire model.

Month 4–6: Adopt an AI-native research platform. The goal is to compress study cycle time from 4 weeks to 4 days.

Month 6–9: Democratize. Give PMs and designers self-serve access to run their own discovery interviews, with researchers in a coaching/QA role. Establish a weekly insights digest to all stakeholders.

Month 9–12: Cement the cadence. Every squad runs at least one interview per week. Research insights appear in roadmap reviews. The research function is asked to opine on strategy, not just tactics.

By month 12 you are at the bottom of Stage 4. Stage 5 takes another 12–24 months and depends primarily on leadership — not tooling.

Related Articles

How to Build a UX Research Repository: The Complete Guide

A research repository transforms scattered insights into a searchable organizational asset. Learn how to build one that teams actually use.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

The Mom Test: How to Talk to Customers Without Being Misled

Learn Rob Fitzpatrick's Mom Test methodology to ask questions that even your mother can't lie to you about.

UX Research Operations (ResearchOps): The Complete Guide to Scaling Your Research Practice

ResearchOps is the infrastructure behind great user research — the people, processes, and tools that let research teams scale their impact without scaling their headcount. Learn what ResearchOps is, its six core pillars, when your team needs it, and how to build a practice from scratch.

How to Get Stakeholder Buy-In for User Research: The Complete 2026 Playbook

A practical, evidence-backed playbook for winning executive and cross-functional support for user research — with templates, ROI math, and modern AI-powered workflows that make research impossible to ignore.

Customer Interview Cadence: How Often Should You Talk to Users? (2026)

Set the right customer interview cadence for your team — from one a week (Teresa Torres' baseline) to daily continuous discovery — and how AI moderation makes higher cadences sustainable.

Continuous Discovery: How to Run Weekly Customer Interviews Without Burning Out

Continuous discovery is the practice of conducting customer interviews every week as part of your normal workflow. This guide explains how to build an always-on research practice that actually scales.