Atomic Research: The Complete Guide to Research Nuggets and Insight Repositories

Learn the atomic research framework developed by Daniel Pidcock. Break research findings into reusable nuggets — observations, evidence, and tags — that prevent insight rot and make your repository searchable across teams.

Bottom line: Atomic research is a framework for breaking down user research findings into their smallest reusable parts — "nuggets" — each containing an observation, supporting evidence, and tags for retrieval. Developed by Daniel Pidcock at Gleanin in 2018, atomic research solves the single biggest problem in user research: insights get buried in PDF reports nobody re-reads. Atomization makes findings searchable, reusable across studies, and accessible to every team member — not just the researcher who ran the study.

The average user research report is read fully by 1.4 people inside an organization and then archived to a shared drive where it dies (Maze, 2023 State of UX Research). Months of expensive interviews, surveys, and synthesis evaporate. Atomic research exists to stop that.

Pidcock first presented atomic research at UX Brighton 2018 after noticing that his team kept re-running studies because nobody could find the answer to questions that had already been researched. The fix was structural: stop publishing reports as the unit of work, and start publishing atoms.

What Is a Research Nugget?

A nugget is the smallest unit of research that still carries meaning. It is not a quote (too small, no context) and not a report (too big, not searchable). A well-formed nugget contains four parts:

  1. Observation — what you saw or heard (a fact about a user or session).
  2. Evidence — the raw source backing the observation (a video clip, transcript excerpt, survey response, analytics event).
  3. Tags — metadata that lets others find this nugget months later (product area, persona, study, theme, severity).
  4. Insight or recommendation (optional) — what the team should do with the observation.

A single hour-long interview might produce 8–15 atomic nuggets. A 60-participant study might produce 200–500 nuggets. The unit is the atom, not the report.
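
As a sketch of what this anatomy looks like as a data structure (the field names below are illustrative, not a standard schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Nugget:
    """One atomic research finding. Field names are illustrative, not a standard."""
    observation: str                      # what we saw or heard, stated as fact
    evidence: list[str]                   # links to clips, transcript lines, screenshots
    tags: list[str]                       # controlled-vocabulary tags for retrieval
    recommendation: Optional[str] = None  # optional: what the team should do about it
```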

Pidcock himself described nuggets as built from four structural components: experiments, facts, insights, and recommendations, a stack he called the EFIR structure. A nearly identical concept was developed concurrently by Tomer Sharon (then at WeWork), who described the "nugget" as the atomic unit of a research insight. Both frameworks share the same goal: durable, retrievable, recombinable findings.

Example: A Bad vs. Atomic Finding

Bad (traditional report finding):

"Many users were confused by the onboarding flow."

That sentence is unsearchable, untagged, and unverifiable. It dies in the report.

Atomic nugget:

  • Observation: 7 of 12 first-time users hesitated for more than 10 seconds on the workspace-naming screen before typing anything.
  • Evidence: Clip references from sessions P03, P05, P07, P08, P09, P11, P12 (video timestamps included).
  • Tags: onboarding, workspace-creation, first-time-user, friction, q1-2026-study, severity-medium
  • Recommendation: Pre-fill workspace name from company domain; show example placeholders.

This nugget is reusable. Six months from now, a designer searching for "onboarding friction" finds it instantly. A PM building a workspace-creation epic links it as evidence. A new researcher avoids re-running the same study.
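
Expressed in the illustrative Nugget structure from earlier, the same finding becomes data rather than prose (the session IDs come from the example above; the clip-link format is hypothetical):

```python
workspace_naming = Nugget(
    observation=("7 of 12 first-time users hesitated for more than 10 seconds "
                 "on the workspace-naming screen before typing anything."),
    evidence=[f"clip:{p}" for p in
              ["P03", "P05", "P07", "P08", "P09", "P11", "P12"]],  # hypothetical link format
    tags=["onboarding", "workspace-creation", "first-time-user",
          "friction", "q1-2026-study", "severity-medium"],
    recommendation=("Pre-fill workspace name from company domain; "
                    "show example placeholders."),
)
```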

Why Atomic Research Beats Traditional Reports

1. Search and Reuse

Reports are siloed by study. Nuggets are tagged by theme. When a designer asks "what do we know about pricing-page abandonment?" a nugget repository surfaces 23 atoms across 8 studies. A report repository returns one PDF where pricing was a subsection.
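
A minimal sketch of the query this enables, reusing the Nugget structure above (real repositories index and rank; a list comprehension makes the point):

```python
def find_by_tags(repository: list[Nugget], *wanted: str) -> list[Nugget]:
    """Return every nugget carrying all requested tags, regardless of source study."""
    return [n for n in repository if set(wanted) <= set(n.tags)]

# "What do we know about onboarding friction?" -- one query, every study.
hits = find_by_tags([workspace_naming], "onboarding", "friction")
```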

2. Cross-Study Synthesis

New patterns emerge when you query nuggets across studies. A "trust" theme might span an onboarding study, a security audit, and an enterprise sales interview series — invisible if each lives in a separate PDF. Repositories like Dovetail, Notably, and Marvin were built specifically to enable this kind of cross-study query.

3. Democratized Access

Product managers, designers, marketers, and customer-success teams can self-serve research. They do not need to find the researcher who ran the study six months ago. Pidcock argues this is the most important benefit: atomic research is fundamentally a research democratization framework, not a storage framework.

"Atomic UX research means research democratization. The process allows anyone to contribute to creating facts, generating insights, and even making recommendations." — Daniel Pidcock, What is Atomic Research? (Prototypr)

4. Insight Decay Resistance

A NN/g 2024 study found that insight half-life inside organizations is roughly 11 months — that is, half of what your team knows about users disappears within a year due to staff turnover, project switching, and lost institutional memory. Atomic repositories with persistent tags slow that decay dramatically because nuggets remain findable long after their authors have left.
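
Read as a half-life, the claim implies the surviving fraction of team knowledge after t months is 0.5^(t/11). A quick check of what that compounds to:

```python
def knowledge_remaining(months: float, half_life_months: float = 11.0) -> float:
    """Fraction of insights still findable after `months`, assuming exponential decay."""
    return 0.5 ** (months / half_life_months)

print(knowledge_remaining(11))  # 0.5   -- half gone within a year, per the cited figure
print(knowledge_remaining(24))  # ~0.22 -- under a quarter survives two years
```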

The Atomic Research Anatomy

Pidcock's canonical model:

EXPERIMENT (what we did)
   ↓
FACT (what we observed)
   ↓
INSIGHT (what it means)
   ↓
RECOMMENDATION (what we should do)

Each layer can be linked many-to-many. One Fact can support multiple Insights. One Insight can drive multiple Recommendations. One Experiment produces many Facts. The structure resembles a knowledge graph more than a folder hierarchy — and that graph is what makes the repository powerful.
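
A minimal sketch of the many-to-many links, with plain dictionaries standing in for a real graph store (all IDs are illustrative):

```python
# fact id -> insight ids it supports; insight id -> recommendation ids it drives
fact_supports: dict[str, set[str]] = {
    "fact-7of12-hesitated": {"ins-naming-unclear"},
}
insight_drives: dict[str, set[str]] = {
    "ins-naming-unclear": {"rec-prefill-name", "rec-placeholder-examples"},
}

def evidence_for(rec_id: str) -> set[str]:
    """Walk the graph backwards: which facts ultimately back this recommendation?"""
    insights = {i for i, recs in insight_drives.items() if rec_id in recs}
    return {f for f, supported in fact_supports.items() if supported & insights}

print(evidence_for("rec-prefill-name"))  # {'fact-7of12-hesitated'}
```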

How to Implement Atomic Research

Step 1: Define Your Nugget Schema

Decide what every nugget must contain. A minimum viable schema:

  • Observation (1–2 sentences, fact-based)
  • Evidence link (timestamped clip, transcript line, screenshot)
  • Source study (study name + date)
  • Tags (3–6 from a controlled vocabulary)
  • Participant ID(s) (anonymized)
  • Severity or impact (optional but useful)
  • Recommendation (optional)

A controlled tag vocabulary is the single most important upfront decision. Free-text tagging produces "onboarding," "Onboarding," "onboard," "first-use," "new-user" — five tags that should be one. Maintain a tag taxonomy and enforce it at intake.
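
One way to enforce that vocabulary mechanically, sketched with a hypothetical alias map (the map itself is the taxonomy decision your team maintains):

```python
# Canonical tags and the free-text variants they absorb (illustrative).
TAG_ALIASES: dict[str, set[str]] = {
    "onboarding": {"onboarding", "onboard", "first-use", "new-user"},
    "friction": {"friction", "confusion", "hesitation"},
}

def canonicalize(raw_tags: list[str]) -> list[str]:
    """Collapse free-text tags onto the controlled vocabulary; reject unknowns."""
    canonical = set()
    for raw in raw_tags:
        t = raw.strip().lower()
        matches = [c for c, aliases in TAG_ALIASES.items() if t in aliases]
        if not matches:
            raise ValueError(f"unknown tag {raw!r}: extend the taxonomy first")
        canonical.add(matches[0])
    return sorted(canonical)

print(canonicalize(["Onboarding", "first-use", "hesitation"]))  # ['friction', 'onboarding']
```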

Step 2: Pick a Repository

Dedicated tools: Dovetail, Notably, Marvin, Condens, EnjoyHQ, Glean.ly (Pidcock's own platform). General-purpose databases: Airtable, Notion. Most teams outgrow the general-purpose options at ~500 nuggets.

Step 3: Atomize Existing Reports (Optional Backfill)

If you have 12 historical reports, you can either start fresh or backfill. Backfilling 12 reports into ~800 nuggets typically takes 2–3 weeks of focused work — worth doing if the studies are still relevant.

Step 4: Tag Discipline at Intake

Nuggets that arrive without tags become unfindable. Build tagging into the synthesis ritual: no nugget enters the repository without a minimum tag set. This is where most teams fail — they treat tagging as administrative overhead instead of the entire point.
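
A sketch of making that rule mechanical rather than aspirational, reusing the Nugget structure from earlier (the thresholds mirror the schema guidelines above):

```python
MIN_TAGS = 3  # lower bound of the 3-6 tag guideline in Step 1

def admit(repository: list[Nugget], nugget: Nugget) -> None:
    """Intake gate: refuse under-tagged or evidence-free nuggets outright."""
    if len(set(nugget.tags)) < MIN_TAGS:
        raise ValueError(f"needs at least {MIN_TAGS} tags, got {sorted(set(nugget.tags))}")
    if not nugget.evidence:
        raise ValueError("needs at least one evidence link")
    repository.append(nugget)
```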

Step 5: Promote and Train

Democratization only works if non-researchers know the repository exists and can use it. Run training sessions; embed search examples in product spec templates; require evidence links in roadmap proposals.

The Catch: Tagging Overhead

The most cited criticism of atomic research is that tagging is slow. A 60-participant study can take a researcher an additional 1.5–2 days of work to atomize and tag properly — work that traditional report writing skips. For teams already stretched, this kills adoption.

This is precisely where AI-native research platforms have changed the economics.

How Koji Automates Atomic Research

Koji is designed around the atomic research principle from the ground up. The platform does not produce monolithic PDF reports — it produces atomized, tagged, searchable findings as a natural output of every study.

AI-Generated Nuggets from Every Interview

When a customer completes an AI-moderated interview on Koji, the system automatically:

  • Extracts atomic observations from each response.
  • Links observations to the source transcript line with timestamps.
  • Auto-tags by question, theme, sentiment, and participant cohort.
  • Computes quality scores (1–5 scale) so high-confidence nuggets surface first.

What used to take a researcher 2 days of tagging per study happens automatically in minutes. Recent industry data shows teams using AI-assisted research tools report 60% faster time-to-insight compared to traditional manual atomization.

Structured Questions Produce Pre-Tagged Quantitative Nuggets

Koji's six structured question types — open_ended, scale, single_choice, multiple_choice, ranking, yes_no — turn quantitative responses into pre-formatted atomic nuggets. A scale question across 200 interviews produces a distribution nugget; a ranking question produces a preference-order nugget; a yes_no question produces a binary-pattern nugget. Each is searchable and reusable, with the underlying data and quotes attached.
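
As an illustration of the idea (a generic sketch, not Koji's internal format), a 1-5 scale question's responses can collapse into a single distribution nugget:

```python
from collections import Counter
from statistics import mean

def scale_to_nugget(question: str, responses: list[int]) -> Nugget:
    """Reduce a 1-5 scale question to one reusable quantitative nugget."""
    dist = dict(sorted(Counter(responses).items()))
    return Nugget(
        observation=(f"{question}: mean {mean(responses):.1f} across "
                     f"{len(responses)} responses; distribution {dist}"),
        evidence=[f"response:{i}" for i, _ in enumerate(responses)],  # hypothetical links
        tags=["scale", "quantitative"],
    )
```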

Cross-Study Insights Chat

Koji's AI insights chat lets any team member query the nugget repository in natural language: "What have customers said about pricing across all studies in the last 6 months?" The system returns ranked nuggets with citations to source interviews. This is atomic research's democratization promise, finally executable without a research operations engineer in the loop.
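
Under the hood this is a rank-and-cite retrieval problem. The naive keyword version sketched below makes the contract visible; production systems presumably use semantic search, but the output shape is the same: ranked nuggets with source citations.

```python
def ask(repository: list[Nugget], query: str, top_k: int = 5) -> list[tuple[float, Nugget]]:
    """Rank nuggets by crude keyword overlap with the question (illustration only)."""
    terms = set(query.lower().split())
    scored = []
    for n in repository:
        text = (n.observation + " " + " ".join(n.tags)).lower()
        score = sum(term in text for term in terms) / len(terms)
        if score:
            scored.append((score, n))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```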

Customizable AI Consultants

For teams with established frameworks (Jobs-to-be-Done, Mom Test, discovery, exploratory, lead-magnet methodologies), Koji's configurable AI consultants generate nuggets aligned with the framework. JTBD studies produce job-statement nuggets; Mom Test interviews produce evidence-vs-opinion nuggets, pre-tagged for the framework you already use.

When Atomic Research Is Worth It (And When It Isn't)

Worth the investment when:

  • Your team runs 4+ studies per quarter.
  • Insights need to be discoverable by non-researchers.
  • Multiple product teams reuse the same customer base.
  • Researcher turnover or org changes risk institutional memory loss.

Skip atomization (use traditional reports) when:

  • You run fewer than 4 studies per year.
  • The audience for findings is one stakeholder you can hand the report to directly.
  • The study is one-off and unlikely to be referenced again.

For most product-led teams above seed stage, atomic research pays back within 2–3 studies. The first study your team skips because "we already answered that" is the moment the framework justifies itself.

Atomic Research Maturity Model

  • Level 1: Reports only. Studies live as PDFs. Searchability: zero.
  • Level 2: Reports + raw clips. Highlights tagged manually. Searchability: low.
  • Level 3: Atomized nuggets in a repository. Controlled tag vocabulary. Searchability: moderate.
  • Level 4: AI-generated nuggets at study time. Cross-study chat queries. Continuous repository growth. Searchability: high.
  • Level 5: Repository becomes a strategic asset. Roadmap proposals require evidence links. Insights re-used across multiple roadmap cycles. Searchability: organizational memory.

Most teams sit at Level 1–2. The economic step-change happens at Level 4, where automation eliminates the tagging tax.
