UX Research Deliverables: The Complete Guide to Research Outputs That Drive Action
A comprehensive guide to UX research deliverables — from research reports and user personas to journey maps and insight statements — with guidance on matching deliverables to audiences and using AI to eliminate manual production time.
Bottom line: UX research deliverables are the tangible outputs that transform raw research data into team decisions. From research reports and user personas to journey maps and insight statements, the right deliverable for the right audience is the difference between research that gathers dust and research that ships products. This guide covers every major deliverable type, when to use each, and how modern AI tools are eliminating the hours of manual work that used to make research deliverables a bottleneck.
What Are UX Research Deliverables?
A UX research deliverable is any artifact produced from research that communicates findings to a team, stakeholder, or organization. Deliverables serve three core functions:
- Communicate findings — translate raw data into understandable insights
- Drive alignment — ensure cross-functional teams share the same understanding of user needs
- Inform decisions — provide evidence for product, design, and business choices
The right deliverable depends on three factors: your research method (interviews, surveys, usability tests), your audience (designers, engineers, executives), and your goal (discovery, validation, prioritization).
"Research is only as good as the actions it inspires." — Tomer Sharon, VP User Research & Metrics at Wix
According to Nielsen Norman Group's survey of UX practitioners, the three most commonly created research deliverables are research reports (used by 73% of researchers), personas (63%), and journey maps (57%). Yet despite their prevalence, a significant portion of research findings are never acted upon in product decisions. The gap between insights gathered and insights used is a deliverables problem as much as a research problem.
The Two Categories of Research Deliverables
Process Deliverables (Interim)
These are artifacts produced during the research process to organize and enable good research. They are typically internal-facing and disposable after the study concludes.
- Research plans and study briefs
- Discussion guides and interview guides
- Screener questionnaires
- Consent forms and data handling agreements
- Raw transcripts and session recordings
Insight Deliverables (Final)
These are the outputs of completed research — what you share with stakeholders to inform decisions. They should outlive the individual study and be stored in your research repository.
- Research reports and topline summaries
- User personas
- Customer journey maps
- Insight statements and atomic research nuggets
- Affinity maps and theme clusters
- Video highlight reels
- Presentations and stakeholder readouts
This guide focuses primarily on insight deliverables — the outputs that create organizational change.
The 10 Most Impactful UX Research Deliverables
1. Research Report
What it is: A comprehensive document summarizing methodology, key findings, supporting evidence, and prioritized recommendations.
When to use it: After completing any major research study — interviews, surveys, usability tests, or diary studies.
What to include:
- Executive summary (1 page maximum)
- Methodology overview (sample, questions, approach)
- Key findings with supporting evidence and direct participant quotes
- Recommendations, prioritized by impact
- Appendix with full transcripts or raw data
Common mistake: Writing reports that are too long. Stakeholders read executive summaries. Lead with your most important finding in the first paragraph — not with methodology.
Koji advantage: Koji auto-generates research reports from interview data, including an AI-written summary, theme clusters, structured question charts (scale distributions, choice frequency), and representative quotes — in minutes rather than days. Reports can be published with a shareable link, eliminating the need to build a slide deck for every stakeholder review.
2. User Personas
What it is: A composite representation of a user segment, typically including goals, behaviors, pain points, and usage context.
When to use it: After discovery research, to create shared organizational understanding of who you are building for.
What to include:
- Fictional name and photo
- Core goals and motivations (behavioral, not demographic)
- Frustrations and pain points in their own words
- Workflows and behaviors relevant to your product
- Direct quotes from actual research participants
- Demographic context (secondary to behavioral depth)
Evidence-based vs. assumption-based personas: Personas built from research interviews carry genuine organizational credibility. "Sara, Head of Content at a 50-person SaaS company who spends 4 hours per week reconciling campaign data in spreadsheets because no one trusts the dashboard" drives real product decisions. "Marketing Manager, 35, values efficiency" describes millions of people and drives nothing.
Common mistake: Building personas from internal team assumptions rather than research. Assumption-based personas feel authoritative but systematically mislead teams toward building for themselves rather than actual users.
3. Customer Journey Map
What it is: A visual timeline of a user's experience completing a task, showing actions, emotions, pain points, and opportunities at each stage.
When to use it: When you need to create team-wide empathy for the end-to-end experience, identify friction points across the customer journey, or align cross-functional teams on where users struggle most.
What to include:
- Journey stages and phases
- User actions per stage
- User thoughts, emotions, and mental models at each step
- Pain points and friction moments
- Moments of delight (if present)
- Opportunities for improvement
- Supporting evidence — quotes and data
Koji advantage: Koji's interview data naturally captures the customer journey because participants walk through their entire experience in conversation. This narrative data feeds directly into journey map creation — especially when combined with structured scale questions that capture satisfaction ratings at each stage of the journey.
4. Insight Statements
What it is: A concise, evidence-backed statement that captures a meaningful finding from research. Typically structured as: "We observed [X]. This suggests [Y]. Therefore [Z]."
When to use it: For research repositories, stakeholder Slack updates, Jira ticket descriptions, and anywhere you need to synthesize a finding into a single, portable, actionable statement.
Example:
"We observed that 8 of 12 participants started their checkout on mobile but abandoned before entering payment details. This suggests the payment form has a significant usability problem on small screens. Therefore, we recommend a mobile-specific usability test of the checkout flow before the next release."
Why it matters: Insight statements are the most portable unit of research. They can live in Notion, Slack, Jira, or a research repository. They are the building blocks of good product decisions and the antidote to research that sits in a PDF no one opens.
5. Affinity Map
What it is: A bottom-up clustering of observations, quotes, and data points into themes and categories — typically built collaboratively during a team analysis session.
When to use it: During analysis to organize large volumes of qualitative data before synthesis, particularly after a batch of interviews.
Process:
- Capture every observation on a separate note (physical sticky note or digital equivalent)
- Cluster related observations into groups without pre-deciding categories
- Name each cluster with a descriptive label derived from the data
- Identify relationships and hierarchies between clusters
- Draw conclusions from the patterns that emerge
Affinity mapping is especially powerful after running a batch of Koji interviews — the AI-generated themes provide a starting point, and your team refines them into a shared synthesis.
6. Usability Test Report
What it is: A structured summary of usability testing findings, typically including task completion rates, success criteria, severity ratings, and specific UX issues discovered.
When to use it: After any moderated or unmoderated usability test session.
What to include:
- Test objectives and tasks
- Participant profiles and recruitment criteria
- Issues discovered, each with a severity rating
- Task success and completion rates
- Prioritized recommendation list
Tip: Severity ratings make usability reports actionable. Without them, engineering teams cannot determine what to fix first. Use a standard four-point scale: Critical (blocks task completion), Major (significantly impairs), Minor (causes frustration without blocking), Cosmetic (low-impact appearance issues).
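The four-point scale above can drive prioritization directly. As a minimal sketch (the issue names and data structure are illustrative, not from any particular tool), here is how a team might sort discovered issues so blockers surface first:

```python
# Illustrative sketch: ordering usability issues by the standard
# four-point severity scale. Issue names are hypothetical examples.
SEVERITY_ORDER = {"Critical": 0, "Major": 1, "Minor": 2, "Cosmetic": 3}

issues = [
    {"issue": "Label truncated on small screens", "severity": "Cosmetic"},
    {"issue": "Payment form rejects valid cards", "severity": "Critical"},
    {"issue": "Tooltip text unclear", "severity": "Minor"},
    {"issue": "Search results lack pagination", "severity": "Major"},
]

def prioritize(issues):
    """Return issues sorted Critical -> Cosmetic so teams fix blockers first."""
    return sorted(issues, key=lambda i: SEVERITY_ORDER[i["severity"]])

for issue in prioritize(issues):
    print(f"{issue['severity']:>8}: {issue['issue']}")
```

The explicit ordering map keeps the report's severity labels human-readable while still giving engineering an unambiguous fix-first queue.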
7. Video Highlights Reel
What it is: A short compilation (typically 3-7 minutes) of the most compelling moments from research sessions — multiple users struggling with the same issue, expressing similar frustrations, or describing the same unmet need in their own words.
When to use it: When written reports are not creating enough urgency or empathy in stakeholders who have not experienced the research firsthand.
Why it works: Executives who read "6 of 10 users struggled to find the settings page" respond with mild concern. Executives who watch six participants, in their own words, describe the same confusion and frustration respond with urgency. Video creates empathy that text cannot replicate — and it is nearly impossible for stakeholders to dismiss findings they have witnessed with their own eyes.
8. Research Synthesis Document
What it is: A cross-study document that synthesizes findings from multiple research studies conducted over time into an integrated, evolving picture of user needs, behaviors, and opportunities.
When to use it: Quarterly planning cycles, annual research summaries, or when preparing for a major product initiative that requires integrating evidence from multiple prior research efforts.
What to include:
- Theme clusters that appear consistently across studies
- Confidence levels per theme (high = appeared in 5+ studies; low = 1-2 studies)
- How patterns have evolved over time
- Remaining open questions and research gaps
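The confidence-level heuristic above is simple enough to encode. A minimal sketch follows; note that the article only defines high (5+ studies) and low (1-2 studies), so the "medium" band for 3-4 studies is our assumption:

```python
# Sketch of the cross-study confidence heuristic. "medium" for 3-4
# studies is an assumption; the article defines only high and low.
def theme_confidence(study_count: int) -> str:
    """Map how many studies a theme appeared in to a confidence label."""
    if study_count >= 5:
        return "high"
    if study_count >= 3:
        return "medium"  # assumed band; not defined in the article
    return "low"

# Hypothetical themes with the number of studies each appeared in.
themes = {"dashboard distrust": 6, "mobile checkout friction": 2}
labels = {name: theme_confidence(n) for name, n in themes.items()}
print(labels)  # {'dashboard distrust': 'high', 'mobile checkout friction': 'low'}
```

Tagging each theme this way lets a synthesis document signal at a glance which patterns are safe to build on and which still need validation.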
9. Competitive Analysis Report
What it is: A structured comparison of competing products across defined dimensions, informed by user research about what dimensions actually matter to customers.
When to use it: During product strategy, positioning decisions, and roadmap planning.
Koji advantage: Competitive intelligence interviews run through Koji let you ask customers directly about competing tools — what they value, what frustrates them, what they wish existed. This gives you qualitative depth that feature matrices lack and positioning language you can use directly in marketing.
10. Research Brief (Pre-Study)
What it is: A planning document created before research begins, defining objectives, research questions, methods, participant criteria, and success criteria.
When to use it: Before every research study — without exception. A brief keeps scope contained and teams aligned on what you are trying to learn before anyone starts scheduling interviews.
Koji advantage: Koji's AI consultant generates a complete research brief — including methodology recommendation, interview guide, and structured question suggestions — from a simple description of what you want to learn. Teams that used to spend a day writing briefs can now have one in minutes.
Matching Deliverables to Audiences
The same research findings need different packaging for different audiences:
| Audience | What they need | Best deliverable format |
|---|---|---|
| Product Manager | Prioritized action items, evidence for roadmap | Research report, insight statements |
| Designer | Mental models, pain points, workflows | Personas, journey maps, affinity maps |
| Engineer | Specific issues, edge cases, technical constraints | Usability report, annotated screenshots |
| Executive | Business impact, trends, strategic implications | One-page executive summary, top findings slide |
| Sales / Marketing | Customer language, objections, value perception | VoC quote bank, personas |
Rule of thumb: Every deliverable should answer the question "What should we do next?" If it cannot answer that question, it is documentation, not a deliverable.
How Koji Eliminates the Deliverables Bottleneck
Traditional research deliverables are slow to produce. A research report from 20 interviews requires 2-3 days of manual coding, synthesis, writing, and design. A persona requires cross-study synthesis. A journey map often requires a facilitated workshop with the full team.
This production overhead is why research bottlenecks product teams. The insights exist in the transcripts — the deliverable is the barrier between evidence and action.
Koji eliminates the production bottleneck at every stage:
- Automated research reports: After interviews complete, Koji generates a full report including AI-written theme summaries, quantitative charts for structured questions, representative quotes per theme, and an executive summary. No manual coding or report writing required.
- Structured question data: Koji's six question types — open_ended, scale, single_choice, multiple_choice, ranking, and yes_no — produce clean, chartable data alongside qualitative conversation, giving every deliverable a quantitative evidence layer.
- Real-time insights: As interviews arrive, Koji updates theme clusters and quantitative distributions continuously. Analysis does not wait for the study to close.
- Shareable report links: Published Koji reports can be shared with a single URL — no exporting to PDF, no building a deck, no scheduling a readout meeting. Stakeholders view findings on demand.
- AI Insights Chat: Stakeholders can ask direct questions of the research data — "What did participants say about the pricing page?" — without waiting for a researcher to pull the answer.
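For teams that pipe structured answers into their own analysis, the idea of "clean, chartable data alongside qualitative conversation" can be sketched as follows. The response schema here is illustrative only, not Koji's actual data format:

```python
# Hypothetical sketch: turning structured scale answers collected alongside
# interview conversation into a chartable distribution. The response
# records below are an assumed schema, not an actual export format.
from collections import Counter

responses = [
    {"question_type": "scale", "question": "Satisfaction (1-5)", "answer": 4},
    {"question_type": "scale", "question": "Satisfaction (1-5)", "answer": 5},
    {"question_type": "scale", "question": "Satisfaction (1-5)", "answer": 4},
    {"question_type": "open_ended", "question": "Why?", "answer": "Fast setup"},
]

def scale_distribution(responses, question):
    """Count how often each scale point was chosen for one question."""
    answers = [r["answer"] for r in responses
               if r["question_type"] == "scale" and r["question"] == question]
    return dict(Counter(answers))

print(scale_distribution(responses, "Satisfaction (1-5)"))  # {4: 2, 5: 1}
```

A distribution like this gives every qualitative theme a quantitative evidence layer a stakeholder can chart without re-reading transcripts.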
Teams using Koji report spending significantly less time on analysis and deliverable production compared to manual methods, freeing researchers to focus on interpretation, recommendation, and stakeholder influence rather than production work.
Common Deliverable Mistakes to Avoid
1. Writing for yourself, not your audience: Use language your stakeholders understand. The same finding needs different framing for a product designer versus a VP of Sales. Avoid research jargon in executive presentations.
2. Reporting without recommending: Findings without recommendations are incomplete deliverables. Every significant finding needs a "therefore" statement that specifies what the team should do next.
3. Perfect over timely: A good research report delivered the day before a decision is made is worth more than a perfect one delivered after the team has already committed to a direction. Calibrate effort to stakes and timing.
4. Leaving out the evidence: Unsubstantiated insights get dismissed. Every key finding needs supporting evidence — direct quotes, quantitative data, or video clips that stakeholders can evaluate for themselves.
5. One-size-fits-all formatting: A 30-page PDF works for documentation archives, not executive presentations or Slack updates. Modular deliverables — an executive summary, a detailed report, and a quote bank — that can be recombined for different audiences are far more effective.
Frequently Asked Questions
How long should a UX research report be? For most studies, 5-10 pages is optimal. Lead with a one-page executive summary. Use visuals, charts, and direct quotes to convey findings efficiently. Relegate full transcripts and raw data to an appendix. Stakeholders will read 5 focused pages; they will not read 30 dense ones.
What is the difference between a research deliverable and a design deliverable? Research deliverables communicate what users need, think, and do. Design deliverables (wireframes, prototypes, mockups) communicate proposed solutions. Research informs design. Journey maps and personas appear in both worlds, but they serve different functions depending on whether they summarize research evidence or guide design exploration.
How do I make research deliverables more actionable? Structure every deliverable around decisions. Ask: "What decision does this research inform?" State the recommendation explicitly using concrete language — "Add a search bar to the account settings page" is more actionable than "improve settings discoverability." Assign owners and follow-up timelines.
How do I build research-backed personas? Look for behavioral patterns across participants — goals, frustrations, and workflows — rather than demographic commonalities. Group participants by similar behaviors. Write each persona as a composite of real participants, using their actual language. Validate with 8-15 interviews before treating personas as reliable enough to drive major product decisions.
Should research deliverables be visually designed or plain? Match production quality to audience and purpose. Polished visual deliverables (designed personas, animated journey maps) work well for executive presentations and cross-functional workshops. Plain structured reports work well for internal documentation and engineering teams. Do not over-design at the expense of content quality or delivery speed.
How often should deliverables like personas and journey maps be refreshed? Personas and journey maps should be updated every 12-18 months or whenever significant new research reveals that user behaviors or needs have shifted. Research insight repositories should be updated continuously as new studies complete and new insights are added.
Related Resources
- Structured Questions in AI Interviews — Build studies that produce ready-to-chart quantitative data
- Generating Research Reports — How Koji automates report creation
- Research Synthesis Guide — Combine multiple studies into unified insights
- How to Write Research Insight Statements — The atomic unit of every good deliverable
- User Research Report Template — A ready-to-use template for your next study
- Presenting Research Findings to Stakeholders — Tailor deliverables for every audience type
- How to Analyze Qualitative Data — The analysis process that feeds all deliverables
Koji automates the most labor-intensive research deliverable — the analysis report — so your team receives findings in minutes, not days.