{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-04-29T09:44:46.779Z"},"content":[{"type":"documentation","id":"d37be63d-bf59-452a-8524-4bc2fadb7f7a","slug":"how-to-conduct-user-interviews","title":"How to Conduct User Interviews: The Complete Step-by-Step Guide","url":"https://www.koji.so/docs/how-to-conduct-user-interviews","summary":"Complete step-by-step guide covering planning, discussion guide writing, participant recruitment, interview facilitation, and qualitative analysis including data saturation and the 5-user rule.","content":"\nUser interviews are one-on-one conversations with real users to understand their needs, motivations, and behaviors. Done right, they're the most powerful tool in a researcher's toolkit. This guide walks you through every step—from writing your research goals through synthesizing your findings—and shows how modern AI tools are compressing weeks of research into hours.\n\n## What Are User Interviews?\n\nA user interview is a qualitative research method where a researcher has a structured or semi-structured conversation with one participant at a time. 
The goal is to understand the participant's world: their behaviors, goals, frustrations, mental models, and how they currently solve problems.\n\nUnlike surveys that measure *what* people do at scale, user interviews reveal *why* they do it—the motivations, contexts, and stories that explain human behavior.\n\n**Common interview types:**\n- **Generative interviews**: Discover unknown problems and unmet needs (best in early discovery)\n- **Evaluative interviews**: Assess whether your proposed solution addresses a real problem\n- **Jobs-to-be-done interviews**: Understand what \"job\" users are hiring your product to perform\n- **Contextual inquiry**: Interview users in their natural environment while they work\n\n## Why User Interviews Matter\n\nThe evidence for user research is clear:\n\n- Testing with just **5 users reveals approximately 85% of usability problems** in a design. The first user alone surfaces 31% of issues (Jakob Nielsen, Nielsen Norman Group).\n- Teams using iterative research cycles—three 5-user studies vs. one 15-user study—get dramatically better product outcomes because they fix problems between rounds. As Nielsen wrote: *\"The ultimate user experience is improved much more by 3 studies with 5 users each than by a single monster study with 15 users.\"*\n- **42% of startups fail because they build something nobody wants** (CB Insights)—the single largest cause of startup failure. User interviews are the primary defense against this.\n- Companies that prioritize customer feedback programs grow revenues **4–8% faster** than competitors (Bain & Company).\n- **70% of software features are rarely or never used** (Microsoft product data). Discovery interviews help you build the right 30%.\n\nAs Nielsen Norman Group researchers Maria Rosala and Kara Pernice wrote: *\"When performed well, user interviews provide in-depth insight into who your users are, their lives, experiences, and challenges. 
Learning these things helps teams identify solutions to make users' lives easier.\"*\n\n## When to Use User Interviews\n\n✅ **Use interviews when:**\n- You're in early discovery and don't yet understand the problem space\n- You need to understand users' mental models and decision-making\n- You want to uncover unmet needs before designing a solution\n- You need to explain a surprising quantitative pattern (\"Why is cart abandonment 40% at Step 3?\")\n- You're building personas or journey maps\n- You need to validate that a problem is real and worth solving\n\n❌ **Don't rely on interviews alone when:**\n- You need statistically significant data (complement with surveys or analytics)\n- You want to measure performance against benchmarks (add quantitative methods)\n- You need to know *how many* users have a problem, not just *why* they have it\n\n## Phase 1: Plan Your Study\n\n### Define Clear Research Goals\n\nVague goals produce vague insights. Before scheduling a single interview, define specific questions your research must answer.\n\n**Weak goals:**\n- \"Learn about our users\"\n- \"Understand why people use the app\"\n\n**Strong goals:**\n- \"Understand what triggers someone to start evaluating project management software\"\n- \"Identify the top 3 friction points in the onboarding flow\"\n- \"Discover what workarounds users have invented to compensate for missing features\"\n\nEach goal should directly inform a decision your team is about to make.\n\n### Identify Your Target Participants\n\nUser interviews derive their power from talking to the *right* people. 
Focus on behavioral criteria, not demographics:\n\n- **Required experience**: \"Must have evaluated or switched project management tools in the last 6 months\"\n- **Behavior of interest**: \"People who have tried and abandoned multiple checkout flows\"\n- **Screening question**: \"Have you signed up for a new SaaS tool in the last 3 months?\"\n\nWrite a screener survey to filter candidates before scheduling. This protects your research budget and ensures valid data. See the [Research Screener Questions guide](/docs/research-screener-questions) for templates.\n\n## Phase 2: Write Your Discussion Guide\n\nAn interview guide is NOT a script—it is a flexible framework. You do not read questions verbatim; you use them as landmarks while following the participant's natural story.\n\n### Guide Structure\n\n**1. Warm-up (5 minutes)**\nEasy, rapport-building questions with no wrong answers:\n- \"Tell me a little about yourself and your role\"\n- \"Walk me through a typical day at work\"\n\n**2. Context-setting (10 minutes)**\nUnderstand their current situation and behaviors:\n- \"How do you currently handle [the problem area]?\"\n- \"Tell me about the last time you did [Y]\"\n- \"What tools or methods do you use for [Z]?\"\n\n**3. Core topic exploration (25–30 minutes)**\nOpen-ended questions with prepared follow-up probes:\n- \"Tell me about a recent time when [core topic]. What happened?\"\n- \"What is the most frustrating part of [process]?\"\n- \"Walk me through exactly what you did when [event] occurred\"\n\n**4. Probing follow-ups**\nUse freely throughout:\n- \"Tell me more about that\"\n- \"What happened next?\"\n- \"How did that make you feel?\"\n- \"Why did that matter to you?\"\n- \"Can you give me a specific example?\"\n\n**5. 
Closing (5 minutes)**\n- \"Is there anything else you think I should know?\"\n- \"Is there someone else I should talk to?\"\n\n### Question Principles\n\n- Ask about **specific past events**, not general habits or hypotheticals\n- Use **open-ended questions** that invite stories, not yes/no answers\n- **Avoid leading questions** that embed assumptions\n- Keep questions **short and single-barreled**—never combine two questions into one\n\n**Always pilot your guide** with one colleague before launch. This catches confusing phrasing and calibrates timing.\n\n## Phase 3: Recruit Participants\n\nFor most research goals, recruit **5–8 participants** per distinct user segment. The 5-user sweet spot for cost-effectiveness is well established—five users reveal roughly 85% of usability issues at a fraction of the cost of a larger study.\n\n**Recruitment channels:**\n- **Your customer list**: Email existing users with a brief invite and incentive offer\n- **CRM segmentation**: Filter by behavior—churned users, power users, recent signups\n- **Research panels**: Respondent.io, User Interviews, Prolific\n- **LinkedIn**: Ideal for B2B research targeting specific job titles or industries\n- **Social communities**: Relevant Slack groups, subreddits, or LinkedIn communities\n\n**Incentives**: Standard rates are $50–$150 per 45-minute session for consumers; $150–$300 for B2B professionals. Always compensate fairly—it signals respect and dramatically improves show rates.\n\n## Phase 4: Conduct the Interview\n\n### Setup Checklist\n- Record the session (with explicit consent)\n- Have a second person take notes so the interviewer can focus on listening\n- Use a quiet, distraction-free environment\n- For remote: camera on, good lighting, test audio first\n\n### The Interview Flow\n\n**Opening (5 min)**: Welcome the participant, explain the purpose, clarify that you are exploring the problem—not testing them—and ask for consent to record.\n\n**Warm-up (5 min)**: Easy questions to build comfort. 
Slow your speech and let natural pauses breathe.\n\n**Core exploration (30–35 min)**: Follow your guide but prioritize interesting threads over the script. The best insights often come from unexpected detours.\n\n**Active listening cues**: Lean in, nod, use brief verbal acknowledgments (\"I see,\" \"go on,\" \"interesting\") without endorsing or judging. Let silences hang—participants often fill them with their most revealing thoughts.\n\n**Closing (5 min)**: Ask if there is anything else they wanted to cover. Thank them and explain how findings will be used.\n\n### What Great Interviewers Do\n\n- Listen 70%, talk 30%\n- Ask \"what happened?\" not \"what would you do?\"\n- Follow up on emotion: \"You mentioned that was frustrating—tell me more\"\n- Never pitch or describe the product you are building\n- Stay genuinely curious—the best researchers are always a little surprised\n\n## Phase 5: Analysis and Synthesis\n\n### How Many Interviews Before Saturation?\n\n**Data saturation** is reached when new interviews stop surfacing new themes. For most research questions this occurs between 8 and 15 interviews for a homogeneous population. For heterogeneous groups or complex topics, plan for 15–20.\n\n**Run iteratively**: after 5 interviews, pause to review patterns. If major new themes keep emerging, continue. If findings are converging, you likely have enough.\n\n### Analysis Process\n\n1. **Transcribe** interviews (AI transcription tools save hours)\n2. **Code** for themes: highlight quotes representing needs, pain points, behaviors, mental models\n3. **Cluster** codes into themes using affinity mapping\n4. **Identify patterns** across participants—what is universal vs. idiosyncratic?\n5. 
**Synthesize** into insight statements, personas, or journey maps\n\nLook for:\n- **Jobs to be done**: What task is the user ultimately trying to accomplish?\n- **Pains**: What is blocking or frustrating them?\n- **Workarounds**: What hacks have they invented to solve their problem?\n- **Emotional peaks**: What moments carry the most emotional weight?\n\n## Common Mistakes to Avoid\n\n| Mistake | What Goes Wrong | Fix |\n|---------|----------------|-----|\n| Asking hypotheticals | \"Would you use X?\" gets false positives | Ask \"Tell me about the last time you...\" |\n| Skipping the pilot | Guide questions are confusing or too long | Pilot with 1 colleague before launch |\n| No probing | Surface-level insights only | Build the \"tell me more\" reflex |\n| Researcher bias | Nodding at \"good\" answers shapes responses | Stay neutral; reward all answers equally |\n| Solo interviewing | Detail lost while facilitating | Always pair an interviewer with a notetaker |\n| Wrong participants | Biased or unrepresentative data | Use a screener with behavioral criteria |\n\n## The Modern Approach: AI-Moderated Interviews with Koji\n\nTraditional user interviews create a painful bottleneck: recruiting, scheduling, transcribing, and coding can take 2–3 weeks from brief to insight. 
Koji is an AI-native research platform that automates the time-consuming parts while preserving conversational depth.\n\n**How Koji changes the process:**\n\n- **Automated moderation**: Koji's AI conducts interviews 24/7—no scheduling required, participants answer when it suits them\n- **Intelligent probing**: When a participant mentions a pain point, Koji follows up contextually, drilling deeper the way a skilled interviewer would\n- **Structured + open questions**: Mix open-ended discovery questions with [6 structured question types](/docs/structured-questions-guide)—including scale, single choice, multiple choice, ranking, and yes/no—to capture both qualitative depth and quantitative patterns in one study\n- **Automatic thematic analysis**: Koji identifies themes across all interviews automatically, no manual coding required\n- **Real-time reporting**: See insights emerge as interviews complete, not days later\n- **Voice and text modes**: Voice interviews for natural conversational depth, text interviews for async global reach\n\nWhile traditional approaches might take 2–3 weeks from brief to insights, Koji users typically go from question to shareable report in under 48 hours. 
Teams using AI-assisted research report 60% faster time-to-insight compared to traditional manual methods.\n\n## Related Resources\n\n- [AI Interview Questions Generator: Write Better Questions Faster](/docs/ai-interview-questions-generator)\n- [Structured Questions Guide: 6 Question Types for Better Research](/docs/structured-questions-guide)\n- [Research Screener Questions: Finding the Right Participants](/docs/research-screener-questions)\n- [How Koji's AI Follow-Up Probing Works](/docs/ai-probing-guide)\n- [Note-Taking in User Research](/docs/note-taking-user-research)\n- [Research Interview Template Library](/docs/research-interview-templates)\n","category":"Research Methods","lastModified":"2026-04-27T03:19:17.380107+00:00","metaTitle":"How to Conduct User Interviews: The Complete Guide (2026)","metaDescription":"Step-by-step guide to conducting user interviews: planning, discussion guide writing, recruiting, facilitation, and analysis. Includes Nielsen's 5-user rule, sample questions, and AI-powered approaches.","keywords":["how to conduct user interviews","user interview guide","conducting user interviews","user research interviews","discussion guide","user interview best practices","qualitative research interviews"],"aiSummary":"Complete step-by-step guide covering planning, discussion guide writing, participant recruitment, interview facilitation, and qualitative analysis including data saturation and the 5-user rule.","aiPrerequisites":["Basic familiarity with product or UX concepts","Understanding of what user research is"],"aiLearningOutcomes":["Plan a user interview study from scratch","Write an effective discussion guide","Recruit and screen participants","Conduct interviews with minimal bias","Analyze findings and identify themes","Determine the right sample size for qualitative research"],"aiDifficulty":"beginner","aiEstimatedTime":"20 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}