{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-09T12:04:55.746Z"},"content":[{"type":"documentation","id":"c79b3819-6948-45c9-8d39-a6b628d4e718","slug":"user-research-mistakes","title":"User Research Mistakes: 14 Pitfalls That Sabotage Your Insights (2026)","url":"https://www.koji.so/docs/user-research-mistakes","summary":"A field-tested checklist of the 14 most common user research mistakes that produce misleading insights — from pitching the idea instead of investigating the problem, to leading questions, to ending studies before saturation. Each mistake includes the underlying cause and a concrete fix. AI interview platforms like Koji eliminate roughly half of these mistakes automatically by enforcing neutral phrasing, probing on every open-ended question, and producing quote-backed reports.","content":"**The most damaging user research mistakes are not bad questions — they are structural choices that contaminate every answer.** Pitching the idea before listening, talking to the wrong people, asking about the future instead of the past, leading the witness, and ending the study before saturation are responsible for more failed launches than any single tactical error. The good news is that every mistake on this list is preventable, and a modern AI interview platform like Koji removes about half of them automatically by enforcing neutral phrasing, probing for specifics, and structuring data the moment a conversation ends.\n\nIf you only fix three things this quarter, fix these: stop pitching, stop asking what people *would* do, and start running interviews continuously instead of in big batches. Everything else compounds from there.\n\n## TL;DR — The 14 mistakes at a glance\n\n1. Pitching your idea instead of investigating the problem.\n2. Asking hypothetical \"would you\" questions.\n3. Treating compliments as validation.\n4. Recruiting the wrong participants (friends, fans, or anyone who will say yes).\n5. Running too few interviews and stopping before saturation.\n6. Running too many interviews on the wrong question.\n7. Leading questions and priming.\n8. Multitasking — moderating and note-taking at the same time.\n9. Failing to probe (\"Tell me more about that\") on vague answers.\n10. Letting the loudest stakeholder rewrite the brief mid-study.\n11. Skipping the screener and discovering bias after the fact.\n12. Synthesizing alone without quotes or evidence trails.\n13. Reporting findings as opinions instead of insight statements.\n14. Treating research as a one-off project instead of a continuous habit.\n\n## 1. Pitching your idea before you understand the problem\n\nThe single most common mistake. The moment you describe your concept (\"we are thinking about an app that…\"), the participant stops being a witness and starts being a critic. Their next 30 minutes will be reactions to your pitch instead of facts about their life. Save the demo for a separate study; in discovery, behave like a journalist, not a salesperson.\n\n**Fix:** Open with a non-leading prompt about the relevant area of the participant's life. In Koji's research brief, the AI consultant flags any opening question that names your product or its category and rewrites it before you publish.\n\n## 2. 
Asking hypothetical \"would you\" questions\n\n\"Would you use this?\" \"Would you pay for that?\" Hypothetical questions about the future produce hypothetical answers — usually polite ones. People are very poor at predicting their own behavior. The remedy is to ask about real, recent, specific events: \"Walk me through the last time you tried to solve this.\" Past behavior beats future intent every time.\n\n## 3. Treating compliments as validation\n\nWhen a participant says \"that's a great idea,\" you have learned exactly nothing. Compliments are the noise of polite conversation. Real validation looks like behavior: signed contracts, advance payments, time spent today on a workaround. If the only positive signal is verbal, you have not validated anything.\n\n## 4. Recruiting the wrong participants\n\nResearch quality is bounded by participant quality. Talking to friends, your investors, or random Twitter responders gives you a flattering sample that does not represent your real market. The most expensive mistake is talking to people who would never buy.\n\n**Fix:** Write a tight screener with disqualifying questions tied to the behavior you care about (last purchase date, current tool, role in the buying decision). Koji lets you attach a screener intake form to any study — non-qualifying participants are politely declined before the AI ever starts the interview, so your sample stays clean and you never spend a credit on a bad respondent.\n\n## 5. Stopping before saturation\n\nFive interviews is a famous Nielsen Norman heuristic for usability tests, but generative interviews almost always need more — typically 12–20 per segment before themes stabilize. Teams routinely call a study done after 3 interviews because the qualitative work is painful. The result is confident-sounding insights drawn from a sample too small to detect anything but the most obvious patterns.\n\n## 6. Running too many interviews on the wrong question\n\nThe inverse mistake: a poorly framed research question can absorb fifty interviews and still produce nothing actionable. Before recruiting, write a single sentence — \"By the end of this study we will know X so we can decide Y\" — and check that every question in your guide pays rent against it.\n\n## 7. Leading questions and priming\n\n\"How frustrating was the checkout process?\" already assumes frustration. Better: \"Walk me through the checkout — how did that go?\" Leading language is invisible to the asker and obvious to anyone reading the transcript. Run your guide past a colleague before you publish it, or have an AI tool flag leading verbs (frustrating, easy, intuitive, painful) in the first draft.\n\n## 8. Multitasking — moderating and note-taking at the same time\n\nA human moderator who is typing cannot make eye contact, cannot probe at the right moment, and cannot adapt. The traditional fix is a separate notetaker, which doubles cost and complicates scheduling. Koji removes the trade-off entirely: every text and voice interview is transcribed in real time, the AI moderator handles probing, and you read a structured summary the moment the participant clicks \"submit.\"\n\n## 9. Failing to probe on vague answers\n\n\"It's annoying\" is not a finding — it is a doorway. Behind it sits the actual story: which moment, which workflow, which other tool, which dollar amount, which day of the week. Most human interviewers move on after the first answer because probing feels rude. 
\n\n## 10. Letting stakeholders rewrite the brief mid-study\n\nA marketing lead reads the first three transcripts, gets excited about a new angle, and asks you to \"also ask about pricing.\" Now your study answers two questions badly instead of one well. Lock the brief before you launch and run any new question as a fresh study.\n\n## 11. Skipping the screener\n\nIf anyone with a link can take your interview, you will get bots, randoms, and someone who answered \"yes\" to every screener because they wanted the gift card. Always require a short qualification step. Koji's intake forms support disqualifying logic and consent capture in one place.\n\n## 12. Synthesizing alone without evidence trails\n\nIf your final report says \"users want a faster onboarding\" without a single quote attached, no one in the room will believe it (or worse, they will believe it for the wrong reason). Every claim should link back to participant IDs, timestamps, and verbatim quotes. Koji's research report does this automatically — every theme is backed by quote cards you can click to jump to the exact moment in the transcript.\n\n## 13. Reporting findings as opinions instead of insight statements\n\n\"I think users are confused by the dashboard\" is an opinion. \"12 of 18 interviewed users could not name the primary action on the dashboard within 5 seconds; 9 of those 12 mentioned it was their first session\" is an insight statement. Insight statements survive stakeholder pushback because they have evidence, scope, and a built-in implication for action.\n\n## 14. Treating research as a one-off project\n\nThe most strategic mistake. Teams that batch research into quarterly studies are always answering last quarter's question. Continuous discovery — at least one interview per week — keeps your model of the customer current and turns research into a competitive advantage. With AI moderation you can recruit on Monday and have ten new interviews by Friday without scheduling a single call.\n\n## How Koji eliminates the most common mistakes\n\nMost of the 14 mistakes above are mechanical: they are caused by the limits of human moderators working under time pressure. Koji removes that pressure:\n\n- **Neutral phrasing by default.** The AI consultant rewrites leading or salesy questions before you publish.\n- **Three-layer AI probing** on every open-ended question — no more \"it's annoying\" dead ends.\n- **Six structured question types** (open_ended, scale, single_choice, multiple_choice, ranking, yes_no) so quantitative signals like NPS or feature ranking sit alongside qualitative depth in the same conversation. See the [Structured Questions Guide](/docs/structured-questions-guide) for the full breakdown, and the sketch at the end of this section for how the types combine in one study.\n- **Real-time transcription and theme detection** — no multitasking, no lost notes.\n- **Quote-backed reports** so every insight has evidence attached.\n- **Always-on async interviews** that make weekly cadence trivial.\n\nIf you have been doing research the old way and feeling like the insights never quite hold up, the issue is rarely you. It is usually one of the 14 patterns above — and most of them dissolve when the platform itself enforces good methodology.
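\n\nTo ground the structured-question bullet above, here is a hypothetical study definition. The six type names come straight from this page; everything else (the field names, the screener shape, the disqualifying rule) is invented for illustration and is not Koji's actual schema.\n\n```ts\n// Hypothetical shapes only. The point: a disqualifying screener and\n// mixed structured/open questions can live in a single study.\ntype QuestionType =\n  | 'open_ended' | 'scale' | 'single_choice'\n  | 'multiple_choice' | 'ranking' | 'yes_no';\n\ninterface Question {\n  type: QuestionType;\n  prompt: string;\n  options?: string[]; // choice and ranking types only\n}\n\ninterface Study {\n  screener: Question[];\n  questions: Question[];\n}\n\nconst study: Study = {\n  screener: [\n    // Tied to real behavior, not enthusiasm (see mistakes 4 and 11).\n    { type: 'yes_no', prompt: 'Have you paid for a research tool in the last 6 months?' },\n  ],\n  questions: [\n    { type: 'open_ended', prompt: 'Walk me through the last customer interview you ran.' },\n    { type: 'scale', prompt: 'How likely are you to recommend your current tool? (0-10)' },\n    {\n      type: 'ranking',\n      prompt: 'Rank these steps by how much time they cost you.',\n      options: ['recruiting', 'scheduling', 'moderating', 'synthesis'],\n    },\n  ],\n};\n```\n\nThe open-ended item carries the depth; the scale and ranking items give the final report numbers to anchor the quotes against.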
\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the six question types Koji combines in every study\n- [Research Bias Guide](/docs/research-bias-guide) — the cognitive biases that corrupt user research and how to control for them\n- [Avoiding Bias in Research Interviews](/docs/avoiding-bias-in-interviews) — practical phrasing patterns to keep your guide neutral\n- [The Mom Test Methodology](/docs/mom-test-methodology) — Rob Fitzpatrick's rules for honest customer conversations\n- [How Many User Interviews Do You Need?](/docs/how-many-user-interviews) — the sample size guide that tells you when to stop\n- [Continuous Discovery: Weekly Customer Interviews](/docs/continuous-discovery-user-research) — turn research from a project into a habit","category":"Research Methods","lastModified":"2026-05-09T03:18:13.015464+00:00","metaTitle":"14 User Research Mistakes That Sabotage Your Insights (2026 Guide)","metaDescription":"The 14 most common user research mistakes — pitching, leading questions, hypothetical phrasing, undersampling, and more — and how AI interviews automatically prevent half of them.","keywords":["user research mistakes","ux research pitfalls","common research mistakes","customer interview mistakes","user research errors","user research best practices","interview mistakes","research methodology mistakes"],"aiSummary":"A field-tested checklist of the 14 most common user research mistakes that produce misleading insights — from pitching the idea instead of investigating the problem, to leading questions, to ending studies before saturation. Each mistake includes the underlying cause and a concrete fix. AI interview platforms like Koji eliminate roughly half of these mistakes automatically by enforcing neutral phrasing, probing on every open-ended question, and producing quote-backed reports.","aiPrerequisites":["ux-research-process","ux-research-methods-guide"],"aiLearningOutcomes":["Identify the 14 most damaging user research mistakes","Recognize leading questions and hypothetical phrasing in your interview guide","Set sample sizes that reach data saturation","Use AI moderation to eliminate multitasking and probing failures","Replace one-off research projects with continuous weekly cadence"],"aiDifficulty":"beginner","aiEstimatedTime":"11 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}