{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-02T15:24:13.589Z"},"content":[{"type":"documentation","id":"33509561-26d6-4245-823c-517c54f22308","slug":"how-might-we-questions","title":"How Might We Questions: The Complete Framework for Turning Insights Into Innovation Opportunities","url":"https://www.koji.so/docs/how-might-we-questions","summary":"How Might We (HMW) questions are the bridge between research insight and ideation. This guide covers the origin (Sid Parnes 1967, Min Basadur at P&G, IDEO, Google), the linguistic logic of why HMW works, the Stanford d.school POV → HMW pipeline, the seven HMW patterns with examples, common mistakes, scope rubric, real-world case studies (Coast soap, IDEO shopping cart, Project Oxygen, Airbnb), and how AI-moderated research (Koji) produces sharper HMWs by grounding them in 100+ real interviews instead of three stakeholder anecdotes.","content":"## Answer First: Why HMW Questions Are the Hinge Between Research and Innovation\n\nA How Might We (HMW) question is a short, deliberately phrased question that turns a research insight into an actionable design challenge. The format is simple — three words that change how a team thinks: **How Might We…?** Used by IDEO, Google, Facebook, P&G, Stanford d.school, and the entire modern design-thinking movement, HMW questions are the hinge between two phases: the research phase that produces insight, and the ideation phase that produces solutions.\n\nGet the HMW wrong, and your team brainstorms the wrong problem for weeks. 
Get it right, and a single 30-minute ideation session can surface hundreds of testable ideas — which is exactly what happened to Procter & Gamble in the 1970s, when the original HMW question created what became the Coast soap brand.\n\nThis guide covers the origin of HMW, the linguistic mechanics that make it work, the seven recurring HMW patterns, the rubric for scoping a good question, real-world case studies, and how AI-native research (the kind Koji enables) feeds the entire pipeline so your HMWs are grounded in actual customer evidence rather than internal opinion.\n\n---\n\n## The Origin: From a Conference Hotel Room to Every Design-Thinking Workshop on Earth\n\nThe \"How Might We\" phrasing was originated by **Sid Parnes** and codified in his 1967 *Creative Behavior Guidebook*, which introduced \"invitation stems\" — phrases like \"How might we…\" and \"In what ways might we…\" that prime the brain to defer judgment and generate options.\n\nThe phrase was carried into industry by **Min Basadur**, a creative-process researcher who joined Procter & Gamble as a creative manager in 1969. The story is famous in innovation circles: P&G's green-stripe deodorant soap was being outsold by a competitor, and the team had been stuck for weeks on the question *\"How can we make a better green-stripe bar?\"* Basadur reframed it as *\"How might we create a more refreshing soap of our own?\"* — and the team generated hundreds of ideas in a single afternoon, eventually launching the Coast brand, which captured significant market share.\n\n> \"The 'how' part assumes there are solutions out there — it provides creative confidence. 'Might' says we can put ideas out there that might work or might not — either way, it's OK. 
And the 'we' part says we're going to do it together and build on each other's ideas.\"\n> — Tim Brown, former CEO, IDEO\n\nThe technique migrated from P&G to IDEO in the 1990s, where Charles Warren and David Kelley made it the default ideation kickoff. From IDEO it spread to Google (where Charles Warren brought it as Senior UX Designer in 2008), then to Facebook, then to Stanford's d.school, and finally into the global design-thinking canon.\n\n**Warren Berger**, author of *A More Beautiful Question*, calls HMW *\"the secret phrase top innovators use,\"* and Harvard Business Review has published it under that exact title. There is a reason every major innovation team in the world — from IDEO to Stripe — opens ideation with these three words.\n\n---\n\n## Why \"How Might We\" Works: The Linguistic Logic\n\nEvery word in the phrase does specific cognitive work. Substituting any of them weakens the question.\n\n| Word | What It Does | What Happens If You Drop It |\n|---|---|---|\n| **How** | Assumes a solution exists and frames the work as discovery, not debate | \"Can we…?\" invites yes/no debate instead of generating options |\n| **Might** | Defers judgment — ideas can be wrong, partial, or absurd | \"Should we…?\" or \"Will we…?\" triggers premature evaluation and self-censorship |\n| **We** | Frames ideation as a collaborative, additive activity | \"I\" creates ownership pressure; \"you\" assigns responsibility outside the room |\n\nThe combination primes what psychologists call **divergent thinking** — a mode in which the brain generates a higher quantity and broader variety of options before any filtering occurs. Research on group ideation consistently shows that quantity-first prompts produce higher-quality final ideas than quality-first prompts, because the best idea is rarely the first idea.\n\n---\n\n## The HMW Framework: From Insight to Opportunity\n\nA strong HMW question is not invented in a brainstorm. 
It is *derived* from research evidence using the Stanford d.school **POV (Point of View)** template:\n\n> **[USER]** needs **[USER NEED]** because **[SURPRISING INSIGHT]**\n\nYou translate the POV into an HMW by recasting the insight as a generative question. Example flow:\n\n**Research observation:** \"First-time SaaS users abandon onboarding when asked to invite teammates before they have seen any product value.\"\n\n**POV statement:** *Solo SaaS evaluators need a way to experience product value before committing colleagues, because invitation requests feel like a demand for political capital they have not yet decided to spend.*\n\n**HMW questions derived:**\n- How might we let solo evaluators experience the full value of the product without inviting anyone?\n- How might we make inviting a teammate feel like sharing a win rather than asking a favor?\n- How might we postpone collaboration setup until the user has had their first \"aha\" moment?\n\nEach HMW takes the same insight and points it in a different direction — that is the framework working as designed. You generate **multiple HMWs from a single insight**, then pick the most generative one for ideation.\n\n---\n\n## The 7 Patterns of Strong HMW Questions\n\nIDEO and Stanford d.school document seven recurring patterns. Strong HMW practice is to generate one HMW from each pattern for any given insight — you almost always discover a more interesting framing in a pattern you didn't expect.\n\n**1. Amp up the good** — *How might we use [the user's good behavior] to [achieve the goal]?*\n> How might we use first-time users' enthusiasm to surface the most useful features?\n\n**2. Remove the bad** — *How might we remove [the friction] from [the experience]?*\n> How might we remove the need for a credit card during the trial?\n\n**3. Explore the opposite** — *How might we make [opposite of expected solution] true?*\n> How might we make abandoning the product feel like a small loss instead of a relief?\n\n**4. 
Question an assumption** — *How might we [achieve the goal] without [the assumed prerequisite]?*\n> How might we generate qualitative insight without scheduling a single live interview?\n\n**5. Identify unexpected resources** — *How might we use [unconventional resource] to [achieve the goal]?*\n> How might we use existing customer support transcripts as research data?\n\n**6. Create an analogy from need or context** — *How might we make [the experience] feel like [analogous experience]?*\n> How might we make data analysis feel like having a conversation with a colleague?\n\n**7. Play with scale** — *How might we [achieve the goal] for one person? For a million? For zero?*\n> How might we run continuous research for a team of one PM?\n\nWhen teams generate HMWs across all seven patterns and then vote, the winning HMW is almost never the obvious \"remove the bad\" framing. The non-obvious patterns are where breakthrough framings hide.\n\n---\n\n## Common HMW Mistakes (And How to Fix Them)\n\n| Mistake | Example | Fix |\n|---|---|---|\n| **Embeds the solution** | How might we add a chatbot to onboarding? | How might we help first-time users get unstuck without a human? |\n| **Too narrow** | How might we increase trial conversion by 5%? | How might we make trial users feel they've already won before the trial ends? |\n| **Too broad** | How might we delight customers? | How might we delight first-time enterprise buyers in their first hour? |\n| **Negatively framed** | How might we stop users from churning? | How might we make month two more valuable than month one? |\n| **Single-perspective** | How might we sell more seats? | How might we make adding a teammate the most rewarding action in the product? |\n| **Not grounded in evidence** | How might we be more innovative? | (Go back and do research first — there is no insight here to act on.) |\n\nTim Brown's rule of thumb is the simplest scope check: a good HMW is *too big to answer in five minutes and too small to answer in five years*. 
If you cannot start ideating immediately, your HMW is too broad. If you can answer it without ideating at all, it is too narrow.\n\n---\n\n## Real-World HMW Case Studies\n\n**Procter & Gamble — Coast soap (1970s).** Original framing: \"How can we make a better green-stripe bar?\" Reframed by Min Basadur to \"How might we create a more refreshing soap of our own?\" The reframe produced hundreds of ideas in a single session, ultimately launching Coast — a brand still on shelves more than 50 years later.\n\n**IDEO — Shopping cart redesign (1999).** ABC News's *Nightline* commissioned IDEO to redesign the supermarket shopping cart in five days. The team's opening HMW: *\"How might we make shopping safer, more efficient, and more fun?\"* The resulting prototype — modular baskets, child seats, scanner integration — became one of the most cited examples of design-thinking output and is widely credited with bringing design thinking into mainstream business consciousness.\n\n**Google — Project Oxygen.** When Google's People Operations team set out to study management, they reframed *\"Are managers necessary?\"* into *\"How might we identify what behaviors distinguish Google's best managers?\"* That HMW shift turned a defensive yes/no debate into a 10-year research program that produced the now-famous \"Eight Behaviors of Great Managers\" list.\n\n**Airbnb — Belong Anywhere (2014).** During the rebrand, Airbnb's design team reframed *\"How might we get more bookings?\"* into *\"How might we make travelers feel they belong somewhere even when they're away from home?\"* That HMW reshaped the entire brand, product, and host-onboarding strategy, contributing to the company's expansion from a booking site into a global community brand.\n\nIn each case, the HMW reframe was the moment a stuck team became unstuck.\n\n---\n\n## The Modern Approach: AI-Native Research as the HMW Engine\n\nTraditional HMW work has a hidden weakness — the upstream research. 
Most teams generate HMWs from a handful of stakeholder interviews or last quarter's anecdotes, which is why so many ideation sessions feel like internal opinion exchange. **A great HMW is only as good as the insight beneath it.**\n\nThis is where AI-native research platforms like Koji change the economics of design thinking. Instead of running 8–10 manual customer interviews over six weeks before each design sprint, teams can run 100+ AI-moderated voice interviews in 48–72 hours and get back a synthesized POV-ready theme set the same week.\n\n**Why AI interviews produce sharper HMWs:**\n- **Volume of evidence.** A POV like \"users abandon onboarding when asked to invite teammates\" lands differently when it's grounded in 87 real interviews instead of three.\n- **Direct quote capture.** Koji surfaces verbatim quotes from voice interviews, so the \"surprising insight\" half of your POV is anchored in your customer's actual language — which translates directly into HMW phrasing.\n- **Mixed methods in one study.** Koji's [structured questions](/docs/structured-questions-guide) — open_ended, scale, single_choice, multiple_choice, ranking, and yes_no — mean a single 7-question study can produce both qualitative themes (for the insight) and quantitative weight (for prioritizing which HMW to ideate against first).\n- **Continuous, not episodic.** Because AI moderation runs 24/7, you can re-validate an HMW the moment ideation closes — replacing the slow research-ideate-build-research loop with a continuous discovery cadence.\n\nTeams using AI-assisted qualitative research report 60–80% faster time-to-insight than teams running fully manual studies, which means HMWs go from being a once-a-quarter design-sprint artifact to a weekly habit.\n\n---\n\n## Getting Started: Your First HMW Workshop\n\n1. **Run discovery research first.** Generate at least 10–15 user observations grounded in real conversations. 
(If you don't have time for manual interviews, run an AI-moderated study in parallel — 30–50 voice interviews in two days is enough for a strong POV.)\n2. **Cluster observations into 3–5 POV statements.** Use the [USER] needs [NEED] because [INSIGHT] template. The \"because\" half should make a stakeholder pause — if it's obvious, your insight isn't deep enough.\n3. **Generate 3–5 HMWs per POV using the seven patterns.** Don't stop at the first plausible framing. Force yourself through \"amp the good,\" \"explore the opposite,\" and \"question the assumption\" before you settle.\n4. **Apply the scope check.** Each HMW should be too big to answer in five minutes, too small to answer in five years.\n5. **Vote on the most generative HMW.** Pick the one your team has the least obvious answer to. That's where novel ideas live.\n6. **Ideate against the chosen HMW for 30–60 minutes.** Aim for 100+ ideas before any filtering.\n7. **Validate the top ideas with the same research panel.** Run a quick concept-test study so your shortlist is evidence-grounded before any design work begins.\n\nThe whole loop — discovery research, POV, HMW, ideation, concept validation — used to take a quarter. 
With AI-native research embedded in the cycle, modern teams run it in a week.\n\n---\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the 6 question types that anchor every Koji study\n- [Design Thinking Research Methods](/docs/design-thinking-research) — where HMW fits in the empathize → define → ideate → prototype → test loop\n- [Problem Statements in UX Research](/docs/problem-statement-ux-research) — turning observations into POVs that fuel HMWs\n- [The Double Diamond Design Process](/docs/double-diamond-design-process) — discover, define, develop, deliver\n- [Customer Discovery Interviews: The Complete Guide](/docs/customer-discovery-interviews) — generating the raw evidence behind every HMW\n- [Concept Testing Methodology](/docs/concept-testing-methodology) — validating the ideas your HMW produces\n- [How to Write a Research Question](/docs/writing-a-research-question) — companion guide for the upstream research phase","category":"Research Methods","lastModified":"2026-05-01T03:19:16.861771+00:00","metaTitle":"How Might We Questions: The Complete Framework Guide (2026)","metaDescription":"Master the How Might We (HMW) framework. The origin from Min Basadur and IDEO, the linguistic logic, 7 HMW patterns with examples, scope rubric, and how AI-native research feeds sharper HMWs.","keywords":["how might we","how might we questions","hmw questions","design thinking framework","how might we examples","reframing problems","ideation framework","design thinking ideation"],"aiSummary":"How Might We (HMW) questions are the bridge between research insight and ideation. 
This guide covers the origin (Sid Parnes 1967, Min Basadur at P&G, IDEO, Google), the linguistic logic of why HMW works, the Stanford d.school POV → HMW pipeline, the seven HMW patterns with examples, common mistakes, scope rubric, real-world case studies (Coast soap, IDEO shopping cart, Project Oxygen, Airbnb), and how AI-moderated research (Koji) produces sharper HMWs by grounding them in 100+ real interviews instead of three stakeholder anecdotes.","aiPrerequisites":["Familiarity with the design thinking process","Basic understanding of user research","At least one set of user observations or interview transcripts"],"aiLearningOutcomes":["Origin and history of the HMW framework","How to translate a POV statement into multiple HMW questions","The 7 HMW patterns with concrete examples","How to scope an HMW so it is neither too broad nor too narrow","Common HMW mistakes and how to fix them","How AI-native research feeds the HMW pipeline"],"aiDifficulty":"beginner","aiEstimatedTime":"14 minutes"}],"pagination":{"total":1,"returned":1,"offset":0}}