Smoke Tests and Fake Door Tests: How to Validate Demand Before You Build
Smoke tests and fake door tests measure real user demand for an idea before any code is written. Learn the playbook used by Buffer, Dropbox, and modern product teams — and how to pair it with AI interviews.
What Is a Smoke Test?
A smoke test is a product validation experiment that measures real user behavior — usually clicks, signups, or purchases — for a feature or product that doesn't actually exist yet. The goal is to convert "users say they want this" into "users actually clicked the button," before you spend engineering time building the wrong thing.
The most common form is a fake door test: a button, link, or pricing page on a real product or landing page that triggers an "almost ready — sign up for early access" message when clicked. The clickthrough rate tells you whether real demand exists.
Smoke tests sit at the start of every modern product validation playbook. Buffer launched on a single landing page describing the product with a "Plans & Pricing" button — clicking it revealed the tool was still in development and invited an email signup. The signup rate was strong enough to convince the founders to build the real product. Dropbox famously published a short explainer video on a placeholder page; tens of thousands of people joined the waitlist in the days that followed, validating demand long before a working prototype existed.
These weren't lucky breaks. They were disciplined experiments designed to falsify a hypothesis cheaply.
Why "Would You Use This?" Is Almost Useless
Most product teams validate ideas the wrong way. They run a survey or 10 customer interviews asking "would you use a feature that does X?" and 70% say yes. They build the feature. Adoption is 4%.
The gap between intent and action is enormous. People are pleasant in interviews. They imagine an idealized version of themselves. They want to be helpful. None of that predicts behavior.
Smoke tests close the gap by measuring action, not opinion. A user clicking a fake "AI Auto-Summary" button on your dashboard tells you something a survey never can: that this user, in this context, with their real workload, found the proposition compelling enough to act on.
That is the only kind of demand signal worth building from.
Fake Door Test: The Standard Playbook
A fake door test follows a predictable shape:
1. Define the hypothesis. "If we offer a one-click data export to Notion, at least 8% of users who land on the export menu will click it within their first session."
2. Build the door. Add a button or menu item that looks identical to the real feature. The visual fidelity matters — if the button looks experimental, you're testing a different thing.
3. Build the wall behind it. Clicking the button should trigger one of three responses:
- "This feature is in early access — would you like us to email you when it launches?" (captures lead intent)
- "We're testing this idea — what would you use it for?" (captures qualitative demand)
- A short signup form for a "private beta" (captures commitment)
4. Set a sample threshold. Decide upfront how many impressions you need to consider the result reliable. For binary clickthrough decisions, a few hundred impressions are usually enough. For more nuanced signal, plan for 1,000+.
5. Set a decision rule before launch. "If clickthrough is >5%, we build. If 2–5%, we run a follow-up interview study. If <2%, we kill the idea." Decision rules set in advance protect you from rationalizing weak signal after the fact.
6. Run the test for a fixed window. A week is usually enough; longer if your traffic is low.
7. Decide. Build, investigate, or kill — based on the rule you set in step 5.
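The pre-registered decision rule in steps 2 through 7 can be sketched as a tiny function. The thresholds below are the example values from step 5 (>5% build, 2–5% investigate, <2% kill) — substitute your own before launch, and never adjust them after seeing the data.

```python
# A minimal sketch of a pre-registered fake door decision rule.
# Thresholds are the illustrative values from step 5 above.

def decide(clicks: int, impressions: int,
           build_above: float = 0.05, kill_below: float = 0.02) -> str:
    """Apply a decision rule that was fixed before the test launched."""
    if impressions == 0:
        raise ValueError("no impressions recorded")
    ctr = clicks / impressions
    if ctr > build_above:
        return "build"
    if ctr < kill_below:
        return "kill"
    return "investigate"  # ambiguous band: run follow-up interviews

print(decide(64, 800))   # 8% clickthrough
print(decide(9, 800))    # ~1.1% clickthrough
print(decide(24, 800))   # 3% clickthrough
```

Writing the rule down as code, committed before launch, is a cheap way to keep the team honest when the result lands in the awkward middle band.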
The Ethics of Fake Door Testing
Fake door tests sit at the edge of acceptable product practice. Done badly, they damage trust: users feel tricked, conversions drop, and review sites notice.
Three rules separate disciplined fake door tests from manipulative ones:
1. Always close the loop. When a user clicks, immediately tell them honestly that the feature is in early access or being evaluated. Never let them think it failed silently.
2. Capture their intent gracefully. "We're seeing strong interest in this — can we email you when it ships?" turns a fake door into a real value exchange.
3. Don''t fake critical paths. Faking a "Pay" button or a "Cancel Subscription" link erodes trust at the worst possible moment. Reserve fake doors for net-new features users haven''t relied on yet.
A well-run fake door is indistinguishable, ethically, from a launch announcement that says "coming soon." The only difference is the data you collect.
Smoke Test Variations
The fake door is the most common smoke test, but several variants fit different stages:
Landing page smoke test. A standalone page describing the product with a clear call to action. Drive paid traffic and measure conversion. Used by Buffer for their original launch.
Painted door test. Like a fake door but with a faked partial experience — clicking the button takes the user one or two steps into a flow before revealing the feature isn't ready. Provides richer behavioral signal at the cost of more user disappointment.
Wizard of Oz test. The user-facing experience is real, but a human is doing the work behind the scenes. Used by Zappos founder Nick Swinmurn, who manually fulfilled every shoe order from local stores before building inventory infrastructure. Validates demand and gives you ground-truth on the actual workflow.
Concierge test. Like Wizard of Oz, but the user knows a human is involved. Lower fidelity, but easier to set up and learn from.
Pre-order test. Charge real money for a product that doesn't exist yet, with a refund if it doesn't ship. The strongest possible commitment signal — and a hard test most ideas fail.
Each step up the validation ladder gives you stronger signal at higher cost. Start with the cheapest.
Smoke Tests + Koji: Validate the Door, Then Interview the Clickers
A click is signal. A click plus a 5-minute interview is gold.
Koji's AI interview platform pairs naturally with smoke tests. The pattern:
- Run a fake door on your live product. The "early access" page after the click contains a Koji interview invitation.
- A voice or text interview runs immediately, while the user is still in context — they remember exactly what they hoped the button would do.
- Structured questions capture quantitative anchors: how often they would use this, what they would pay, what alternatives they currently use.
- AI follow-ups probe for the underlying job-to-be-done, surfacing whether the demand is for the literal feature or for a different solution to the same problem.
- Automated thematic analysis clusters responses across hundreds of clickers — turning a binary clickthrough metric into a rich opportunity map.
Compared to a survey emailed days later, this approach captures intent at the moment of demand. Response rates are 4–6x higher and the answers are dramatically more specific. Tools like SurveyMonkey or Typeform can collect the form responses, but only Koji runs the live conversational probing that turns a click into an actionable insight.
Sample Size and Statistical Significance
Fake door tests don't need huge samples to be useful. A few hundred impressions are usually enough to distinguish "no demand" (clickthrough <1%) from "real demand" (clickthrough >5%). The middle range — 1–5% — is where you need more data to decide.
Three calibration anchors:
- Below 1% clickthrough: there is no real demand. Kill the idea. Well-positioned features with genuine demand rarely fall below 1%, so a sub-1% result is a reliable kill signal.
- 5–15% clickthrough: solid demand. Move to the next validation step (Koji interviews, prototype testing, beta).
- Above 15% clickthrough: strong demand. Prioritize for build.
These benchmarks vary by traffic source — paid acquisition users behave differently than power users — so calibrate against your own baseline by running smoke tests on a feature you know works.
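One way to check whether a sample is decision-grade is to put a confidence interval around the observed clickthrough and see which calibration anchors it straddles. The sketch below uses the Wilson score interval (a standard choice for proportions at small samples) with only the standard library; the example numbers are illustrative.

```python
# A decision-grade check: 95% Wilson score interval on observed
# clickthrough, compared against the calibration anchors above.
import math

def wilson_interval(clicks: int, impressions: int, z: float = 1.96):
    """95% confidence interval for a clickthrough proportion."""
    p = clicks / impressions
    denom = 1 + z**2 / impressions
    center = (p + z**2 / (2 * impressions)) / denom
    half = z * math.sqrt(p * (1 - p) / impressions
                         + z**2 / (4 * impressions**2)) / denom
    return center - half, center + half

# 6 clicks in 300 impressions: observed 2%, but the interval
# straddles the 1% "no demand" anchor — collect more impressions.
lo, hi = wilson_interval(clicks=6, impressions=300)
print(f"95% CI: {lo:.1%} - {hi:.1%}")
```

If the entire interval sits below 1%, killing the idea is safe; if it clears 5%, build; if it spans an anchor, keep the test running.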
What Smoke Tests Don't Tell You
A click is intent, not commitment. Smoke tests reliably measure the top of the funnel and unreliably predict the bottom. Specifically, they tell you little about:
- Long-term engagement. A click is not a habit.
- Willingness to pay. Free signup interest converts to paid usage at 1–10% across most categories.
- Workflow fit. A user might want the feature in the abstract but not adopt it once they see how it integrates into their day.
For these dimensions, you need follow-up methods: Koji discovery interviews, prototype testing, paid beta programs, and post-launch cohort analysis.
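The gap between click interest and paid usage is worth quantifying before you celebrate a passed test. A back-of-envelope funnel projection makes the point; every rate below is an assumption for illustration, with the free-to-paid range taken from the 1–10% figure above.

```python
# Back-of-envelope funnel projection. All rates are illustrative
# assumptions: 8% fake-door clickthrough, 40% of clickers sign up,
# and the 1-10% free-to-paid range mentioned above.

def project_paid_users(monthly_visitors: int, clickthrough: float,
                       signup_rate: float, free_to_paid: float) -> int:
    """Walk visitors down the funnel to an expected paid-user count."""
    clicks = monthly_visitors * clickthrough
    signups = clicks * signup_rate
    return round(signups * free_to_paid)

low = project_paid_users(10_000, 0.08, 0.40, free_to_paid=0.01)
high = project_paid_users(10_000, 0.08, 0.40, free_to_paid=0.10)
print(low, high)  # pessimistic vs optimistic paid users per month
```

An 8% clickthrough that looks like strong demand can still translate to a handful of paying users once the later conversion rates are applied — which is exactly why the follow-up methods below matter.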
When to Use a Smoke Test
Smoke tests are highest-leverage when:
- You are about to commit significant engineering time
- You have a real product or landing page with traffic
- The feature can be described in a sentence
- You have a credible "coming soon" close-the-loop experience
They are not the right tool when:
- The feature is so novel that users can't recognize it from a button label
- Your traffic is too low to produce decision-grade data within a reasonable window
- The feature exists in a flow where ethics or trust would be damaged by a fake door
- You already have strong behavioral data from analogous features
In all other cases, a smoke test is the cheapest, most honest validation step you can take before you build.
A Smoke Test Follow-Up Interview Template for Koji
When a user clicks your fake door, redirect them to a Koji interview embedded in the "early access" page. Use this question structure:
- What did you hope this feature would do? (open_ended, maxFollowUps: 2)
- What were you trying to accomplish when you clicked? (open_ended, maxFollowUps: 2)
- How are you handling this today? (open_ended, maxFollowUps: 2)
- How often do you imagine using this? (single_choice: daily / weekly / monthly / a few times a year / once)
- Would you pay for this as a standalone product? (yes_no)
- If yes, what would you expect to pay per month? (open_ended numeric, conditional on yes)
- What would have to be true for you to switch from your current solution? (open_ended, maxFollowUps: 2)
This 7-question structure runs in 5–8 minutes per respondent. Across 50–200 clickers, Koji's analysis produces a quantitative demand profile (frequency, willingness to pay) plus a qualitative job-to-be-done map.
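The template above can be kept in version control as plain data so the same structure is reused across smoke tests. The field names in this sketch are hypothetical — they mirror the annotations in the template, not Koji's actual schema.

```python
# An illustrative encoding of the interview template above.
# Field names ("q", "type", "max_follow_ups", "condition") are
# hypothetical, not Koji's actual API schema.

INTERVIEW = [
    {"q": "What did you hope this feature would do?",
     "type": "open_ended", "max_follow_ups": 2},
    {"q": "What were you trying to accomplish when you clicked?",
     "type": "open_ended", "max_follow_ups": 2},
    {"q": "How are you handling this today?",
     "type": "open_ended", "max_follow_ups": 2},
    {"q": "How often do you imagine using this?",
     "type": "single_choice",
     "options": ["daily", "weekly", "monthly",
                 "a few times a year", "once"]},
    {"q": "Would you pay for this as a standalone product?",
     "type": "yes_no", "id": "would_pay"},
    {"q": "If yes, what would you expect to pay per month?",
     "type": "open_ended", "numeric": True,
     "condition": {"id": "would_pay", "equals": "yes"}},
    {"q": "What would have to be true for you to switch from your "
          "current solution?",
     "type": "open_ended", "max_follow_ups": 2},
]

assert len(INTERVIEW) == 7  # matches the 7-question structure
```

Keeping the conditional logic explicit (the pricing question only fires after a "yes") also makes it easy to review the flow before pointing live fake-door traffic at it.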
When to Move Beyond a Smoke Test
A passed smoke test is a green light to invest, not a green light to ship. Before committing to full build, layer in:
- Concept testing interviews to validate the value proposition language
- Prototype usability testing to check the workflow fits real customer behavior
- Paid beta program to validate willingness to pay
- Pricing research (Van Westendorp PSM) to anchor the price
The smoke test reduces the risk of building the wrong thing. The validation stack that follows reduces the risk of building the right thing badly.
Related Resources
- Customer Discovery Interview Guide — qualitative research that pairs with smoke tests
- Concept Testing Guide — for richer concept evaluation after the smoke test passes
- Switch Interviews JTBD Method — interview clickers to understand the underlying job
- Pre Launch User Research — full pre-launch validation playbook
- Structured Questions Guide — capture quantitative demand signal alongside qualitative interest
- How Founders Validate Product Ideas with Customer Interviews — founder-focused validation playbook
Related Articles
Structured Questions in AI Interviews
Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.
Concept Testing: The Complete Methodology Guide
How to evaluate product and marketing ideas with target audiences before development — covering methods, metrics, sample sizes, and AI-powered approaches.
Pre-Launch User Research: How to Validate Before You Ship
A complete framework for running user research in the weeks before a product launch — covering concept validation, messaging testing, and onboarding validation using AI interviews.
Switch Interviews: The JTBD Method for Understanding Why Customers Buy (and Leave)
Switch interviews uncover the four forces of progress that cause customers to switch from one product to another. Learn the Bob Moesta playbook and how to run switch interviews with AI at scale.
Startup Idea Validation: How to Test Your Idea with Customer Interviews
A research-backed guide to validating startup ideas through customer interviews — before you write a line of code.
Van Westendorp Price Sensitivity Meter: The Four-Question Pricing Research Method
The Van Westendorp Price Sensitivity Meter uses four questions to identify the optimal price for any product. Learn how to run the PSM with AI interviews at scale and combine the four numbers with qualitative reasoning.