{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-14T12:53:34.450Z"},"content":[{"type":"documentation","id":"c1cd0b39-c03a-4acb-8546-df2bfa454d1a","slug":"in-app-ai-surveys-embedded-research","title":"In-App AI Surveys: Embedded Customer Research Inside Your Product","url":"https://www.koji.so/docs/in-app-ai-surveys-embedded-research","summary":"In-app AI surveys are short adaptive interviews embedded inside your product, triggered by user behavior, that capture customer feedback in the moment. Unlike static email surveys, an AI moderator generates contextual follow-up questions from each open-ended answer. Koji ships a drop-in embed widget that runs the full AI interview inside an iframe, supports six structured question types (open_ended, scale, single_choice, multiple_choice, ranking, yes_no), bills per conversation (≈€1), and clusters responses into themes in real time. Best practices: one root question, AI-moderated probing, a structured anchor, a skip path, and behavior-based triggering — never fire on every page view.","content":"## What is an in-app AI survey?\n\nAn in-app AI survey is a short, conversational research prompt embedded directly inside your product — triggered by a specific user behavior (after onboarding, after a key action, on a pricing page) — that captures feedback in the exact moment of use. Unlike static intercept surveys, modern in-app AI surveys conduct adaptive interviews: the AI asks one question, listens, and generates intelligent follow-ups based on what the user just said.\n\nIf you ship product changes faster than you can run scheduled customer interviews, in-app AI surveys are the highest-leverage way to keep continuous discovery alive. 
Platforms like Koji make this turnkey — embed a widget on any route, let the AI moderate the conversation, and surface insights in real time without ever scheduling a call.\n\nThis guide covers when in-app AI surveys make sense, how to design one that respects users, the technical implementation, and how Koji's embedded interview widget compares to traditional in-app survey tools like Sprig, Pendo, and Survicate.\n\n## Why in-app AI surveys outperform email surveys\n\nThe classic \"send a survey link after the user does X\" workflow has three structural problems:\n\n- **Recall decay** — 72 hours after the event, users no longer remember the details that matter. By the time the survey lands in their inbox, the specific friction has been forgotten or rationalized.\n- **Self-selection bias** — email surveys mostly hear from two cohorts: the very happy and the very angry. The middle 80% never click through.\n- **Static questions** — traditional surveys ask the same questions to everyone, so you cannot follow up on the answer that actually matters.\n\nIn-app AI surveys fix all three. Triggered at the moment of action, they catch the experience while it is still vivid. Embedded inside the product, they don't need an email open. And because the conversation is AI-moderated, every follow-up question is generated from what the user actually said — not a pre-written branching tree.\n\nIndustry data backs this up. Hotjar's Trends in UX Research shows in-app feedback collects 4–6x more responses than email-based surveys, and Maze's 2025 State of Continuous Discovery report found teams using embedded research touchpoints ship 2.3x more validated features per quarter.\n\n## When in-app AI surveys are the right tool\n\nIn-app AI surveys are the right tool when you need:\n\n1. **Behavior-triggered feedback** — fire the survey after a specific event (first project created, sixth login, churn-intent click).\n2. 
**Continuous, low-effort discovery** — keep a passive listening channel open without scheduling moderated sessions.\n3. **High-volume qualitative data** — collect dozens or hundreds of short conversations a week, then let the AI synthesize them.\n4. **Mid-onboarding diagnostics** — figure out where users get stuck without re-watching session replays.\n5. **Pre-churn intervention** — catch the user before they cancel, ask them why, and route the response into the CS workflow.\n\nThey are not the right tool for deep 45-minute generative interviews (use scheduled voice calls instead), highly sensitive topics that require real trust-building, or purely statistical NPS/CSAT tracking (use a single-question intercept).\n\n## How Koji's embedded AI interviews work\n\nKoji's embed widget runs the full AI moderator inside an iframe or modal that you drop into your product with a single script tag. Once a user opens the widget, they get the same conversational interview experience you'd build in the Koji dashboard — except the entire session happens inside your app.\n\nUnder the hood, you get:\n\n- **Six structured question types** out of the box: `open_ended` (with AI probing), `scale` (NPS / CSAT), `single_choice`, `multiple_choice`, `ranking`, and `yes_no`. See the [structured questions guide](/docs/structured-questions-guide) for the full taxonomy.\n- **Adaptive AI follow-ups** — every open-ended answer can trigger up to three contextual follow-ups generated by the LLM, not by a hard-coded branch.\n- **Voice or text mode** — text by default for in-app use, with optional voice for desktop-class flows.\n- **Custom branding** — match your product's colors, logo, and tone via the [interview branding settings](/docs/customizing-branding).\n- **Personalized links** — pass `user_id`, `plan`, or any custom attribute as a URL parameter so the AI knows who it is talking to.\n- **Quality gate** — only conversations that score 3+ on Koji's substantive-answer scale consume credits. 
Drive-by clicks are free.\n\nA 5-question in-app text interview costs 1 credit (≈€1 at standard rates, free on the Insights plan up to 29 credits per month). That is roughly 10–20x cheaper than dedicated in-app survey tools that charge per seat.\n\n## Designing a great in-app AI survey\n\nThe hardest part is restraint. Every extra question doubles the abandonment rate. A great in-app AI survey usually has:\n\n**1. One root question.** Open-ended, conversational, and tied to the trigger event. \"What were you trying to do here?\" beats \"Rate your experience 1–5.\"\n\n**2. AI-moderated probing.** Let Koji's AI handle the follow-ups. Two well-placed probes extract more insight than ten pre-written questions. See [AI probing guide](/docs/ai-probing-guide) for how it works.\n\n**3. A structured anchor.** Pair the open-ended question with a single scale or yes/no question so the data aggregates cleanly. \"Did this feel fast? Yes/No\" + \"What made it feel that way?\" — see [scale questions guide](/docs/scale-questions-guide) for the pattern.\n\n**4. A graceful skip path.** Always show an \"Ask me later\" button. Forced surveys destroy long-term trust.\n\n**5. A clear value exchange.** Explain why you are asking. \"We are improving onboarding for new teams — your 90 seconds will directly shape it\" outperforms \"Help us improve\" by roughly 3x completion rate.\n\n## Triggering rules that actually work\n\nThe biggest in-app survey mistake is firing on every page view. 
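One guard worth wiring in regardless of which trigger you pick is a per-user frequency cap, such as the 30-day rule described below. A minimal sketch of the gating logic in plain JavaScript — the `store` interface and the `lastShown` key are illustrative placeholders, not part of Koji's API:\n\n```javascript
// Decide whether a survey may fire for this user right now.
// `store` is any get/set key-value wrapper (e.g. around localStorage);
// `now` is a timestamp in ms; the default cap is 30 days.
function shouldShowSurvey(store, now, capMs = 30 * 24 * 60 * 60 * 1000) {
  const last = Number(store.get('lastShown') || 0);
  if (last && now - last < capMs) return false; // shown within the cap window
  store.set('lastShown', String(now));          // record this showing
  return true;
}
```
\n\nIn the browser you would back `store` with `localStorage` so the cap survives page reloads; the same shape extends to the other rules (e.g. counting failed attempts before offering the survey).\n\n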
Calibrate triggers like a sniper:\n\n- **First-time feature use** — fire after the user completes a key action once.\n- **Repeated abandonment** — detect 2+ failed attempts at the same flow and offer the survey.\n- **Sentiment-driven** — route negative NPS scores into an immediate AI follow-up interview.\n- **Time-bounded** — never show the same survey to the same user twice within 30 days.\n- **Cohort-targeted** — use Koji's personalized link parameters to scope surveys to plan tier, signup date, or feature flag.\n\n## Implementation in 10 minutes\n\nKoji ships an embed widget you can drop into any React, Vue, or vanilla site:\n\n1. Build the study in the Koji dashboard with 2–4 questions (one open-ended plus one or two structured anchors).\n2. Publish and copy the embed snippet from [using the embed widget](/docs/using-the-embed-widget).\n3. Add it to the route where you want the survey to appear.\n4. Pass any user context as URL parameters: `?email=...&plan=pro&onboarded=true`.\n5. Set a trigger condition in your own code — modal opens after 30 seconds on the dashboard, or on click of a \"Give feedback\" button.\n\nFor deeper integrations, the [headless API](/docs/headless-api-overview) lets you start interviews server-side and post messages programmatically — useful when you want the conversation to live inside your own chat UI.\n\n## How Koji compares to traditional in-app survey tools\n\nThe legacy category (Sprig, Pendo Feedback, Survicate, Hotjar Surveys) was built around static, multi-question forms with simple branching logic. Koji is built around AI-moderated conversations:\n\n- **AI moderation with adaptive follow-ups** — Koji generates probes on the fly from every open-ended answer. Static tools only branch on pre-defined paths.\n- **Voice + text mode** — Koji supports both inside the same study. 
Legacy in-app tools are text-only.\n- **Automatic theme tagging and live reports** — Koji clusters responses into themes in real time; legacy tools export raw answers to a BI tool.\n- **Multilingual** — Koji auto-detects 30+ languages; legacy tools are English-first.\n- **Per-conversation pricing** — Koji bills per credit (≈€1 each); legacy tools are per-seat at €500+ per month minimum.\n\nKoji's edge is that the same engine that runs your scheduled remote interviews also runs the 60-second in-app intercept — and insights from both flow into one repository (see [research repository guide](/docs/research-repository-guide)).\n\n## Privacy, consent, and quiet hours\n\nIn-app surveys still need consent. Koji handles this with a built-in intake form (see [intake forms and consent](/docs/intake-forms-and-consent)), fully customizable to match GDPR / CCPA requirements. Set quiet hours so the widget never interrupts active focus work, and never trigger more than one survey per session.\n\nFor enterprise teams operating in regulated environments, the [GDPR-compliant AI user research guide](/docs/gdpr-compliant-ai-user-research) covers data handling, retention, and DPA setup end to end.\n\n## Measuring success\n\nTrack three metrics on every in-app AI survey:\n\n1. **Open rate** — what percentage of triggered widgets get opened. Healthy: 15–30%.\n2. **Completion rate** — what percentage of opens reach the final question. Healthy: 50–70% for 3-question conversations.\n3. **Insight density** — average themes extracted per completed conversation. Healthy: 1.5–3 themes per response.\n\nKoji surfaces all three in the study dashboard automatically. 
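If you also log widget events on your own side, the same three numbers reduce to simple ratios — a sketch using hypothetical event counts (none of these field names are a Koji API):\n\n```javascript
// Compute the three survey-health metrics from raw event counts.
function surveyHealth({ triggered, opened, completed, themes }) {
  return {
    openRate: opened / triggered,       // healthy: 0.15–0.30
    completionRate: completed / opened, // healthy: 0.50–0.70
    insightDensity: themes / completed, // healthy: 1.5–3 themes per response
  };
}
```
\n\n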
If completion drops below 50%, shorten the survey or rephrase the root question.\n\n## Related Resources\n\n- [Structured questions guide](/docs/structured-questions-guide) — the six question types that power adaptive interviews\n- [Using the embed widget](/docs/using-the-embed-widget) — drop-in implementation for any site\n- [Embed widget reference](/docs/embed-widget-reference) — full configuration API\n- [Personalized interview links](/docs/personalized-interview-links) — pass user context into every conversation\n- [Intercept research guide](/docs/intercept-research-guide) — methodology background and best practices\n- [AI probing guide](/docs/ai-probing-guide) — how Koji generates adaptive follow-up questions","category":"Collecting Responses","lastModified":"2026-05-14T03:14:01.107852+00:00","metaTitle":"In-App AI Surveys: Embedded Customer Research Guide | Koji","metaDescription":"Embed adaptive AI interviews directly inside your product. A complete guide to in-app AI surveys, triggers, design patterns, and implementation with Koji.","keywords":["in-app surveys","embedded customer research","in-product feedback","in-app AI survey","AI feedback widget","continuous discovery","in-app NPS","customer feedback widget"],"aiSummary":"In-app AI surveys are short adaptive interviews embedded inside your product, triggered by user behavior, that capture customer feedback in the moment. Unlike static email surveys, an AI moderator generates contextual follow-up questions from each open-ended answer. Koji ships a drop-in embed widget that runs the full AI interview inside an iframe, supports six structured question types (open_ended, scale, single_choice, multiple_choice, ranking, yes_no), bills per conversation (≈€1), and clusters responses into themes in real time. 
Best practices: one root question, AI-moderated probing, a structured anchor, a skip path, and behavior-based triggering — never fire on every page view.","aiPrerequisites":["Active Koji workspace","A study published with 2–4 questions","Ability to add a script tag to your product"],"aiLearningOutcomes":["Decide when an in-app AI survey is the right tool","Design a 2–4 question conversation that gets completed","Pick triggers that respect users and surface high-quality data","Install the Koji embed widget on any web product","Measure open rate, completion rate, and insight density"],"aiDifficulty":"intermediate","aiEstimatedTime":"12 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}