{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-12T11:01:49.673Z"},"content":[{"type":"documentation","id":"1bd7d712-c17b-420e-8cf1-538169ed7fab","slug":"user-research-api-guide","title":"User Research API: Embed AI Interviews into Any Product or Workflow","url":"https://www.koji.so/docs/user-research-api-guide","summary":"The Koji User Research API exposes four REST endpoints (start/messages/complete/get) plus an embed widget and signed webhooks, letting developers run AI-moderated voice and text interviews programmatically from any backend. The API supports adaptive follow-up probing, 6 structured question types, per-interview 1–5 quality scoring, and 60 req/min rate limits with permissive CORS. Pricing is credit-based (1 credit text / 3 voice / 5 report refresh), available on every plan including free tier. The same primitives are exposed over Model Context Protocol so AI agents (Claude, Cursor) can call Koji as a tool. Common patterns include in-product cancel-flow interviews, PLG onboarding research at scale, and fully headless research orchestrated by an LLM agent.","content":"## The Bottom Line\n\nA **User Research API** lets you trigger AI-moderated user interviews from your own backend code instead of a SaaS dashboard. Koji exposes a complete headless API — start an interview, exchange messages, complete it, and receive webhook notifications when transcripts are analyzed — so you can embed conversational research anywhere a customer touchpoint exists: an in-product onboarding flow, a churn cancellation screen, a post-purchase email, a Discord bot, or an internal data pipeline.\n\nThe core primitives are four REST endpoints plus a JavaScript embed widget plus a webhooks system. 
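\n\nBefore the details, here is the loop in miniature. This is a minimal sketch, not an official client: the endpoint paths match the reference sections below, but the request and response field names (such as `message` and the shape of the start response) are assumptions to verify against the [Headless API Overview](/docs/headless-api-overview).\n\n```python
import json
import urllib.request

API = 'https://www.koji.so/api/v1'

def _post(path, token, payload):
    # Helper: JSON POST with Bearer authentication.
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode(),
        headers={'Authorization': 'Bearer ' + token,
                 'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_interview(token, project_slug, answers, post=_post):
    # Start a session, relay each participant answer, then complete.
    # Payload field names ('message', 'sessionToken') are assumptions.
    session = post('/interviews/start', token, {'projectSlug': project_slug})
    for text in answers:
        post('/interviews/messages', token,
             {'sessionToken': session['sessionToken'], 'message': text})
    return post('/interviews/complete', token,
                {'sessionToken': session['sessionToken']})
```\n\nThe `post` parameter is injected so the loop can be exercised without network access; production code would add error handling and a follow-up `GET /api/v1/interviews/{id}` once analysis finishes.\n\n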
With these you can run a full customer research conversation programmatically: the AI interviewer adapts to each participant, asks follow-up questions, and your backend receives quality-scored transcripts and structured answers in real time.\n\nThis guide is the entry point. If you're evaluating whether a research platform fits your stack — or you've outgrown survey APIs that only POST form responses — this article shows how Koji's API differs and where to go next.\n\n## Why a User Research API Exists\n\nMost research tools assume a researcher logs into a dashboard, sends a link, and reads the results manually. That model breaks at three common moments:\n\n1. **In-product moments matter more than scheduled studies.** The best feedback comes from a user mid-task, not a user three weeks later trying to remember what frustrated them. A research API lets you trigger an interview the moment a user hits a meaningful trigger (clicks \"Cancel subscription\", finishes onboarding, downgrades, etc.).\n2. **Product-led growth teams want every signup interviewed.** When you're onboarding hundreds of users a week, sending them all the same Typeform isn't research — it's a survey. An API lets you queue an AI interview per qualified signup and feed the structured output into your CRM.\n3. **AI-native teams want research data as a feed.** You want themes, quotes, and structured answers landing in your data warehouse, Slack, or your own LLM agent — not a PDF.\n\nA real user research API answers all three. Koji's does. Closed-dashboard tools like SurveyMonkey or Qualtrics cannot — their APIs let you POST survey responses, not run conversational interviews.\n\n## The Core API Primitives\n\n### 1. Headless Interview Endpoints\n\nThe Koji REST API exposes four conversation endpoints under `/api/v1`:\n\n- **`POST /api/v1/interviews/start`** — Open an interview session. 
Accepts the `projectSlug` (or `projectId`), optional `respondentMetadata` (name, email, external ID), and returns a `sessionToken` plus the first AI message.\n- **`POST /api/v1/interviews/messages`** — Send a participant message and receive the next AI response. The AI agent decides whether to ask a follow-up, move to the next question, or wrap up.\n- **`POST /api/v1/interviews/complete`** — Mark the interview complete and trigger analysis. The transcript is quality-scored and themes are extracted asynchronously.\n- **`GET /api/v1/interviews/{id}`** — Read the full transcript, quality score, structured answers, and themes after analysis runs.\n\nEvery endpoint is rate-limited at 60 requests per minute per API key, with permissive CORS for browser-based callers. Authentication is via Bearer tokens — keys are generated in the API tab of any Koji workspace.\n\n### 2. The Embed Widget (JavaScript)\n\nIf you'd rather not implement the full conversation loop yourself, Koji ships a JavaScript embed widget that renders the interview UI inside your app. Drop the snippet into a modal, a sidebar, or a dedicated page; it handles voice, text, and the structured-question widgets (buttons, sliders, radio, checkbox, drag-to-rank) automatically. The widget is the same primitive that powers Koji's hosted interview page — your branding, your domain, with the AI moderation logic still running on Koji's backend.\n\n### 3. Webhooks for Real-Time Data Pipelines\n\nResearch APIs are most useful when paired with webhooks. 
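\n\nOn the receiving side, the essential step is validating the signature before trusting a payload. Koji signs each delivery with HMAC-SHA256 in the `X-Koji-Signature` header (see Authentication and Rate Limits below). The sketch assumes a hex-encoded digest; confirm the exact encoding in [Webhook Setup](/docs/webhook-setup):\n\n```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    # Recompute HMAC-SHA256 of the raw request body with the workspace
    # webhook secret and compare in constant time. Hex encoding is an
    # assumption; check the webhook docs for the exact format.
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```\n\n`hmac.compare_digest` compares in constant time, avoiding the timing side channel a plain `==` comparison would leak.\n\n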
Koji sends a signed POST to a URL you configure when:\n\n- An interview starts\n- An interview completes\n- Transcript analysis finishes (with the quality score, themes, and structured answers)\n- A report is generated or republished\n\nThis is how teams build \"research-aware\" automations: post every interview transcript to a Slack channel; sync structured NPS answers into HubSpot; trigger a Linear ticket when a churn-risk theme appears; feed insights directly into a Claude agent that drafts PRDs.\n\n### 4. CRM and CSV Import\n\nThe respondent import endpoint accepts CSV uploads or programmatic POSTs. Each respondent gets a personalized interview link (with a unique token) so you can target named accounts, send 1:1 invites, and tie interview results back to a contact record in your CRM.\n\n## Authentication and Rate Limits\n\nKoji uses standard Bearer-token authentication:\n\n```\nAuthorization: Bearer koji_live_<your-api-key>\n```\n\nAPI keys are scoped to a workspace. Each key has a creation date, last-used timestamp, and revocation control. There are no per-endpoint scopes today — every key has full read/write access to its workspace.\n\nRate limits:\n\n- **60 requests/minute** per API key (sliding window).\n- **No daily cap.** You're bounded by credits, not requests.\n- **CORS** is permissive on `/api/v1/*` endpoints — direct browser calls work, useful for client-side embed scenarios.\n- **Webhook signing** uses HMAC-SHA256. Validate the `X-Koji-Signature` header before processing.\n\nIf you exceed the rate limit, you'll get a `429 Too Many Requests` with a `Retry-After` header.\n\n## Pricing and Credits via the API\n\nThe API is available on every Koji plan, including the free tier. 
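\n\nIn practice the 60 req/min window above is the only hard limit you code against, and honoring the `Retry-After` header is enough. A sketch (the `RateLimited` exception is an illustrative stand-in for however your HTTP client surfaces a 429; it is not part of any Koji SDK):\n\n```python
import time

class RateLimited(Exception):
    # Illustrative stand-in for an HTTP 429 response; not a Koji SDK type.
    def __init__(self, retry_after: float):
        super().__init__('429 Too Many Requests')
        self.retry_after = retry_after

def with_rate_limit_retry(call, max_retries=3):
    # Invoke `call`, sleeping for the server-suggested Retry-After
    # interval whenever it raises RateLimited, up to max_retries times.
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RateLimited as err:
            if attempt == max_retries:
                raise
            time.sleep(err.retry_after)
```\n\n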
Pricing is credit-based and identical to dashboard interviews:\n\n- **Text interview**: 1 credit\n- **Voice interview**: 3 credits\n- **Report refresh**: 5 credits\n- **Quality gate**: interviews scoring 1 or 2 (out of 5) don't consume credits\n\nThe Insights plan (€29/month) includes 29 credits/month; the Interviews plan (€79/month) includes 79. Overage is a flat €1/credit. There are no per-seat or per-call API surcharges. New accounts get 10 free credits — enough to integrate the API end-to-end without paying.\n\nCompared with survey APIs that charge per-response or per-question, Koji's credit model is unusually simple: a research-grade interview costs about a euro.\n\n## Architecture Patterns\n\nHere's how typical implementations look.\n\n### Pattern A: In-Product Cancel Flow\n\nA SaaS product wants to interview every user who clicks \"Cancel my subscription.\" The cancel button opens a modal that loads the Koji embed widget with pre-filled `respondentMetadata` (user ID, plan, tenure). The AI runs a 3-minute interview asking why they're leaving, what almost stopped them, and what would bring them back. The transcript is analyzed; if the theme \"missing feature\" appears, a webhook posts to the product team's Slack with the quote.\n\n### Pattern B: PLG Onboarding Research at Scale\n\nA developer-tools company wants to interview every signup that completes the tutorial. A backend cron job runs daily, identifies qualified signups, calls the respondent import endpoint to generate personalized interview links, and sends them via email. Webhook handlers route incoming transcripts into the company's warehouse (BigQuery) tagged with the user's plan and traffic source. The product manager queries the table for emerging themes by cohort.\n\n### Pattern C: Headless Research for an AI Agent\n\nA founder builds an AI agent in Claude that supervises customer research. 
The agent calls Koji over MCP (Model Context Protocol) to read existing studies, draft new ones, generate reports, and import respondents. From the founder's perspective, customer research is a tool the AI uses — not a SaaS the founder logs into. Koji's MCP server (15 tools) is built on top of the same API; everything callable via REST is also callable via MCP.\n\n### Pattern D: Embedding Research in a Mobile App\n\nA mobile team wants in-app feedback that goes deeper than NPS. They use the embed widget inside a sheet view, configured for voice interviews. Users speak into their phone; the AI converses, probes, and wraps up in 90 seconds. The team sees themes ranked by frequency, quality-scored, with verbatim quotes — all without anyone scheduling a call.\n\n## How Koji's User Research API Compares\n\n| Capability | Survey APIs (Typeform/SurveyMonkey) | Recording APIs (UserTesting/Maze) | **Koji User Research API** |\n|---|---|---|---|\n| Conversational AI interviews | No (static form) | Recording-based | Yes (text + voice) |\n| Adaptive follow-up probing | No | Manual moderator only | Yes (1–3 per question, autonomous) |\n| Structured + qualitative output | Form fields only | Manual tagging | 6 structured types + free-form transcripts |\n| Per-interview quality scoring | No | Manual review | Automatic 1–5 composite score |\n| Webhook on transcript analysis | Limited | Limited | Yes |\n| Voice mode programmatically | No | Recording, not conversational | Yes (3 credits/interview) |\n| Embed widget | Form embeds | Iframe recordings | Full interview widget with branding |\n| Available on free tier | Free tier exists | Typically enterprise-only | Yes (10 free credits) |\n\nThe TL;DR: survey APIs collect answers; recording APIs save videos; Koji's API runs conversations.\n\n## Getting Started\n\n1. **Create a workspace.** Sign up at koji.so. You get 10 free credits immediately.\n2. **Generate an API key.** Settings → API → Create Key. 
Copy it once — it's only displayed at creation time.\n3. **Build your first study in the dashboard.** Easier to validate the brief in the UI before going headless. Set interview mode (`structured`, `exploratory`, or `hybrid`) and add structured questions if you want chartable answers.\n4. **Test `POST /api/v1/interviews/start`.** Use cURL with your key and the study's `projectSlug`. You'll get a `sessionToken` and the first AI greeting.\n5. **Loop messages.** Send a participant reply via `POST /api/v1/interviews/messages` and read the AI's next message. The agent may attach a `widget` field (for `scale`, `single_choice`, etc.) so your client can render the right input.\n6. **Complete and read results.** Call `POST /api/v1/interviews/complete` to finalize. Analysis runs asynchronously (typically under 30 seconds). Listen for the webhook or poll `GET /api/v1/interviews/{id}` for the quality score and structured answers.\n\nFor production, wire up webhooks and a dedicated subdomain for the embed widget. The full API specification, including request/response schemas, is in [Headless API Overview](/docs/headless-api-overview).\n\n## Frequently Asked Questions\n\n**Can I use the API on the free plan?** Yes. The API is available on every plan, including the free tier. New accounts get 10 free credits — enough to integrate end-to-end. Headless API access used to be Interviews-tier only; as of March 2026 it's free-tier-included alongside webhooks, voice interviews, and CRM import.\n\n**How do I authenticate?** Bearer tokens. `Authorization: Bearer koji_live_<key>`. Keys are workspace-scoped and revocable.\n\n**What's the rate limit?** 60 requests/minute per key, sliding window. Returns 429 with `Retry-After` on overflow. Most production workloads stay well below this.\n\n**Are voice interviews supported via the API?** Yes. Voice interviews cost 3 credits each. 
You can either use the embed widget (handles audio capture for you) or implement the full WebSocket-based voice protocol yourself.\n\n**How do webhook signatures work?** Each webhook POST includes an `X-Koji-Signature` header containing an HMAC-SHA256 of the request body using your workspace's webhook secret. Validate it server-side before processing.\n\n**Can I use Koji headlessly with no dashboard at all?** Yes. Every dashboard action — create study, edit brief, publish, generate report, export data — is available via the REST API and the MCP server. Several customers operate fully headless via Claude + MCP.\n\n## Related Resources\n\n- [Headless API Overview](/docs/headless-api-overview) — full REST specification and code samples\n- [Starting Interviews via API](/docs/starting-interviews-via-api) — step-by-step `POST /interviews/start` walkthrough\n- [Completing Interviews via API](/docs/completing-interviews-via-api) — finalization, analysis triggers, and result retrieval\n- [Embed Widget Reference](/docs/embed-widget-reference) — JavaScript widget configuration\n- [Research Automation Webhooks](/docs/research-automation-webhooks) — building real-time research pipelines\n- [Webhook Setup](/docs/webhook-setup) — signing, retries, and event types\n- [API Authentication](/docs/api-authentication) — Bearer token issuance and rotation\n- [Structured Questions Guide](/docs/structured-questions-guide) — the 6 question types returned in API responses\n- [MCP Integration Overview](/docs/mcp-overview) — same primitives, exposed to AI agents","category":"API Reference","lastModified":"2026-05-12T03:16:59.396204+00:00","metaTitle":"User Research API: Embed AI Interviews into Any Product (Koji)","metaDescription":"A complete guide to Koji's User Research API. 
Run AI-moderated voice and text interviews from your backend, embed widgets in-product, and receive transcripts via webhooks.","keywords":["user research api","research api","headless user research","embed ai interview","programmatic user interviews","research webhooks api","user interview api","koji api"],"aiSummary":"The Koji User Research API exposes four REST endpoints (start/messages/complete/get) plus an embed widget and signed webhooks, letting developers run AI-moderated voice and text interviews programmatically from any backend. The API supports adaptive follow-up probing, 6 structured question types, per-interview 1–5 quality scoring, and 60 req/min rate limits with permissive CORS. Pricing is credit-based (1 credit text / 3 voice / 5 report refresh), available on every plan including free tier. The same primitives are exposed over Model Context Protocol so AI agents (Claude, Cursor) can call Koji as a tool. Common patterns include in-product cancel-flow interviews, PLG onboarding research at scale, and fully headless research orchestrated by an LLM agent.","aiPrerequisites":["Basic familiarity with REST APIs and Bearer authentication","A Koji workspace and API key"],"aiLearningOutcomes":["Understand the four core REST endpoints for running AI interviews programmatically","Configure webhooks to stream transcripts into your data pipeline","Compare conversational research APIs against legacy survey APIs","Decide when to use the embed widget vs the raw REST API","Set up authentication, rate limits, and CORS correctly"],"aiDifficulty":"intermediate","aiEstimatedTime":"11 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}