User Research API: Embed AI Interviews into Any Product or Workflow
How to use Koji's User Research API to run AI-moderated interviews from your own backend. Covers REST endpoints, the embed widget, webhooks, authentication, rate limits, and headless interview patterns.
The Bottom Line
A User Research API lets you trigger AI-moderated user interviews from your own backend code instead of a SaaS dashboard. Koji exposes a complete headless API — start an interview, exchange messages, complete it, and receive webhook notifications when transcripts are analyzed — so you can embed conversational research anywhere a customer touchpoint exists: an in-product onboarding flow, a churn cancellation screen, a post-purchase email, a Discord bot, or an internal data pipeline.
The core primitives are four REST endpoints plus a JavaScript embed widget plus a webhooks system. With these you can run a full customer research conversation programmatically: the AI interviewer adapts to each participant, asks follow-up questions, and your backend receives quality-scored transcripts and structured answers in real time.
This guide is the entry point. If you're evaluating whether a research platform fits your stack — or you've outgrown survey APIs that only POST form responses — this article shows how Koji's API differs and where to go next.
Why a User Research API Exists
Most research tools assume a researcher logs into a dashboard, sends a link, and reads the results manually. That model breaks at three common moments:
- In-product moments matter more than scheduled studies. The best feedback comes from a user mid-task, not a user three weeks later trying to remember what frustrated them. A research API lets you trigger an interview the moment a user hits a meaningful trigger (clicks "Cancel subscription", finishes onboarding, downgrades, etc.).
- Product-led growth teams want every signup interviewed. When you're onboarding hundreds of users a week, sending them all the same Typeform isn't research — it's a survey. An API lets you queue an AI interview per qualified signup and feed the structured output into your CRM.
- AI-native teams want research data as a feed. You want themes, quotes, and structured answers landing in your data warehouse, Slack, or your own LLM agent — not a PDF.
A real user research API answers all three. Koji's does. Closed-dashboard tools like SurveyMonkey or Qualtrics cannot — their APIs let you POST survey responses, not run conversational interviews.
The Core API Primitives
1. Headless Interview Endpoints
The Koji REST API exposes four conversation endpoints under /api/v1:
- `POST /api/v1/interviews/start` — Open an interview session. Accepts the `projectSlug` (or `projectId`), optional `respondentMetadata` (name, email, external ID), and returns a `sessionToken` plus the first AI message.
- `POST /api/v1/interviews/messages` — Send a participant message and receive the next AI response. The AI agent decides whether to ask a follow-up, move to the next question, or wrap up.
- `POST /api/v1/interviews/complete` — Mark the interview complete and trigger analysis. The transcript is quality-scored and themes are extracted asynchronously.
- `GET /api/v1/interviews/{id}` — Read the full transcript, quality score, structured answers, and themes after analysis runs.
Every endpoint is rate-limited at 60 requests per minute per API key, with permissive CORS for browser-based callers. Authentication is via Bearer tokens — keys are generated in the API tab of any Koji workspace.
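The four endpoints compose into a simple loop: start, exchange messages, complete. Here's a minimal Python sketch of that loop using only the standard library. The endpoint paths and the `sessionToken` field come from the docs above; the base URL and exact request/response field names are assumptions, so check the Headless API Overview for the real schemas.

```python
import json
import urllib.request

BASE_URL = "https://api.koji.example/api/v1"  # hypothetical host; use your real API base URL


def auth_headers(api_key: str) -> dict:
    """Bearer-token headers required by every /api/v1 endpoint."""
    return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}


def _post(path: str, api_key: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(payload).encode(),
        headers=auth_headers(api_key),
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


def start_interview(api_key: str, project_slug: str, respondent: dict) -> dict:
    # Returns the sessionToken plus the first AI message.
    return _post("/interviews/start", api_key,
                 {"projectSlug": project_slug, "respondentMetadata": respondent})


def send_message(api_key: str, session_token: str, text: str) -> dict:
    # Relays one participant turn; the response carries the next AI turn.
    return _post("/interviews/messages", api_key,
                 {"sessionToken": session_token, "message": text})


def complete_interview(api_key: str, session_token: str) -> dict:
    # Finalizes the session; analysis runs asynchronously afterwards.
    return _post("/interviews/complete", api_key, {"sessionToken": session_token})
```

In a real integration you'd call `start_interview` once per participant, relay each reply through `send_message` until the AI wraps up, then call `complete_interview` and wait for the analysis webhook.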
2. The Embed Widget (JavaScript)
If you'd rather not implement the full conversation loop yourself, Koji ships a JavaScript embed widget that renders the interview UI inside your app. Drop the snippet into a modal, a sidebar, or a dedicated page; it handles voice, text, and the structured-question widgets (buttons, sliders, radio, checkbox, drag-to-rank) automatically. The widget is the same primitive that powers Koji's hosted interview page — your branding, your domain, with the AI moderation logic still running on Koji's backend.
3. Webhooks for Real-Time Data Pipelines
Research APIs are most useful when paired with webhooks. Koji sends a signed POST to a URL you configure when:
- An interview starts
- An interview completes
- Transcript analysis finishes (with the quality score, themes, and structured answers)
- A report is generated or republished
This is how teams build "research-aware" automations: post every interview transcript to a Slack channel; sync structured NPS answers into HubSpot; trigger a Linear ticket when a churn-risk theme appears; feed insights directly into a Claude agent that drafts PRDs.
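Before acting on any webhook, verify its signature. The doc states Koji signs each POST with HMAC-SHA256 and sends the digest in the `X-Koji-Signature` header; this sketch assumes a hex-encoded digest of the raw body, so confirm the exact encoding in your webhook settings.

```python
import hashlib
import hmac


def verify_koji_signature(body: bytes, header_value: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare in
    constant time. Assumes the X-Koji-Signature header is a hex digest."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_value)
```

Always compare with `hmac.compare_digest` rather than `==` to avoid timing side channels, and verify against the raw bytes you received, not a re-serialized JSON body.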
4. CRM and CSV Import
The respondent import endpoint accepts CSV uploads or programmatic POSTs. Each respondent gets a personalized interview link (with a unique token) so you can target named accounts, send 1:1 invites, and tie interview results back to a contact record in your CRM.
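If your respondent data lives in a CRM, you can serialize contacts into a CSV body before uploading. A small sketch with Python's standard `csv` module; the column names here (`name`, `email`, `external_id`) are illustrative, so match whatever headers the import endpoint actually expects.

```python
import csv
import io


def respondents_csv(contacts: list[dict]) -> str:
    """Serialize CRM contacts into a CSV body for the respondent import
    endpoint. Column names are illustrative, not a documented schema."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "email", "external_id"])
    writer.writeheader()
    for contact in contacts:
        writer.writerow({
            "name": contact["name"],
            "email": contact["email"],
            "external_id": contact.get("id", ""),  # ties results back to your CRM record
        })
    return buf.getvalue()
```

Including an external ID per row is what lets you join the returned interview results back to the originating contact record.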
Authentication and Rate Limits
Koji uses standard Bearer-token authentication:
Authorization: Bearer koji_live_<your-api-key>
API keys are scoped to a workspace. Each key has a creation date, last-used timestamp, and revocation control. There are no per-endpoint scopes today — every key has full read/write access to its workspace.
Rate limits:
- 60 requests/minute per API key (sliding window).
- No daily cap. You're bounded by credits, not requests.
- CORS is permissive on `/api/v1/*` endpoints — direct browser calls work, useful for client-side embed scenarios.
- Webhook signing uses HMAC-SHA256. Validate the `X-Koji-Signature` header before processing.
If you exceed the rate limit, you'll get a 429 Too Many Requests with a Retry-After header.
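A well-behaved client should honor that `Retry-After` header rather than hammering the endpoint. Here's a minimal retry wrapper; it takes an injected `send` callable (returning status, headers, and body) so it works with any HTTP client and can be tested without network access.

```python
import time


def call_with_retry(send, max_attempts: int = 5, sleep=time.sleep):
    """Retry a request on 429, waiting the number of seconds the
    Retry-After header asks for (defaulting to 1 if it's absent).

    `send` is a zero-argument callable returning (status, headers, body).
    """
    for _ in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, headers, body
        sleep(float(headers.get("Retry-After", 1)))
    raise RuntimeError(f"still rate-limited after {max_attempts} attempts")
```

At 60 requests/minute per key, most backends never hit this path; it matters mainly for burst workloads like batch-importing respondents.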
Pricing and Credits via the API
The API is available on every Koji plan, including the free tier. Pricing is credit-based and identical to dashboard interviews:
- Text interview: 1 credit
- Voice interview: 3 credits
- Report refresh: 5 credits
- Quality gate: interviews scoring 1 or 2 (out of 5) don't consume credits
The Insights plan (€29/month) includes 29 credits/month; the Interviews plan (€79/month) includes 79. Overage is a flat €1/credit. There are no per-seat or per-call API surcharges. New accounts get 10 free credits — enough to integrate the API end-to-end without paying.
Compared with survey APIs that charge per-response or per-question, Koji's credit model is unusually simple: a research-grade interview costs about a euro.
Architecture Patterns
Here's how typical implementations look.
Pattern A: In-Product Cancel Flow
A SaaS product wants to interview every user who clicks "Cancel my subscription." The cancel button opens a modal that loads the Koji embed widget with pre-filled respondentMetadata (user ID, plan, tenure). The AI runs a 3-minute interview asking why they're leaving, what almost stopped them, and what would bring them back. The transcript is analyzed; if the theme "missing feature" appears, a webhook posts to the product team's Slack with the quote.
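The last step of that pattern, routing a flagged theme to Slack, can be sketched as a small webhook handler helper. The payload field names used here (`themes`, `label`, `quote`) are illustrative, not Koji's documented webhook schema, so adapt them to the real analysis payload.

```python
def slack_alert_for_churn(webhook_payload: dict, watched_theme: str = "missing feature"):
    """Inspect an analysis-finished webhook payload and build a Slack message
    body if the watched theme appears; return None otherwise.

    Field names ("themes", "label", "quote") are assumed, not documented.
    """
    for theme in webhook_payload.get("themes", []):
        if theme.get("label", "").lower() == watched_theme:
            quote = theme.get("quote", "")
            return {"text": f'Churn interview flagged "{watched_theme}": {quote}'}
    return None
```

Your webhook endpoint would verify the signature first, call this helper, and POST the returned body to a Slack incoming-webhook URL when it's not None.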
Pattern B: PLG Onboarding Research at Scale
A developer-tools company wants to interview every signup that completes the tutorial. A backend cron job runs daily, identifies qualified signups, hits POST /api/v1/interviews/start to generate personalized links, and sends them via email. Webhook handlers route incoming transcripts into the company's warehouse (BigQuery) tagged with the user's plan and traffic source. The product manager queries the table for emerging themes by cohort.
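The selection-and-dispatch half of that cron job is straightforward to sketch. The user fields (`id`, `completed_tutorial`) are illustrative, and `start_interview` is injected so you can plug in the real `POST /api/v1/interviews/start` call (or a stub in tests).

```python
def qualified_signups(users: list[dict], already_interviewed: list[str]) -> list[dict]:
    """Signups that finished the tutorial and haven't been interviewed yet.
    User fields here are illustrative, not a fixed schema."""
    seen = set(already_interviewed)
    return [u for u in users if u.get("completed_tutorial") and u["id"] not in seen]


def run_daily_interview_job(users, already_interviewed, start_interview) -> dict:
    """For each qualified signup, call the injected start_interview(user)
    (which would wrap POST /api/v1/interviews/start in production) and
    collect the personalized links it returns, keyed by user ID."""
    links = {}
    for user in qualified_signups(users, already_interviewed):
        links[user["id"]] = start_interview(user)
    return links
```

Deduplicating against `already_interviewed` matters here: a daily cron re-scans the same signup cohort, and you don't want to invite (and pay a credit for) the same user twice.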
Pattern C: Headless Research for an AI Agent
A founder builds an AI agent in Claude that supervises customer research. The agent calls Koji over MCP (Model Context Protocol) to read existing studies, draft new ones, generate reports, and import respondents. From the founder's perspective, customer research is a tool the AI uses — not a SaaS the founder logs into. Koji's MCP server (15 tools) is built on top of the same API; everything callable via REST is also callable via MCP.
Pattern D: Embedding Research in a Mobile App
A mobile team wants in-app feedback that goes deeper than NPS. They use the embed widget inside a sheet view, configured for voice interviews. Users speak into their phone; the AI converses, probes, and wraps up in 90 seconds. The team sees themes ranked by frequency, quality-scored, with verbatim quotes — all without anyone scheduling a call.
How Koji's User Research API Compares
| Capability | Survey APIs (Typeform/SurveyMonkey) | Recording APIs (UserTesting/Maze) | Koji User Research API |
|---|---|---|---|
| Conversational AI interviews | No (static form) | Recording-based | Yes (text + voice) |
| Adaptive follow-up probing | No | Manual moderator only | Yes (1–3 per question, autonomous) |
| Structured + qualitative output | Form fields only | Manual tagging | 6 structured types + free-form transcripts |
| Per-interview quality scoring | No | Manual review | Automatic 1–5 composite score |
| Webhook on transcript analysis | Limited | Limited | Yes |
| Voice mode programmatically | No | Recording, not conversational | Yes (3 credits/interview) |
| Embed widget | Form embeds | Iframe recordings | Full interview widget with branding |
| Available on free tier | Free tier exists | Typically enterprise-only | Yes (10 free credits) |
The TL;DR: survey APIs collect answers; recording APIs save videos; Koji's API runs conversations.
Getting Started
- Create a workspace. Sign up at koji.so. You get 10 free credits immediately.
- Generate an API key. Settings → API → Create Key. Copy it once — it's only displayed at creation time.
- Build your first study in the dashboard. It's easier to validate the brief in the UI before going headless. Set the interview mode (`structured`, `exploratory`, or `hybrid`) and add structured questions if you want chartable answers.
- Test POST /api/v1/interviews/start. Use cURL with your key and the study's `projectSlug`. You'll get a `sessionToken` and the first AI greeting.
- Loop messages. Send a participant reply via POST /api/v1/interviews/messages and read the AI's next message. The agent may attach a `widget` field (for `scale`, `single_choice`, etc.) so your client can render the right input.
- Complete and read results. Call POST /api/v1/interviews/complete to finalize. Analysis runs asynchronously (typically under 30 seconds). Listen for the webhook or poll GET /api/v1/interviews/{id} for the quality score and structured answers.
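For the final step, if you poll instead of listening for the webhook, bound the polling loop. A sketch with an injected `fetch` callable standing in for `GET /api/v1/interviews/{id}`; the `qualityScore` response key is assumed from the docs above, so verify it against the real schema.

```python
import time


def poll_interview(fetch, interview_id: str, timeout_s: int = 60,
                   interval_s: int = 2, sleep=time.sleep) -> dict:
    """Poll until analysis results appear or the timeout elapses.

    `fetch(interview_id)` should perform GET /api/v1/interviews/{id} and
    return the decoded JSON. The "qualityScore" key is an assumption;
    webhooks are the preferred production mechanism.
    """
    for _ in range(max(1, timeout_s // interval_s)):
        result = fetch(interview_id)
        if result.get("qualityScore") is not None:
            return result
        sleep(interval_s)
    raise TimeoutError(f"analysis not ready for interview {interview_id}")
```

Since analysis typically finishes in under 30 seconds, a 60-second timeout with a 2-second interval is a reasonable default for development; production systems should prefer the webhook.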
For production, wire up webhooks and a dedicated subdomain for the embed widget. The full API specification, including request/response schemas, is in Headless API Overview.
Frequently Asked Questions
Can I use the API on the free plan? Yes. The API is available on every plan, including the free tier. New accounts get 10 free credits — enough to integrate end-to-end. Headless API access used to be Interviews-tier only; as of March 2026 it's free-tier-included alongside webhooks, voice interviews, and CRM import.
How do I authenticate? Bearer tokens. Authorization: Bearer koji_live_<key>. Keys are workspace-scoped and revocable.
What's the rate limit? 60 requests/minute per key, sliding window. Returns 429 with Retry-After on overflow. Most production workloads stay well below this.
Are voice interviews supported via the API? Yes. Voice interviews cost 3 credits each. You can either use the embed widget (handles audio capture for you) or implement the full WebSocket-based voice protocol yourself.
How do webhook signatures work? Each webhook POST includes an X-Koji-Signature header containing an HMAC-SHA256 of the request body using your workspace's webhook secret. Validate it server-side before processing.
Can I use Koji headlessly with no dashboard at all? Yes. Every dashboard action — create study, edit brief, publish, generate report, export data — is available via the REST API and the MCP server. Several customers operate fully headless via Claude + MCP.
Related Resources
- Headless API Overview — full REST specification and code samples
- Starting Interviews via API — step-by-step POST /interviews/start walkthrough
- Completing Interviews via API — finalization, analysis triggers, and result retrieval
- Embed Widget Reference — JavaScript widget configuration
- Research Automation Webhooks — building real-time research pipelines
- Webhook Setup — signing, retries, and event types
- API Authentication — Bearer token issuance and rotation
- Structured Questions Guide — the 6 question types returned in API responses
- MCP Integration Overview — same primitives, exposed to AI agents
Related Articles
API Authentication
Learn how to authenticate with the Koji API using API keys and Bearer tokens.
Starting Interviews via API
Use the POST /start endpoint to programmatically launch interviews from your application.
Completing Interviews via API
Use the POST /complete endpoint to finish an interview session and trigger automatic analysis.
Research Automation: How to Build Real-Time Research Pipelines with Webhooks
Koji webhooks push interview and report data to your systems the instant something happens — enabling Slack alerts, CRM sync, automated tagging, and fully automated research pipelines that operate without manual intervention.
Webhook Setup
Receive real-time notifications when interviews complete and analysis finishes using webhooks.
Embed Widget Reference
Technical reference for the Koji embed widget including iframe parameters and PostMessage API.
Headless API Overview
Manage interviews programmatically with the Koji REST API — start, message, and complete interviews from your own code.
Structured Questions in AI Interviews
Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.
Koji MCP Integration Overview
Connect Koji to Claude, Cursor, and other AI assistants using the Model Context Protocol (MCP). Manage your entire research workflow conversationally — create studies, run interviews, analyze data, and generate reports without leaving your AI assistant.