
Headless API Overview

Manage interviews programmatically with the Koji REST API — start, message, and complete interviews from your own code.

The Headless API lets you run Koji interviews entirely from your own backend. Instead of sending participants to a link or embedding an iframe, you make REST API calls to start conversations, send messages, and mark interviews as complete. This is ideal for building fully custom interview experiences or integrating Koji into existing workflows.

What Is the Headless API?

Think of it as Koji without the front end. Your application controls the entire flow:

  1. Start an interview by calling the start endpoint. Koji returns a conversation ID and the first message.
  2. Exchange messages by sending participant responses to the message endpoint. Koji replies with follow-up questions.
  3. Complete the interview by calling the complete endpoint. Koji triggers its analysis pipeline and generates insights.

Your participants never interact with Koji's interface directly — they interact with whatever UI you build.

Plan Requirement

The Headless API is available exclusively on the Scale plan. If you are on the Starter or Growth plan, you can use the standard interview link or embed widget to collect responses. See the Plan Comparison Guide for a full breakdown of what each tier includes.

Getting Started

Authentication

All API requests require a Bearer token in the Authorization header. You can generate API keys from your project settings.

Authorization: Bearer YOUR_API_KEY

Your API key must have the appropriate permissions for the endpoints you want to use. See API Authentication for setup instructions.
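As a minimal sketch, a helper that builds the required headers for every request might look like this (the `Content-Type` header is an assumption for JSON request bodies):

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers required for every Headless API request."""
    if not api_key:
        raise ValueError("An API key is required")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```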

Base URL

All endpoints are available at:

https://yourdomain.com/api/v1/interviews

Core Endpoints

Start an Interview

POST /api/v1/interviews/start

Creates a new conversation and returns the opening message.

Request body:

{
  "metadata": {
    "participant_name": "Jane Smith",
    "source": "crm-integration"
  }
}

Response:

{
  "conversation_id": "conv_abc123",
  "message": {
    "role": "assistant",
    "content": "Hi Jane! Thanks for taking the time..."
  }
}

The metadata object is flexible — pass any key-value pairs you want attached to this interview. Common uses include participant identifiers, source tracking, and segmentation data.
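A sketch of building the start request and reading the response, based on the shapes shown above. The helper names and the base URL constant are illustrative, not part of the API; actual HTTP transport is left to whatever client your backend uses:

```python
import json

BASE_URL = "https://yourdomain.com/api/v1/interviews"  # replace with your domain

def build_start_request(metadata: dict) -> tuple[str, bytes]:
    """Return the URL and JSON-encoded body for the start endpoint."""
    body = json.dumps({"metadata": metadata}).encode("utf-8")
    return f"{BASE_URL}/start", body

def parse_start_response(raw: str) -> tuple[str, str]:
    """Extract the conversation ID and opening message from a start response."""
    data = json.loads(raw)
    return data["conversation_id"], data["message"]["content"]
```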

Send a Message

POST /api/v1/interviews/:id/message

Sends the participant's response and receives the next question.

Request body:

{
  "message": "I've been using the product for about six months now..."
}

Response:

{
  "message": {
    "role": "assistant",
    "content": "That's great to hear. What was your initial impression when you first started using it?"
  },
  "turn_count": 3,
  "is_complete": false
}

The is_complete flag tells you whether the interviewer has gathered enough information and is ready to wrap up. You can use this to decide when to call the complete endpoint.
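The turn loop this implies can be sketched as follows. `get_participant_reply`, `send_message`, and `complete` are stand-ins for your own UI input and HTTP calls, injected here so the control flow is visible on its own:

```python
def run_interview(conversation_id, get_participant_reply, send_message, complete):
    """Relay participant replies until the interviewer signals it is done.

    send_message(conversation_id, text) -> dict with "message" and "is_complete"
    complete(conversation_id) -> final status dict from the complete endpoint
    """
    while True:
        reply = get_participant_reply()
        response = send_message(conversation_id, reply)
        if response["is_complete"]:
            # The interviewer has enough information; trigger analysis.
            return complete(conversation_id)
```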

Complete an Interview

POST /api/v1/interviews/:id/complete

Marks the interview as finished and triggers Koji's analysis pipeline.

Response:

{
  "status": "completed",
  "quality_score": 4.2
}

Once completed, the interview appears in your project dashboard with a full transcript, quality score, and generated insights.

Get Interview Details

GET /api/v1/interviews/:id

Retrieve the full transcript and metadata for a specific interview.

Common Use Cases

In-App Feedback

Embed a feedback flow inside your own product. When a user triggers a feedback prompt, your backend starts a Koji interview, then your front end presents the conversation in your own UI components.

CRM Integration

Connect Koji to your CRM pipeline. When a deal reaches a certain stage, automatically trigger an interview with the contact. Responses flow back into the CRM record.

Chatbot Handoff

Route specific conversation topics from your existing chatbot to a Koji interview. The participant continues the conversation naturally while Koji handles the research-quality follow-up questions.

Batch Research

Combine CSV import with the Headless API for large-scale studies. Import your participant list, then programmatically start and manage interviews for each person.
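As one possible shape for the batch case, the sketch below turns a participant CSV into one start-endpoint payload per row. The column names (`name`, `email`) and the `source` value are assumptions; adapt them to your own CSV schema:

```python
import csv
import io

def start_payloads_from_csv(csv_text: str) -> list[dict]:
    """Build one start-endpoint request body per participant row."""
    payloads = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        payloads.append({
            "metadata": {
                "participant_name": row["name"],
                "participant_email": row["email"],
                "source": "batch-import",
            }
        })
    return payloads
```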

Rate Limiting

The API enforces rate limits to ensure fair usage across all customers. If you exceed the limit, you will receive a 429 Too Many Requests response with a Retry-After header. Build retry logic into your integration to handle this gracefully.
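A minimal retry sketch that honors the `Retry-After` header; `do_request` is a stand-in for your HTTP call, and `sleep` is injectable so the logic can be exercised without waiting:

```python
import time

def request_with_retry(do_request, max_retries: int = 3, sleep=time.sleep):
    """Call do_request(), retrying when the API answers 429.

    do_request() -> (status_code, headers, body). Waits for the number of
    seconds given in Retry-After (defaulting to 1) between attempts.
    """
    for attempt in range(max_retries + 1):
        status, headers, body = do_request()
        if status != 429 or attempt == max_retries:
            return status, body
        sleep(int(headers.get("Retry-After", 1)))
```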

Error Handling

All error responses follow a consistent format:

{
  "error": "Description of what went wrong"
}

Common status codes:

Code   Meaning
401    Missing or invalid API key
403    API key lacks the required permission, or origin not allowed
404    Interview not found
429    Rate limit exceeded
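Since every error body uses the same format, a single helper can turn failed responses into exceptions. The `KojiAPIError` class here is a hypothetical wrapper, not something the API ships:

```python
import json

class KojiAPIError(Exception):
    """Hypothetical exception wrapping a Headless API error response."""
    def __init__(self, status: int, message: str):
        super().__init__(f"{status}: {message}")
        self.status = status
        self.message = message

def raise_for_error(status: int, body: str) -> None:
    """Raise KojiAPIError for non-2xx responses using the documented format."""
    if 200 <= status < 300:
        return
    try:
        message = json.loads(body).get("error", "Unknown error")
    except (ValueError, AttributeError):
        message = "Unknown error"
    raise KojiAPIError(status, message)
```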

Best Practices

  1. Store conversation IDs. Save the conversation_id returned by the start endpoint so you can continue the conversation and retrieve results later.
  2. Respect the is_complete flag. When the response indicates the interview is ready to wrap up, call the complete endpoint to trigger analysis.
  3. Handle errors gracefully. Implement retry logic for transient failures and surface clear error messages to your users.
  4. Secure your API key. Never expose your API key in client-side code. All API calls should go through your backend.

Next Steps