{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-10T22:43:34.381Z"},"content":[{"type":"documentation","id":"e7d042a6-1597-40d7-8b75-63a4e8f1cb58","slug":"mcp-setup-cursor","title":"Connect Koji to Cursor: MCP Setup Guide for Product Engineers","url":"https://www.koji.so/docs/mcp-setup-cursor","summary":"Connect the Koji MCP server to Cursor (0.45+) by adding a single block to ~/.cursor/mcp.json with a bearer token. Once connected, Cursor can call all 15 Koji MCP tools — listing studies, pulling transcripts, fetching structured answers, generating reports, and even creating new studies — directly from any chat or Composer session. This brings live customer evidence into the editor for spec writing, ticket grooming, pre-deploy sentiment checks, and stakeholder report generation, with no copy-pasting and full quality-gate filtering.","content":"\nCursor's Model Context Protocol (MCP) support turns the editor into a research-aware coding environment. By connecting Koji's MCP server to Cursor, your AI assistant gains direct access to live customer interview transcripts, structured answers, themes, quality scores, and full study reports — without copy-pasting a single quote.\n\nThis guide walks through the full setup, the most useful Koji + Cursor workflows, and the gotchas to know about authentication and tool selection.\n\n## Why Connect Koji to Cursor\n\nMost teams already use Cursor to write code. Most teams also have customer feedback scattered across Slack threads, Notion docs, and a forgotten interview tool. 
The MCP integration closes that gap: Cursor can query your Koji workspace the same way it queries the filesystem.\n\nConcretely, this unlocks four high-value workflows for product engineers:\n\n- **Context-aware feature work.** Before adding a new onboarding step, ask Cursor \"what did the last 20 interview participants say about onboarding friction?\" and it pulls the actual quotes.\n- **PRDs grounded in evidence.** Generate a one-page spec that cites real interview transcripts instead of speculative bullet points.\n- **Triage with proof.** When grooming the backlog, Cursor can rank tickets against the themes appearing most frequently in published Koji reports.\n- **Post-launch sense-check.** After shipping, ask Cursor to summarise interviews collected since the deploy and flag regressions in sentiment or quality scores.\n\nThis is a wedge no traditional research tool can offer. SurveyMonkey, Typeform, and Qualtrics still expect you to log into a separate dashboard. Koji's MCP server brings the data to where engineers already work.\n\n## Prerequisites\n\nBefore you start, make sure you have:\n\n- A Koji workspace with at least one published study and a few completed interviews to query.\n- Cursor 0.45 or later. Earlier versions do not support MCP.\n- A Koji API key. Generate one in **Settings → API Keys** in your workspace. See [Managing API Keys](/docs/managing-api-keys) for guidance on scopes and rotation.\n- Familiarity with editing JSON config files. The setup is one short paste.\n\nThe Koji MCP server is included on every paid plan and on the Free plan for evaluation. Tool calls are not metered separately — they share the standard interview credit pool described in [Plan Comparison Guide](/docs/plan-comparison-guide).\n\n## Step 1: Open Your Cursor MCP Config\n\nCursor stores MCP server definitions in a JSON file. 
The location depends on your operating system:\n\n- macOS / Linux: `~/.cursor/mcp.json`\n- Windows: `%USERPROFILE%\\.cursor\\mcp.json`\n\nIf the file does not exist, create it. You can also open the same config from inside Cursor via **Cursor → Settings → MCP → Edit Config**.\n\n## Step 2: Add the Koji MCP Server Block\n\nPaste the following into your `mcp.json`. Replace `YOUR_KOJI_API_KEY` with the key you generated above.\n\n```json\n{\n  \"mcpServers\": {\n    \"koji\": {\n      \"url\": \"https://koji.so/api/mcp\",\n      \"headers\": {\n        \"Authorization\": \"Bearer YOUR_KOJI_API_KEY\"\n      }\n    }\n  }\n}\n```\n\nIf you already have other MCP servers configured (GitHub, Linear, filesystem, etc.), merge the `koji` entry into your existing `mcpServers` object rather than replacing the file.\n\nSave the file and restart Cursor. The first time you open a chat after the restart, Cursor will list `koji` in its connected servers panel and surface its 15 tools.\n\n## Step 3: Verify the Connection\n\nOpen a new chat in Cursor (Cmd/Ctrl + L) and type:\n\n```\n@koji list my recent studies\n```\n\nCursor should call `koji_list_studies` and return your study titles, statuses, and interview counts. If it reports an authentication error, double-check the bearer token in `mcp.json` — a stray space or quote is the most common culprit.\n\nFor a deeper smoke test, try:\n\n```\n@koji summarise the most common themes from the most recent published study\n```\n\nThis chains `koji_list_studies`, `koji_get_study`, and `koji_get_interviews` automatically. The model picks the right tool for each step — you do not need to name them explicitly. 
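A typical chain for that smoke-test prompt looks roughly like:\n\n```\nkoji_list_studies   → find the most recent published study\nkoji_get_study      → fetch its metadata and themes\nkoji_get_interviews → pull the interviews to summarise\n```\n\n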
See [MCP Tool Reference](/docs/mcp-tool-reference) for the full list.\n\n## The 15 Koji Tools You Now Have in Cursor\n\nOnce connected, Cursor can call any tool the Koji MCP server exposes:\n\n**Read tools** — `koji_list_studies`, `koji_get_study`, `koji_get_interviews`, `koji_get_transcript`, `koji_get_account`, `koji_get_study_data`, `koji_get_report`\n\n**Write and analysis tools** — `koji_create_study`, `koji_update_brief`, `koji_publish_study`, `koji_generate_report`, `koji_publish_report`, `koji_configure_study`, `koji_export_data`, `koji_import_respondents`\n\nThe read tools are safe to call freely. The write tools modify your workspace, so configure Cursor's tool-approval policy to require confirmation before they run. See [MCP Authentication and Security](/docs/mcp-authentication-security).\n\n## Recommended Cursor + Koji Workflows\n\nThe integration works best when you anchor it to a concrete coding task. Here are five patterns teams ship with on day one.\n\n### 1. Writing a feature spec from interview evidence\n\nOpen the spec doc and prompt Cursor:\n\n```\nDraft a one-page PRD for a new onboarding checklist. Cite at least\nfive direct quotes from the most recent onboarding study in Koji,\nand group user pain points by theme.\n```\n\nCursor will fetch transcripts and structured answers, then generate the spec with the quotes inline. The structured answers come from Koji's six question types (`open_ended`, `scale`, `single_choice`, `multiple_choice`, `ranking`, `yes_no`) — see [Structured Questions in AI Interviews](/docs/structured-questions-guide) for how those become chartable data the model can reason over.\n\n### 2. Composer-driven ticket grooming\n\nOpen Composer (Cmd/Ctrl + I) on your `tickets/` directory and prompt:\n\n```\nFor each open ticket in this folder, score 1–5 how strongly the\nlast 30 days of Koji interviews support shipping it. 
Cite the\nthemes that match.\n```\n\nThe model walks each ticket, calls `koji_get_study_data` for theme aggregations, and writes the score plus a short rationale into the ticket file.\n\n### 3. Pre-deploy regression check\n\nBefore merging, ask Cursor to scan recent interview themes for any spike in negative sentiment. Useful when you ship onboarding, pricing, or auth changes — exactly the surfaces participants comment on most.\n\n### 4. Generating a stakeholder summary\n\nAfter a study concludes, prompt:\n\n```\nGenerate a stakeholder report for the latest study, then publish it.\n```\n\nThis chains `koji_generate_report` and `koji_publish_report`. The published report URL comes back inline in Cursor — paste into Slack and you are done. This is the same flow described in [Real-Time Research Insights](/docs/real-time-research-insights), but driven from your editor.\n\n### 5. Creating a quick discovery study from a code TODO\n\nHighlight a `// TODO: validate this with users` comment, open chat, and prompt:\n\n```\nCreate a 5-minute Koji study to validate this assumption with 20\nmobile users. Use one open_ended question and one scale question\nfor confidence.\n```\n\nCursor calls `koji_create_study` and returns a shareable interview URL. From idea to live study in under a minute.\n\n## Troubleshooting\n\n**\"Tool not found\" errors.** Restart Cursor — MCP servers are only registered on startup. If the issue persists, run `cat ~/.cursor/mcp.json | jq .` to confirm the JSON is valid.\n\n**Authentication failures.** Bearer tokens are sensitive; never paste them into a shared `.cursor` config. Use Cursor's environment-variable substitution (`\"Authorization\": \"Bearer ${env:KOJI_API_KEY}\"`) and load the key from your shell.\n\n**Tool selection drift.** If Cursor calls `koji_get_interviews` when you wanted `koji_get_study_data`, name the tool explicitly in your prompt. 
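For example:\n\n```\n@koji use koji_get_study_data to aggregate themes for the latest study\n```\n\n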
The model improves with corrections inside the same chat.\n\n**Rate limits.** The Koji MCP endpoint shares the standard API rate limits documented in [Rate Limits and CORS](/docs/rate-limits-and-cors). Heavy bulk reads should batch by `limit` and `cursor`.\n\n## How This Compares to Pasting Transcripts\n\nSome teams still copy interview snippets into Cursor by hand. That works for a single decision but breaks down at scale: stale data, missing quality scores, and no way to filter by theme or sentiment.\n\nThe MCP integration eliminates all three. Cursor sees the same authoritative data your researchers see, scoped by your API key, with quality scores that automatically suppress low-effort responses (only interviews scoring 3 or above are surfaced — see [How the Quality Gate Works](/docs/how-the-quality-gate-works)).\n\nThat is the difference between \"AI with vibes\" and \"AI with evidence.\"\n\n## Related Resources\n\n- [Koji MCP Integration Overview](/docs/mcp-overview)\n- [Connect Koji to Claude (Setup Guide)](/docs/mcp-setup-claude)\n- [MCP Tool Reference](/docs/mcp-tool-reference)\n- [MCP Authentication and Security](/docs/mcp-authentication-security)\n- [MCP Best Practices](/docs/mcp-best-practices)\n- [MCP Workflow Guide for Product Managers](/docs/mcp-workflow-product-managers)\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide)\n","category":"Claude & MCP Integration","lastModified":"2026-05-10T03:23:03.624944+00:00","metaTitle":"Connect Koji to Cursor: MCP Setup Guide (5 Minutes)","metaDescription":"Step-by-step guide to connect Koji to Cursor via MCP so engineers can pull live customer interview insights, transcripts, and themes directly into the editor.","keywords":["cursor mcp","koji mcp cursor","cursor user research","mcp ide research","cursor ai customer feedback","mcp setup cursor"],"aiSummary":"Connect the Koji MCP server to Cursor (0.45+) by adding a single block to ~/.cursor/mcp.json with a bearer token. 
Once connected, Cursor can call all 15 Koji MCP tools — listing studies, pulling transcripts, fetching structured answers, generating reports, and even creating new studies — directly from any chat or Composer session. This brings live customer evidence into the editor for spec writing, ticket grooming, pre-deploy sentiment checks, and stakeholder report generation, with no copy-pasting and full quality-gate filtering.","aiPrerequisites":["Koji workspace with at least one published study","Cursor 0.45 or later installed","A Koji API key generated from Settings"],"aiLearningOutcomes":["Add the Koji MCP server to Cursor in under 5 minutes","Verify the connection with a smoke-test query","Use all 15 Koji tools (read and write) inside Cursor chat and Composer","Run 5 production workflows: PRD writing, ticket grooming, regression checks, report generation, study creation","Troubleshoot auth, tool selection, and rate-limit issues"],"aiDifficulty":"beginner","aiEstimatedTime":"5 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}