{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-16T08:07:38.842Z"},"content":[{"type":"documentation","id":"0c4fd10b-a316-4f09-8e6c-fe06b9c51c17","slug":"survey-question-types","title":"Survey Question Types: The Complete Guide to 14 Question Types with Examples (2026)","url":"https://www.koji.so/docs/survey-question-types","summary":"A complete reference of all 14 survey question types — open-ended, dichotomous, single-choice, multi-select, Likert, rating, ranking, matrix, semantic differential, slider, NPS, demographic, constant sum, and visual — with real examples, when-to-use guidance, and common pitfalls. Includes a modern AI-native approach that collapses the taxonomy into 6 adaptive structured question types with real-time follow-up probing.","content":"## The complete list of survey question types\n\nSurvey questions fall into two parents and fourteen children. The two parents are **open-ended** (qualitative — respondents write in their own words) and **closed-ended** (quantitative — respondents pick from defined options). Every survey question type is a variation on those two ideas, optimized for a specific data goal.\n\nHere is the complete map, ranked by how often each shows up in real research:\n\n| # | Question Type | Parent | Best for |\n| - | --- | --- | --- |\n| 1 | Open-ended (free text / voice) | Open | Discovery, reasoning, unexpected themes |\n| 2 | Dichotomous (Yes/No) | Closed | Eligibility, simple gates, screening |\n| 3 | Single-choice (multiple choice) | Closed | Single pick from a list of options |\n| 4 | Multi-select (multiple choice, multi-answer) | Closed | \"All that apply\" behavior or attribute capture |\n| 5 | Likert scale (agreement) | Closed | Attitudes, beliefs, satisfaction |\n| 6 | Rating scale (numeric / star) | Closed | Performance, ease, sentiment intensity |\n| 7 | Ranking | Closed | Forced trade-offs between options |\n| 8 | Matrix / grid | Closed | Bulk attribute rating across many items |\n| 9 | Semantic differential | Closed | Brand perception, emotion, aesthetic ratings |\n| 10 | Slider | Closed | Continuous numeric input |\n| 11 | NPS (Net Promoter) | Closed | Loyalty and word-of-mouth intent |\n| 12 | Demographic / firmographic | Mostly closed | Segmenting respondents |\n| 13 | Constant sum | Closed | Allocating budget, time, or attention |\n| 14 | Image / visual | Either | Concept testing, design preference |\n\nThe rest of this guide walks through each type — when to use it, how to write it well, the pitfalls, and how AI-native research platforms like Koji collapse this entire taxonomy into a single conversational flow with **6 structured question types** (open_ended, scale, single_choice, multiple_choice, ranking, yes_no — see our [structured questions guide](/docs/structured-questions-guide)).\n\n## 1. Open-ended questions\n\n**Definition:** Respondents answer in their own words — no predefined options.\n\n**Examples:**\n- \"Walk me through the last time you tried to do X.\"\n- \"What is the single biggest frustration with your current workflow?\"\n- \"If you could change one thing about [product], what would it be and why?\"\n\n**When to use:** Discovery, root-cause exploration, capturing unexpected language. Open-ended is the only question type that surfaces what you did not already think to ask about.\n\n**Pitfalls:**\n- Low response rate. 
## 4. Multi-select (multiple choice, multi-answer)\n\n**Definition:** Respondent picks all options that apply.\n\n**Examples:**\n- \"Which of these tools do you currently use? (Select all that apply)\"\n- \"What features matter most to you? (Select up to 3)\"\n\n**When to use:** Capturing behavior or attributes where multiple states are simultaneously true.\n\n**Pitfalls:**\n- **List length matters.** Respondents tire by item 8 and skim the rest — keep multi-select lists under 10 items where possible.\n- **Single-select disguised as multi-select.** If most people will logically pick one, use single-choice — multi-select adds noise.\n- **No cap = no signal.** \"Select all that apply\" without a cap often results in 4–7 selections that are hard to prioritize. Adding \"Select your top 3\" forces clarity.\n\n## 5. Likert scale questions\n\n**Definition:** Statement + a balanced agreement scale, typically 5 or 7 points: Strongly Disagree → Strongly Agree.\n\n**Examples:**\n- \"The onboarding process made it easy to find the features I needed.\"\n  - Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree\n\n**When to use:** Attitudes, beliefs, satisfaction, perceptions. Likert is the workhorse of attitudinal research.\n\n**Pitfalls:**\n- **Acquiescence bias.** People drift toward \"Agree\" if the question is framed positively. Balance with reverse-coded items.\n- **5 vs 7 points.** 5-point is faster; 7-point captures more nuance for analytic teams. Pick one and stick to it across the survey.\n- **Neutral midpoint.** Including a \"Neither agree nor disagree\" option respects the respondent but invites fence-sitting. Forced-choice (no neutral) increases polarization in data but irritates respondents.\n\nSee our [Likert scale research guide](/docs/likert-scale-research-guide) for a full breakdown.\n\n
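If you do balance the scale with reverse-coded items, remember to flip them back onto a shared direction before averaging. A minimal sketch, assuming items are coded 1–5 with higher meaning more agreement; the item keys are made up for illustration:\n\n```python\ndef reverse_code(score, scale_min=1, scale_max=5):\n    # On a 1-5 scale this maps 1 <-> 5 and 2 <-> 4, leaving the midpoint at 3.\n    return scale_max + scale_min - score\n\n# A positively worded item alongside its reverse-worded twin:\n# 'Onboarding made it easy to find features' vs. 'I struggled to find features'.\naligned = {'easy_to_find': 4, 'struggled_to_find': reverse_code(2)}\nprint(aligned)  # both now point the same way: {'easy_to_find': 4, 'struggled_to_find': 4}\n```\n\n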
## 6. Rating scale questions (numeric, star, smiley)\n\n**Definition:** Numeric or visual scale, typically 1–5 or 1–10.\n\n**Examples:**\n- \"How likely are you to recommend us to a colleague?\" (0–10)\n- \"Rate your overall satisfaction\" (1–5 stars)\n- \"How easy was it to complete this task?\" (1–7)\n\n**When to use:** Sentiment intensity, performance, ease metrics like CSAT, CES, and SUS.\n\n**Pitfalls:**\n- **End-aversion.** Many respondents avoid the extreme points of the scale (e.g., 1 and 7 on a 1–7 scale), compressing the usable range.\n- **Cultural variation.** Respondents in different countries use scales differently — direct comparisons across geographies are risky without normalization.\n- See [Customer Effort Score](/docs/customer-effort-score-guide) and [System Usability Scale](/docs/system-usability-scale-guide) for two standardized rating scales worth adopting.\n\n## 7. Ranking questions\n\n**Definition:** Respondent orders a list of items by preference, importance, or frequency.\n\n**Examples:**\n- \"Rank these features from most to least important to your team.\"\n- \"Order these channels from most to least frequently used.\"\n\n**When to use:** Forcing trade-offs. Unlike ratings — where everything can be \"very important\" — ranking forces relative priority.\n\n**Pitfalls:**\n- **Cognitive load.** Ranking 4 items is fine; ranking 10 is exhausting and produces noisy data. Cap at 5–7 items.\n- **Top-of-list bias.** Respondents rank the first few items carefully and randomize the rest.\n- **No \"tie.\"** Force-ranked lists hide genuinely equivalent items. For high-stakes prioritization, supplement with rating scales.\n\nFor a deeper dive, see our [choice and ranking questions guide](/docs/choice-ranking-questions-guide).\n\n## 8. Matrix / grid questions\n\n**Definition:** Multiple related Likert or rating items in a grid, sharing the same response scale.\n\n**Examples:**\n- \"Rate the following on a 1–5 scale: Onboarding, Pricing, Support, Documentation, Reliability\"\n\n**When to use:** Efficient bulk attribute rating, especially when items share a comparable scale.\n\n**Pitfalls:**\n- **Straight-lining.** Respondents pick the same column for all rows without reading — a pattern that accounts for **15–30% of matrix data quality issues** in large surveys.\n- **Mobile experience.** Matrix grids break on small screens and inflate drop-off.\n- **Hidden survey length.** A matrix of 10 rows × 5 columns is technically one question but feels like 10 — and behaves like 10 in fatigue analysis.\n\n
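Straight-lining is easy to screen for once responses are exported. A minimal sketch, assuming each respondent's grid answers arrive as one numeric rating per row; the four-row minimum is a judgment call, not a standard:\n\n```python\ndef is_straight_lining(row_ratings, min_rows=4):\n    # Flag respondents who gave the identical rating to every row of a grid,\n    # a common sign the matrix was answered without reading. Short grids are\n    # exempt: identical answers across two or three rows can be genuine.\n    return len(row_ratings) >= min_rows and len(set(row_ratings)) == 1\n\ngrid = {'Onboarding': 4, 'Pricing': 4, 'Support': 4, 'Documentation': 4, 'Reliability': 4}\nprint(is_straight_lining(list(grid.values())))  # True -> review or down-weight this respondent\n```\n\nFlagged respondents are candidates for review or down-weighting, not automatic deletion — a genuinely consistent experience can produce a flat row of 4s.\n\n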
## 9. Semantic differential\n\n**Definition:** A bipolar scale anchored by opposing adjectives.\n\n**Examples:**\n- \"How would you describe our brand?\"\n  - Innovative ←→ Traditional\n  - Friendly ←→ Cold\n  - Expensive ←→ Affordable\n\n**When to use:** Brand perception, emotional response, aesthetic ratings.\n\n**Pitfalls:**\n- **Anchor choice matters.** \"Affordable\" vs \"Cheap\" measure totally different concepts — choose anchors precisely.\n- **Cross-respondent comparability.** What counts as \"Innovative\" varies across people. Best paired with open-ended follow-ups.\n\n## 10. Slider questions\n\n**Definition:** A continuous slider input, typically 0–100.\n\n**Examples:**\n- \"Drag the slider to indicate the percentage of your week spent on manual reporting.\"\n\n**When to use:** Continuous numeric estimates where the gradient matters.\n\n**Pitfalls:**\n- **Default position bias.** Sliders pre-set at 50 will be left at 50 by lazy respondents — randomize the start position.\n- **False precision.** Sliders create the illusion of high precision on data that may be a rough guess.\n\n## 11. Net Promoter Score (NPS)\n\n**Definition:** \"How likely are you to recommend us to a colleague?\" on a 0–10 scale, segmented into Detractors (0–6), Passives (7–8), and Promoters (9–10).\n\n**When to use:** Tracking loyalty and word-of-mouth intent over time. NPS is most useful as a **trend** within your own customer base, not as a cross-industry benchmark.\n\n**Pitfalls:**\n- **The number is not the insight.** A 35 NPS without a follow-up \"Why?\" tells you nothing actionable.\n- **Cultural scale variation.** US respondents use the top of the scale much more freely than European or Japanese respondents — comparing global NPS scores without controlling for this is misleading.\n\nFor the right way to use NPS, see our [NPS survey guide](/docs/nps-survey-guide) and [NPS follow-up interviews](/docs/nps-follow-up-interviews) — pairing the score with a follow-up \"Why?\" is where the value comes from.\n\n
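The Detractor/Passive/Promoter segmentation translates directly into arithmetic. A minimal sketch over a made-up list of raw 0–10 scores:\n\n```python\ndef nps(scores):\n    # Net Promoter Score: percent Promoters (9-10) minus percent Detractors (0-6).\n    # Passives (7-8) count in the denominator but in neither group.\n    promoters = sum(1 for s in scores if s >= 9)\n    detractors = sum(1 for s in scores if s <= 6)\n    return round(100 * (promoters - detractors) / len(scores))\n\nprint(nps([10, 10, 9, 9, 9, 10, 8, 7, 6, 5]))  # 6 promoters, 2 detractors, n=10 -> 40\n```\n\n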
## 12. Demographic and firmographic questions\n\n**Definition:** Categorical questions that segment respondents — age, gender, role, company size, industry, geography.\n\n**Best practices:**\n- **Ask only what you will use.** Every demographic question costs response rate. If you will not segment by it, do not ask.\n- **Put them at the end** of consumer surveys (they feel intrusive upfront) but **at the start** of B2B surveys (so you can branch logic based on role/company size).\n- **Include \"Prefer not to say\"** for sensitive demographics — required by privacy regulations in many jurisdictions.\n- **Use ranges, not free text** for age, income, and company size to reduce drop-off and improve comparability.\n\n## 13. Constant sum questions\n\n**Definition:** Respondent allocates a fixed total (often 100 points or $100) across multiple options.\n\n**Examples:**\n- \"Allocate 100 points across these 5 features based on how important each is to you.\"\n\n**When to use:** Forced budget allocation — pricing research, feature prioritization, time allocation.\n\n**Pitfalls:**\n- High cognitive load — respondents drop off fast.\n- Math errors — many surveys fail to validate that allocations sum to the target.\n- Better suited to motivated, high-context respondents (e.g., customer panels, expert reviews) than cold outreach.\n\n
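The sum check is a few lines of validation, sketched below; the function and option names are illustrative, not any survey tool's API:\n\n```python\ndef validate_constant_sum(allocations, target=100):\n    # Reject allocations that do not add up to the stated total -- the\n    # validation step many survey tools skip.\n    total = sum(allocations.values())\n    if total != target:\n        raise ValueError(f'Allocations sum to {total}, expected {target}')\n    return allocations\n\n# Passes silently; a total of 99 or 105 would raise at submission time.\nvalidate_constant_sum({'Feature A': 40, 'Feature B': 30, 'Feature C': 20, 'Feature D': 5, 'Feature E': 5})\n```\n\n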
## 14. Image, video, and visual questions\n\n**Definition:** Respondents react to an image, video, mockup, or design.\n\n**Examples:**\n- \"Which of these landing pages feels more trustworthy?\"\n- \"Watch this 30-second concept video and tell us what you think.\"\n\n**When to use:** Concept testing, brand and creative validation, prototype testing.\n\n**Pitfalls:**\n- Stimulus quality matters — a low-fidelity sketch will be judged on the sketch, not the idea.\n- Always ask \"Why?\" after a visual reaction question — the *reason* is the insight.\n\nSee our [concept testing methodology](/docs/concept-testing-methodology) and [prototype testing concept validation](/docs/prototype-testing-concept-validation) for more.\n\n## The two-question taxonomy: open vs closed\n\nUnderneath all 14 types is a simple distinction:\n\n- **Open-ended questions** are written, qualitative answers in the respondent's own words. They surface the unexpected but require manual or AI-assisted analysis.\n- **Closed-ended questions** use predefined response options to produce structured, quantitative data — measurable, comparable, fast to analyze.\n\nThe best surveys mix both. As one comprehensive analysis from Kantar puts it, closed-ended questions offer measurable, comparable data — researchers can calculate percentages, averages, and trends, and cross-tabulate responses by demographics or behaviors to uncover meaningful patterns. Open-ended questions tell you *why*.\n\n## How AI-native research collapses the taxonomy\n\nThe 14-type taxonomy above is a legacy of paper-and-clipboard surveys. Modern AI-moderated interviews — like Koji — collapse the distinction by running an **adaptive conversation** that uses structured question types when they are the right tool and switches to open-ended probing when depth is needed.\n\nKoji uses **6 structured question types** that map cleanly onto the most-used legacy types:\n\n| Koji Type | Replaces | When |\n| --- | --- | --- |\n| **open_ended** | Open-ended free text | Discovery, \"why,\" root cause |\n| **scale** | Likert, rating, NPS, slider | Sentiment, satisfaction, intensity |\n| **single_choice** | Single-choice, dichotomous (as 2-option) | Mutually exclusive picks |\n| **multiple_choice** | Multi-select | \"All that apply\" attributes |\n| **ranking** | Ranking, constant sum (lite) | Forced prioritization |\n| **yes_no** | Dichotomous | Eligibility, gates, simple facts |\n\nFor a deeper breakdown of each, see [structured questions in AI interviews](/docs/structured-questions-guide).\n\nThe difference from a static survey: Koji can ask a Likert question, see a low score, and **automatically probe with an open-ended follow-up** — collecting the *why* in the same conversation. Traditional surveys force you to pick a type up front and live with the limits.\n\n### Traditional survey vs Koji-style adaptive interview\n\n| Capability | SurveyMonkey / Typeform | Koji |\n| --- | --- | --- |\n| Question type variety | 14+ types | 6 structured + AI follow-up |\n| Adaptive follow-up | Skip logic only | Real-time AI probing |\n| Capture verbatim \"why\" | Optional open-ended (low response) | Built into every flow |\n| Multilingual | Translation per question | Native multi-language voice & text |\n| Time to insight | Hours to days (manual analysis) | Minutes (auto thematic analysis) |\n| Real \"why\" data | ~20–40% response rate | ~80%+ via probing |\n\nFor a side-by-side, see [Koji vs SurveyMonkey](/docs/koji-vs-surveymonkey) and [Koji vs Typeform](/docs/koji-vs-typeform).\n\n## A modern survey question template\n\nUse this template when designing your next study. The order matters — it minimizes fatigue and drop-off.\n\n1. **Screener (Yes/No or single-choice):** \"Are you a [target user]?\"\n2. **Behavioral anchor (open-ended):** \"Tell me about the last time you [did X].\"\n3. **Closed quantification (scale or Likert):** \"How easy was that to do?\"\n4. **Adaptive probe (open-ended):** \"What made it hard?\" or \"What made it easy?\"\n5. **Prioritization (ranking):** \"Rank these 4 improvements from most to least valuable.\"\n6. **Demographics (single-choice, at the end):** Role, company size, etc.\n7. **Optional open-ended (open):** \"Anything else we should know?\"\n\nThis pattern — anchor with behavior, quantify with a scale, probe with an open-ended, prioritize with ranking — is the spine of high-signal research. Koji automates this entire pattern with intelligent moderation.\n\n
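To make the shape concrete, here is the same template expressed as a hypothetical study definition. Only the six type values come from this guide; every other field name is invented for illustration and is not Koji's actual schema:\n\n```python\n# Hypothetical study definition following the 7-step template above.\n# The 'type' values mirror the six structured question types; the other\n# field names are illustrative assumptions, not Koji's real schema.\nstudy = [\n    {'type': 'yes_no', 'text': 'Are you a [target user]?', 'screener': True},\n    {'type': 'open_ended', 'text': 'Tell me about the last time you [did X].'},\n    {'type': 'scale', 'text': 'How easy was that to do?', 'min': 1, 'max': 7},\n    {'type': 'open_ended', 'text': 'What made it hard, or what made it easy?', 'probe': True},\n    {'type': 'ranking', 'text': 'Rank these 4 improvements from most to least valuable.',\n     'options': ['Improvement A', 'Improvement B', 'Improvement C', 'Improvement D']},\n    {'type': 'single_choice', 'text': 'Which of the following best describes your role?',\n     'options': ['IC', 'Manager', 'Director', 'VP+', 'Founder']},\n    {'type': 'open_ended', 'text': 'Anything else we should know?', 'optional': True},\n]\n```\n\n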
## Common mistakes across all question types\n\n1. **Double-barreled questions:** \"How satisfied are you with our pricing and support?\" forces one answer to two things. Split them.\n2. **Leading questions:** \"How much do you love our new redesign?\" assumes the answer. See [avoiding bias in interviews](/docs/avoiding-bias-in-interviews) and [research bias guide](/docs/research-bias-guide).\n3. **Loaded language:** \"Should we continue our excellent customer service?\" — biased adjective.\n4. **Asking about hypotheticals when behavior is available:** \"Would you use a feature that does X?\" is far weaker than \"Have you done X before, and how?\"\n5. **Burying the headline:** Putting your most important question on page 4, after fatigue has set in.\n6. **Asking what you cannot act on:** If you cannot do anything with the answer, do not ask the question.\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the 6 Koji question types that replace 14 legacy types\n- [Survey Design Best Practices](/docs/survey-design-best-practices) — end-to-end principles for high-quality surveys\n- [Likert Scale Research Guide](/docs/likert-scale-research-guide) — deep dive on the most-used rating type\n- [Open-Ended Interview Questions: 100+ Examples](/docs/open-ended-interview-questions) — the qualitative companion\n- [Choice and Ranking Questions Guide](/docs/choice-ranking-questions-guide) — capturing preference data at scale\n- [How to Write Great Interview Questions](/docs/writing-interview-questions) — applies to surveys too\n- [How to Analyze Open-Ended Survey Responses with AI](/docs/ai-analyze-open-ended-survey-responses) — what to do with all that free-text data\n- [Surveys vs. Interviews](/docs/survey-vs-interview) — when to use each method","category":"Research Methods","lastModified":"2026-05-16T03:22:07.0065+00:00","metaTitle":"Survey Question Types: The Complete Guide to 14 Types with Examples (2026)","metaDescription":"Every survey question type explained — open-ended, Likert, ranking, matrix, NPS, semantic differential, and more. Real examples, pitfalls to avoid, and the AI-native approach that collapses 14 types into 6 adaptive ones.","keywords":["survey question types","types of survey questions","closed ended questions","open ended questions","Likert scale","ranking questions","multiple choice questions","dichotomous questions","semantic differential","survey question examples"],"aiSummary":"A complete reference of all 14 survey question types — open-ended, dichotomous, single-choice, multi-select, Likert, rating, ranking, matrix, semantic differential, slider, NPS, demographic, constant sum, and visual — with real examples, when-to-use guidance, and common pitfalls. Includes a modern AI-native approach that collapses the taxonomy into 6 adaptive structured question types with real-time follow-up probing.","aiPrerequisites":["user-interview-questions","survey-design-best-practices"],"aiLearningOutcomes":["Identify the right question type for any data goal","Avoid the bias and fatigue pitfalls of each question type","Write balanced Likert scales, unbiased multiple-choice options, and effective ranking questions","Design a high-signal survey using the 7-step modern template","Translate traditional question types into Koji's 6 adaptive structured question types"],"aiDifficulty":"beginner","aiEstimatedTime":"15 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}