Research Methods

Survey Question Types: The Complete Guide to 14 Question Types with Examples (2026)

A complete reference of every survey question type — open-ended, closed-ended, Likert, matrix, ranking, semantic differential, and more. When to use each, real examples, common pitfalls, and the AI-native approach that combines them all in one conversation.

The complete list of survey question types

Survey questions fall into two parents and fourteen children. The two parents are open-ended (qualitative — respondents write in their own words) and closed-ended (quantitative — respondents pick from defined options). Every survey question type is a variation on those two ideas, optimized for a specific data goal.

Here is the complete map, ranked by how often each shows up in real research:

| # | Question type | Parent | Best for |
|---|---------------|--------|----------|
| 1 | Open-ended (free text / voice) | Open | Discovery, reasoning, unexpected themes |
| 2 | Dichotomous (Yes/No) | Closed | Eligibility, simple gates, screening |
| 3 | Single-choice (multiple choice) | Closed | Single pick from a list of options |
| 4 | Multi-select (multiple choice, multi-answer) | Closed | "All that apply" behavior or attribute capture |
| 5 | Likert scale (agreement) | Closed | Attitudes, beliefs, satisfaction |
| 6 | Rating scale (numeric / star) | Closed | Performance, ease, sentiment intensity |
| 7 | Ranking | Closed | Forced trade-offs between options |
| 8 | Matrix / grid | Closed | Bulk attribute rating across many items |
| 9 | Semantic differential | Closed | Brand perception, emotion, aesthetic ratings |
| 10 | Slider | Closed | Continuous numeric input |
| 11 | NPS (Net Promoter) | Closed | Loyalty and word-of-mouth intent |
| 12 | Demographic / firmographic | Mostly closed | Segmenting respondents |
| 13 | Constant sum | Closed | Allocating budget, time, or attention |
| 14 | Image / visual | Either | Concept testing, design preference |

The rest of this guide walks through each type — when to use it, how to write it well, the pitfalls, and how AI-native research platforms like Koji collapse this entire taxonomy into a single conversational flow with 6 structured question types (open_ended, scale, single_choice, multiple_choice, ranking, yes_no — see our structured questions guide).

1. Open-ended questions

Definition: Respondents answer in their own words — no predefined options.

Examples:

  • "Walk me through the last time you tried to do X."
  • "What is the single biggest frustration with your current workflow?"
  • "If you could change one thing about [product], what would it be and why?"

When to use: Discovery, root-cause exploration, capturing unexpected language. Open-ended is the only question type that surfaces what you did not already think to ask about.

Pitfalls:

  • Low response rate. In traditional surveys, only 20–40% of respondents answer optional open-ended questions, and most answers are 1–5 words.
  • Hard to analyze at scale without dedicated qualitative coding.
  • Easy to get vague hypotheticals ("I would love better dashboards") instead of behavioral evidence ("I rebuilt the same dashboard three times last quarter").

Modern approach: AI-moderated interviews — like Koji's voice and text interviews — fix the response-rate and depth problems by asking follow-up questions in real time. See how Koji's AI follow-up probing works and our guide to open-ended interview questions: 100+ examples.

2. Dichotomous (Yes/No) questions

Definition: Two mutually exclusive options — usually Yes/No, True/False, or Have/Have not.

Examples:

  • "Have you used [product] in the last 30 days?" (Yes / No)
  • "Are you the primary decision-maker for [category] software in your team?" (Yes / No)

When to use: Screening, eligibility, simple behavioral facts. Use dichotomous when there are genuinely only two states — don't force a binary on something nuanced (satisfaction is not Yes/No).

Pitfalls:

  • Forces false dichotomies on continuous concepts ("Do you like the product?" — many users feel "kind of").
  • Loses gradient information you can never recover.

3. Single-choice (multiple choice, one answer)

Definition: Respondent picks exactly one option from 3+ choices.

Examples:

  • "Which of the following best describes your role?" → IC, Manager, Director, VP+, Founder
  • "What is the primary reason you signed up?" → Specific list

When to use: Demographics, primary intent, mutually exclusive categorization.

Pitfalls:

  • Order bias. Options at the top get picked more often (primacy effect). Randomize answer order when option meaning is symmetric (see the sketch after this list).
  • Missing options. Always include "Other (please specify)" and a "None of the above" or "I prefer not to say" exit.
  • Overlapping options. "Engineer," "Software Engineer," and "Frontend Developer" all in one list = chaos.
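
A minimal sketch of per-respondent option randomization, assuming your survey tool lets you set option order programmatically; the role labels and pinned exit options are illustrative:

```python
import random

def randomized_options(options, pinned=("Other (please specify)", "None of the above")):
    """Shuffle substantive options per respondent to counter primacy bias,
    keeping exit options pinned to the bottom of the list."""
    substantive = [o for o in options if o not in pinned]
    random.shuffle(substantive)
    return substantive + [o for o in options if o in pinned]

roles = ["IC", "Manager", "Director", "VP+", "Founder", "Other (please specify)"]
print(randomized_options(roles))
```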

4. Multi-select (multiple choice, multi-answer)

Definition: Respondent picks all options that apply.

Examples:

  • "Which of these tools do you currently use? (Select all that apply)"
  • "What features matter most to you? (Select up to 3)"

When to use: Capturing behavior or attributes where multiple states are simultaneously true.

Pitfalls:

  • List length matters. Respondents tire by item 8 and skim the rest — keep multi-select lists under 10 items where possible.
  • Single-select disguised as multi-select. If most people will logically pick one, use single-choice — multi-select adds noise.
  • No cap = no signal. "Select all that apply" without a cap often results in 4–7 selections that are hard to prioritize. Adding "Select your top 3" forces clarity.

5. Likert scale questions

Definition: Statement + a balanced agreement scale, typically 5 or 7 points: Strongly Disagree → Strongly Agree.

Examples:

  • "The onboarding process made it easy to find the features I needed."
    • Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree

When to use: Attitudes, beliefs, satisfaction, perceptions. Likert is the workhorse of attitudinal research.

Pitfalls:

  • Acquiescence bias. People drift toward "Agree" if the question is framed positively. Balance with reverse-coded items (see the reverse-coding sketch after this list).
  • 5 vs 7 points. 5-point is faster; 7-point captures more nuance for analytic teams. Pick one and stick to it across the survey.
  • Neutral midpoint. Including a "Neither agree nor disagree" option respects the respondent but invites fence-sitting. Forced-choice (no neutral) increases polarization in data but irritates respondents.
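
Reverse-coded items only correct for acquiescence bias if they are flipped back before scoring. A minimal sketch, assuming 5-point responses keyed by item ID; the item names and the reverse-keyed set are illustrative:

```python
def reverse_code(value, scale_max=5):
    """Flip a reverse-keyed Likert response: 1 <-> 5, 2 <-> 4, 3 stays 3."""
    return scale_max + 1 - value

response = {"onboarding_was_easy": 4, "navigation_was_confusing": 2}  # second item is reverse-keyed
reverse_keyed = {"navigation_was_confusing"}

scored = {item: reverse_code(v) if item in reverse_keyed else v
          for item, v in response.items()}
satisfaction_index = sum(scored.values()) / len(scored)  # simple mean across items
print(scored, satisfaction_index)
```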

See our Likert scale research guide for a full breakdown.

6. Rating scale questions (numeric, star, smiley)

Definition: Numeric or visual scale, typically 1–5 or 1–10.

Examples:

  • "How likely are you to recommend us to a colleague?" (0–10)
  • "Rate your overall satisfaction" (1–5 stars)
  • "How easy was it to complete this task?" (1–7)

When to use: Sentiment intensity, performance, ease metrics like CSAT, CES, and SUS.

Pitfalls:

  • End-aversion. Many respondents avoid the extremes (1 and 7), compressing the scale.
  • Cultural variation. Respondents in different countries use scales differently — direct comparisons across geographies are risky without normalization (see the sketch after this list).
  • See Customer Effort Score and System Usability Scale for two standardized rating scales worth adopting.
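
One way to make cross-geography comparisons safer is to standardize ratings within each country before comparing groups. A minimal sketch with pandas; the column names and data are illustrative:

```python
import pandas as pd

# Per-respondent ease ratings on a 1-7 scale, tagged with country.
df = pd.DataFrame({
    "country": ["US", "US", "JP", "JP", "DE", "DE"],
    "ease_rating": [7, 6, 4, 5, 5, 6],
})

# Within-country z-scores express each rating relative to that country's own scale usage.
df["ease_z"] = df.groupby("country")["ease_rating"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)
print(df)
```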

7. Ranking questions

Definition: Respondent orders a list of items by preference, importance, or frequency.

Examples:

  • "Rank these features from most to least important to your team."
  • "Order these channels from most to least frequently used."

When to use: Forcing trade-offs. Unlike ratings — where everything can be "very important" — ranking forces relative priority.

Pitfalls:

  • Cognitive load. Ranking 4 items is fine; ranking 10 is exhausting and produces noisy data. Cap at 5–7 items.
  • Top-of-list bias. Respondents rank the first few items carefully and randomize the rest.
  • No "tie." Force-ranked lists hide genuinely equivalent items. For high-stakes prioritization, supplement with rating scales.

For a deeper dive, see our choice and ranking questions guide.

8. Matrix / grid questions

Definition: Multiple related Likert or rating items in a grid, sharing the same response scale.

Examples:

  • "Rate the following on a 1–5 scale: Onboarding, Pricing, Support, Documentation, Reliability"

When to use: Efficient bulk attribute rating, especially when items share a comparable scale.

Pitfalls:

  • Straight-lining. Respondents pick the same column for all rows without reading — accounts for 15–30% of matrix data quality issues in large surveys (see the detection sketch after this list).
  • Mobile experience. Matrix grids break on small screens and inflate drop-off.
  • Hidden survey length. A matrix of 10 rows × 5 columns is technically one question but feels like 10 — and behaves like 10 in fatigue analysis.
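
Straight-lining is easy to flag during analysis. A minimal sketch, assuming each respondent's matrix answers are collected as a list; the respondent IDs and threshold are illustrative:

```python
def is_straight_liner(row_answers, min_items=4):
    """Flag respondents who gave the identical answer to every row of a matrix."""
    return len(row_answers) >= min_items and len(set(row_answers)) == 1

matrix_answers = {"r_001": [3, 3, 3, 3, 3], "r_002": [4, 2, 5, 3, 4]}
flagged = [rid for rid, answers in matrix_answers.items() if is_straight_liner(answers)]
print(flagged)  # ['r_001']
```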

9. Semantic differential

Definition: A bipolar scale anchored by opposing adjectives.

Examples:

  • "How would you describe our brand?"
    • Innovative ←→ Traditional
    • Friendly ←→ Cold
    • Expensive ←→ Affordable

When to use: Brand perception, emotional response, aesthetic ratings.

Pitfalls:

  • Anchor choice matters. "Affordable" vs "Cheap" measure totally different concepts — choose anchors precisely.
  • Cross-respondent comparability. What counts as "Innovative" varies across people. Best paired with open-ended follow-ups.

10. Slider questions

Definition: A continuous slider input, typically 0–100.

Examples:

  • "Drag the slider to indicate the percentage of your week spent on manual reporting."

When to use: Continuous numeric estimates where the gradient matters.

Pitfalls:

  • Default position bias. Sliders pre-set at 50 will be left at 50 by lazy respondents — randomize the start position.
  • False precision. Sliders create the illusion of high precision on data that may be a rough guess.

11. Net Promoter Score (NPS)

Definition: "How likely are you to recommend us to a colleague?" on a 0–10 scale, segmented into Detractors (0–6), Passives (7–8), and Promoters (9–10).

When to use: Tracking loyalty and word-of-mouth intent over time. NPS is most useful as a trend within your own customer base, not as a cross-industry benchmark.

Pitfalls:

  • The number is not the insight. An NPS of 35 without a follow-up "Why?" tells you nothing actionable.
  • Cultural scale variation. US respondents use the top of the scale much more freely than European or Japanese respondents — comparing global NPS scores without controlling for this is misleading.

For the right way to use NPS, see our NPS survey guide and NPS follow-up interviews — pairing the score with a follow-up "Why?" is where the value comes from.

12. Demographic and firmographic questions

Definition: Categorical questions that segment respondents — age, gender, role, company size, industry, geography.

Best practices:

  • Ask only what you will use. Every demographic question costs response rate. If you will not segment by it, do not ask.
  • Put them at the end of consumer surveys (they feel intrusive upfront) but at the start of B2B surveys (so you can branch logic based on role/company size).
  • Include "Prefer not to say" for sensitive demographics — required by privacy regulations in many jurisdictions.
  • Use ranges, not free text, for age, income, and company size to reduce drop-off and improve comparability.

13. Constant sum questions

Definition: Respondent allocates a fixed total (often 100 points or $100) across multiple options.

Examples:

  • "Allocate 100 points across these 5 features based on how important each is to you."

When to use: Forced budget allocation — pricing research, feature prioritization, time allocation.

Pitfalls:

  • High cognitive load — respondents drop off fast.
  • Math errors — many surveys fail to validate that allocations sum to the target (a validation sketch follows this list).
  • Better suited to motivated, high-context respondents (e.g., customer panels, expert reviews) than cold outreach.
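
The sum check is trivial to enforce before a response is accepted. A minimal sketch, assuming allocations arrive as a mapping from option to points; the option names are illustrative:

```python
def validate_constant_sum(allocations, target=100):
    """Reject responses whose point allocations do not sum to the required total."""
    total = sum(allocations.values())
    if total != target:
        raise ValueError(f"Allocations sum to {total}, expected {target}")
    return allocations

validate_constant_sum({"Feature A": 40, "Feature B": 30, "Feature C": 20, "Feature D": 10})
```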

14. Image, video, and visual questions

Definition: Respondents react to an image, video, mockup, or design.

Examples:

  • "Which of these landing pages feels more trustworthy?"
  • "Watch this 30-second concept video and tell us what you think."

When to use: Concept testing, brand and creative validation, prototype testing.

Pitfalls:

  • Stimulus quality matters — a low-fidelity sketch will be judged on the sketch, not the idea.
  • Always ask "Why?" after a visual reaction question — the reason is the insight.

See our concept testing methodology and prototype testing concept validation for more.

The two-question taxonomy: open vs closed

Underneath all 14 types is a simple distinction:

  • Open-ended questions are written, qualitative answers in the respondent's own words. They surface the unexpected but require manual or AI-assisted analysis.
  • Closed-ended questions use predefined response options to produce structured, quantitative data — measurable, comparable, fast to analyze.

The best surveys mix both. As one comprehensive analysis from Kantar puts it, closed-ended questions offer measurable, comparable data — researchers can calculate percentages, averages, and trends, and cross-tabulate responses by demographics or behaviors to uncover meaningful patterns. Open-ended questions tell you why.
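
In practice, "measurable, comparable" means closed-ended answers drop straight into standard tabulation. A minimal sketch with pandas, assuming responses land in a flat table; the column names and data are illustrative:

```python
import pandas as pd

df = pd.DataFrame({
    "role": ["IC", "Manager", "IC", "Director", "Manager", "IC"],
    "satisfaction": [4, 3, 5, 2, 4, 5],  # closed-ended 1-5 rating
})

print(df["satisfaction"].mean())                              # average score
print(df["satisfaction"].value_counts(normalize=True) * 100)  # percentage per answer option
print(pd.crosstab(df["role"], df["satisfaction"]))            # cross-tab by segment
```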

How AI-native research collapses the taxonomy

The 14-type taxonomy above is a legacy of paper and clipboard surveys. Modern AI-moderated interviews — like Koji — collapse the distinction by running an adaptive conversation that uses structured question types when they are the right tool and switches to open-ended probing when depth is needed.

Koji uses 6 structured question types that map cleanly onto the most-used legacy types:

| Koji type | Replaces | When |
|-----------|----------|------|
| open_ended | Open-ended free text | Discovery, "why," root cause |
| scale | Likert, rating, NPS, slider | Sentiment, satisfaction, intensity |
| single_choice | Single-choice, dichotomous (as 2-option) | Mutually exclusive picks |
| multiple_choice | Multi-select | "All that apply" attributes |
| ranking | Ranking, constant sum (lite) | Forced prioritization |
| yes_no | Dichotomous | Eligibility, gates, simple facts |

For a deeper breakdown of each, see structured questions in AI interviews.

The difference from a static survey: Koji can ask a Likert question, see a low score, and automatically probe with an open-ended follow-up — collecting the why in the same conversation. Traditional surveys force you to pick a type up front and live with the limits.
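
A schematic of that adaptive pattern (a sketch of the idea, not Koji's implementation; the function names and the low-score threshold are assumptions):

```python
def run_adaptive_step(ask_scale, ask_open_ended):
    """Ask a scale question, then probe for the 'why' in the same conversation."""
    score = ask_scale("How easy was onboarding?", points=5)
    if score <= 2:  # low score triggers a diagnostic probe
        reason = ask_open_ended("What made it hard?")
    else:
        reason = ask_open_ended("What worked well for you?")
    return {"score": score, "why": reason}

# Stubbed example: a respondent scores onboarding 2/5 and explains why.
print(run_adaptive_step(lambda prompt, points: 2,
                        lambda prompt: "Setup took longer than expected"))
```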

Traditional survey vs Koji-style adaptive interview

| Capability | SurveyMonkey / Typeform | Koji |
|------------|-------------------------|------|
| Question type variety | 14+ types | 6 structured + AI follow-up |
| Adaptive follow-up | Skip logic only | Real-time AI probing |
| Capture verbatim "why" | Optional open-ended (low response) | Built into every flow |
| Multilingual | Translation per question | Native multi-language voice & text |
| Time to insight | Hours to days (manual analysis) | Minutes (auto thematic analysis) |
| Real "why" data | ~20–40% response rate | ~80%+ via probing |

For a side-by-side, see Koji vs SurveyMonkey and Koji vs Typeform.

A modern survey question template

Use this template when designing your next study. The order matters — it minimizes fatigue and drop-off.

  1. Screener (Yes/No or single-choice): "Are you a [target user]?"
  2. Behavioral anchor (open-ended): "Tell me about the last time you [did X]."
  3. Closed quantification (scale or Likert): "How easy was that to do?"
  4. Adaptive probe (open-ended): "What made it hard?" or "What made it easy?", depending on the previous answer.
  5. Prioritization (ranking): "Rank these 4 improvements from most to least valuable."
  6. Demographics (single-choice, at the end): Role, company size, etc.
  7. Optional open-ended (open): "Anything else we should know?"

This pattern — anchor with behavior, quantify with a scale, probe with an open-ended, prioritize with ranking — is the spine of high-signal research. Koji automates this entire pattern with intelligent moderation.
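
Expressed as data, that spine is just an ordered list of question specs. A sketch using the structured type names from this guide; the field names are illustrative, not any platform's actual schema:

```python
survey_template = [
    {"type": "yes_no",        "prompt": "Are you a [target user]?"},
    {"type": "open_ended",    "prompt": "Tell me about the last time you [did X]."},
    {"type": "scale",         "prompt": "How easy was that to do?", "points": 5},
    {"type": "open_ended",    "prompt": "What made it hard or easy?"},
    {"type": "ranking",       "prompt": "Rank these 4 improvements from most to least valuable."},
    {"type": "single_choice", "prompt": "Which best describes your role?"},
    {"type": "open_ended",    "prompt": "Anything else we should know?"},
]
```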

Common mistakes across all question types

  1. Double-barreled questions: "How satisfied are you with our pricing and support?" forces one answer to two things. Split them.
  2. Leading questions: "How much do you love our new redesign?" assumes the answer. See avoiding bias in interviews and research bias guide.
  3. Loaded language: "Should we continue our excellent customer service?" — biased adjective.
  4. Asking about hypotheticals when behavior is available: "Would you use a feature that does X?" is far weaker than "Have you done X before, and how?"
  5. Burying the headline: Putting your most important question on page 4, after fatigue has set in.
  6. Asking what you cannot act on: If you cannot do anything with the answer, do not ask the question.

Related Articles

How to Analyze Open-Ended Survey Responses with AI (2026 Guide)

Stop manually coding free-text survey responses. Learn how AI analyzes open-ended answers at scale — surfacing themes, sentiment, and quotes in minutes, plus why an AI interview captures 10x more depth than any survey can.

Choice and Ranking Questions in AI Interviews: Capture Preference Data at Scale

Learn how to use single choice, multiple choice, ranking, and yes/no questions in Koji AI interviews — with automatic report charts that show preference distributions across all your participants.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Open-Ended Interview Questions: 100+ Examples and How to Use Them

A comprehensive library of open-ended interview questions for product discovery, UX research, customer feedback, employee experience, and more — plus how to write your own.

How to Write Great Interview Questions

Learn to craft open-ended, neutral interview questions that surface genuine user insights instead of confirmation bias.

Likert Scale Questions: How to Use Rating Scales in User Research

A complete guide to Likert scale questions in user research — what they are, when to use them, how to write them correctly, and how Koji's AI interviews take rating scales further by pairing quantitative scores with qualitative follow-up.

Survey Design Best Practices: From Question Writing to Data Collection

Learn how to design effective surveys with proven best practices for question writing, flow, bias reduction, and data collection — including when to go beyond surveys to AI-powered interviews.

Surveys vs. Interviews: How to Choose the Right Research Method

A comprehensive comparison of surveys and interviews as research methods. Understand when to use each, the key trade-offs, how to combine them in mixed-methods studies, and why the choice matters for research quality.