Screener Questions for User Research: A Complete Guide

Learn how to write effective screener questions that find the right research participants — and how Koji's intake forms and AI interviews make screening faster and more natural.


If you recruit the wrong participants, even the best research questions won't save your study. Screener questions are the filters that separate the right respondents from everyone else — they determine whether you talk to power users or casual browsers, to paying customers or free-tier lurkers, to the people who actually have the problem you're researching.

This guide covers everything you need to know about writing effective screener questions, common pitfalls to avoid, and how modern AI-native research platforms like Koji are changing how screening gets done.


What Are Screener Questions?

A screener (sometimes called a screener survey or recruitment screener) is a short set of questions given to potential participants before a study to determine if they qualify. The goal is to filter your candidate pool down to the people who match your ideal research participant profile.

Screener questions typically appear as a separate survey sent before the interview, but they can also appear at the start of the research session itself. With platforms like Koji, screeners can be embedded directly into the intake form — before the AI interview begins — so only qualified participants enter your study.

Why screeners matter:

  • They protect your research budget by ensuring every completed session is usable
  • They help you hit specific quotas (e.g., 50% enterprise users, 50% SMB)
  • They prevent confirmation bias by filtering out people who already know your product too well
  • They ensure legal compliance (e.g., excluding minors or employees)

Types of Screener Questions

Screener questions fall into four broad categories:

1. Demographic Questions

These filter by who someone is — age, location, company size, job title, or industry.

Examples:

  • "Which best describes your current role?" (with answer options)
  • "How large is your company?"
  • "Which country are you based in?"

Use them when: You need to talk to a specific demographic segment — for example, only product managers at companies with 50+ employees.

2. Behavioral Questions

These filter by what someone does — their habits, frequency of use, past actions.

Examples:

  • "How often do you conduct user research in a given month?"
  • "Have you purchased software for your team in the past 12 months?"
  • "How often do you run customer interviews?"

Use them when: You need participants with a specific behavior pattern — for example, researchers who run at least 10 interviews per quarter.

3. Experience-Based Questions

These filter by what someone has done or knows — their level of expertise or familiarity.

Examples:

  • "How many years of experience do you have in UX research?"
  • "Have you ever used a survey tool to collect customer feedback?"
  • "Have you conducted a usability test in the past 6 months?"

Use them when: You need a specific expertise level — novices for first-time user studies, or experts for concept validation with practitioners.

4. Attitudinal Questions

These filter by how someone thinks or feels — their opinions, beliefs, or motivations.

Examples:

  • "How important is user research to your company's product decisions?"
  • "How satisfied are you with your current research process?"

Careful with these: Attitudinal screeners can introduce bias. Highly motivated respondents often give different answers than the average customer. Use sparingly and pair with behavioral filters.


How to Write Effective Screener Questions

Rule 1: Lead with behavioral, not attitudinal questions. "Do you care about customer feedback?" will get a near-universal "yes." "How many customer interviews did you conduct last month?" is harder to fake and more predictive.

Rule 2: Use forced-choice over open-ended answers. Screeners should be quick to complete and easy to analyze. Use single-choice or multiple-choice formats. Save open-ended questions for the interview itself.

Rule 3: Avoid telegraphing the right answer. If your screener question reads "Do you regularly conduct user research to improve your product?" — anyone who wants to participate knows to say yes. Shuffle answer options and avoid loading questions toward the "ideal" answer.

Rule 4: Include disqualifying answers explicitly. Write your screener so you know exactly which answers disqualify a respondent. If you only want B2B users, include "B2C only" as an answer option — and mark it as a disqualifier.
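
One way to make this concrete is to model disqualifiers directly in your screener's data, so a respondent is screened out the moment they pick a flagged option. The TypeScript sketch below is purely illustrative; the `ScreenerOption` and `isDisqualified` names are our own, not part of any tool.

```ts
// Hypothetical sketch (not any tool's API): every answer option carries an
// explicit `disqualifies` flag, so the screening logic is unambiguous.

interface ScreenerOption {
  label: string;
  disqualifies: boolean;
}

interface ScreenerQuestion {
  prompt: string;
  options: ScreenerOption[];
}

const companyType: ScreenerQuestion = {
  prompt: "Which best describes your company's customers?",
  options: [
    { label: "Businesses (B2B)", disqualifies: false },
    { label: "Both businesses and consumers", disqualifies: false },
    { label: "Consumers only (B2C)", disqualifies: true }, // the explicit disqualifier
  ],
};

// Screen a respondent out as soon as they pick any disqualifying option.
function isDisqualified(question: ScreenerQuestion, selectedLabel: string): boolean {
  const picked = question.options.find((o) => o.label === selectedLabel);
  return picked?.disqualifies ?? false;
}
```

Keeping the flag on the option itself, rather than in separate review logic, also makes Rule 6's dry run easier: a colleague can read the data and spot an option that wrongly disqualifies a best-fit participant.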

Rule 5: Match screener length to incentive. A 2-minute screener is reasonable for a $20 incentive. For a high-stakes research session with a $150 gift card, you can justify a 5-minute screener with more detailed qualification questions.

Rule 6: Test your screener before launching. Have a colleague take it and check: Is it clear? Are there confusing answer options? Does it accidentally disqualify your best-fit participants?


Common Screener Mistakes to Avoid

Over-screening: Too many qualifying criteria means you'll struggle to fill your study. If your screener requires participants to be a senior PM, at a B2B SaaS company with 200–1,000 employees, who has conducted research in the past 30 days, who has budget authority, and who doesn't use your competitor — you'll be waiting a long time.

Under-screening: With no filters, you end up with sessions that don't generate usable insights because participants lack the relevant experience.

Screening out variation: Some of the most valuable insights come from edge-case users. Don't screen so tightly that you miss the outliers who reveal unexpected use patterns.

Not screening for motivation: Participants who joined just for the incentive give shorter, less thoughtful answers. One subtle filter: "Briefly explain why you're interested in participating" — a sentence or two separates engaged participants from quick-buck seekers.

Skipping the professional disqualifier: Always include at least one professional exclusion question. "Are you employed by or affiliated with any of the following companies?" prevents employees of competitor firms from entering your study.


Screening in Koji: A Different Approach

Traditional screener surveys are a separate, pre-study step — you send a Typeform, wait for responses, manually review them, then schedule qualified participants. With tools like Koji, the intake form and the AI interview are integrated into a single flow.

Koji's intake forms can include demographic and behavioral qualifying questions before the interview starts. If a respondent doesn't qualify, they can be gracefully excluded without consuming a completed interview slot. The questions in the intake form support:

  • Single choice — for demographic or categorical qualification
  • Yes/no — for simple inclusion/exclusion criteria
  • Multiple choice — for role or industry filters with multiple valid options

Once inside the AI interview, Koji's structured questions — including scale, single_choice, multiple_choice, ranking, and yes_no question types — can be used to qualify or segment participants further, without breaking the conversational flow. Unlike a static survey, Koji's AI can pick up on contextual signals from responses and adapt follow-up probing accordingly.

This makes screening more natural: instead of a cold survey form, participants encounter a conversational experience from the start — which typically increases both completion rates and response quality.
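
As an illustration of what such a combined intake-plus-interview flow might look like in configuration form, here is a hedged sketch. It is a hypothetical schema, not Koji's actual API; only the question type names (single_choice, yes_no, scale, ranking) come from the types described above, and every field name is an assumption.

```ts
// Illustrative schema only (not Koji's actual API). The question type names
// mirror the types mentioned above; all field names are assumptions.

type QuestionType = "single_choice" | "multiple_choice" | "yes_no" | "scale" | "ranking";

interface StudyQuestion {
  type: QuestionType;
  prompt: string;
  options?: string[];
  disqualifyingAnswers?: string[]; // picking any of these ends the flow before the interview
}

const study: { intake: StudyQuestion[]; interview: StudyQuestion[] } = {
  intake: [
    {
      type: "single_choice",
      prompt: "Which best describes your current role?",
      options: ["Product Manager", "Engineer", "Designer", "Other"],
      disqualifyingAnswers: ["Other"],
    },
    {
      type: "yes_no",
      prompt: "Have you run a customer interview in the past 6 months?",
      disqualifyingAnswers: ["No"],
    },
  ],
  interview: [
    { type: "scale", prompt: "How satisfied are you with your current research process?" },
    { type: "ranking", prompt: "Rank these research methods by how often you use them." },
  ],
};
```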


Sample Screener Question Set: B2B SaaS Research Study

Here's a sample screener for a study targeting B2B product managers:

1. What is your current job title? (Single choice)

  • Product Manager / Senior PM / Director of Product ✓
  • Engineer / Developer ✗
  • Designer / UX Researcher ✗ (for this specific study)
  • Marketing / Sales ✗
  • Other ✗

2. How large is the company you work at? (Single choice)

  • 1–50 employees ✗
  • 51–500 employees ✓
  • 501–5,000 employees ✓
  • 5,000+ employees ✓
  • Self-employed / freelancer ✗

3. How often do you make or influence software purchasing decisions? (Single choice)

  • Regularly — I evaluate and recommend tools ✓
  • Occasionally — I'm sometimes consulted ✓
  • Rarely or never ✗

4. In the past 6 months, have you conducted customer interviews or usability sessions? (Yes/No)

  • Yes ✓
  • No ✗

5. Are you currently employed by any of the following companies? (Multiple choice — disqualify if any company is selected)

  • [List of competitor names]
  • None of the above ✓
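
If you wanted to automate this screener's pass/fail decision, it reduces to five checks, one per question. The sketch below is a hypothetical TypeScript illustration of that logic; the `Answers` shape, the `qualifies` function, and the competitor placeholders are ours, not any platform's API.

```ts
// Hypothetical sketch of the pass/fail logic for the sample screener above.
// The `Answers` shape and the accepted values mirror the checkmarks in the list;
// none of this is tied to any particular tool.

interface Answers {
  jobTitle: string;                // Q1
  companySize: string;             // Q2
  purchasingInfluence: string;     // Q3
  ranInterviewsRecently: boolean;  // Q4
  employers: string[];             // Q5, companies the respondent selected
}

const COMPETITORS = ["CompetitorA", "CompetitorB"]; // placeholder names

function qualifies(a: Answers): boolean {
  const titleOk = ["Product Manager", "Senior PM", "Director of Product"].includes(a.jobTitle);
  const sizeOk = ["51–500 employees", "501–5,000 employees", "5,000+ employees"].includes(a.companySize);
  const influenceOk = a.purchasingInfluence !== "Rarely or never";
  const noConflict = !a.employers.some((e) => COMPETITORS.includes(e));
  return titleOk && sizeOk && influenceOk && a.ranInterviewsRecently && noConflict;
}

// Example: a Senior PM at a 600-person company who evaluates tools and ran
// interviews last quarter passes all five checks.
```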

After Screening: Getting More from Every Session

Once participants clear your screener, the real research begins. With platforms like Koji that use AI moderation, you no longer need a moderator to guide each session. The AI asks your pre-planned questions, follows up on interesting responses, and probes for depth — automatically.

This means you can screen for more participants (because sessions run without your presence), run studies asynchronously (participants join on their own schedule), and still get the quality of insights you'd expect from a moderated interview.

Research operations teams using AI-moderated platforms regularly run 10x more interviews than traditional moderated sessions allow in the same amount of time — without sacrificing the qualitative depth that makes interviews valuable in the first place.

The result: your screener becomes less of a gating mechanism and more of a segmentation tool. Because Koji can run at scale without your direct involvement, you can afford to let more participants in — and let the AI-generated analysis do the segmentation work for you after the fact.

