
Research Screener Questions: How to Write Questions That Find the Right Participants

Learn how to write effective screener questions that filter the right participants for your user research studies. Includes 10 proven templates, best practices, and common mistakes to avoid.

Bottom line: A screener question is a short survey question used before a research study to determine whether a candidate qualifies as a participant. Well-written screeners protect data quality, reduce bias, and save recruiting time — but most researchers make avoidable mistakes that let the wrong people in and keep the right people out.


What Are Screener Questions?

Every user research study depends on talking to the right people. A screener survey is a short set of questions — typically 5–11 in total — that filter potential participants before they join your study. The goal is twofold: identify who fits your criteria while revealing as little about the study as possible, so candidates cannot game their answers.

According to Nielsen Norman Group, "Well-written screeners ensure that your study participants are appropriate for your research goals, improve data quality, save resources, and reduce bias." When screeners fail, the entire study produces misleading results — no matter how well the interviews themselves are conducted.

The stakes have risen sharply. User Interviews' 2024 State of User Research Report found that 61% of researchers struggle with time to find participants (up from 45% in 2023), and 32% report rising recruitment costs. Meanwhile, only 13.7% of researchers use dedicated tools for recruiting participants — leaving most teams handling screening through improvised methods. Getting screeners right the first time matters more than ever.


The 5 Types of Screener Questions

A well-balanced screener draws from all five question categories:

1. Behavioral Questions (Most Important)

Ask about what people actually do, not who they are. Behavioral questions are the highest-signal screener content because they reveal real experience rather than self-reported identity.

  • "How often do you purchase products online?" (Never / Once a month / Weekly / Multiple times per week)
  • "Have you switched project management tools in the past 6 months?"

2. Demographic Questions

Useful for quota management — ensuring you do not accidentally recruit only 25-year-old marketers when you need a spread. Keep these to 2–3 maximum, and place them after behavioral qualifiers so poor-fit candidates exit early.

3. Exclusion Questions

Catch professional researchers, industry insiders, or people with conflicts of interest. This question is essential:

  • "Do you or a close family member work in UX research, market research, or product design?" → Disqualify: Yes

4. Articulation / Fraud-Detection Questions

Open-ended questions that catch "professional participants" — people who join research panels solely for incentives and answer whatever seems correct. Look for specific, coherent answers versus vague, generic responses.

  • "In 2–3 sentences, describe the last time you felt frustrated using a digital product."

5. Logistical Qualification Questions

Confirm timezone, device requirements, and availability. Place these early — if a candidate cannot make the time window, there is no point asking eight more questions.
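
To make the five categories concrete, here is a minimal sketch of how a screener could be modeled in code. Every type and field name (ScreenerQuestion, qualifyingAnswers, and so on) is illustrative rather than taken from any particular tool:

```typescript
// Illustrative data model for a screener; all names are hypothetical.
type QuestionCategory =
  | "behavioral"
  | "demographic"
  | "exclusion"
  | "articulation"
  | "logistical";

interface ScreenerQuestion {
  id: string;
  category: QuestionCategory;
  text: string;
  options?: string[];            // omitted for open-ended questions
  qualifyingAnswers?: string[];  // answers that keep a candidate in the pool
}

const screener: ScreenerQuestion[] = [
  {
    id: "availability",
    category: "logistical",
    text: "Are you available for a 45-minute video interview next week?",
    options: ["Yes, anytime", "Yes, limited availability", "No"],
    qualifyingAnswers: ["Yes, anytime", "Yes, limited availability"],
  },
  {
    id: "purchase-frequency",
    category: "behavioral",
    text: "How often do you purchase products online?",
    options: ["Never", "Once a month", "Weekly", "Multiple times per week"],
    qualifyingAnswers: ["Weekly", "Multiple times per week"],
  },
  {
    id: "industry-exclusion",
    category: "exclusion",
    text: "Do you or a close family member work in UX research, market research, or product design?",
    options: ["Yes", "No"],
    qualifyingAnswers: ["No"],
  },
  {
    id: "articulation",
    category: "articulation",
    text: "In 2–3 sentences, describe the last time you felt frustrated using a digital product.",
  },
];
```

The logistical question sits first, per the placement advice above, and the articulation question carries no options list because it is open-ended.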


7 Best Practices for Writing Effective Screener Questions

1. Never Use Yes/No Questions — Use Multiple Choice Instead

Simple yes/no questions are trivially gameable. If you ask "Do you shop online frequently?" participants will answer yes whether they shop daily or yearly. Instead, offer a spectrum: "How often do you shop online?" with Never / Monthly / Weekly / Multiple times per week options. Because candidates cannot tell which frequency qualifies, the question is far harder to game.
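
Continuing the hypothetical ScreenerQuestion shape from the earlier sketch, the contrast looks like this:

```typescript
// Gameable: anyone who wants in will simply answer "Yes".
const yesNo: ScreenerQuestion = {
  id: "shops-online",
  category: "behavioral",
  text: "Do you shop online frequently?",
  options: ["Yes", "No"],
  qualifyingAnswers: ["Yes"],
};

// Harder to game: the candidate cannot tell which frequency qualifies.
const frequency: ScreenerQuestion = {
  id: "shopping-frequency",
  category: "behavioral",
  text: "How often do you shop online?",
  options: ["Never", "Monthly", "Weekly", "Multiple times per week"],
  qualifyingAnswers: ["Weekly", "Multiple times per week"],
};
```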

2. Keep It Under 11 Questions

Research from User Interviews shows that screeners with more than 10 questions cause significant participant drop-off. Target this structure: 4–6 behavioral questions + 2–3 demographic + 1–2 exclusion + 1 articulation question. Longer screeners also introduce non-response bias — people who complete 20-question screeners are systematically different from those who abandon them.

3. Lead with Your Highest-Priority Disqualifier

If your study requires participants from a specific city, ask that first. If it requires active users of a specific product, lead with usage frequency. This respects candidates' time and prevents frustration when they answer eight questions only to fail the ninth.
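
One way to operationalize this ordering, as a sketch: if you track, or estimate from a pilot, how often each question disqualifies candidates, you can sort the strictest filters to the front. The expectedPassRate field is an assumption added for illustration:

```typescript
// Hypothetical: expectedPassRate comes from past panels or a pilot run.
interface PrioritizedQuestion extends ScreenerQuestion {
  expectedPassRate: number; // fraction of candidates expected to qualify, 0..1
}

// Sort the strictest filters first so poor-fit candidates exit early.
function orderByDisqualifier(questions: PrioritizedQuestion[]): PrioritizedQuestion[] {
  return [...questions].sort((a, b) => a.expectedPassRate - b.expectedPassRate);
}
```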

4. Use Foils — Strategic Decoy Options

Nielsen Norman Group specifically recommends including "foils" in choice-based questions: fake but plausible-sounding options that catch inattentive or dishonest participants. If your product does not have a particular feature and a candidate claims to use it daily, that inconsistency is a disqualifying red flag.
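
A minimal sketch of a foil check, with invented feature names standing in for your product's real and fake options:

```typescript
// Invented feature names; replace with your product's real features and foils.
const realFeatures = ["Saved carts", "Order tracking", "Wishlist sharing"];
const foilFeatures = ["Smart bundle builder", "Price-lock reservations"]; // do not exist

// A candidate claiming to use a nonexistent feature is a disqualifying red flag.
function failsFoilCheck(selectedFeatures: string[]): boolean {
  return selectedFeatures.some((feature) => foilFeatures.includes(feature));
}
```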

5. Avoid Revealing Study Purpose Upfront

Do not telegraph what correct answers look like. A screener for a checkout friction study should not begin with "We are studying checkout problems." Ask broadly about shopping habits, then narrow. Participants unconsciously — and sometimes consciously — shape their answers toward what they believe the researcher wants to hear.

6. Use Skip Logic and Branching

Modern screener tools support branching: if a participant answers "I have never used a CRM," skip the next three CRM-specific questions. This keeps screeners short and relevant for each respondent type, improving completion rates and data quality simultaneously.
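
As a sketch, branching can be expressed as simple rules mapping an answer to the questions it makes irrelevant. The rule shape and question IDs here are hypothetical:

```typescript
// Hypothetical branching rule: an answer that makes later questions irrelevant.
interface BranchingRule {
  questionId: string;
  answer: string;
  skipQuestionIds: string[];
}

const rules: BranchingRule[] = [
  {
    questionId: "crm-usage",
    answer: "I have never used a CRM",
    skipQuestionIds: ["crm-frequency", "crm-features", "crm-switching"],
  },
];

// Return the questions still relevant given the answers collected so far.
function remainingQuestions(
  all: ScreenerQuestion[],
  answers: Record<string, string>,
): ScreenerQuestion[] {
  const skipped = new Set(
    rules
      .filter((rule) => answers[rule.questionId] === rule.answer)
      .flatMap((rule) => rule.skipQuestionIds),
  );
  return all.filter((q) => !skipped.has(q.id) && !(q.id in answers));
}
```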

7. Screen for Articulation, Not Just Criteria

At least one open-ended question — "Describe the last time [relevant experience] happened to you" — reveals both qualification and communication quality. Researchers frequently discover strong candidates are poor communicators, or that highly articulate candidates lack the required experience. Both signals matter.
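
Automated scoring of open-ended answers is genuinely hard, and a human should review them, but a rough heuristic can pre-flag the most obviously generic responses. A sketch, with arbitrary thresholds:

```typescript
// Rough heuristic only; arbitrary thresholds, and a human should review.
function flagsLowArticulation(answer: string): boolean {
  const wordCount = answer.trim().split(/\s+/).length;
  const genericPhrases = ["it was bad", "not sure", "it was fine", "i don't know"];
  const normalized = answer.toLowerCase();
  // Too short to contain real detail, or built from stock filler phrases.
  return wordCount < 15 || genericPhrases.some((p) => normalized.includes(p));
}
```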


10 Screener Question Templates You Can Use Today

E-commerce purchase frequency

"In the past 3 months, how often have you purchased products online?" Never | Once or twice | About once a week | Multiple times per week

Software tool evaluation

"Have you evaluated or switched project management tools in the past 6 months?" Yes, I recently evaluated tools | Yes, I switched tools | No, but I am considering it | No, no plans to switch

Industry insider exclusion

"Which of the following best describes your primary role?" (list UX research, market research, product design alongside many unrelated fields — disqualify those selections)

Product feature verification (foil-based)

"Which of the following features have you used in [Product X]?" (include 2–3 real features and 1–2 plausible-sounding fake ones as foils)

Articulation and fraud detection

"In 2–3 sentences, describe the last time you felt frustrated trying to accomplish something online."

Work habit quantification

"How many hours per week do you estimate you spend managing your team's projects?" Less than 1 hour | 1–3 hours | 3–6 hours | 6+ hours

B2B buyer role verification

"What is your involvement in purchasing software tools for your organization?" I am the primary decision maker | I am involved but not the final decision maker | Someone else handles this | We do not purchase software tools

Logistical availability

"We will be conducting 45-minute video interviews the week of [date]. Are you available?" Yes, anytime that week | Yes, with limited availability | No, unavailable

Device and technology screening

"What device do you primarily use to [task]?" Desktop computer | Laptop | Tablet | Smartphone

Recency and behavior anchor

"When did you last [complete the specific action being researched]?" In the past week | In the past month | 1–3 months ago | More than 3 months ago | Never


5 Common Screener Mistakes That Kill Data Quality

Mistake 1: Screening by Demographics Instead of Behaviors

Knowing someone is 35–44 and earns $75K tells you almost nothing useful. Knowing they have evaluated three software tools for their team in the past year tells you everything. As Playbookux puts it: "Stop screening for income; start screening for how they shop." Demographics describe who someone is — behaviors reveal what they actually do.

Mistake 2: Being Too Narrow or Too Broad

An overly narrow screener ("must be a UX researcher with 5+ years of Figma experience who manages a team of 10+") may yield zero recruitable candidates in your panel. An overly broad screener ("anyone who uses software at work") produces an unworkably diverse sample. Pilot your screener with 10–15 candidates before committing to a full recruitment run.

Mistake 3: Revealing the Right Answer

"We are looking for people who struggle with checkout. Have you ever abandoned a shopping cart?" This tells candidates exactly what to say. Instead, ask broadly: "Walk me through how you typically research and complete a purchase online."

Mistake 4: Skipping the Open-Ended Question

Professional participants — people who game research panels for incentives — are caught most reliably by a single open-ended question. Their answers tend to be generic, brief, and missing the specific details that genuine experience produces. That kind of specific detail is nearly impossible to fake without authentic experience.

Mistake 5: Asking About Sensitive Personal Information

Screening for income level, religion, or political views — unless directly research-relevant — creates participation barriers, erodes trust, and may violate privacy expectations. If a demographic is genuinely required, keep sensitive questions to a minimum and explain why the information is needed.


Red Flags That Signal a Broken Screener

If any of these are true, your screener needs revision before the next recruitment run (a small monitoring sketch follows the list):

  • More than 80% of applicants qualify → screener is too broad; tighten behavioral criteria
  • Fewer than 5% of applicants qualify → criteria may be unrealistic for the available panel
  • Recruited participants frequently "do not know" things they claimed on the screener → foils and open-ended questions are missing
  • Recruitment is running slowly and burning budget → criteria may be too narrow or contradictory
  • Participants seem surprised by the study topic → screener and study are misaligned
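
The first two red flags lend themselves to an automated check. A sketch using the thresholds above:

```typescript
// Sketch of the first two red-flag checks, using the thresholds above.
function auditQualificationRate(applicants: number, qualified: number): string[] {
  if (applicants === 0) return ["No applicants yet"];
  const flags: string[] = [];
  const rate = qualified / applicants;
  if (rate > 0.8) flags.push("Too broad: tighten behavioral criteria");
  if (rate < 0.05) flags.push("Too narrow: criteria may be unrealistic for this panel");
  return flags;
}

// Example: 13 of 15 pilot candidates qualifying signals an over-broad screener.
console.log(auditQualificationRate(15, 13)); // ["Too broad: tighten behavioral criteria"]
```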

The Modern Approach: AI-Moderated Interviews and Smarter Screening

Traditional participant recruitment averages 1.15 work hours per participant just in coordination — making a 20-person study nearly three full working days of logistics before the first interview begins (Nielsen Norman Group).

AI-native research platforms like Koji change this equation significantly. Rather than treating screening as a separate gate before research begins, Koji integrates qualification into the research session itself:

  • Import participants from your CRM — existing customers are already partially pre-screened by their actual behavior with your product
  • Use personalized interview links that carry context about each participant, enabling the AI interviewer to tailor questions without lengthy pre-study screeners
  • Deploy structured questions across all 6 types — open_ended, scale, single_choice, multiple_choice, ranking, and yes_no — that serve simultaneously as in-session qualification checks and primary research data (sketched below)
  • Run AI-moderated interviews at scale — 100 simultaneous conversations with automatic thematic analysis, quality scoring, and structured answer extraction
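
As an illustration only, and not Koji's documented configuration format, the six structured question types might be expressed along these lines:

```typescript
// Hypothetical shape only; field names are not from Koji's actual format.
type StructuredQuestionType =
  | "open_ended"
  | "scale"
  | "single_choice"
  | "multiple_choice"
  | "ranking"
  | "yes_no";

interface StructuredQuestion {
  type: StructuredQuestionType;
  prompt: string;
  options?: string[];                    // choice and ranking types
  scale?: { min: number; max: number };  // scale type
}

const sessionQuestions: StructuredQuestion[] = [
  { type: "scale", prompt: "How satisfied are you with onboarding?", scale: { min: 1, max: 5 } },
  { type: "single_choice", prompt: "Which plan are you on?", options: ["Free", "Pro", "Enterprise"] },
  { type: "open_ended", prompt: "What almost stopped you from signing up?" },
];
```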

While traditional platforms require a screener → schedule → interview → code → analyze pipeline spanning weeks, Koji compresses the cycle into days. The AI interviewer applies consistent probing methodology to every participant, eliminating the variation that human moderators introduce across a long recruitment window.

For teams running ongoing research programs, Koji's approach means screener design becomes a one-time investment rather than a per-study bottleneck — freeing researchers to focus on insight generation rather than recruitment logistics.


Related Articles

How to Use Your CRM Data for Targeted AI Research: Import Participants and Personalize Every Interview

Your CRM already contains your best research sample. Learn how to export customer segments, import them into Koji, send personalized interview links, and get 3–5x higher response rates than generic research recruitment.

Personalized Interview Links: Send Targeted Research Invitations to Every Participant

Embed participant-specific context into Koji interview URLs so the AI greets each person by name, references their company, and tailors the conversation — automatically. Covers CSV import, URL parameters, and CRM integration patterns.

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

How to Research Hard-to-Reach Audiences: Executives, B2B Buyers, and Niche Segments

The people hardest to recruit for research are often the ones whose insights matter most. Learn how async AI interviews unlock executives, B2B buyers, and niche specialists who will never take a 60-minute call.

User Research Recruitment Emails: Templates and Scripts That Get Responses

Ready-to-use email templates for recruiting user research participants, with proven subject lines, body copy, and follow-up sequences that achieve 7–15% response rates.

How Many User Interviews Do You Need? The Sample Size Guide for Qualitative Research

Discover the right number of user interviews for your research. Learn about data saturation, theoretical saturation, and practical frameworks for knowing when you've collected enough qualitative data.