Screening Research Participants Effectively

Learn how to write screening criteria, design qualifying and disqualifying questions, and build screeners that recruit the right people for your qualitative research.

Recruiting participants who match your research criteria is the foundation of useful qualitative data. Interview the wrong people and no amount of skilled questioning or sophisticated analysis will save your study. According to User Interviews' 2023 Research Benchmark Report, 36% of researchers cite recruiting the right participants as their single biggest challenge — more than budget, time, or analysis complexity.

Effective screening is not just about finding people who are willing to participate. It is about finding people whose experiences are relevant to your research questions.

What Is Participant Screening?

Participant screening is the process of evaluating potential research participants against predefined criteria to determine whether they are a good fit for your study. It typically happens through a short questionnaire (a screener) completed before the interview is scheduled.

A good screener does three things:

  1. Includes participants who match your target profile
  2. Excludes participants who would provide irrelevant data
  3. Does both without revealing which answers lead to selection (preventing gaming)

Defining Your Screening Criteria

Before writing a single screening question, clearly define who you want to talk to and who you do not.

Primary Criteria (Must-Have)

These are non-negotiable qualifications directly tied to your research questions.

Examples:

  • "Currently uses [product category] at least weekly"
  • "Made a purchase decision for [product type] in the last 6 months"
  • "Manages a team of 5 or more people"
  • "Has experience with [specific workflow or tool]"

Secondary Criteria (Nice-to-Have)

These add diversity or specificity but are not dealbreakers.

Examples:

  • "Mix of company sizes (startup, mid-market, enterprise)"
  • "Geographic diversity across at least 3 regions"
  • "Balance of tenure (new users and power users)"

Exclusion Criteria (Must-Not-Have)

These disqualify participants who would introduce problematic data.

Examples:

  • Works for a competitor or in the market research industry
  • Has a personal relationship with anyone on your team
  • Participated in your research within the past 6 months (to avoid "professional participants")

Writing Effective Screening Questions

Rule 1: Never Reveal the "Right" Answer

The most common screening mistake is asking questions where the desired answer is obvious.

  • Revealing (bad): "Are you an active user of project management tools?"
    Non-revealing (good): "Which of the following tools do you use regularly? (Select all that apply)" followed by a list that mixes PM tools with other categories
  • Revealing (bad): "Do you make purchasing decisions at your company?"
    Non-revealing (good): "What is your role in software purchasing decisions?" with options: Final decision maker / Influencer / Evaluator / No involvement / Other
  • Revealing (bad): "Have you experienced frustration with onboarding?"
    Non-revealing (good): "How would you describe your most recent onboarding experience?" with a neutral scale

Rule 2: Use Behavioral Questions, Not Attitudinal Ones

What people actually do is a better predictor of fit than what they say they value or intend to do.

  • Attitudinal (weaker): "How important is data security to you?" (Nearly everyone says "very important")
  • Behavioral (stronger): "In the last 3 months, have you taken any specific actions to review or improve your data security practices?" with specific action options

Rule 3: Include Attention Checks

Add one or two questions designed to catch inattentive respondents:

  • "Please select 'Strongly Disagree' for this question" buried in a matrix
  • A question with a clearly correct factual answer
  • A question asking them to type a specific word

Research by the Pew Research Center found that attention check questions identify approximately 8-12% of survey respondents as inattentive, and removing these respondents significantly improves data quality (Kennedy et al., 2020).
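
If you review screener exports by hand, a small script can handle the first pass. Below is a minimal sketch that flags respondents who failed an instructed-response check; it assumes a CSV export with hypothetical respondent_id and attention_check columns, so adjust the names to match your tool's export.

```python
import csv

# Hypothetical column names -- change these to match your screener export.
ATTENTION_COLUMN = "attention_check"   # e.g. the "Please select 'Strongly Disagree'" item
EXPECTED_ANSWER = "Strongly Disagree"

def flag_inattentive(path: str) -> list[str]:
    """Return respondent IDs that failed the attention check."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get(ATTENTION_COLUMN, "").strip() != EXPECTED_ANSWER:
                flagged.append(row["respondent_id"])
    return flagged

if __name__ == "__main__":
    print(flag_inattentive("screener_responses.csv"))
```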

Rule 4: Keep It Short

Your screener should take 3-5 minutes to complete. Every additional question reduces your completion rate. According to SurveyMonkey's research, screener completion rates drop by approximately 15% for every additional minute beyond 5 minutes.
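
To make the cost of extra length concrete, here is a rough back-of-the-envelope sketch. It assumes the ~15% drop compounds multiplicatively per extra minute and uses a hypothetical 80% baseline completion rate; treat the numbers as illustrative, not as a forecast.

```python
# Illustrative only: assumes completion drops ~15% (multiplicatively)
# for each minute beyond a 5-minute screener.
BASELINE_COMPLETION = 0.80      # hypothetical completion rate at 5 minutes
DROP_PER_EXTRA_MINUTE = 0.15

def estimated_completion(minutes: float) -> float:
    extra = max(0.0, minutes - 5.0)
    return BASELINE_COMPLETION * (1 - DROP_PER_EXTRA_MINUTE) ** extra

for m in (5, 6, 8, 10):
    print(f"{m} min screener -> ~{estimated_completion(m):.0%} completion")
# Under these assumptions: 5 min -> ~80%, 8 min -> ~49%, 10 min -> ~35%
```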

Screener Question Types

Multiple Choice (Single Select)

Best for: Categorical criteria like role, industry, company size

"What best describes your current role?"

  • Product Manager
  • Designer
  • Engineer
  • Researcher
  • Marketing
  • Sales
  • Other (please specify)

Multiple Choice (Multi-Select)

Best for: Usage patterns, tool inventories

"Which of the following tools do you use at least monthly? (Select all that apply)"

  • [Tool A]
  • [Tool B]
  • [Tool C]
  • [Unrelated tools as decoys]
  • None of the above

Numeric / Range

Best for: Frequency, team size, budget, tenure

"Approximately how many user interviews has your team conducted in the past 12 months?"

  • 0
  • 1-5
  • 6-15
  • 16-30
  • More than 30

Open-Ended (Short Answer)

Use sparingly. Best for: Verifying genuine experience.

"In 1-2 sentences, describe the last time you evaluated a new software tool for your team."

Open-ended responses help you distinguish between people who genuinely have the experience and those who are guessing their way through the screener.

Building the Screener Flow

A well-structured screener follows this pattern:

  1. Warm-up question (non-threatening, builds engagement)
  2. Primary qualifying questions (determine basic eligibility)
  3. Secondary qualifying questions (refine the sample)
  4. Disqualifying questions (catch exclusion criteria)
  5. Attention check (verify engagement)
  6. Demographics (for sample diversity tracking)
  7. Contact information and availability

Early Disqualification

Place your most important qualifying question early and use branching logic to screen out unqualified respondents immediately. This respects their time and reduces your review workload.
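
As an illustration of how this flow and branching can be represented, here is a minimal, tool-agnostic sketch with hypothetical question IDs and answers. Real survey tools configure the same skip logic in their own interfaces; the point is that a disqualifying answer ends the screener immediately.

```python
# A minimal, tool-agnostic sketch of screener branching logic.
# Each question lists the answers that immediately disqualify the respondent.
SCREENER_FLOW = [
    {"id": "warmup",   "text": "How did you hear about this study?",
     "disqualify_on": set()},
    {"id": "usage",    "text": "Which of these tools do you use at least weekly?",
     "disqualify_on": {"None of the above"}},      # primary criterion, asked early
    {"id": "employer", "text": "Do you work for a vendor in this category?",
     "disqualify_on": {"Yes"}},                    # exclusion criterion
    {"id": "contact",  "text": "Best email to reach you?",
     "disqualify_on": set()},
]

def run_screener(answers: dict[str, str]) -> bool:
    """Return True if the respondent qualifies, stopping at the first disqualifier."""
    for question in SCREENER_FLOW:
        answer = answers.get(question["id"], "")
        if answer in question["disqualify_on"]:
            return False   # branch out early; later questions are never shown
    return True

print(run_screener({"warmup": "Newsletter", "usage": "None of the above"}))  # False
```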

Screener Scoring

For studies where you need to select from a pool of qualified respondents, assign point values to responses:

  Criterion                         Response        Points
  Uses PM tools weekly              Yes             3
  Uses PM tools monthly             Yes             1
  Manages a team of 5+ people       Yes             2
  Made purchase decision recently   Last 3 months   3
  Made purchase decision recently   Last 6 months   2

Total the points per respondent and select those above your threshold. This creates a transparent, defensible selection process.
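
Here is a minimal sketch of that scoring-and-selection step, using the point values from the table above; the field names and the threshold are hypothetical, so adapt them to your own criteria.

```python
# Point values mirror the example table above.
SCORING_RULES = {
    "pm_tool_use":       {"Weekly": 3, "Monthly": 1},
    "manages_5_plus":    {"Yes": 2},
    "purchase_decision": {"Last 3 months": 3, "Last 6 months": 2},
}
THRESHOLD = 5   # hypothetical cutoff; set it to fit your study

def score(respondent: dict[str, str]) -> int:
    """Sum points for a respondent's answers against the scoring rules."""
    return sum(
        SCORING_RULES.get(question, {}).get(answer, 0)
        for question, answer in respondent.items()
    )

respondents = [
    {"pm_tool_use": "Weekly", "manages_5_plus": "Yes",
     "purchase_decision": "Last 6 months"},                    # scores 7
    {"pm_tool_use": "Monthly", "purchase_decision": "Last 6 months"},  # scores 3
]
selected = [r for r in respondents if score(r) >= THRESHOLD]
print(len(selected))  # 1
```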

How Koji Handles Participant Screening

If you are using Koji for your research, you can set up intake forms that serve as built-in screeners. Participants who access your interview link first complete your screening questions. Based on their responses, they either proceed to the interview or receive a polite message explaining they are not a fit for this particular study.

This automates the screening-to-interview pipeline — no manual review needed between the screener and the session. For setup instructions, see intake forms and consent.

Common Mistakes to Avoid

  1. Screening too broadly: If your criteria are too loose, you will waste interviews on people whose experiences are not relevant. Be specific about who you need.

  2. Screening too narrowly: If your criteria are too tight, you will struggle to recruit and may introduce selection bias. Make sure your criteria serve the research — not your assumptions.

  3. Revealing the target profile: If participants can guess which answers qualify them, motivated respondents will game the screener.

  4. Skipping the pilot test: Run your screener past 3-5 colleagues before launching. If they can guess the "right" answers, so can participants.

Key Takeaways

  • Define primary, secondary, and exclusion criteria before writing any screening questions
  • Never reveal which answers qualify a participant — use non-obvious question design
  • Behavioral questions ("what did you do?") outperform attitudinal questions ("how important is X to you?")
  • Keep screeners under 5 minutes to maintain completion rates
  • Include at least one attention check question
  • Use point-based scoring for transparent participant selection

For strategies on where to find participants in the first place, see finding research participants. For setting up your screening flow within Koji, explore intake forms and consent.

Frequently Asked Questions

How many screening questions should I include?

5 to 10 questions is the sweet spot. Fewer than 5 may not give you enough signal to differentiate candidates. More than 10 increases drop-off and fatigue. Focus on questions that directly map to your primary and exclusion criteria.

Should I compensate people for completing the screener?

For short screeners (under 5 minutes), compensation is not expected. For longer screeners or specialized populations, a small incentive (a gift card raffle entry, for example) can improve completion rates.

How do I screen for a behavior without leading?

Embed the target behavior within a list of plausible alternatives. Instead of asking "Do you use Slack for team communication?" ask "Which of the following do you use for team communication?" with Slack as one of many options.

What is the ideal qualification rate for a screener?

A qualification rate between 15% and 40% is typical for targeted research. Below 15% suggests your criteria may be too narrow. Above 40% suggests your screener may not be selective enough.
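
The calculation itself is simple; here is a quick sketch with purely illustrative numbers:

```python
# Qualification rate = qualified respondents / total completed screeners.
completed = 120   # illustrative figures
qualified = 22

rate = qualified / completed
print(f"Qualification rate: {rate:.0%}")   # 18%, within the typical 15-40% range

if rate < 0.15:
    print("Criteria may be too narrow; consider relaxing a secondary criterion.")
elif rate > 0.40:
    print("Screener may not be selective enough; tighten a primary criterion.")
```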

Can I re-screen participants from a previous study?

Yes, but verify their information is still current. Roles, tool usage, and behaviors change. A quick re-screening survey confirming key criteria is good practice before scheduling.