Card Sorting: The Complete Guide to Information Architecture Research
Everything you need to run effective card sorting studies — open, closed, and hybrid variants. Includes sample sizes, analysis techniques, and how to combine card sorting with qualitative interviews.
Card sorting is one of the most underused research methods in product development. It takes 30–60 minutes per session, requires no special equipment, and reveals how your users mentally organize information — insights that directly improve navigation, taxonomy, and content strategy.
What Is Card Sorting?
Card sorting is a participatory research technique where participants organize labeled "cards" (representing content, features, or topics) into groups that make sense to them. The resulting groupings reveal participants' mental models — how they think about and categorize the domain you are researching.
Card sorting directly informs:
- Website and app navigation structure
- Feature categorization and labeling
- Terminology and taxonomy decisions
- Content strategy and organization
The technique has been central to UX and information architecture practice for decades. According to Nielsen Norman Group, card sorting is one of the highest-ROI research methods available because the effort is low and the structural insights are immediately actionable.
Open vs. Closed vs. Hybrid Card Sorting
The three variants of card sorting answer different research questions:
Open card sort: Participants create their own groups AND name them. Use this for discovery — when you want to understand how users naturally categorize content and what language they use. Best for: designing information architecture from scratch, understanding user mental models, improving taxonomy and labeling.
Closed card sort: Participants sort cards into predefined categories. Use this to validate an existing structure — does your current navigation match how users think? Best for: evaluating an existing site structure, confirming that a redesign matches user expectations, comparing two organizational approaches.
Hybrid card sort: Participants can use predefined categories OR create new ones. Use this when you have a starting structure but want to know where it breaks down. Best for: iterating on an existing information architecture, identifying which categories work and which create confusion.
When to Use Card Sorting
Card sorting is most valuable:
- When designing a new navigation structure: Before building, run an open sort to let users show you how they would organize content
- When a site has high navigation failure rates: Users cannot find what they are looking for — a closed sort reveals where your structure diverges from user expectations
- When adding new features or content: Where does a new feature belong? A hybrid sort helps decide
- After major content migrations: Ensure the new structure still makes sense to users
Card sorting is less useful for evaluating visual design, understanding user goals and motivations, or capturing nuanced attitudes and feelings. For those, qualitative interviews are the right tool. Many researchers combine card sorting (for structural clarity) with follow-up interviews (for contextual understanding) — a combination that platforms like Koji make easy to run in sequence.
How to Design a Card Sort Study
Step 1: Define the Research Questions
What do you need to decide based on this study?
- "Should we combine these two navigation sections or keep them separate?"
- "What do users call this concept — projects, workspaces, or campaigns?"
- "Does our current site structure match how users think about these topics?"
Specific questions produce actionable results. Vague questions produce interesting but unusable data.
Step 2: Select Your Cards
The ideal card set is 30–100 items. Fewer than 20 cards provide limited signal; more than 100 cause participant fatigue.
Cards should represent:
- Top-level content or features in your product
- Topics users actually encounter — no internal jargon or system terminology
- Similar items across potential categories so you can detect how users distinguish them
Label cards clearly and consistently. Ambiguous labels introduce noise — participants will sort based on how they interpret the label, not the underlying content.
Common mistake: including too many cards. When participants are overwhelmed, they sort more quickly and with less care. Quality over quantity is the rule.
Step 3: Choose Your Moderation Style
Unmoderated card sorting is the most common approach. Participants complete the study independently via tools like OptimalSort, Lyssna, or Maze. This allows for larger sample sizes (30–100 participants) and lower cost per participant.
Moderated card sorting adds a researcher who observes and asks think-aloud questions. This produces richer data about WHY participants organized items the way they did, but is more time-intensive.
For most information architecture decisions: start with unmoderated sorts for pattern detection, then run 3–5 moderated sessions to understand the reasoning behind the patterns. The moderated sessions are where you ask follow-up questions that reveal the mental models driving sorting behavior.
Step 4: Determine Your Sample Size
- Unmoderated open sorts: 15–30 participants typically produce stable results
- Closed sorts: 20–50 participants give statistically reliable results
- Per segment: If you have meaningfully different user types, recruit 15–20 per segment — their mental models may differ significantly
Step 5: Analyze the Results
Card sort analysis uses several complementary techniques:
Similarity matrix: A grid showing how often each pair of cards was grouped together across participants. Pairs with high co-occurrence are strong candidates to sit together in your final structure.
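The similarity matrix can be computed directly from raw sort exports. The sketch below is a minimal plain-Python version, using made-up participant data (the card names, and the list-of-sets structure, are illustrative assumptions — real tools export comparable data):

```python
from itertools import combinations

# Hypothetical raw data: each participant's sort is a list of groups,
# each group a set of card labels.
sorts = [
    [{"Billing", "Invoices"}, {"Profile", "Password"}],
    [{"Billing", "Invoices", "Password"}, {"Profile"}],
    [{"Billing", "Invoices"}, {"Profile", "Password"}],
]

cards = sorted({card for sort in sorts for group in sort for card in group})

def similarity_matrix(sorts, cards):
    """Fraction of participants who placed each pair of cards together."""
    counts = {pair: 0 for pair in combinations(cards, 2)}
    for sort in sorts:
        for group in sort:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

sim = similarity_matrix(sorts, cards)
print(sim[("Billing", "Invoices")])   # 1.0 — grouped together by everyone
print(sim[("Password", "Profile")])   # ~0.67 — grouped together by 2 of 3
```

Pairs near 1.0 clearly belong in the same section; pairs near 0.0 clearly do not — the mid-range pairs are the ones worth probing in moderated follow-ups.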
Dendrogram: A hierarchical tree diagram showing how cards cluster. The closer two cards appear in the dendrogram, the more often participants grouped them together. Most card sort tools generate this automatically.
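Under the hood, tools build the dendrogram with hierarchical (agglomerative) clustering on pairwise distances. A sketch of the same computation, assuming NumPy and SciPy are available and using a small invented similarity matrix for four cards:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise similarity (fraction of participants who grouped
# each pair together). Row/column order matches `cards`.
cards = ["Billing", "Invoices", "Password", "Profile"]
similarity = np.array([
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
])

# Clustering works on distances, so invert the similarity.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
Z = linkage(squareform(distance), method="average")

# Z encodes the dendrogram; cutting the tree into two clusters recovers
# the two groupings participants converged on.
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(cards, labels)))
```

`scipy.cluster.hierarchy.dendrogram(Z)` would draw the familiar tree diagram; the cluster cut above is the programmatic equivalent of reading the tree at a given height.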
Category agreement: In closed sorts, how often did participants put each card in its expected category? Low agreement signals a structural mismatch between your navigation and user mental models.
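Category agreement is a simple per-card percentage. A sketch with invented closed-sort data (the card names, category names, and the 60% review threshold are all assumptions for illustration):

```python
# Hypothetical closed-sort results: for each card, the category each
# participant chose, plus the category the design team expected.
placements = {
    "Invoices": ["Billing", "Billing", "Billing", "Account", "Billing"],
    "Password": ["Account", "Account", "Security", "Security", "Security"],
}
expected = {"Invoices": "Billing", "Password": "Account"}

def agreement(placements, expected):
    """Share of participants who put each card in its expected category."""
    return {
        card: sum(1 for c in choices if c == expected[card]) / len(choices)
        for card, choices in placements.items()
    }

for card, score in agreement(placements, expected).items():
    flag = "OK" if score >= 0.6 else "REVIEW"  # threshold is an assumption
    print(f"{card}: {score:.0%} agreement [{flag}]")
```

Cards flagged for review — like "Password" here, which most participants expected under a different category — are the structural mismatches worth exploring in follow-up interviews.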
Participant-generated category names: In open sorts, the labels participants create reveal the language your users actually use — invaluable for navigation labels, menu item names, and microcopy.
Step 6: Follow Up with Qualitative Interviews
Numbers tell you what happened; interviews tell you why. After analyzing your card sort data, you will likely have questions:
- Why did participants split this topic between two categories?
- What does "settings" mean to users — do they expect both billing and profile info there?
- Why did 40% of participants create a catch-all category?
Running a follow-up interview study with Koji lets you probe these questions at scale. Distribute an AI-moderated interview to your card sort participants or a new representative sample, and Koji will conduct structured conversations exploring the patterns you observed. The AI automatically synthesizes themes across responses, so you get qualitative context without manually analyzing dozens of interviews.
Card Sorting vs. Tree Testing
Card sorting and tree testing are often confused — they are actually two complementary techniques that work best in sequence:
| Method | Purpose | When to Use |
|---|---|---|
| Card sorting | Reveals how users organize information | Before you build your structure |
| Tree testing | Tests whether users can navigate a structure | After you build your structure |
Use card sorting to design your information architecture, then tree testing to validate it. The two together give you both the generative insight (how should it be organized?) and the evaluative confirmation (can users actually navigate it?).
Key Things to Know
- Remote card sorting is the norm: Most card sorts are conducted remotely with tools like OptimalSort, Lyssna, or Maze
- Terminology matters enormously: The label on a card significantly affects how it is sorted — test your labels with 2–3 participants before running the full study
- Demographic segments sort differently: Power users and casual users often have different mental models; analyze by segment when relevant
- Randomize card order: The sequence cards appear can bias sorting patterns; always randomize the presentation
- Expect inconsistency: Some participants will sort idiosyncratically — look for statistical patterns, not unanimous agreement
Tips & Best Practices
- Write a clear intro screen: Participants need to understand what a card represents and what you are asking them to do — poor instructions produce noisy data
- Invite participants to leave comments: Many tools support free-text notes per group; these are often the most insightful data points in the entire study
- Do not judge quality by completion time: Speed alone says little about care — look for outlier categories and unusual groupings as the signals worth exploring
- Combine with tree testing: Card sorting tells you how to structure; tree testing confirms it works — run both before committing to a major navigation redesign
- Involve developers in analysis: Sharing card sort findings with engineering helps explain why an IA decision was made, reducing pushback later
Related Articles
- UX Research Process: A Complete Framework
- Usability Testing Guide
- The Definitive Guide to User Interviews
- Generative vs. Evaluative Research
- How to Analyze Qualitative Data
Frequently Asked Questions
Q: How many participants do I need for a card sort? A: For unmoderated open sorts, 15–30 participants typically reach stable results. For closed sorts, 20–50 participants give reliable pattern data. If you have multiple user segments, recruit 15–20 per segment.
Q: What is the difference between card sorting and tree testing? A: Card sorting is generative — it reveals how users organize information. Tree testing is evaluative — it tests whether users can navigate a specific structure. Use card sorting to design your IA, then tree testing to validate it.
Q: Should I run open or closed card sorting first? A: If you are designing a new structure from scratch, start with open sorting to let users show you their mental model. If you are evaluating an existing structure, use closed sorting. If you have a draft structure to test, hybrid sorting gives the best of both approaches.
Q: How do I analyze an open card sort? A: Focus on three outputs: a similarity matrix showing which items are consistently grouped together, a dendrogram visualizing clustering patterns, and the category names participants created. The labels reveal user vocabulary and mental models that are directly actionable for your navigation design.
Q: Can I combine card sorting with qualitative interviews? A: Yes — this combination is highly effective. Run unmoderated card sorts for statistical patterns, then use a platform like Koji to conduct follow-up AI interviews that probe the reasoning behind the patterns. The combination gives you both quantitative structure and qualitative understanding.
Q: How is card sorting different from user interviews? A: Card sorting is a structured task-based method that reveals how users categorize information. User interviews are conversational and reveal goals, motivations, and mental models. They are complementary: card sorting gives you structural data, interviews give you contextual understanding.