
Top Tasks Analysis: How to Identify the Few Tasks That Matter Most

A complete guide to top tasks analysis — Gerry McGovern's methodology for finding the small set of tasks customers actually use your product or website to accomplish. Includes how to run a top tasks survey, calculate the long-tail, and validate the findings with AI customer interviews.

Top tasks analysis is a research methodology that identifies the small set of tasks (typically 5 to 10) that account for the majority of why people use your website, app, or product. Developed by Gerry McGovern, the technique starts with a long task list of 50-100 candidate tasks, asks a representative sample of customers to vote for their top 5, and reveals a steeply skewed distribution where the top tasks attract orders of magnitude more votes than the bottom ones. Running a Top Tasks study with a conversational research platform like Koji collapses the analysis from weeks to days while adding the qualitative reasoning behind each vote.

Most teams treat every feature, every page, and every workflow as roughly equally important. Top tasks analysis exposes how wrong that assumption is. In nearly every Top Tasks study ever published, the top 5% of tasks account for 25-50% of all customer attention, while the bottom 50% combined attract less than 5%. If your product or site is not optimised around the top tasks — and most are not — you are spending the majority of your engineering and content effort on the long tail.

This guide explains what top tasks analysis is, how to run a study, how to analyse the data, and how to use it to make ruthless prioritisation calls.


What Is Top Tasks Analysis?

Top tasks analysis was popularised by Gerry McGovern in his 2010 book The Stranger's Long Neck. The method is built on one observation that has been replicated thousands of times: when you give people a long list of possible tasks and ask them to pick a small subset, the votes are wildly unequal. A few tasks dominate. The rest are noise.

A typical Top Tasks study produces a chart that looks like this:

Task A  ████████████████████████  24%
Task B  ████████████████████  19%
Task C  ████████████████  14%
Task D  ███████████  10%
Task E  ████████  7%
Task F  ██████  5%
Task G  █████  4%
Task H  ████  3%
... 92 more tasks ...
Task ZZ ▏  0.1%

The shape is the insight. The top 5-10 tasks add up to 50-70% of all votes. The middle 30 tasks pick up 20-30%. The bottom 50+ tasks share what is left. Every Top Tasks study published in the past 15 years finds this same skew, across industries — government websites, banking apps, e-commerce, B2B SaaS.

This pattern has direct implications:

  • Most of your investment should go to the top 5-10 tasks. Anything you do on those affects the majority of customers.
  • The middle 30 are caretaker work. They matter, but only enough to maintain.
  • The bottom 50 should probably be removed, archived, or de-emphasised. Most of them dilute attention away from the top.

When to Use Top Tasks Analysis

Top tasks fits best when the question is one of these:

  • "What should our homepage actually focus on?"
  • "Which features deserve most of the next quarter's engineering capacity?"
  • "Which pages should our search index prioritise?"
  • "What should our customer support team optimise for?"
  • "What are the few things our app needs to be brilliant at?"

It is less helpful when:

  • You are inventing a new product (the candidate task list does not yet exist)
  • The product has fewer than 10 distinct user-facing tasks (the skew is not pronounced enough to act on)
  • You are doing pure aesthetic redesign (top tasks is a strategy tool, not a design tool)

For early-stage products without a candidate task list, run generative research first to discover what the tasks even are, then move to top tasks once you have 50+ candidates.


How to Run a Top Tasks Study

Step 1 — Build the candidate task list (the "long list")

Generate 50-100 candidate tasks. Sources:

  • Your existing site or app navigation
  • Customer support tickets — what people actually contact you about
  • Search query logs — what people search for when they get to your site
  • Sales call transcripts
  • Existing customer interviews — the strongest source by far

Write each task as a short verb phrase the customer would recognise. "Pay my invoice." "Update my billing address." "Cancel my subscription." Avoid internal jargon ("manage account profile metadata") — use customer language.

If your team has been running continuous discovery interviews on Koji, your task list is already half-written: pull the most common themes from the report and convert them to tasks.

Step 2 — Decide the audience

Top tasks is most powerful when the audience is your real customer base, not a panel. Use a research screener to filter respondents to people who have used your product in the last 90 days. For multi-segment products (e.g., admin vs end user), run separate studies per segment — the top tasks differ.

Step 3 — Sample size

McGovern's original guidance was 400+ respondents. With a representative panel and a clean task list, the rank order tends to stabilise around 200. For pilot studies, 100 is enough to see the rough skew.

If you are running Top Tasks within an existing customer base (rather than a panel), 100-150 is usually sufficient because the audience is more homogeneous.

Step 4 — Build the survey

The mechanics are simple. Show the full task list. Ask each respondent to pick their top 5. Some teams add a follow-up: "What is the single most important task on this list?"

Two practical considerations:

  • Randomise the task order per respondent (see the sketch after this list). Without randomisation, top-of-list tasks get a position bias of up to 15%.
  • Display all tasks on one screen if possible. Pagination causes attention to skew toward the first page.
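
The randomisation is simple to implement. A minimal sketch in Python, assuming each respondent has a stable ID (the seeded shuffle keeps a respondent's order stable across page reloads while still varying between respondents):

import random

def randomised_task_order(tasks, respondent_id):
    # Seed a private generator with the respondent ID so the same
    # respondent always sees the same order, but orders differ
    # across respondents.
    rng = random.Random(respondent_id)
    shuffled = list(tasks)
    rng.shuffle(shuffled)
    return shuffled

tasks = ["Pay my invoice", "Update my billing address", "Cancel my subscription"]
print(randomised_task_order(tasks, respondent_id="r-0042"))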

Step 5 — Add qualitative depth (the Koji upgrade)

This is the step traditional Top Tasks studies skip — and it is the highest-leverage upgrade you can make.

After picking their top 5, ask a few open-ended questions:

  • "What were you trying to accomplish the last time you did [top task]?"
  • "What is the hardest part of [top task] today?"
  • "What stops you from doing [top task] more often?"

In a traditional survey, these open-ended questions die in the long tail of unread text. In a Koji conversational interview, the AI moderator probes each answer, surfaces themes automatically, and produces a research report with quotes per task. You go from "Task A is the top task" to "Task A is the top task, here is exactly why customers struggle with it, and here is what they would change."

Use Koji's structured questions to do this in a single study:

  • Multiple choice (with a 5-pick limit) — the top tasks vote
  • Ranking — the order of the top 5 tasks per respondent
  • Open-ended (with follow-up probing) — why each task matters
  • Scale — severity of the friction on each top task
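
To make the flow concrete, here is the study design expressed as plain Python data. This is an illustrative sketch only; the field names are assumptions, not Koji's actual schema.

# Illustrative sketch only: field names are assumptions, not Koji's real schema.
candidate_tasks = ["Pay my invoice", "Update my billing address",
                   "Cancel my subscription"]  # trimmed; a real list has 50-100

study = [
    {"type": "multiple_choice", "prompt": "Pick the 5 tasks that matter most to you.",
     "options": candidate_tasks, "max_selections": 5},
    {"type": "ranking", "prompt": "Rank your 5 picks from most to least important.",
     "options": "answers_to_previous_question"},
    {"type": "open_ended", "prompt": "What is the hardest part of your top task today?",
     "follow_up_probing": True},
    {"type": "scale", "prompt": "How severe is that friction?", "min": 1, "max": 5},
]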

Step 6 — Analyse

The core analysis is the vote distribution. Sort tasks by total votes (or weighted votes if you used ranking), then chart them as a Pareto-style bar chart.

Three derived metrics to compute (a computation sketch follows the list):

  • Top-task share. What percentage of total votes do the top 5 tasks capture? Healthy products score 50-70%. Below 40%, the product is too unfocused to optimise.
  • Tail share. What percentage of tasks attract less than 1% of votes? A long tail is normal; a long tail that occupies most of your nav menu is a problem.
  • Segment overlap. If you ran multiple segments, how similar are their top tasks? High overlap means you can build one experience; low overlap means you need different surfaces per segment.
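
A computation sketch for the vote tally and the first two metrics, assuming per-respondent rankings of up to five tasks (the 5-4-3-2-1 weighting is one common convention, not part of the method):

from collections import Counter

def weighted_votes(rankings, weights=(5, 4, 3, 2, 1)):
    # Convert per-respondent top-5 rankings into weighted vote counts:
    # rank 1 earns 5 points, rank 5 earns 1.
    tally = Counter()
    for ranked_tasks in rankings:
        for position, task in enumerate(ranked_tasks):
            tally[task] += weights[position]
    return tally

def top_task_share(votes, top_n=5):
    # Percentage of all votes captured by the top_n tasks.
    ranked = sorted(votes.values(), reverse=True)
    return sum(ranked[:top_n]) / sum(ranked)

def tail_share(votes, threshold=0.01):
    # Percentage of *tasks* (not votes) attracting under 1% of votes.
    total = sum(votes.values())
    return sum(1 for v in votes.values() if v / total < threshold) / len(votes)

rankings = [["Pay my invoice", "Export data", "Cancel my subscription"],
            ["Pay my invoice", "Invite a teammate", "Change theme"]]
votes = weighted_votes(rankings)
print(f"Top-5 share: {top_task_share(votes):.0%}, tail: {tail_share(votes):.0%} of tasks")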

Sample Sizes by Goal

Goal                        Recommended sample
Pilot / directional read    100-150 respondents
Production prioritisation   200-400 respondents
Cross-segment comparison    100-200 per segment
Tracking over time          200+ per wave, fielded annually

Top Tasks vs Adjacent Methods

Method         Question it answers                    Best for
Top tasks      Which tasks attract most demand?       Strategic prioritisation
Card sorting   How do users group tasks?              Information architecture
Tree testing   Can users find the task in this nav?   IA validation
Kano model     Which features delight vs satisfy?     Feature prioritisation
MaxDiff        Forced trade-offs across many items    Forced ranking

A common pattern: run top tasks to find what matters → run tree testing to validate the navigation supports the top tasks → run card sorting only on the top tasks if the IA is unclear.

Top tasks and MaxDiff overlap. Top tasks is more interpretable to non-research audiences ("pick your top 5"). MaxDiff is more statistically rigorous but harder to explain. For most teams and most prioritisation decisions, top tasks is the better default.


Acting on Top Tasks Findings

The Top Tasks findings are only as useful as the decisions they drive. Three patterns successful teams use:

1. Re-budget the team's attention

If 70% of customer demand sits in 8 tasks, but only 30% of the team's quarterly capacity is allocated to those tasks, the team has a portfolio mismatch. Top tasks gives you the data to argue for re-allocation.

2. Set a "top tasks score"

Pick a measurable proxy for each top task — completion rate, time to complete, satisfaction score — and track it. Over time, the top tasks score becomes a single product-health metric the whole team can rally around.
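
One simple way to roll those proxies into a single trackable number; the weights and normalisation here are illustrative assumptions to tune per product:

def top_tasks_score(task_metrics, w_completion=0.6, w_satisfaction=0.4):
    # task_metrics: {task: {"completion_rate": 0..1, "satisfaction": 1..5}}
    # Normalise each proxy to 0-100, blend, then average across top tasks.
    # Weights and normalisation are illustrative assumptions.
    scores = []
    for m in task_metrics.values():
        completion = m["completion_rate"] * 100            # 0..1 -> 0..100
        satisfaction = (m["satisfaction"] - 1) / 4 * 100   # 1..5 -> 0..100
        scores.append(w_completion * completion + w_satisfaction * satisfaction)
    return sum(scores) / len(scores)

metrics = {"Pay my invoice": {"completion_rate": 0.82, "satisfaction": 3.9},
           "Cancel my subscription": {"completion_rate": 0.61, "satisfaction": 2.8}}
print(f"Top tasks score: {top_tasks_score(metrics):.0f}/100")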

3. Ruthlessly demote the long tail

The bottom 50 tasks are not "small features the team should still build." They are noise. Hide them, archive them, or remove them from the navigation. McGovern's term is "deletism" — the discipline of removing rather than adding.


Why Run Top Tasks With Koji

Traditional Top Tasks studies use a forms-based survey tool (SurveyMonkey, Qualtrics, Google Forms). They produce the rank order, but the qualitative reasoning never materialises — open-text responses drown in volume and rarely get analysed.

Koji collapses the workflow:

  • Single conversation, all formats. Top tasks vote (multiple_choice), top task ranking (ranking), top-task friction (open_ended with follow-up), severity (scale) — all in one Koji interview.
  • Voice or text. Customers can speak their answer to "what is the hardest part of [top task]" and the AI captures it cleanly.
  • AI follow-up probing. When a respondent says "it is just slow," the AI asks "what specifically feels slow — finding it, or filling it out?"
  • Automatic analysis. Themes per top task. Quotes per top task. Distribution charts. All ready in minutes.
  • Insights chat — ask "what did 25-34 year olds say about Task A?" in plain English.

A team running their first Top Tasks study with Koji typically goes from candidate-task list to acted-on findings in 7-10 days, against 6-8 weeks for the traditional survey-then-interview workflow.


Common Pitfalls

  1. Too few candidate tasks. With only 20-30 candidates, the skew is muted and the long tail is invisible. Aim for 50-100.
  2. Internal jargon in the task list. Tasks must be in customer language. "Manage user permissions" is jargon. "Add a teammate" is a task.
  3. Unrandomised order. Position bias warps the top-task ranking by 10-15%.
  4. Skipping the qualitative. The vote is the rank; the reasoning is the action plan. Skipping the open-ended turns Top Tasks into a number with no story.
  5. Not re-running. Tasks shift as products and customer bases evolve. Re-field annually at minimum.
  6. Treating low-vote tasks as low-value. Some low-vote tasks are critical for compliance, security, or accessibility. Use the data to inform attention allocation, not as a kill list — apply judgement.

The Bottom Line

Most product teams over-invest in the long tail and under-invest in the few tasks customers actually care about. Top Tasks analysis is the single most direct way to see that imbalance and act on it. Combined with a conversational research platform like Koji that captures both the vote and the reasoning in one study, the methodology is faster and more decision-grade than ever.

If your team has not run a Top Tasks study in the last 12 months, the odds that your roadmap is well-aligned to customer demand are low. The fix is one study away.


Related Articles

Structured Questions in AI Interviews

Mix quantitative data collection — scales, ratings, multiple choice, ranking — with AI-powered conversational follow-up in a single interview.

Research Screener Questions: How to Write Questions That Find the Right Participants

Learn how to write effective screener questions that filter the right participants for your user research studies. Includes 10 proven templates, best practices, and common mistakes to avoid.

Generative Research: How to Uncover User Needs You Didn't Know Existed

A complete guide to generative (exploratory) user research — what it is, when to use it, which methods work best, and how AI-powered platforms like Koji make it faster and more scalable than ever.

Tree Testing: The Complete Guide to Testing Your Information Architecture

A comprehensive guide to tree testing — the UX research method for validating information architecture and navigation before you build.

MaxDiff Analysis: The Complete Guide to Maximum Difference Scaling (2026)

Learn how MaxDiff (Maximum Difference Scaling) produces sharper feature and message prioritization than rating scales — and how to pair it with conversational AI interviews to capture the why behind every score.

Task Analysis in UX Research: A Complete Methodology Guide

Task analysis is the foundation of usability — the systematic study of how users complete goals. This guide covers hierarchical task analysis (HTA), cognitive task analysis (CTA), the 7-step process, real examples, and how AI-moderated voice interviews let teams build task models from hundreds of users in days.

Kano Model: How to Prioritize Features Using Customer Research

A complete guide to the Kano Model — the feature prioritization framework that maps customer emotions to product decisions. Learn how to run Kano surveys, classify features, and build products customers love.

Card Sorting: The Complete Guide to Information Architecture Research

Everything you need to run effective card sorting studies — open, closed, and hybrid variants. Includes sample sizes, analysis techniques, and how to combine card sorting with qualitative interviews.

Continuous Discovery: How to Run Weekly Customer Interviews Without Burning Out

Continuous discovery is the practice of conducting customer interviews every week as part of your normal workflow. This guide explains how to build an always-on research practice that actually scales.

How to Run Feature Prioritization Surveys That Build Products Users Actually Want

Learn how to run feature prioritization surveys using RICE, Kano, MoSCoW, and opportunity scoring frameworks. Combine quantitative ranking with AI-driven qualitative depth to build what users truly need.