{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-04T17:41:07.891Z"},"content":[{"type":"documentation","id":"f17d589a-1d55-43b7-8bcc-12a188a923db","slug":"top-tasks-analysis-guide","title":"Top Tasks Analysis: How to Identify the Few Tasks That Matter Most","url":"https://www.koji.so/docs/top-tasks-analysis-guide","summary":"A complete guide to top tasks analysis — Gerry McGovern's methodology for identifying the small set of tasks (typically 5-10) that account for the majority of customer demand on a product or website. Covers candidate task list creation, sample size, randomisation, vote analysis, the long-tail distribution, and how to combine top tasks voting with open-ended reasoning using Koji structured questions and AI follow-up probing in a single conversation.","content":"# Top Tasks Analysis: How to Identify the Few Tasks That Matter Most\n\n**Top tasks analysis is a research methodology that identifies the small set of tasks (typically 5 to 10) that account for the majority of why people use your website, app, or product. Developed by Gerry McGovern, the technique starts with a long task list of 50-100 candidate tasks, asks a representative sample of customers to vote for their top 5, and reveals a steeply skewed distribution where the top tasks attract orders of magnitude more votes than the bottom ones. Running a Top Tasks study with a conversational research platform like Koji collapses the analysis from weeks to days while adding the qualitative reasoning behind each vote.**\n\nMost teams treat every feature, every page, and every workflow as roughly equally important. Top tasks analysis exposes how wrong that assumption is. 
In nearly every Top Tasks study ever published, the top 5% of tasks account for 25-50% of all customer attention, while the bottom 50% combined attract less than 5%. If your product or site is not optimised around the top tasks — and most are not — you are spending the majority of your engineering and content effort on the long tail.\n\nThis guide explains what top tasks analysis is, how to run a study, how to analyse the data, and how to use it to make ruthless prioritisation calls.\n\n---\n\n## What Is Top Tasks Analysis?\n\nTop tasks analysis was popularised by Gerry McGovern in his 2010 book *The Stranger's Long Neck*. The method is built on one observation that has been replicated thousands of times: when you give people a long list of possible tasks and ask them to pick a small subset, the votes are wildly unequal. A few tasks dominate. The rest are noise.\n\nA typical Top Tasks study produces a chart that looks like this:\n\n```\nTask A  ████████████████████████  24%\nTask B  ████████████████████  19%\nTask C  ████████████████  14%\nTask D  ███████████  10%\nTask E  ████████  7%\nTask F  ██████  5%\nTask G  █████  4%\nTask H  ████  3%\n... 92 more tasks ...\nTask ZZ ▏  0.1%\n```\n\nThe shape is the insight. The top 5-10 tasks add up to 50-70% of all votes. The middle 30 tasks pick up 20-30%. The bottom 50+ tasks share what is left. 
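
The vote distribution and the two share metrics described later in this guide reduce to a small tally once ballots are in. A minimal sketch in Python; the ballots and task names below are made up for illustration, not drawn from a real study:

```python
from collections import Counter

# Hypothetical ballots: each respondent's top-5 picks (illustrative data only).
ballots = [
    ["Task A", "Task B", "Task C", "Task D", "Task E"],
    ["Task A", "Task B", "Task F", "Task G", "Task H"],
    ["Task A", "Task C", "Task B", "Task E", "Task I"],
]

# Tally every vote, then rank tasks by count for a Pareto-style view.
votes = Counter(task for ballot in ballots for task in ballot)
total = sum(votes.values())
ranked = votes.most_common()

# Top-task share: fraction of all votes captured by the top 5 tasks.
top5_share = sum(count for _, count in ranked[:5]) / total

# Tail: tasks attracting less than 1% of all votes.
tail_tasks = [task for task, count in ranked if count / total < 0.01]

print(f"Top-5 share: {top5_share:.0%}")
print(f"Tasks under 1% of votes: {len(tail_tasks)}")
```

With a real study the same tally runs over hundreds of ballots; the ranking and shares are what get charted.
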
Top Tasks studies published over the past 15 years consistently find this same skew across industries — government websites, banking apps, e-commerce, B2B SaaS.\n\nThis pattern has direct implications:\n\n- **Most of your investment should go to the top 5-10 tasks.** Anything you do on those affects the majority of customers.\n- **The middle 30 are caretaker work.** They matter, but only enough to maintain.\n- **The bottom 50 should probably be removed, archived, or de-emphasised.** Most of them dilute attention away from the top.\n\n---\n\n## When to Use Top Tasks Analysis\n\nTop tasks fits best when the question is one of these:\n\n- \"What should our homepage actually focus on?\"\n- \"Which features deserve most of the next quarter's engineering capacity?\"\n- \"Which pages should our search index prioritise?\"\n- \"What should our customer support team optimise for?\"\n- \"What are the few things our app needs to be brilliant at?\"\n\nIt is less helpful when:\n\n- You are inventing a new product (the candidate task list does not yet exist)\n- The product has fewer than 10 distinct user-facing tasks (the skew is not pronounced enough to act on)\n- You are doing pure aesthetic redesign (top tasks is a strategy tool, not a design tool)\n\nFor early-stage products without a candidate task list, run [generative research](/docs/generative-research-guide) first to discover what the tasks even are, then move to top tasks once you have 50+ candidates.\n\n---\n\n## How to Run a Top Tasks Study\n\n### Step 1 — Build the candidate task list (the \"long list\")\n\nGenerate 50-100 candidate tasks. Sources:\n\n- Your existing site or app navigation\n- Customer support tickets — what people actually contact you about\n- Search query logs — what people search for when they get to your site\n- Sales call transcripts\n- Existing customer interviews — the strongest source by far\n\nWrite each task as a short verb phrase the customer would recognise. 
\"Pay my invoice.\" \"Update my billing address.\" \"Cancel my subscription.\" Avoid internal jargon (\"manage account profile metadata\") — use customer language.\n\nIf your team has been running [continuous discovery interviews](/docs/continuous-discovery-user-research) on Koji, your task list is already half-written: pull the most common themes from the report and convert them to tasks.\n\n### Step 2 — Decide the audience\n\nTop tasks is most powerful when the audience is your *real* customer base, not a panel. Use a [research screener](/docs/research-screener-questions) to filter respondents to people who have used your product in the last 90 days. For multi-segment products (e.g., admin vs end user), run separate studies per segment — the top tasks differ.\n\n### Step 3 — Sample size\n\nMcGovern's original guidance was 400+ respondents. With a representative panel and a clean task list, the rank order tends to stabilise around 200. For pilot studies, 100 is enough to see the rough skew.\n\nIf you are running Top Tasks within an existing customer base (rather than a panel), 100-150 is usually sufficient because the audience is more homogeneous.\n\n### Step 4 — Build the survey\n\nThe mechanics are simple. Show the full task list. Ask each respondent to pick their top 5. 
Some teams add a follow-up: \"What is the *single* most important task on this list?\"\n\nTwo practical considerations:\n\n- **Randomise the task order per respondent.** Without randomisation, top-of-list tasks get a position bias of up to 15%.\n- **Display all tasks on one screen if possible.** Pagination causes attention to skew toward the first page.\n\n### Step 5 — Add qualitative depth (the Koji upgrade)\n\nThis is the step traditional Top Tasks studies skip — and it is the highest-leverage upgrade you can make.\n\nAfter picking their top 5, ask a few open-ended questions:\n\n- \"What were you trying to accomplish the last time you did [top task]?\"\n- \"What is the hardest part of [top task] today?\"\n- \"What stops you from doing [top task] more often?\"\n\nIn a traditional survey, these open-ended questions die in the long tail of unread text. In a Koji conversational interview, the AI moderator probes each answer, surfaces themes automatically, and produces a [research report](/docs/reading-your-research-report) with quotes per task. You go from \"Task A is the top task\" to \"Task A is the top task, here is exactly why customers struggle with it, and here is what they would change.\"\n\nUse Koji's [structured questions](/docs/structured-questions-guide) to do this in a single study:\n\n- **Multiple choice (with a 5-pick limit)** — the top tasks vote\n- **Ranking** — the order of the top 5 tasks per respondent\n- **Open-ended (with follow-up probing)** — why each task matters\n- **Scale** — severity of the friction on each top task\n\n### Step 6 — Analyse\n\nThe core analysis is the vote distribution. Sort tasks by total votes (or weighted votes if you used ranking), then chart them as a Pareto-style bar chart.\n\nThree derived metrics to compute:\n\n- **Top-task share.** What percentage of total votes do the top 5 tasks capture? Healthy products score 50-70%. 
Below 40%, the product is too unfocused to optimise.\n- **Tail share.** What percentage of tasks attract less than 1% of votes? A long tail is normal; a long tail that occupies most of your nav menu is a problem.\n- **Segment overlap.** If you ran multiple segments, how similar are their top tasks? High overlap means you can build one experience; low overlap means you need different surfaces per segment.\n\n---\n\n## Sample Sizes by Goal\n\n| Goal | Recommended sample |\n|---|---|\n| Pilot / directional read | 100-150 respondents |\n| Production prioritisation | 200-400 respondents |\n| Cross-segment comparison | 100-200 per segment |\n| Tracking over time | 200+ per wave, fielded annually |\n\n---\n\n## Top Tasks vs Adjacent Methods\n\n| Method | Question it answers | Best for |\n|---|---|---|\n| **Top tasks** | Which tasks attract most demand? | Strategic prioritisation |\n| [Card sorting](/docs/card-sorting-guide) | How do users group tasks? | Information architecture |\n| [Tree testing](/docs/tree-testing-guide) | Can users find the task in this nav? | IA validation |\n| [Kano model](/docs/kano-model) | Which features delight vs satisfy? | Feature prioritisation |\n| [MaxDiff](/docs/maxdiff-analysis-guide) | Which items win forced trade-offs? | Forced ranking |\n\nA common pattern: run top tasks to find what matters → run [tree testing](/docs/tree-testing-guide) to validate the navigation supports the top tasks → run [card sorting](/docs/card-sorting-guide) only on the top tasks if the IA is unclear.\n\nTop tasks and MaxDiff overlap. Top tasks is more interpretable to non-research audiences (\"pick your top 5\"). MaxDiff is more statistically rigorous but harder to explain. For most teams, and for the majority of decisions, top tasks is the better default.\n\n---\n\n## Acting on Top Tasks Findings\n\nTop Tasks findings are only as useful as the decisions they drive. Three patterns successful teams use:\n\n### 1. 
Re-budget the team's attention\n\nIf 70% of customer demand sits in 8 tasks, but only 30% of the team's quarterly capacity is allocated to those tasks, the team has a portfolio mismatch. Top tasks gives you the data to argue for re-allocation.\n\n### 2. Set a \"top tasks score\"\n\nPick a measurable proxy for each top task — completion rate, time to complete, satisfaction score — and track it. Over time, the top tasks score becomes a single product-health metric the whole team can rally around.\n\n### 3. Ruthlessly demote the long tail\n\nThe bottom 50 tasks are not \"small features the team should still build.\" They are noise. Hide them, archive them, or remove them from the navigation. McGovern's term is \"deletism\" — the discipline of removing rather than adding.\n\n---\n\n## Why Run Top Tasks With Koji\n\nTraditional Top Tasks studies use a forms-based survey tool (SurveyMonkey, Qualtrics, Google Forms). They produce the rank order, but the qualitative reasoning never materialises — open-text responses drown in volume and rarely get analysed.\n\nKoji collapses the workflow:\n\n- **Single conversation, all formats.** Top tasks vote (multiple_choice), top task ranking (ranking), top-task friction (open_ended with follow-up), severity (scale) — all in one Koji interview.\n- **Voice or text.** Customers can speak their answer to \"what is the hardest part of [top task]\" and the AI captures it cleanly.\n- **AI follow-up probing.** When a respondent says \"it is just slow,\" the AI asks \"what specifically feels slow — finding it, or filling it out?\"\n- **Automatic analysis.** Themes per top task. Quotes per top task. Distribution charts. 
All ready in minutes.\n- **[Insights chat](/docs/insights-chat-guide)** — ask \"what did 25-34 year olds say about Task A?\" in plain English.\n\nA team running their first Top Tasks study with Koji typically goes from candidate-task list to acted-on findings in 7-10 days, against 6-8 weeks for the traditional survey-then-interview workflow.\n\n---\n\n## Common Pitfalls\n\n1. **Too few candidate tasks.** With only 20-30 candidates, the skew is muted and the long tail is invisible. Aim for 50-100.\n2. **Internal jargon in the task list.** Tasks must be in customer language. \"Manage user permissions\" is jargon. \"Add a teammate\" is a task.\n3. **Unrandomised order.** Position bias warps the top-task ranking by 10-15%.\n4. **Skipping the qualitative.** The vote is the rank; the reasoning is the action plan. Skipping the open-ended turns Top Tasks into a number with no story.\n5. **Not re-running.** Tasks shift as products and customer bases evolve. Re-field annually at minimum.\n6. **Treating low-vote tasks as low-value.** Some low-vote tasks are critical for compliance, security, or accessibility. Use the data to inform attention allocation, not as a kill list — apply judgement.\n\n---\n\n## The Bottom Line\n\nMost product teams over-invest in the long tail and under-invest in the few tasks customers actually care about. Top Tasks analysis is the single most direct way to see that imbalance and act on it. Combined with a conversational research platform like Koji that captures both the vote and the reasoning in one study, the methodology is faster and more decision-grade than ever.\n\nIf your team has not run a Top Tasks study in the last 12 months, the odds that your roadmap is well-aligned to customer demand are low. 
The fix is one study away.\n\n---\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — How Koji combines multiple_choice voting, ranking, scale, and open-ended probing in a single Top Tasks study\n- [Card Sorting Guide](/docs/card-sorting-guide) — Use after Top Tasks to design the IA around them\n- [Tree Testing Guide](/docs/tree-testing-guide) — Validate that navigation supports the identified top tasks\n- [Kano Model](/docs/kano-model) — Complement to Top Tasks for feature prioritisation\n- [MaxDiff Analysis Guide](/docs/maxdiff-analysis-guide) — A more statistically rigorous alternative\n- [Feature Prioritization Surveys](/docs/feature-prioritization-survey-guide) — Translating Top Tasks findings into a roadmap\n- [Continuous Discovery: Weekly Customer Interviews](/docs/continuous-discovery-user-research) — Keep your candidate task list fresh year-round\n","category":"Research Methods","lastModified":"2026-05-04T03:21:36.3095+00:00","metaTitle":"Top Tasks Analysis Guide — Find What Customers Actually Use (2026)","metaDescription":"Run a Top Tasks study to identify the 5-10 tasks that drive the majority of customer demand. Includes McGovern methodology, sample sizes, and how Koji captures vote + reasoning in one conversation.","keywords":["top tasks analysis","top tasks methodology","top tasks survey","gerry mcgovern","task analysis ux","prioritization research","task identification","top tasks study","customer task analysis","website prioritization","feature prioritization research","task long tail"],"aiSummary":"A complete guide to top tasks analysis — Gerry McGovern's methodology for identifying the small set of tasks (typically 5-10) that account for the majority of customer demand on a product or website. 
Covers candidate task list creation, sample size, randomisation, vote analysis, the long-tail distribution, and how to combine top tasks voting with open-ended reasoning using Koji structured questions and AI follow-up probing in a single conversation.","aiPrerequisites":["ux-research-process","task-analysis-ux-research"],"aiLearningOutcomes":["Build a 50-100 candidate task list using customer language","Run a Top Tasks survey with proper randomisation and sample size","Analyse the vote distribution to find the top tasks, middle, and long tail","Combine quantitative voting with qualitative reasoning using Koji structured questions","Translate Top Tasks findings into ruthless prioritisation decisions"],"aiDifficulty":"intermediate","aiEstimatedTime":"12 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}