{"site":{"name":"Koji","description":"AI-native customer research platform that helps teams conduct, analyze, and synthesize customer interviews at scale.","url":"https://www.koji.so","contentTypes":["blog","documentation"],"lastUpdated":"2026-05-09T07:16:24.597Z"},"content":[{"type":"documentation","id":"e50c9fd4-37e1-4724-839e-3633eca26f6e","slug":"heart-framework-ux-metrics","title":"HEART Framework: Google’s 5-Metric Model for Measuring User Experience (2026 Guide)","url":"https://www.koji.so/docs/heart-framework-ux-metrics","summary":"A definitive guide to Google’s HEART framework — the five UX metrics (Happiness, Engagement, Adoption, Retention, Task Success), Kerry Rodden’s Goals–Signals–Metrics process, real product examples, and how Koji collapses HEART surveys into a single afternoon.","content":"## What Is the HEART Framework?\n\nThe HEART framework is Google’s five-metric model for measuring the quality of a user experience: **Happiness, Engagement, Adoption, Retention, and Task Success**. It was developed in 2010 by Kerry Rodden, Hilary Hutchinson, and Xin Fu on Google’s research team, and first published at ACM CHI as *Measuring the User Experience on a Large Scale*. The goal was to give product teams a small, focused set of user-centered metrics they could pair with their existing analytics, instead of drowning in raw clickstream data.\n\nMore than fifteen years later, HEART remains the most-cited UX measurement framework in product. It is used inside Google, Spotify, Atlassian, and most modern product orgs because it does two things at once: it covers both **attitudinal** signals (what users say) and **behavioral** signals (what users do), and it pairs naturally with the Goals–Signals–Metrics (GSM) process for turning fuzzy product goals into trackable numbers.\n\n## The Five HEART Metrics, Explained\n\n### H — Happiness\nAttitudinal measures of how users feel about the product. 
Typically collected through scale-based surveys (CSAT, NPS, SUS, custom satisfaction items) and qualitative follow-ups. Happiness is the only HEART metric that *requires* asking the user; you cannot infer it from logs.\n\n**Common signals:** satisfaction rating, perceived ease, recommendation intent, perceived speed.\n**Common metrics:** CSAT % top-2-box, average SUS score, NPS, average 1–7 happiness score.\n\n### E — Engagement\nThe depth of interaction per session, for active users. Engagement is **not** the same as adoption; it specifically describes how intensely current users use the product.\n\n**Common signals:** sessions per user per week, time in app, photos uploaded, messages sent.\n**Common metrics:** average sessions per active user, median session duration, activity events per user per week.\n\n### A — Adoption\nNew users in a defined period, or new users of a specific feature. Adoption answers “Is anyone picking this up?” — it is the leading indicator before retention.\n\n**Common signals:** first-time users of a feature, upgrades, conversions to paid plan.\n**Common metrics:** % of monthly actives who used Feature X at least once, weekly new signups, % of trial users who completed activation.\n\n### R — Retention\nThe rate at which users return over a defined period. Retention is the closest HEART metric to a business outcome — it is what every product compounds against over time.\n\n**Common signals:** users active in week 2, week 4, week 12; reactivation; churn.\n**Common metrics:** D1/D7/D30 retention curves, monthly active retention, gross/net revenue retention for B2B.\n\n### T — Task Success\nWhether users can complete the things they came to do. 
Task Success is the most under-instrumented HEART metric — most teams measure adoption and retention but never check whether the users they retained are actually succeeding.\n\n**Common signals:** task completion, errors per task, time to complete, search-to-success rate.\n**Common metrics:** task completion rate %, average Single Ease Question (SEQ) score, error rate, time on task.\n\n> **Expert insight:** “When we created HEART, the goal wasn’t to add five more metrics to a dashboard — it was to give designers and PMs a vocabulary for choosing the *right* metric for the question they were actually asking.” — Kerry Rodden, framework co-author, in her published HEART reference notes (kerryrodden.com/heart).\n\n## The Goals–Signals–Metrics (GSM) Process\n\nHEART without GSM is a checklist. HEART *with* GSM is a measurement system. GSM is the three-step exercise the original authors paired with the framework:\n\n1. **Goals** — For each HEART category that matters to your project, write one sentence describing what success looks like *for the user*. Not for the business.\n2. **Signals** — List the observable behaviors or stated attitudes that would tell you the goal is being achieved (or missed).\n3. **Metrics** — For each signal, choose the precise number you will track over time.\n\n### Worked example: a new in-app onboarding flow\n\n| HEART | Goal | Signal | Metric |\n|---|---|---|---|\n| Happiness | New users feel confident after onboarding | Self-reported confidence, low frustration | Post-onboarding 1–7 confidence score; % of users rating onboarding 6 or 7 |\n| Adoption | New users complete the activation moment | First action of value within session 1 | % of new users who reach the activation event in <10 min |\n| Task Success | Users complete each onboarding step without error | Completion of each step, low error rate | Step-level completion rate; SEQ score per step; error count |\n\nNotice that **Engagement** and **Retention** were intentionally left off. 
The team is measuring an onboarding flow, not the whole product. This is the entire point of GSM — deliberately scoping which HEART metrics matter for *this* project.\n\n## Why HEART Works (and What It Replaced)\n\nBefore HEART, most product orgs split into two warring camps: a quant analytics team chasing dashboard numbers, and a qual UX team running studies that no one tied to KPIs. HEART’s contribution was to make both legitimate inputs to the same scorecard. Happiness lives next to Engagement. SEQ scores live next to D7 retention. The framework forces a team to admit that **users’ stated experience and observed behavior are both part of ‘the metric.’**\n\nA 2023 industry survey by ProductPlan found that HEART is among the top three UX measurement frameworks adopted by mature product teams, alongside North Star and AARRR. Atlassian, Spotify, GoDaddy, and dozens of public design system case studies cite HEART as the model their measurement plan is built on.\n\n## How to Implement HEART in Five Steps\n\n### 1. Pick the project, not the product\nDo not try to instrument HEART for “the whole app.” Pick a single project: a feature launch, a redesign, a flow you suspect is broken. The metrics framework should be scoped to a decision you are about to make.\n\n### 2. Choose 2–3 HEART categories\nResist the urge to fill in all five. The original Google paper explicitly recommends picking the categories that map to the project’s goal. A new feature launch is usually Adoption + Task Success. A retention play is usually Retention + Happiness. A redesign of a complex form is usually Happiness + Task Success.\n\n### 3. Run the GSM exercise as a workshop\nGet the PM, designer, researcher, and engineer in the room for 60–90 minutes. For each HEART category you picked, write the Goal sentence first, then brainstorm Signals, then narrow to one or two Metrics. Force every metric to have a numeric target.\n\n### 4. 
Instrument behavioral metrics in analytics; instrument attitudinal metrics in research\nEngagement, Adoption, Retention, and the *behavioral* part of Task Success belong in your analytics platform (Mixpanel, Amplitude, PostHog, GA4). Happiness and the *perceptual* part of Task Success (SEQ, post-task confidence) belong in research — surveys, AI-moderated interviews, or in-product micro-surveys.\n\n### 5. Set a review cadence\nReview HEART metrics at the cadence that matches the decision: weekly for an active launch, monthly for a steady-state product area, quarterly for a strategic theme. Without a cadence, HEART becomes a slide that gets shown once and forgotten.\n\n## The Modern Approach: HEART With AI-Moderated Research\n\nThe traditional weakness of HEART has always been the **attitudinal half** — Happiness and the perceptual side of Task Success. Behavioral metrics are essentially free once analytics is wired up; surveys and qualitative follow-ups are not. Most product teams either (a) skip Happiness entirely and rely on the proxy of NPS once a quarter, or (b) run a quarterly Typeform survey nobody analyzes past the average score.\n\nThis is exactly the gap an AI-native research platform like **Koji** closes. 
With Koji you can:\n\n- **Bundle every attitudinal HEART signal into one continuous study.** Use Koji’s [structured questions](/docs/structured-questions-guide) (six types: open_ended, scale, single_choice, multiple_choice, ranking, yes_no) to drop in a SEQ scale, a Happiness scale, a CSAT single_choice, and a feature adoption yes_no — in a single 4-minute interview.\n- **Run the Happiness survey continuously, not quarterly.** Koji’s AI moderator runs interviews 24/7 against a shareable link or in-product widget, so Happiness is a streaming metric, not an annual event.\n- **Get the *why* alongside every score.** Each scale question is followed by an AI-driven open-ended probe (“What made you rate it that way?”), and Koji’s [thematic analysis engine](/docs/thematic-analysis-guide) clusters the explanations into themes automatically. You see your Happiness score *and* the top three reasons it moved — in the same dashboard.\n- **Replace the read-out slide with live reports.** Koji’s real-time report aggregation updates the HEART scorecard the moment new responses arrive, eliminating the multi-week gap between data collection and stakeholder review.\n\nTeams using AI-assisted research tools report 60% faster time-to-insight than teams running the same studies manually (Forrester, *State of Customer Insights 2024*), and Koji’s own customers run HEART attitudinal scorecards in days instead of the typical 2–3 week survey cycle.\n\n## Five Common HEART Mistakes to Avoid\n\n1. **Trying to track all five categories on every project.** GSM exists to scope you down. Two categories well-instrumented beats five categories half-instrumented.\n2. **Confusing Engagement with Adoption.** Engagement = depth among existing actives. Adoption = first-time uptake. Mixing them produces meaningless dashboards.\n3. **Skipping Happiness because ‘we’ll do NPS quarterly.’** NPS is a brand-loyalty metric. It is not a Happiness signal at the feature level. 
Run a fast SEQ + 1–7 satisfaction post-task instead.\n4. **Defining Task Success as ‘they clicked the button.’** A click is not a success. The user has to *complete the underlying job*. Pair behavioral completion with a SEQ score and a post-task open-ended probe.\n5. **No baseline and no target.** A HEART metric without a previous reading and a target number is a vanity number. Always show ‘current vs prior period vs target.’\n\n## When NOT to Use HEART\n\nHEART is overkill for very early-stage discovery (“do people have this problem at all?”) — use [Mom Test customer interviews](/docs/mom-test-methodology) or [Jobs-to-Be-Done switch interviews](/docs/jobs-to-be-done-framework) instead. HEART also under-serves pure e-commerce funnels, which are better measured with conversion-rate optimization. HEART shines for **product features and experiences with repeat use**, which is most B2B and consumer SaaS.\n\n## Related Resources\n\n- [Structured Questions in AI Interviews](/docs/structured-questions-guide) — the six question types Koji supports for HEART scale and yes/no items\n- [System Usability Scale (SUS): Complete Guide](/docs/system-usability-scale-guide) — the standard Happiness instrument for HEART\n- [Customer Effort Score (CES): How to Measure and Reduce Friction](/docs/customer-effort-score-guide) — a complement to Task Success in HEART\n- [How to Build an NPS Survey That Actually Drives Action](/docs/nps-survey-guide) — NPS as a coarse Happiness proxy\n- [Top Tasks Analysis: Identify the Few Tasks That Matter](/docs/top-tasks-analysis-guide) — the natural input to scoping Task Success\n- [UX Research Process: A Complete Framework for 2026](/docs/ux-research-process) — where HEART fits in the broader research workflow","category":"Research Methods","lastModified":"2026-05-07T03:17:11.999234+00:00","metaTitle":"HEART Framework: Google’s UX Metrics Model (2026 Guide + Examples)","metaDescription":"Master the HEART framework: Happiness, Engagement, Adoption, 
Retention, Task Success. Learn the GSM process from Kerry Rodden, see real examples, and run HEART surveys faster with Koji’s AI moderator.","keywords":["heart framework","google heart framework","ux metrics","user experience metrics","goals signals metrics","GSM","kerry rodden","task success","user happiness","product metrics","ux measurement"],"aiSummary":"A definitive guide to Google’s HEART framework — the five UX metrics (Happiness, Engagement, Adoption, Retention, Task Success), Kerry Rodden’s Goals–Signals–Metrics process, real product examples, and how Koji collapses HEART surveys into a single afternoon.","aiPrerequisites":["Familiarity with product analytics","Basic understanding of UX metrics","A live product or feature to measure"],"aiLearningOutcomes":["Map any product goal to the five HEART categories","Apply the Goals–Signals–Metrics (GSM) process to define measurable signals","Choose the right HEART metric to instrument first","Avoid the five most common HEART implementation mistakes","Run a Happiness + Task Success survey in Koji in under 30 minutes"],"aiDifficulty":"intermediate","aiEstimatedTime":"14 min read"}],"pagination":{"total":1,"returned":1,"offset":0}}