How to Run Training Needs Assessments That Close Skill Gaps

Master training needs assessment surveys using competency mapping, skill gap analysis, the Kirkpatrick model, and training ROI measurement. Build L&D programs that deliver measurable business impact.


Organizations spend over $100 billion annually on corporate training in the US alone (Training Magazine Industry Report), yet 75% of managers report dissatisfaction with their company's learning and development function (McKinsey). The disconnect is not a lack of training. It is a lack of targeted training: programs that address the specific skill gaps that actually affect business performance.

A training needs assessment (TNA) is the diagnostic step most organizations skip. Instead of surveying what skills teams need, they buy off-the-shelf courses, send everyone to the same conference, and hope for the best. The result is wasted budget, disengaged learners, and skill gaps that persist despite millions in L&D spending.

This guide shows you how to build training needs assessments that identify real skill gaps, prioritize investments, and measure whether training actually worked.


What Is a Training Needs Assessment?

A training needs assessment is a systematic process for identifying the gap between current skills and required skills within an organization. It answers three fundamental questions:

  1. Where are we now? What skills, knowledge, and capabilities does the workforce currently have?
  2. Where do we need to be? What skills are required to meet business objectives?
  3. How do we close the gap? What training, development, or hiring is needed?

The Three Levels of Needs Assessment

Organizational level: What does the business strategy require? Are there new markets, technologies, or regulations that demand new capabilities?

Team/role level: What competencies does each role require? Where do teams fall short of role requirements?

Individual level: What are each employee's specific strengths and development areas?

Effective TNAs operate at all three levels simultaneously. An organizational-level assessment might reveal that the company needs stronger data analytics capabilities. A role-level assessment identifies which teams need those skills. An individual-level assessment determines who needs what training.


Building Your Competency Framework

Before you can assess gaps, you need to define what "good" looks like. This is your competency framework.

Step 1: Identify Core Competencies by Role

For each role or role family, define 8-12 competencies across three categories:

Technical competencies (role-specific hard skills):

  • Software engineering: system design, code review, testing methodology, CI/CD
  • Marketing: SEO, content strategy, analytics, paid media
  • Sales: prospecting, discovery, negotiation, account management

Cross-functional competencies (shared across roles):

  • Data analysis and interpretation
  • Project management
  • Communication (written and verbal)
  • Problem-solving and critical thinking

Leadership competencies (for people managers):

  • Coaching and developing others
  • Strategic thinking
  • Change management
  • Stakeholder management

Step 2: Define Proficiency Levels

For each competency, define what each proficiency level looks like:

| Level | Description | Behavioral Indicator |
| --- | --- | --- |
| 1 - Foundational | Basic understanding, needs guidance | Can complete tasks with detailed instructions |
| 2 - Developing | Working knowledge, occasional support needed | Can complete standard tasks independently |
| 3 - Proficient | Solid skills, works independently | Handles complex situations, mentors others |
| 4 - Advanced | Deep expertise, leads others | Innovates, solves novel problems, sets standards |
| 5 - Expert | Industry-leading, strategic impact | Shapes organizational capability, thought leader |

Step 3: Set Target Proficiency by Role

For each role, set the expected proficiency level for each competency. A junior data analyst might need Level 2 in SQL and Level 1 in machine learning, while a senior data scientist needs Level 4 in both.
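
To make these targets usable later in the gap analysis, it helps to keep them in a simple lookup structure. The sketch below is illustrative only: the role names, competency keys, and levels are hypothetical, not a prescribed schema.

```python
# Hypothetical competency targets per role (1 = Foundational ... 5 = Expert).
ROLE_TARGETS = {
    "junior_data_analyst": {"sql": 2, "machine_learning": 1},
    "senior_data_scientist": {"sql": 4, "machine_learning": 4},
}

def target_for(role: str, competency: str) -> int:
    """Return the expected proficiency level for a competency within a role."""
    return ROLE_TARGETS[role][competency]
```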


Training Needs Assessment Survey Design

Self-Assessment Survey

The foundation of any TNA is a self-assessment where employees rate their own proficiency against the competency framework.

Skill proficiency questions (scale):

  • "On a scale of 1-5, how would you rate your current proficiency in [competency]?" (1=Foundational, 5=Expert)
  • "On a scale of 1-5, how important is [competency] to your current role?" (1=Not important, 5=Critical)
  • "On a scale of 1-5, how confident are you in your ability to [specific skill behavior]?"

The importance-proficiency matrix: By asking both proficiency and importance, you create a 2x2 matrix:

| | Low Proficiency | High Proficiency |
| --- | --- | --- |
| High Importance | Critical gap - train immediately | Strength - leverage and share |
| Low Importance | Low priority | Over-invested - reallocate |

This matrix is your prioritization tool. Critical gaps (high importance, low proficiency) get addressed first.
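
If you collect both ratings in a script or spreadsheet export, the quadrant assignment can be automated. The Python sketch below is a minimal illustration; the cutoffs for "high" importance and "high" proficiency are assumptions you should tune to your own scales.

```python
def classify(importance: int, proficiency: int,
             importance_cutoff: int = 4, proficiency_cutoff: int = 3) -> str:
    """Place one competency rating into the importance-proficiency matrix.

    Cutoffs are illustrative: importance >= 4 counts as high importance and
    proficiency >= 3 counts as high proficiency on the 1-5 scales above.
    """
    high_importance = importance >= importance_cutoff
    high_proficiency = proficiency >= proficiency_cutoff
    if high_importance and not high_proficiency:
        return "critical gap"      # train immediately
    if high_importance and high_proficiency:
        return "strength"          # leverage and share
    if high_proficiency:
        return "over-invested"     # reallocate
    return "low priority"
```

For example, `classify(importance=5, proficiency=2)` returns "critical gap".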

Learning preference questions:

  • "How do you prefer to learn new skills?" (ranking: instructor-led training, online courses, mentoring, on-the-job practice, reading/self-study, peer learning)
  • "How much time per week can you realistically dedicate to learning?" (single choice: less than 1 hour, 1-2 hours, 2-4 hours, 4+ hours)
  • "What has been your most effective learning experience at this company?" (open-ended)

Barrier identification questions:

  • "What prevents you from developing new skills?" (multiple choice: lack of time, no relevant training available, not sure what to learn, manager does not support it, budget constraints, no clear career path)
  • "On a scale of 1-10, how supported do you feel by your manager in your professional development?" (scale)

Manager Assessment Survey

Managers provide a complementary perspective on their team's capabilities.

Team gap questions:

  • "For each team member, rate their proficiency in [competency]" (scale 1-5)
  • "What are the most critical skill gaps on your team?" (ranking of competency areas)
  • "Which skill gaps are most impacting your team's performance or business outcomes?" (open-ended)

Future needs questions:

  • "What new skills will your team need in the next 12-18 months?" (open-ended)
  • "Are there projects or initiatives you cannot pursue due to skill gaps?" (yes/no + description)

Training effectiveness questions:

  • "How effective has previous training been for your team?" (scale 1-5)
  • "What format of training has been most effective for your team?" (ranking)

Organizational Stakeholder Survey

Senior leaders and HR business partners provide the strategic context.

  • "What business objectives in the next 1-3 years will require new workforce capabilities?" (open-ended)
  • "Where do you see the biggest capability gaps relative to our strategic plan?" (open-ended)
  • "Rank these development areas by strategic priority" (ranking of capability categories)

Analyzing Training Needs Data

Gap Analysis

For each competency, calculate the gap: Gap = Target Proficiency - Current Proficiency

Aggregate gaps across the organization to identify:

  • Universal gaps: Skills where most employees fall below target (organization-wide training needed)
  • Role-specific gaps: Skills lacking within particular teams (targeted programs needed)
  • Individual gaps: Personal development needs (coaching or self-directed learning)
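
A minimal sketch of this aggregation, assuming responses have already been collected as records with a team, a competency, and a current proficiency level (all names and the data shape below are hypothetical):

```python
from collections import defaultdict
from statistics import mean

def gap_report(responses, targets):
    """Aggregate proficiency gaps (target minus current) organization-wide and per team.

    responses: iterable of dicts with "team", "competency", and "current" keys.
    targets:   dict mapping competency -> target proficiency level.
    """
    by_competency = defaultdict(list)
    by_team = defaultdict(lambda: defaultdict(list))
    for r in responses:
        gap = targets[r["competency"]] - r["current"]
        by_competency[r["competency"]].append(gap)
        by_team[r["team"]][r["competency"]].append(gap)

    org_wide = {c: mean(gaps) for c, gaps in by_competency.items()}
    per_team = {t: {c: mean(g) for c, g in comps.items()}
                for t, comps in by_team.items()}
    return org_wide, per_team
```

Competencies with a large positive average gap across most teams point to universal gaps; large gaps concentrated in one team point to role-specific programs.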

Calibrating Self-Assessment Data

Self-assessments have a well-documented accuracy problem. Research shows that the least competent individuals tend to overestimate their abilities (the Dunning-Kruger effect), while the most competent tend to underestimate.

To calibrate:

  1. Compare self-assessments to manager assessments. Gaps between the two highlight blind spots.
  2. Use behavioral anchors. Instead of asking "How good are you at data analysis?" ask "Can you independently build a pivot table, create a regression model, or design an A/B test?" Specific behaviors are harder to misjudge.
  3. Include objective skill checks. For technical skills, consider pairing surveys with practical assessments.
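
For the first calibration step, a simple comparison of self-ratings against manager ratings is often enough to surface blind spots. The helper below is a hypothetical sketch; the one-level threshold is an assumption, not a standard.

```python
def blind_spots(self_ratings, manager_ratings, threshold=1):
    """Flag competencies where the self-rating exceeds the manager rating
    by more than `threshold` levels (both dicts map competency -> 1-5 rating)."""
    shared = self_ratings.keys() & manager_ratings.keys()
    return {
        c: self_ratings[c] - manager_ratings[c]
        for c in shared
        if self_ratings[c] - manager_ratings[c] > threshold
    }
```

Large positive deltas are prompts for a calibration conversation, not automatic conclusions.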

Prioritization Framework

Not all gaps deserve training investment. Prioritize using:

  • Business impact: How much does this gap affect revenue, efficiency, or strategic goals?
  • Prevalence: How many people have this gap?
  • Trainability: Can this gap be closed through training, or is it better addressed through hiring?
  • Urgency: Is this needed now, or in 12-18 months?
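
One way to turn these four criteria into a ranked list is a weighted score. The function below is illustrative only; the 1-5 ratings and the default weights are assumptions to adjust for your organization's strategy.

```python
def priority_score(business_impact, prevalence, trainability, urgency,
                   weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine the four prioritization criteria (each rated 1-5) into one score.

    Higher scores indicate gaps that deserve earlier training investment.
    """
    w_impact, w_prev, w_train, w_urgency = weights
    return (business_impact * w_impact + prevalence * w_prev
            + trainability * w_train + urgency * w_urgency)
```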

The Kirkpatrick Model: Measuring Training Effectiveness

Donald Kirkpatrick's four-level evaluation model is the industry standard for measuring whether training actually works. Each level builds on the previous one.

Level 1: Reaction

Did participants find the training engaging and relevant?

Survey questions (administered immediately after training):

  • "On a scale of 1-5, how relevant was this training to your role?" (scale)
  • "On a scale of 1-5, how engaging was the training delivery?" (scale)
  • "What was the most valuable takeaway?" (open-ended)
  • "What could be improved?" (open-ended)

Level 2: Learning

Did participants actually acquire the intended knowledge and skills?

Measured through:

  • Pre/post knowledge assessments
  • Skill demonstrations
  • Self-assessed confidence change: "On a scale of 1-10, rate your confidence in [skill] before and after the training"

Level 3: Behavior

Are participants applying what they learned on the job?

Survey questions (administered 30-90 days after training):

  • "How frequently are you applying what you learned in [training]?" (single choice: daily, weekly, occasionally, rarely, never)
  • "What barriers have you encountered in applying your new skills?" (multiple choice + open-ended)
  • "Can you describe a specific situation where you used what you learned?" (open-ended)

Manager follow-up:

  • "Have you observed behavior changes in [employee] since the training?" (yes/no + description)

Level 4: Results

Did the training produce measurable business outcomes?

This requires connecting training to business metrics:

  • Productivity improvements
  • Error rate reductions
  • Customer satisfaction changes
  • Revenue impact
  • Time-to-competency for new hires

Measuring Training ROI

The ROI Formula

Training ROI = (Benefits - Costs) / Costs x 100

Costs include:

  • Training development or purchase costs
  • Facilitator time
  • Employee time away from work (opportunity cost)
  • Technology and platform costs
  • Travel and logistics (for in-person)

Benefits include:

  • Productivity gains (measured through output metrics)
  • Error/rework reduction (measured through quality metrics)
  • Reduced turnover (engagement surveys + retention data)
  • Revenue impact (for customer-facing skills)
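
The calculation itself is straightforward once costs and benefits are estimated. The snippet below simply applies the formula above; the dollar figures are purely illustrative.

```python
def training_roi(benefits: float, costs: float) -> float:
    """Training ROI as a percentage: (Benefits - Costs) / Costs x 100."""
    return (benefits - costs) / costs * 100

# Illustrative only: a program costing $50,000 that yields $80,000 in
# measured benefits returns 60% ROI.
example_roi = training_roi(benefits=80_000, costs=50_000)  # -> 60.0
```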

Making ROI Measurable

The biggest challenge in training ROI is isolating the impact of training from other factors. Best practices:

  • Control groups: When possible, compare trained vs. untrained groups on performance metrics
  • Pre/post measurement: Measure the target metric before training and 3-6 months after
  • Participant estimation: Ask trained employees to estimate what percentage of their performance improvement is attributable to the training
  • Manager validation: Have managers confirm or adjust participant estimates
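
One common way to combine the last two practices is to discount the estimated benefit by the share participants attribute to the training and by their confidence in that estimate, then have managers validate the result. The sketch below illustrates that adjustment; the approach and the example numbers are assumptions, not a fixed standard.

```python
def adjusted_benefit(raw_benefit: float, attribution_pct: float,
                     confidence_pct: float) -> float:
    """Discount an estimated benefit by how much of it participants attribute
    to the training and how confident they are in that estimate (both 0-1)."""
    return raw_benefit * attribution_pct * confidence_pct

# Illustrative only: a $40,000 productivity gain, 50% attributed to the
# training, with 80% confidence, counts as $16,000 of training benefit.
benefit = adjusted_benefit(40_000, attribution_pct=0.5, confidence_pct=0.8)  # -> 16000.0
```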

Using Koji for Training Needs Assessments

Training needs assessments require depth that static surveys cannot deliver. When an employee rates their data analysis skills as a 3 out of 5, you need to understand: What specifically can they do, and what can they not? Where did they learn what they know? What has prevented them from developing further?

How Koji Transforms TNAs

AI probes for specific skill gaps. When an employee rates a competency, the AI does not just record the number. It asks behavioral follow-up questions: "You rated your project management skills as a 3. Can you walk me through how you typically plan and track a project? What parts feel natural, and where do you feel less confident?" This surfaces specific, actionable gaps rather than abstract ratings.

Learning preferences emerge naturally. Instead of a multiple-choice question about learning preferences, the AI asks about their best and worst learning experiences. The conversation reveals not just what format they prefer, but why, and under what conditions they learn best.

Barrier exploration goes deeper. When someone selects "lack of time" as a training barrier, the AI explores: Is it truly a time issue, or a prioritization issue? Does their manager support development time? Are there structural obstacles? This nuanced understanding is essential for designing solutions that actually get used.

Koji's Structured + Conversational Approach to TNAs

A typical Koji TNA interview might flow like this:

  1. Scale: "On a scale of 1-5, how would you rate your proficiency in [competency]?"
  2. AI follow-up: "Tell me about a recent situation where you needed that skill. How did it go?"
  3. Scale: "How important is this skill to your current role?" (1-5)
  4. Ranking: "Rank these learning formats by preference: instructor-led, online course, mentoring, on-the-job, self-study"
  5. AI follow-up: "You ranked mentoring first. Have you had mentoring experiences that worked well? What made them effective?"
  6. Multiple choice: "What prevents you from developing new skills?" (select all that apply)
  7. AI follow-up: Explores the top barrier in detail
  8. Open-ended: "If you could develop one skill that would have the biggest impact on your career, what would it be and why?"
  9. AI explores the career aspiration and connects it back to available development opportunities

Why This Approach Produces Better TNAs

  • Calibrated data: Conversational probing reveals actual proficiency more accurately than self-ratings alone
  • Rich context: You understand not just the gap, but the root cause and the best way to close it
  • Higher engagement: Employees feel heard and invested in, not surveyed
  • Automatic theme extraction: Koji aggregates across hundreds of conversations to identify organization-wide patterns

Building a Continuous TNA Program

Annual Cycle

| Quarter | Activity |
| --- | --- |
| Q1 | Full TNA survey deployment + analysis |
| Q2 | Training program design and launch |
| Q3 | Mid-year check-in (Kirkpatrick Levels 1-2) |
| Q4 | Behavior and results evaluation (Kirkpatrick Levels 3-4) |

Integrating with Performance Management

TNAs are most effective when connected to performance reviews, promotion criteria, and career development conversations. The competency framework used for assessment should be the same framework used for performance evaluation and career pathing.


Conclusion

Training needs assessments are the difference between strategic L&D investment and expensive guesswork. By building a competency framework, assessing gaps systematically, prioritizing based on business impact, and measuring outcomes through the Kirkpatrick model, you transform L&D from a cost center into a performance driver.

The most important principle: start with the business need, not the training catalog. Identify what the organization needs to achieve, map the skills required, assess current capabilities, and only then design training to close the gaps that matter most.
