Measure What Matters: Bright L&D Future


Are You Making A Difference? Measure What Matters

You may know a version of this old story:

It’s midnight on a quiet street, and a somewhat tipsy lone figure crouches under a streetlamp, patting down the sidewalk. A passerby asks what he’s doing. “Looking for my door key,” he sighs. The passerby joins the search for a few fruitless minutes before asking, “Are you sure you lost your key here?” The man shakes his head, “No, I lost it in the park.” Puzzled, the helper responds, “Then why are we searching here?” The man gestures to the pool of light cast by the lamp, “Because the lighting is much better here.”

We may chuckle at the absurdity, but this classic streetlight effect (also known as the drunkard’s search) illustrates a common human bias: we tend to look for answers where it’s easy to search, not necessarily where the truth lies [1].

L&D’s Streetlight Effect

In the Learning and Development (L&D) world, we often act out our own version of this story. Faced with the daunting question, “Did we really make a difference?”, many L&D professionals find themselves combing the well-lit areas of their data: Learning Management System (LMS) reports, course completion rates, and smile-sheet surveys. Not because that’s where the impact is but simply because those metrics are readily at hand.

The real “keys” to performance impact might be lying in the dark, scattered in job performance dashboards, sales figures, or customer satisfaction scores, but those areas are harder to illuminate. So under the proverbial streetlight we stay, generating reports on things like attendance and post-training quiz scores. It feels safe and satisfying. This is the streetlight effect in L&D measurement: we measure what’s easy, not necessarily what matters.

The Streetlight Effect in L&D: Measuring What’s Easy, Not What’s Important

The habit of “searching where the light is” explains many pitfalls in L&D measurement. Consider how the success of training is often reported:

“We had 500 people attend the workshop, and 95% of them said they would recommend it!” or “Our LMS shows 1200 course completions this quarter! The total training time delivered by our team is 600 hours.”

These vanity metrics shine brightly. They’re easy to gather (the LMS tracks completions automatically, and who doesn’t love a good post-training survey that makes us feel appreciated?). But do they really tell us if employees improved their skills or if the business benefited? Does the business interpret your 600 hours spent in training as delivered value (as opposed to investment)? Frequently, the answer is no.

One study found that companies “rely far too heavily on basic metrics such as completion rates and smile sheets” [3]. These are exactly the kind of things under the L&D streetlight: visible, simple to measure, automated, convenient, and comforting, much like the glow of that streetlamp.

The Association for Talent Development’s new research found that only 43% of talent development professionals say their business and learning goals are aligned (n = 277) [4].

If we’re not aligned or not sure if we’re aligned, are we looking at what matters?

What Are We Not Looking At?

One of my favorite questions when investigating business problems or opportunities early on is: “Okay, what are we not looking at?”

Yes, asking questions and slowing down the process can be costly. But so can relying only on convenient data points. Convenience comes at a cost! By focusing on easy metrics under the streetlight, organizations often miss the real story hidden in dark alleys. As one report put it, companies end up assuming that if learners complete training and give it a thumbs-up, then training must be effective. It is a “dangerous assumption” that completion equals success [3].

In reality, completion and satisfaction don’t guarantee learning, behavior change, or results. An employee might give a course 5 stars because it was entertaining, yet change nothing about their work the next day. A team might achieve 100% mandatory training completion, yet you see no improvement in the related safety incidents or sales figures. Under the cozy light of completion rates and survey averages, those failures to drive real change remain in the shadows.

Is It Just Me, Then?

No, you’re not alone. Over 25 years, I’ve worked on hundreds of projects in organizations large and small, and I’ve seen the same patterns: measurement and evaluation often get stuck at “Level 1” surveys or knowledge checks. I’m not the only one saying this. According to industry surveys, most organizations struggle to measure deeper impact. For example, 43% of companies say they do no Level 4 measurement at all [3], referring to Kirkpatrick’s Level 4 (results, the impact on the business).

Why We Stick To The Light: Barriers To Meaningful Measurement

You know what I found fascinating in all these studies (and in my own experience)? L&D teams knew, in theory, that they should measure what matters. They knew what was important. So why don’t they measure it?

If measuring real impact is so important, why aren’t more L&D teams doing it? It’s not because L&D professionals are lazy or don’t care. In fact, 91% of companies do believe they should measure learning’s impact beyond the basics (only 9% said there’s no need for higher-level evaluation) [3]. The intent is there. The problem is that several deep-rooted barriers keep L&D stuck in the well-lit zone:

  1. “We don’t know where to start.”
Determining how to measure behavior change or business results can be overwhelming. Many teams lack a clear road map. It’s telling that a top challenge reported is simply knowing how or where to begin with measurement planning [2]. It’s much easier to default to the familiar routine of collecting course feedback and test scores than to venture into uncharted analytical territory. It is okay to start where you are! Iteration and progress take you further in the long run than waiting for the perfect conditions to start.
  2. Lack of data access and integration
Getting to those “dark areas” (like job performance metrics or business KPIs) often means pulling data from outside the L&D silo. That might require tapping into sales systems, quality assurance data, or HR performance reviews. For many L&D teams, this is easier said than done–data resides in different systems, owned by other departments, and may not be readily shareable. Not surprisingly, “accessing the necessary data” is cited as a persistent barrier to learning measurement [2]. Data security and privacy rules can also pose challenges, given the potential for misuse of sensitive information. If you can’t get the data on, say, error rates or customer satisfaction post-training, you’re forced to rely on what you can get (LMS stats and surveys).
  3. Lack of business alignment and stakeholder buy-in
Measuring true impact often requires cooperation across the business. You might need managers to observe and report behavior changes, or executives to prioritize measurement efforts. But convincing stakeholders that deep measurement is worth the effort can be tough. Many stakeholders are satisfied as long as employees check the training box. In fact, getting buy-in that measurement should be a priority is another leading challenge [2]. Without leadership support, L&D might not get the time or resources to chase those meaningful metrics hiding in the dark. On that note, stop and take a step back: what more value could you bring to the table to help your stakeholders make data-driven decisions? Think of data not only as retrospective “proof of impact” but as actionable insight that helps the business act proactively! What if you could tell stakeholders that X% of participants will need more support in the transition?
  4. Skills and confidence in analytics
Let’s face it: not every L&D professional is a data analyst, nor do they need a PhD in statistics to be effective. However, today’s L&D teams are expected to wear multiple hats. Designing and delivering learning is one skillset; measuring its business impact is another. Many L&D departments simply don’t have strong capabilities in data analysis or experimental measurement techniques. They might lack the tools or expertise to run robust evaluations (e.g., comparing training cohorts to control groups, doing statistical comparisons, etc.). A lack of shared data literacy and low confidence, compounded by a large skills gap, can contribute to hesitation–it’s safer to produce a basic report (number of training hours delivered–check!) than to attempt a complex analysis that might be beyond the team’s comfort zone.
  5. The complexity of behavior change
    Even with the right data and skills, human behavior is complex. It can be hard to isolate the effect of a training program on on-the-job actions and measure what matters. Behavior change often unfolds over time and can be influenced by many factors besides training (manager support, work environment, incentives, personal motivation, and so on). Measuring it may require observation, follow-up assessments, or connecting to performance metrics that fluctuate for reasons beyond training. It’s not as straightforward as grading a quiz. Because it’s complex and sometimes slow to change, many organizations shy away from digging into behavior change. However, without behavior change, did we really make any difference?
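To make barrier #4 less abstract: the kind of “cohort versus control group” comparison mentioned above doesn’t have to require a statistics degree or special tooling. Below is a minimal sketch, using only Python’s standard library and entirely hypothetical weekly error counts, of comparing a trained cohort against a comparable untrained team with Welch’s t statistic (a standard two-sample test that doesn’t assume equal variances). The numbers and group labels are invented for illustration, not taken from any study cited here.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    # Standard error of the difference in means
    se = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (mean_a - mean_b) / se

# Hypothetical weekly error counts in the months after a quality-training rollout:
trained = [4, 3, 5, 2, 3, 4, 2, 3]   # cohort that completed the training
control = [6, 5, 7, 5, 6, 4, 6, 5]   # comparable team not yet trained

diff = statistics.mean(control) - statistics.mean(trained)
t = welch_t(control, trained)
print(f"Mean difference: {diff:.2f} errors/week, Welch's t = {t:.2f}")
```

A t statistic well above ~2 with samples this size suggests the gap is unlikely to be pure noise, though a real evaluation would also need comparable groups, enough data points, and a check for other factors (manager support, incentives) that the next barrier describes.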

These barriers explain why L&D measurement tends to hover in the light of what’s easy. But remaining there has consequences. When we fail to measure meaningfully, we risk flying blind. As one analyst quipped, by not establishing outcome metrics upfront, organizations end up “in a constant cycle of putting content out there and hoping for the best” [3].

Moreover, the inability to measure impact was cited by 69% of companies as the top challenge to achieving critical learning outcomes [3].

In other words, not measuring impact isn’t just a measurement problem; it’s a business problem. This means that L&D can’t demonstrate alignment with strategic goals and, therefore, can’t prove (or improve) its value to the organization.

How To Evolve From Here? Measure What Matters

In the next articles of this series, we’ll explore how to move from the convenient streetlight into the unknown darkness, spotlight where real impact lies, and measure what matters. We’ll look at how to choose your measurement and evaluation model, and what’s out there beyond the well-known Kirkpatrick one. Finally, we’ll investigate how AI can act as a force multiplier, scaling the limited number of spotlights your team can handle into thousands.

References:

[1] Streetlight effect

[2] Measuring L&D’s Success: What Reports Matter Most for Organizations?

[3] Measuring Learning’s Impact

[4] The Future of Evaluating Learning and Measuring Impact: Improving Skills and Addressing Challenges
