
Hard evidence from soft skills: Measuring the impact of learning and development


Whilst studying psychology at university, I was often asked, ‘But is psychology even a science?’

As you might imagine, this sometimes left me feeling slightly inferior to those studying ‘harder’ sciences, such as physics or chemistry. In these ‘harder’ sciences, it’s possible to conduct randomised controlled trials in which variables can be manipulated or controlled. This isn’t always possible in psychological research, where the concepts being measured are highly complex, interrelated and rarely directly observable.

Despite all of this, academics over the last 100 years or so have come a long way in accurately measuring abstract psychological phenomena, such as IQ and personality. This has meant that modern psychology, like those ‘hard’ sciences, can adopt an empirical approach based on evidence and data.

I’m sharing this story because the Learning and Development (L&D) industry faces much the same challenge as the academic study of psychology: producing ‘hard’ evidence and measures while dealing with ‘soft’ skills and concepts.

The current business landscape seems to be intensifying this challenge. For example, one recent study suggests that L&D professionals increasingly feel their organisation makes decisions based on data.[1] The same research also reveals growing pressure from senior leaders to demonstrate the impact of L&D.

This raises the question: how is the L&D industry currently demonstrating ‘hard’ evidence of the benefits of ‘soft’ development?

The current state of L&D evaluation: the good, the bad and the ugly

The good news is that one recent study indicates that 96% of L&D professionals wish to measure their success and 89% believe it’s possible to do so.[1] Perhaps this is because these individuals realise that evaluation allows them to measure their success, learn from and improve their services, and influence key stakeholders by demonstrating their impact.

The bad news is that this same study reveals significant difficulties when seeking to measure the impact of L&D. Professionals cited barriers such as competing priorities, a lack of access to data and being unclear on where to start.

The ugly news is that only a small proportion of L&D is evaluated beyond attendance or satisfaction rates. The L&D profession could be delivering transformational services, bolstering capability and enhancing engagement across the organisation, and yet the evidence suggests the profession is currently only showing how many delegates ‘turned up’ and whether they enjoyed their trip out of the office.

How can you evaluate your L&D programmes?

The following top tips are far from exhaustive, but can nevertheless help you to start bringing a data-driven approach to your L&D interventions:

Treat evaluation as a priority

At the start of any programme, think about making your goals measurable and clarifying the data you can gather to demonstrate impact.

Think about the data you already have

You may not need to collect more data. You might be able to curate existing data sources that help to show impact, such as engagement data.

Start with Kirkpatrick

Kirkpatrick’s well-known four levels of evaluation (Reaction, Learning, Behaviour and Results) are far from perfect, but they provide a helpful list of things you might measure to demonstrate impact.

Don’t go it alone

Often, there are internal data experts who can help you to select and use the right data to demonstrate impact. Find these experts and draw on their support.

Tell the story

Empirical data doesn’t mean your evaluation needs to be dry. Use both statistics and stories to craft a compelling narrative that shows real impact to others.

[1] LEO Learning. (2018). LEO Learning Research Results – Emerging Trends. Retrieved from: https://leolearning.com/app/uploads/2018/08/Building_MBIL_strategy_research_results_YR02.pdf