March 15, 2016

The evolution of evaluation

Written by Phil Willcox

How L&D and organisations measure the success of learning and development has changed a lot in DPG’s lifetime. Twenty-five years ago, when DPG was formed, it was ‘How good has the learning been and oh, has it made a difference?’ That focus has shifted dramatically to ‘Has it made a difference and oh, how good has the learning been?’

If you’re measuring learning, then you are measuring the wrong thing. It’s now all about measuring impact. As the Learning and Performance Institute has asked: should we be talking about learning and development, or learning and performance?

There is a much stronger appetite now within organisations to evaluate learning activities more effectively and robustly. Three main elements have come together to bring about this changed focus on evaluation.

Firstly, there is the economic perspective. The economic circumstances of 2008-2011, and even up to 2013, meant that organisations started wanting to know in much more detail how much ‘bang for the buck’ they would get.

Secondly, in 2006 there was the emergence of the Ulrich model – the idea of centres of excellence and business partners that would align initiatives with business priorities to create value for companies. Companies want to know whether they are getting that value.

Thirdly, there was the growing call (which still continues) for HR and L&D to have more influence in organisations, or a seat at the top table. A key strategy for achieving this is to demonstrate the impact of your work.

These three elements have come together and created this different focus on evaluation.

Evaluation is difficult, but it’s not impossible. Measuring impact takes effort, time and energy and importantly, L&D cannot do it alone. It has to be done in partnership with the business and this comes back to the idea of L&D as business partners.

The L&D practitioner has to look much further than what they are doing in isolation – it’s about what they are doing within the much bigger, broader organisational sphere. Organisations and L&D practitioners both have an interest in measuring how activities impact the business. So there is a need for collaboration between L&D and stakeholders, line managers and directors to deliver value.

Irrespective of the evaluation approach you use, you and your stakeholders must be really clear about how the activity supports what the organisation is trying to achieve. The most important thing is that L&D has a focus on, and commitment to, measuring the impact of its activities.

One relatively recent shift – in the last eight to ten years – is the much more prevalent narrative around ROI. The challenge with ROI is that it is not always quantifiable. Some evaluation models (e.g. Kirkpatrick and the Brinkerhoff Success Case Method) have stepped away from trying to isolate ROI.

In the last five to ten years there has been a much stronger need to provide regulatory evidence, and so pre- and post-tests are a necessity, especially for organisations in highly regulated sectors such as finance. Assessment results remain a way of demonstrating value at a certain level.

Happy sheets have been around for ages – partly, I think, because people didn’t know what else to do. That seems to be changing, and practitioners are moving away from that approach.

What practitioners are interested in now is learning transfer, or behavioural change: how people apply their learning in their job. That’s a perennial issue.

There is a default assumption that L&D practitioners must evaluate all programmes and activities to the same level. That has to change. Using the same approach and effort to evaluate an induction programme that has been running successfully for a few years as you would for a big strategic project whose impact hasn’t yet been proven seems a waste of resource. To make the most of the time and resource L&D has available for evaluation, we must choose carefully where we use it.

Phil Willcox is the UK’s only Kirkpatrick Certified Practitioner and DPG’s geek on all things evaluation.