I think it’s pretty important to measure any program. If time and money are being dedicated to it, there has to be a justification.
Ideally, any L&D program that is implemented has a specific area of the business it is supposed to improve (new hire productivity, retention, sales, service). Best practice is to have a measure of the business impact to really understand the ROI of the training.
Hi Martin, for us it really depends on what is being trained. If it is ‘hard skills’ such as customer service, it is fairly easy to monitor those trainees’ Customer Service KPIs. For training such as policy and procedure, we have an assessment at the end of the training and monitor the number of performance issues being managed. This could be measured more precisely by tracking the hours and wage rates of the managers dedicating their time to these performance issues, but we don’t go that far at the moment.
As the ‘founder’ of 2 L&D departments, I’ve actually found that I’m much more attuned to measuring ROI than my manager or leadership. I speculate that this may be because they’ve never had to measure L&D, nor had to think about how to. Similar to what Avi and HRJedi said, linking to business impact when possible is very helpful, but in brand new L&D functions, even pre + post surveys to ensure that staff now have clarity on a process/procedure are a win, since much of what we’re doing is very foundational. Since this is a new area for leadership, tying results to metrics they specifically care about has also been a good way to go.
I also like to see how advanced learning organizations measure themselves, so I have something to aspire to and can see what I can start to implement. Filling out CLO Magazine’s Learning Elite application for my L&D team has been very helpful for thinking of other measures of success (training hours and budget, as examples).
Thanks for all of the responses. They all align with what I’ve seen as well, both from reading articles and from asking people directly. It seems like most companies don’t have a standard way of measuring the ROI of L&D. It was also interesting to find out that most of the people I talked to said their company only measures the impact of an L&D program if it’s directly connected to a revenue-generating function (such as sales or customer support).
I’m interested to hear more opinions, especially from anyone who’s involved with an L&D program.
This is a question that haunts every L&D professional. I remember hearing it in different classes at school and watching my professors’ faces cringe because they simply had no answer to it. As Alicia said, L&D programs are measured differently from one organization to another. Even though I’ve heard of people putting together complex metrics and algorithms to help them define training ROI, I think it could simply be measured through quasi-experimental methods: comparing turnover rates for a department that received a training against a department that didn’t, or comparing performance reviews and engagement surveys of people who received a certain training against teams that didn’t. That could be a good start.
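As a rough illustration of that kind of comparison (the numbers and group sizes here are entirely made up, and this assumes scipy is available), a simple two-sample test on engagement scores might look like this:

```python
# Quasi-experimental sketch: compare engagement scores for a department
# that received training vs. a comparable department that did not.
# All data here is hypothetical.
from scipy import stats

trained = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 3.7]    # engagement (1-5 scale)
untrained = [3.5, 3.9, 3.4, 3.6, 3.8, 3.3, 3.7, 3.5]  # comparison department

t_stat, p_value = stats.ttest_ind(trained, untrained)
print(f"trained mean: {sum(trained)/len(trained):.2f}")
print(f"untrained mean: {sum(untrained)/len(untrained):.2f}")
print(f"p-value: {p_value:.3f}")
```

A real comparison would need genuinely comparable groups and a large enough sample, but even a simple test like this is more evidence than most programs collect.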
One technique I have used at previous employers is the retrospective post-then-pre evaluation.
It’s a good measure of post-learning intentions and perceived learning, and it is very easy to administer. At the end of the learning (typically a course), you ask participants to rate their intentions and their level of knowledge both prior to attending the course and after attending it.
When done this way, the results can be pretty impressive. For instance, one set of participants were more than 10 times more likely to say they intended to have frequent coaching conversations after the course.
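To make the mechanics concrete (the ratings below are hypothetical, and this assumes scipy for the paired test), scoring such a survey is just a paired comparison, since each participant supplies both numbers at the end:

```python
# Scoring a retrospective post-then-pre survey. Both ratings are
# collected at the end of the course, so they pair up per participant.
# Ratings below are hypothetical 1-5 self-assessments.
from scipy import stats

retro_pre = [2, 3, 2, 3, 2, 3, 2, 2]  # "before the course, I was a ..."
post      = [4, 4, 5, 4, 4, 5, 4, 4]  # "now, I am a ..."

gain = sum(b - a for a, b in zip(retro_pre, post)) / len(post)
t_stat, p_value = stats.ttest_rel(post, retro_pre)
print(f"mean perceived gain: {gain:.2f} points (p={p_value:.3f})")
```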
Hi @mvalentino
One thing that may be of interest to you to explore is Kirkpatrick’s four levels of evaluation:

1. Reaction - how participants felt about the training
2. Learning - the knowledge or skills they gained
3. Behavior - whether they apply what they learned on the job
4. Results - the business impact of that changed behavior
Then you can design evaluation strategies based on which level you wish to evaluate. Let me know if you’d like to brainstorm more around that!
Another thing to consider is the front-end analysis. Some things aren’t solely training issues, yet leaders often look to train the problem away. So when you evaluate the effectiveness of a training, you may not get the results you’re looking for if the problem is not best solved through traditional “training” methods.
Cheers,
Megan
Actually, the post and pre items were done at the same time, after the session. The technique is called a retrospective post-then-pre evaluation. I know it may seem unconventional, but it was developed in response to the “response shift bias” that evaluators of social programs often encountered.
People tend to rate themselves highly, until they know what they don’t know. For instance, if you ask rural teenage moms how they are as parents on a 1 to 5 scale, they might rate themselves as 4s and 5s on a pre survey.
But then, after going through a course on proper parenting, they start to realize all the things they ought to be doing, but aren’t. So they rate themselves as 2s and 3s on the post survey.
It starts to appear as if the course made them worse parents, when in fact they’ve learned a lot and their intentions have shifted quite a bit. To solve for this, some program evaluations started asking both sets of questions at the end.
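Here’s that bias in miniature, with made-up numbers mirroring the parenting example: the naive pre/post comparison goes negative, while the retrospective pre (collected alongside the post) reveals the actual perceived gain:

```python
# Response shift bias, illustrated with hypothetical ratings.
naive_pre = [4, 5, 4, 4, 5]  # before the course: "I'm a great parent"
post      = [3, 3, 2, 3, 3]  # after: new awareness lowers self-ratings
retro_pre = [2, 2, 1, 2, 2]  # asked at the end: "where was I really?"

def mean(xs):
    return sum(xs) / len(xs)

print(f"naive gain: {mean(post) - mean(naive_pre):+.1f}")          # looks negative
print(f"retrospective gain: {mean(post) - mean(retro_pre):+.1f}")  # perceived gain
```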
And no, we didn’t use ratings from others immediately, but we did inform the learning (and the eval) based on specific engagement drivers. Later on, we followed up with engagement surveys to demonstrate that the managers that took part in the training improved in those areas.
We’re in the midst of doing this right now. We run annual Leadership & Emerging Leadership development programs which are underpinned by an agreed set of Leadership Competencies created in consultation with our People Managers. The programs involve a mix of workshops, group and individual coaching, with each participant presenting their learnings at the conclusion of the program (i.e. where I was at the start, where I am now). We also receive a coaching summary from their (external) coach with learnings from the program. The programs run for 6-9 months.

Our 180 and 360 feedback tools (annual) also align with the Leadership Competencies, and we run metrics on participant results pre and post program, typically 8-12 months apart to allow time for learnings to be embedded. Other ways we measure ROI are through twice-yearly engagement surveys (one quite detailed, the other short and specific). There are also general measures through feedback, productivity, observation, and participants’ enhanced ability to lead and develop others as a result of the program.
@Alicia_Henriquez I would love to get your insight into how to ‘sell’ an L&D initiative to Sr. Leadership that might need some convincing. Post-eval ROI can help track the impact, but what is the best way to pre-sell? Thank you!!
Hi @PWerynski! Happy to schedule a chat to discuss more in depth but my #1 piece of advice is: what challenge does your Sr. Leadership need to solve and how will your L&D initiative take on this challenge and in turn make their lives easier? It’s tough for anyone to oppose a proposal to make their life easier with clear goals and action steps. DM me if you want to chat further!
@Alicia_Henriquez that is great advice thank you! Will give it some thought and reach out if I need further guidance. Thank you so much for your feedback.