Making the Case for Training ROI

By Justin Robbins, UJET

With a recession looming and businesses looking for ways to cut costs, there’s one phrase that many trainers can expect to hear: “We regret to inform you that the training budget has been reduced (or worse, eliminated) due to necessary cuts.”

But, let’s be honest, even if the current economic climate weren’t what it is, contact centers are often under pressure to do more with less. These conversations are not uncharted waters, and yet many trainers struggle to navigate them effectively. The reason? It often comes down to an inability to define and defend the ROI of training. Because time in training is time away from working with customers, the difficulty of quantifying how well training fulfills contact center objectives is one of the primary reasons it is under constant scrutiny.

To assess the effectiveness of training, we need a thorough and systematic evaluation process. This process can be divided into three fundamental steps:

  1. Define the objectives of the training program
  2. Translate each objective into a measurable item
  3. Assess the extent to which learners have mastered the objectives

A successful evaluation will provide us with insight into the effectiveness of the training program, indicators of whether to change, stop, or expand the program, and ways to improve the program for future delivery. To ensure a holistic perspective and well-rounded insight, these evaluations should occur on four levels.

The four levels are:

Level 1: Reaction

Level 2: Learning Evaluation

Level 3: Application to the Job (or Transfer)

Level 4: Evaluating the Impact and ROI (or Results)

At the first level, we should capture data throughout training, or at the very least at its conclusion, on how attendees found value in areas such as program methodology, course exercises, quality of materials and support resources, the facilitator’s knowledge and capabilities, facilities, and overall strengths and opportunities. A critical factor in higher completion rates and data validity is to schedule the survey as close to the completion of training as possible. Another recommended best practice is to keep the survey in the same format or channel as the training. For example, if a student is completing an e-learning module, the survey should be built into the e-learning system itself.

Level two is the process of collecting, analyzing, and reporting information to determine both how much participants learned and how that knowledge was applied during the learning experience. The insight collected at this level helps us determine how effective the trainer is at facilitating the content and subsequent discussions, whether the content, sequence, and emphasis are appropriate, and whether the exercises and instructional methods reinforce key concepts and allow for practice and application.

The third level of evaluation assesses the degree to which the knowledge, skills, and abilities taught in the classroom are being used on the job. This is commonly referred to as knowledge transfer. During this level of evaluation, you identify the enablers and barriers that facilitate or inhibit successful application. The three types of barriers you will encounter are situational (time, money, other responsibilities), institutional (established practices, limited offerings, inconvenient times and locations), and dispositional (attitudes, opinions, perceptions). By identifying these barriers, we can determine whether changes are needed in the content, instructional and learning strategies, or delivery method, or whether the training should continue at all. In addition, this level of evaluation allows us to identify what changes may need to occur in the available systems and support tools.

A level four evaluation is the process by which we determine the impact of training on organizational productivity, customer satisfaction, and the organization’s strategic business plan. There are two aspects to a level four evaluation:

Impact: What is the change in business metrics attributable to training?

ROI: What is the return on the training investment (calculated by dividing the net dollar value of the benefit by the costs of training)?  
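
To illustrate the calculation with purely hypothetical numbers: suppose a program costs $20,000 to design and deliver, and the performance improvements it produces are conservatively valued at $50,000. The net benefit is $50,000 − $20,000 = $30,000, and the ROI is $30,000 ÷ $20,000 = 1.5, or 150%. The figures themselves will vary by organization; what matters is that both the benefit and the cost are measured consistently and credibly.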

Whether the return on training is improved customer satisfaction, increased conversion rates, or improved process efficiency, the key is to have a comprehensive evaluation process that provides you with quality insights, actionable improvements, and a clear understanding of the benefits of the training program. By using this evaluation process, you ensure a training program that is not only effective but also delivers value to the greater organization. When you’re able to accomplish this, training is no longer a consumer of valuable time and money, but rather a generator of value and sustainable improvement.

Justin Robbins is a customer service expert, professional speaker, and business consultant who helps organizations maximize the strategic value of their customer experience initiatives. He is a member of the QATC Board of Advisors, a frequent author of industry research, articles, and best practice content, a professional member of the National Speakers Association, and a featured expert for the New York Times, NBC Nightly News with Lester Holt, Fox News, and numerous other media outlets. He may be reached at justin@justinmrobbins.com.