Making the Case for Training ROI

By Justin Robbins

The chances are great that if you’ve worked in learning and development for any length of time, you’ve been part of a meeting that involved the following message: “We regret to inform you that the training budget has been reduced (or worse, eliminated) due to necessary cuts.” Even if that isn’t the case, we are under constant pressure to “do more with less.” While I agree that we should always practice fiscal responsibility and invest our dollars wisely, I would also say that an inability to define and defend the ROI of training has crippled our case for investing more in our training programs.

I am constantly shocked to see organizations struggle with performance and then make the decision to cut training dollars. This astounds me because I know the transformational power that exists when organizations effectively identify, implement, and support a comprehensive training program. When done properly, training, coupled with meaningful coaching, is exactly what it takes to improve performance, drive customer satisfaction, enhance employee engagement, and grow revenue. Since time in training equals time away from working with customers, quantifying the extent to which it fulfills contact center objectives is one of the primary reasons that training is under constant scrutiny.

In order to assess the effectiveness of training, we must utilize a thorough and systematic evaluation process.  This evaluation process can be divided into three fundamental steps. They are:

  1. Define the objectives of the training program
  2. Translate each objective into a measurable item
  3. Assess the extent to which the learners have mastered the objectives
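As a minimal sketch of steps two and three, each objective can be paired with a measurable target and an observed result, then checked for mastery. The objective names and numbers below are hypothetical examples, not figures from this article:

```python
# Illustrative sketch: each training objective mapped to a measurable
# target and an observed result, then checked for mastery.
def mastery_report(objectives):
    """Return {objective: mastered?} by comparing observed scores to targets."""
    return {name: s["observed"] >= s["target"] for name, s in objectives.items()}

# Hypothetical contact center objectives with made-up targets and scores.
objectives = {
    "Resolve billing inquiries without escalation": {"target": 0.85, "observed": 0.88},
    "Complete after-call work within 90 seconds":   {"target": 0.90, "observed": 0.76},
}

for name, mastered in mastery_report(objectives).items():
    print(f"{name}: {'mastered' if mastered else 'needs follow-up'}")
```

Framing each objective this way forces the "measurable item" of step two to be explicit before training ever begins.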

A successful evaluation will provide us with insight into the effectiveness of the training program, indicators on whether to change, stop, or expand the program, and ways to improve the program for future delivery. To ensure a holistic perspective and well-rounded insight, these evaluations should occur on four levels. The four levels are:

  • Level 1: Reaction
  • Level 2: Learning Evaluation
  • Level 3: Application to the Job (or Transfer)
  • Level 4: Evaluating the Impact and ROI (or Results)

At the first level, we should capture data throughout training, or at the very least at its conclusion, on how attendees found value in areas such as program methodology, course exercises, quality of materials and support resources, facilitators’ knowledge and capabilities, facilities, overall strengths and opportunities, etc. A critical component of higher completion rates and data validity is to schedule the survey as close to the completion of training as possible. Another recommended best practice is to keep the survey in the same format or channel as the training. For example, if a student is completing an e-learning module, the survey should be built into the framework of the e-learning system.

The second level is the process of collecting, analyzing, and reporting information to determine both how much the participants learned and how that knowledge was applied during the learning experience. Through the insight collected at this level, we can begin to make determinations on how effective the trainer is at facilitating the content and subsequent discussions, whether the content, sequence, and emphasis are appropriate, and whether the exercises and instructional methods reinforce key concepts and allow for practice and application.

The third level of evaluation is to assess the degree to which the knowledge, skills, and abilities taught in the classroom are being used on the job.  This is commonly referred to as knowledge transfer. During this level of evaluation, you identify the enablers and barriers that facilitate or inhibit successful application.  The three types of barriers that you will encounter are situational (time, money, other responsibilities), institutional (established practices, limited offerings, inconvenient times, locations), and dispositional (attitudes, opinions, perceptions). By identifying these barriers, we can determine if there needs to be a change in the content, instructional/learning strategies, delivery method, or whether the training should even continue.  In addition, this level of evaluation allows us to identify what changes may need to occur in the systems and support tools that are available.

The final level of evaluation is the process in which we determine the impact of training on organizational productivity, customer satisfaction, and the organization’s strategic business plan. There are two aspects to a level four evaluation:

Impact: How much of the change in business metrics is attributable to training?

ROI: What is the return on the training investment (calculated by dividing the net dollar value of the benefit by the cost of training)?
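The ROI calculation above can be illustrated with a short worked example; the dollar figures are hypothetical and chosen only to show the arithmetic:

```python
def training_roi(benefit_dollars: float, cost_dollars: float) -> float:
    """Training ROI as a percentage: net benefit divided by cost of training."""
    net_benefit = benefit_dollars - cost_dollars
    return net_benefit / cost_dollars * 100

# Hypothetical example: a $50,000 program yielding $80,000 in measurable
# benefit (e.g., revenue attributable to improved conversion rates).
print(f"{training_roi(80_000, 50_000):.0f}% ROI")  # prints "60% ROI"
```

Note that the numerator is the *net* benefit: a program whose benefit merely equals its cost returns 0%, which is exactly the break-even point the level four evaluation is meant to expose.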

Whether the return on training is improved customer satisfaction, increased conversion rates, or improved process efficiency, the key is to have a comprehensive evaluation process that provides you with quality insights, actionable improvements, and a clear understanding of the benefits of the training program. By using this evaluation process, you are ensuring a training program that is not only effective but also drives value for the greater organization. When you’re able to accomplish this, training is no longer a consumer of valuable time and money, but rather a generator of value and sustainable improvement.

QATC Board Member Justin Robbins is the founder of JM Robbins & Associates, a US-based consulting firm that helps organizations maximize the strategic value of their customer experience initiatives. He is a customer service expert, professional speaker, and business consultant who’s coached and consulted with thousands of individuals around the world on contact center and customer experience best practices. He may be reached at Justin@jmrobbins.com.

Don’t miss Justin’s keynote, “The Power of Connection,” on Monday afternoon at the 2018 QATC Annual Conference, which is set for Sept. 17-19 at the Gaylord Opryland Hotel in Nashville, TN.
