A Practical Approach to Continuous Improvement

By Maggie Klenke

Call centers are the most thoroughly measured area of most organizations. There is a multitude of statistics, guidelines, industry standards, benchmarks and measurement tools. But what information will really help you to assess and improve the performance of your contact center, and where can you get it? Do you benchmark against other call centers? Will quality monitoring and/or mystery shopping give you what you need? What about the customers’ perspective? Where do you start?

Benchmarking comes in many different forms. One of the most common is the automated process, in which participants answer questions via a system that compiles the results automatically. While this allows for a large number of participants and easy analysis, the results may be misleading. For example, if the question posed is, “What is your average speed of answer?”, the answers given by individual respondents may be the goal that a call center sets for itself, its actual results for a defined period of time, or some number that may be untrue but makes the company look a little better compared to others.

Likewise, “What is your agent starting salary?” may generate answers that are just the base salary or may include some or all benefits. If all participants are not answering the questions from the same frame of reference, the results can be essentially useless. In addition, if the survey respondents come from a variety of industries, sizes of operation, and maturity of market, is it reasonable to draw conclusions to guide your company from the averages of the answers they all provide?

Another form of benchmarking is a survey for a small subset of companies that are essentially comparable. They may be in the same industry, generally in the same size range, and have other commonalities. The survey may be administered very carefully to ensure that every respondent has interpreted the question in the same way, and may even validate that the answers given are truthful and not wishful thinking. As you can imagine, these types of benchmarking processes are fairly limited in scope and very expensive – potentially costing $20,000 per participant or more. Let’s say that the result of the survey indicates that one utility has a lower cost per call than the average of the surveyed group. Does that mean that they are doing a great job and can quit thinking about ways to do better? Maybe the results show that the average revenue per call for one catalog company is $100 and the group average is $250. Does that mean the low average company isn’t doing all it could to maximize revenue given the pricing of the items they sell or is there something else at play? Real care is needed in determining the questions on the survey and whether the data will be meaningful to the participants in the end.

Benchmarking can also be done within a single company’s operation. Perhaps one site is compared to another in terms of performance, or a call center uses prior years as the basis of comparison to determine if progress is being made in some operational area. These benchmarks are more likely to use the same interpretation of questions and data, although there is still room for variation if the company is not careful. For example, one site might use ACD X and calculate service level one way, while another site with ACD Y calculates it in a different way. When all of the variations in calculation and interpretation are resolved, the comparisons can be useful. However, the issues in one site differ from those in another, such as cost of living, competition for agents, etc. And each year brings its own challenges, so even year-to-year comparisons within a single operation need some footnotes.
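
To make the point concrete, here is a minimal sketch in Python of how two common service-level definitions can yield noticeably different numbers from the same interval data: one counts abandoned calls against the result, the other ignores them. The figures and function names are illustrative assumptions, not drawn from any particular ACD.

```python
# Minimal sketch: two common ways a center might compute service level from
# the same interval data. All numbers below are illustrative only.

def service_level_incl_abandons(answered_in_threshold, answered, abandoned):
    """Abandoned calls count against the result (the stricter definition)."""
    offered = answered + abandoned
    return answered_in_threshold / offered if offered else 0.0

def service_level_excl_abandons(answered_in_threshold, answered, abandoned):
    """Abandoned calls are ignored entirely (the more forgiving definition)."""
    return answered_in_threshold / answered if answered else 0.0

# Same half-hour of raw data, two different "official" service levels.
answered_in_20s, answered, abandoned = 160, 190, 20
print(f"{service_level_incl_abandons(answered_in_20s, answered, abandoned):.1%}")  # 76.2%
print(f"{service_level_excl_abandons(answered_in_20s, answered, abandoned):.1%}")  # 84.2%
```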

Another form of data that is often used in the call center industry is “industry standard.” For example, what is the industry standard for shrinkage? Well, first of all, there is no real “industry standard” because there is no standards body that sets such things. And what if the average of a group of companies surveyed is 25% shrinkage? If your call center’s shrinkage is actually higher than that, is it because you have very low turnover and lots of seniority-based vacation time? Is yours lower because the work is fairly easy and routine and you don’t pull agents off for training very often? A more relevant question is, what is my actual shrinkage now and what can I do to make it better? Set goals to improve it by small increments and focus on the specifics that are realistic in your own environment. You may have done all that can reasonably be done in your center and still be at 40%.
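
As a worked illustration, here is a minimal sketch of one common way to compute shrinkage: the share of paid time that is not available for handling contacts. The categories and hours are illustrative assumptions; your own center will track its own buckets.

```python
# Minimal sketch: shrinkage as the share of paid hours not available for
# handling contacts. Categories and hours are illustrative assumptions.

paid_hours = 40.0  # one agent, one week

unavailable_hours = {
    "vacation_and_sick": 4.0,
    "breaks": 3.5,
    "training_and_coaching": 2.0,
    "meetings": 0.5,
}

shrinkage = sum(unavailable_hours.values()) / paid_hours
print(f"Shrinkage: {shrinkage:.0%}")  # Shrinkage: 25%
```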

Quality monitoring (QM) is another way that call centers focus on continuous improvement. A small sampling of calls is reviewed in detail to identify things that were done well and should be reinforced, as well as things needing coaching for improvement. QM is a good mechanism to determine if the company’s policies and procedures are being followed and if the techniques used by the agent are efficient. The drawback is that it can be done on only a small sample of calls, and the results are often shared with the agent days or even weeks later. And if coaching doesn’t follow the scoring process, the effort is largely wasted.

Mystery shoppers are used in some call centers. This involves an outsider dialing into the center pretending to be a real customer and scoring the agent on the call handling according to a set of criteria. This process has many of the same challenges that quality monitoring does in terms of small sample size, expense, and focus on internal processes and procedures.

What about the perspective of the customer in all this? Are we doing some kind of survey to determine if the customer is satisfied or “delighted”? After all, we could be exceeding the industry benchmarks and doing everything according to the defined policies and still have an unhappy customer. A study by SQM Group shows that there is very little correlation between call monitoring ratings and customer satisfaction ratings. Only 20 percent of call monitoring ratings in the study had an impact on customer satisfaction ratings, and only 17 percent had an impact on first-call resolution. So we really need to ask the customer to participate in the process of assessing our performance.

This can be done in a variety of ways, including online surveys, mail surveys, telephone surveys, and email surveys. Each of these has its own benefits and drawbacks. One thing they generally have in common is that the sample is relatively small, due to low response rates and the cost of administering and interpreting the results. The analysis may take days or weeks to compile, so such surveys may only provide data on a quarterly or annual basis. So if your semi-annual survey indicates that your customers have scored your center a 6.5 on a scale of 10 in effectiveness at solving their problems in one interaction, do you have enough information to effectively improve performance? Without some specifics to work with, you are really hampered in making a meaningful change.

Some call centers are using post-call IVR and post-contact email processes to gather data from nearly every customer. The customer is routed to the survey either automatically at the end of the contact, or manually transferred by the agent. Amazingly, the transfer and response rate is very high, with 85% or more participating when the survey is well designed and the agents understand the importance of encouraging every customer to participate. And now the information is provided not only in quantity, but in near real-time. An unhappy customer can be identified quickly and routed to someone who can resolve the problem before it affects loyalty. And agents who need coaching on a particular technique or error are identified quickly and the problem resolved before it affects a large number of customers.
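
Routing an unhappy customer can be as simple as a threshold check on the survey score. Below is a minimal sketch, assuming a post-call survey record with a 1-5 overall score and a resolved/unresolved flag; the field names and the threshold are hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch: flag a low post-call survey score for immediate follow-up.
# The field names, the 1-5 scale, and the threshold of 2 are assumptions.

from dataclasses import dataclass

@dataclass
class SurveyResponse:
    contact_id: str
    agent_id: str
    overall_score: int    # 1 (very dissatisfied) to 5 (very satisfied)
    issue_resolved: bool

def needs_callback(response: SurveyResponse, threshold: int = 2) -> bool:
    """Route for service recovery if the score is low or the issue is still open."""
    return response.overall_score <= threshold or not response.issue_resolved

incoming = SurveyResponse("C1042", "A417", overall_score=2, issue_resolved=False)
if needs_callback(incoming):
    print(f"Escalate contact {incoming.contact_id} for immediate follow-up")
```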

This kind of survey can capture the customers’ perception of the handling of the inquiry while it is still fresh in their minds, then ask the customers to relate how important each element of the process is to them. For example, the survey might ask the caller to score the agent’s friendliness on a scale of 1-5, and then ask how important agent friendliness is on the same scale. We may find we are spending a lot of time and energy delivering a level of service the customers don’t care much about, while not spending enough on the issues that matter most to them. And interestingly, the data can be clustered by supervisory group, revealing trends within an agent team that need to be addressed with the supervisor. For example, if a high number of agents in a supervisory team score lower than the rest of the center on one criterion, it is likely a technique that is not being well taught by the supervisor or team coach. It is critical to have a quantity of timely data to be able to draw these conclusions and then act upon them. Correlating the data so that relationships can be identified is a critical element of the data analysis process. For example, if you have coached a group of agents on a new technique, can you see the impact on customer satisfaction measures?
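
One simple way to act on paired performance and importance scores is a gap analysis: for each criterion, subtract the average performance score from the average importance score and rank the results. A minimal sketch follows, with illustrative criteria and scores on an assumed 1-5 scale; the biggest positive gaps point to where coaching and process effort should go first.

```python
# Minimal sketch: importance-vs-performance gap analysis on post-call survey
# averages. Criteria and scores (1-5 scale) are illustrative assumptions.

survey_averages = {
    # criterion: (average performance score, average importance score)
    "friendliness": (4.6, 3.1),
    "first_contact_resolution": (3.2, 4.8),
    "wait_time": (3.9, 4.2),
}

# Gap = importance minus performance; the largest positive gaps mark the
# criteria where coaching and process effort should be focused first.
gaps = {criterion: importance - performance
        for criterion, (performance, importance) in survey_averages.items()}

for criterion, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{criterion:26s} gap = {gap:+.1f}")
# first_contact_resolution   gap = +1.6
# wait_time                  gap = +0.3
# friendliness               gap = -1.5
```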

With a high percentage of call centers now offering self-service through IVR and the web, it is important to include monitoring of these interactions in the continuous improvement process. The systems should be reviewed regularly to ensure that all functions are working as planned, and customer satisfaction survey questions directly related to these channels will reveal points of confusion or areas that annoy customers and need revision.

Let us not forget that the end goal of all of this data gathering and analysis is action. If the call center does not take the information provided and make some change to improve the operations as a result, then the process is not just a waste, it is counter-productive. So gather all the right data, analyze it carefully for relevance to your operation, and then take action to make the changes that will result in continuous improvement.

Maggie Klenke has written numerous books and articles related to call centers and workforce management (WFM). A semi-retired industry consultant, Maggie serves as an Educational Advisor for QATC. She may be reached at Maggie.klenke@mindspring.com.
