QATC Survey Results
This article details the results of the most recent QATC quarterly survey on critical quality assurance and training topics. Almost 100 contact center professionals representing a wide variety of industries provided insight regarding performance metrics.
Number of Agent Seats
The largest group of participants comes from call center operations with over 500 seats, but the balance is widely dispersed across ranges from under 50 agents up to 500 seats. The industries represented are similarly varied, providing a good cross-section of center sizes and industries.
Metrics Used to Measure Performance
Respondents were provided a list of common contact center metrics and asked to select those used in their centers. Nearly all centers selected the quality monitoring score, with adherence to schedule and attendance next. Over half use average handle time, and approximately one-third use service level. It is important to ensure that the metrics used to measure individual performance are ones the agent can control and that can be fairly measured. Given the distribution of contact types in any period, handle time can vary significantly, especially where skill-based routing is used and agents have different combinations of skills. Service level is generally best achieved when the forecast is accurate and the schedule is flexible enough to place enough staff in the right skills in each period to handle the load. If an agent is following the schedule and controlling handle time, can she really do much more to ensure service level is met? Many consider service level or ASA to be a measure of overall center performance rather than one applied to individual agents.
Frequency of Metric Updates
When asked how often the metrics are updated, over 40 percent indicated that it is done daily, with another 15 percent updating in real time. In fairness, respondents may have interpreted the question in different ways. Some may have read it as asking how often the center rethinks which metrics to use, which would make monthly, quarterly, and annually the likely responses. Others may have read it as asking how often results are gathered for reporting, where daily, real-time, and weekly would be the more logical choices. On how often the center should rethink the metrics it uses: metrics should be kept reasonably current when things change in the center, but stable metrics yield better information for trends and historical analysis. As for the frequency of gathering and reporting results, some metrics benefit from real-time data (such as adherence to schedule), while others need time to balance out the normal peaks and valleys to be meaningful (such as attendance and sales results).
Frequency of Disclosure to Agents
Respondents were asked how often agents can see the updated metrics. Over 40 percent indicated that results are available for viewing daily, with another 24 percent indicating weekly updates. Some indicated that agents see their results only quarterly or even annually. Behavior modification works best when performance information is presented to agents frequently enough to identify issues that need to be addressed. While real-time visibility may not be essential, learning quickly that results are below goal allows problems to be addressed before bad habits form and supports longer-term performance.
Frequency of Use in Coaching Sessions
Survey participants were asked how often performance data is used in coaching sessions with agents. Monthly was the most frequent choice, selected by nearly half of respondents, while weekly was indicated by approximately one-third. If agents have frequent access to the data, the coaching session should hold no surprises for them. However, regular coaching is definitely needed to address questions and challenges and to reward excellence.
Use of Automated Scoring Tools
Respondents were asked if their center has automated scoring tools. Only 14 percent indicated that the process is completely automated, while almost 80 percent use some combination of automated and manual scoring. Given the variety of metrics used and the mix of systems needed to produce them, complete automation of this process is challenging to achieve.
Combining Metrics into a Total Score
Respondents were asked if their center combines the performance metrics into a total score for each individual. Nearly 60 percent indicated that they do, while 41 percent do not. Combining metrics requires assigning a weight to each metric, even if all are weighted equally. This calls for careful consideration to ensure fairness and defensibility, but it greatly simplifies tracking changes in performance over time and comparing agents to one another.
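As a minimal sketch of how such a combination might work (the metric names, weights, and values below are illustrative assumptions, not survey data), a total score can be computed as a weighted average of per-metric results:

```python
# Hypothetical example of combining per-metric results into one total score.
# Metric names, weights, and values are invented for illustration.

def composite_score(results, weights):
    """Weighted average of per-metric results, each on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(results[m] * w for m, w in weights.items()) / total_weight

# Equal weighting is just the special case where every weight is the same.
weights = {"quality": 0.4, "adherence": 0.3, "attendance": 0.2, "handle_time": 0.1}
agent = {"quality": 92.0, "adherence": 88.0, "attendance": 100.0, "handle_time": 75.0}
print(round(composite_score(agent, weights), 1))
```

Because the weights are normalized inside the function, they need not sum to exactly 1, which makes it easier to adjust one metric's weight without revisiting the others.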
Use of Scores for Ranking for Privileges
Respondents were asked if they use any combination of the scores to rank agents for privileges such as schedule pick, vacation timing, etc. Nearly half (48 percent) use the scores to rank agents for pick of schedule. Fourteen percent use them to rank for choice of time off, but only a few use them for overtime ranking or sales commissions. Just over 20 percent rank for other privileges using these scores.
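A score-based pick order of the kind described above can be sketched in a few lines; the agent names and scores here are invented for illustration, and the alphabetical tie-break is one assumed policy, not something reported in the survey:

```python
# Hypothetical example: ordering agents for schedule pick by composite score.
agents = [("Ben", 84.2), ("Ana", 90.7), ("Dee", 84.2), ("Cruz", 95.1)]

# Highest score picks first; ties are broken alphabetically so the
# resulting order is predictable and easy to explain to agents.
pick_order = [name for name, score in sorted(agents, key=lambda a: (-a[1], a[0]))]
print(pick_order)
```

Making the tie-break rule explicit matters for the same fairness and defensibility reasons discussed for weighting: agents will ask why one person picked before another.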
Metric Creating Most Questions and Push-back from Agents
Respondents were asked to identify the metric that causes the most questions and push-back from agents. Quality monitoring scores were chosen by 43 percent, with adherence to schedule following at 29 percent. Average handle time was chosen by 11 percent, and the remaining options each received 5 percent or less. When quality monitoring criteria are documented and scorers are well calibrated, challenges are reduced: the score is the same no matter who does the scoring, and it can be readily explained to the agents. Adherence is an area that can invite differing interpretations of what is eligible for an exception and what is not. Clear, well-communicated definitions help reduce the time and angst associated with agent challenges.
This survey provides insight into the metrics used to measure the performance of individual agents in the contact center. Among the metrics chosen, there is some opportunity to ensure that each one can be fairly and accurately measured and is within the control of the agent. Utilizing the reported results to "turn the agent toward the mirror" can help agents identify and correct problems while they are still minor. It is common in centers today to use scores on a variety of metrics to influence ranking for privileges. In general, after their paycheck, many agents feel that earning the right to work their preferred shift is the most important thing they can achieve. It is therefore critical that the scoring criteria be clearly communicated to minimize questions and challenges from agents hoping to improve their scores and rankings.