QATC Survey Results

This article details the results of the most recent QATC quarterly survey on critical quality assurance and training topics. Over 70 contact center professionals representing a wide variety of industries provided insight regarding quality assurance organizations and processes.

Number of Agents

Participation is split relatively evenly among centers with under 50 agents, 50 to 100 agents, 101 to 200 agents, and over 500 agents. Healthcare and retail centers account for the most respondents, but each of the other industry options was selected by some portion of the centers. This mix of respondents provides a broad spectrum of contact center sizes and industries.

Quality Assurance Department

Over 90 percent of the respondents indicated that they have a quality monitoring department. This demonstrates the emphasis many centers place on quality and training for their teams.

Ratio of QA Staff to Agents

When asked to quantify the ratio of QA personnel to agents, over two-thirds indicated that they have one QA person for fewer than 50 agents, with about half of those at one for fewer than 25 agents. Another 18 percent have one QA person for every 51 to 100 agents, and the remainder of respondents assign one QA person to more than 100 agents.

The ratio is an important part of the quality program, as it directly determines how many evaluations each QA analyst can complete for each agent. It is also influenced by the role of the QA analyst, as some only monitor and score contacts while others also coach the agents.
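
As a rough illustration of this relationship, the short calculation below shows how the staffing ratio and the time an analyst can devote to evaluations together limit how often each agent can be evaluated. The figures are hypothetical assumptions, not survey results, and should be replaced with your own center's numbers.

    # Hypothetical sketch: how the QA-to-agent ratio limits evaluations per agent.
    # None of these figures come from the survey; substitute your own.
    minutes_per_evaluation = 20        # listen to, score, and document one call
    evaluation_hours_per_month = 80    # analyst time left after coaching and other duties
    agents_per_analyst = 50            # the ratio reported by most respondents

    evaluations_per_month = evaluation_hours_per_month * 60 / minutes_per_evaluation
    evaluations_per_agent = evaluations_per_month / agents_per_analyst
    print(f"{evaluations_per_agent:.1f} evaluations per agent per month")
    # About 4.8 at a 1:50 ratio; the same assumptions yield 2.4 at 1:100.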

Well-Defined QA Program

Nearly 90 percent of the respondents indicated that they have a well-defined QA program. This is important to ensure the fairness and efficiency of the operation.

Recording of Calls

Survey participants were asked if they have an automated QA system in place to record calls, and over 90 percent indicated that they do. Having a recording to refer to during evaluation can improve scoring accuracy and help arbitrate any differences of opinion between QA and agents regarding the details of a specific scored call.

Quality Monitoring System Used

Respondents were asked which commercial QA system is used in their center. While many of the best-known vendor names were offered as choices, more than one-third of the respondents chose “other” to indicate they have a system that was not on the list. This may include simple recording systems that do not include an evaluation/scoring element. The vendor with the largest number of respondents was Verint, followed closely by NICE. Each of the others was mentioned by 2 percent or less.

Method Used for Recording Calls

Over three-quarters of the respondents indicated that they record all calls, while only a few do random, selective, or manual recording. This provides a full record of the interactions in case there is a question regarding a specific contact that may not have been selected for the QA process. In some industries, full recording is a legal requirement.

Use of Speech Recognition

When asked if the center uses a speech recognition system that spots specific words or phrases to initiate a recording, more than two-thirds said they do not. However, 15 percent do use such a system, and another 15 percent are interested in learning more about this technology. This can be helpful in focusing attention on a specific problem area, a new product or procedure, or another area of interest.

Number of Calls Evaluated

Respondents were asked to identify the number of calls per agent per month they evaluate. For 59 percent, the goal is 1 to 5 calls per agent, while one-third evaluate 6 to 10 calls per agent. Only 7 percent evaluate more than 10 calls per agent. It is important to evaluate enough calls to support a coaching process, but it is generally not cost-effective to evaluate enough for the sample to be statistically valid as a performance metric. Evaluations help the coaches identify opportunities for improvement and areas of excellence to be reinforced. However, if coaching does not consistently follow the evaluation process, the effort of evaluation is largely wasted.
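
To see why a statistically valid sample is rarely practical, a quick sample-size calculation helps. The sketch below uses the standard formula for estimating a proportion and assumes a 95 percent confidence level with a plus-or-minus 5 percent margin of error; these targets are illustrative assumptions, not survey findings.

    # Sketch: calls needed per agent for a statistically valid quality score.
    # Assumes 95% confidence and a +/-5% margin of error; not survey data.
    from math import ceil

    z = 1.96       # z-score for 95% confidence
    p = 0.5        # most conservative assumed proportion of "good" calls
    margin = 0.05  # +/-5% margin of error

    n = ceil(z ** 2 * p * (1 - p) / margin ** 2)
    print(n)  # roughly 385 calls per agent, far above the 1 to 10 typically evaluated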

Screen Capture Capability

Three-quarters of the respondents indicated that they have the ability to record the screen activity during the contact as well as the voice portion. This can help the evaluator and coach identify errors and misinterpretations of data, and point to more efficient processes that may be needed.

Agent-Initiated Monitoring

Respondents were asked if agents are able to initiate monitoring of a specific contact. Only 22 percent indicated that they have activated this feature of the QA system. This can be helpful for agents when they identify a challenging contact that would benefit from some help from the coach.

Seventy percent use recorded calls in their training programs, so allowing agents to initiate recording of a particularly good call can also support training. It also gives the agent some ownership in the QA process, rather than the contacts always being selected by someone else.

Conclusion

This survey provides insight into the organization and processes used in the QA departments of the respondents. While most have QA systems that record calls, other capabilities such as screen capture, agent-initiated monitoring, and speech recognition are not as widespread. Quality assurance is generally accepted as important to the effectiveness of a contact center today, but there is room to grow the use of these added functions, which can improve results even further.

We hope you will complete the next survey on measuring QA analysts, which will be available online soon.