QATC Survey Results

This article details the results of the most recent QATC quarterly survey on critical quality assurance and training topics.  Nearly 80 contact center professionals representing a wide variety of industries provided insight regarding quality assurance analyst performance measures.

Number of Agents

The largest group of participants (26%) comes from call center operations with more than 500 agents.  The balance is widely dispersed: 17% have fewer than 50 agents, 20% have 50 to 100, 14% have 101 to 200, and smaller shares fall in the remaining ranges.  Centers in the financial and “other” industries account for the most respondents, but each of the other industry options was selected by some portion of the centers.  This mix of respondents provides a broad spectrum of call center sizes and industries.

Time Allowed for Scoring

When asked how long a QA analyst is given to score a call, almost half of the respondents allow more than 2 times the average handle time.  The survey did not ask how much more than 2 times is allowed.

Expectation for Calls Scored Per Day

When asked how many calls each QA analyst is expected to score in a day, between 8 and 16 was selected by 38 percent of the respondents, followed closely by the 32 percent who selected fewer than 8 calls.  However, 30 percent of the respondents chose 17 or more calls per day, with a few selecting more than 32 calls.  This wide variation may be a function of the average handle time of the calls and/or the other duties the QA analyst is expected to perform.
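
As a rough illustration (the figures here are assumed for the sake of arithmetic, not taken from the survey), suppose the average handle time is 6 minutes and the scoring allowance is twice that, in line with the responses above:

16 calls per day x (2 x 6 minutes) = 192 minutes, or just over 3 hours

That would leave the balance of the analyst's day for the calibration, reporting, coaching, and training duties covered in the sections that follow.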

Calibration Time

For 43 percent of the respondents, QA analysts are expected to spend between 1 and 2 hours per week in calibration activities, while another 28 percent spend some time but less than 1 hour per week.  Only 10 percent indicated that their QA analysts do no calibration activities at all.

Preparing and Distributing Reports

Survey participants were asked how much time their QA analysts spend in an average week preparing and distributing reports.  Approximately one-third of the respondents indicated 1 to 2 hours per week, followed closely by the 30 percent who indicated less than 1 hour per week.  Only 5 percent indicated that their QA analysts spend over 3 hours on this task, while 16 percent do not ask their QA analysts to do this type of work at all.

Time Spent Coaching

Respondents were asked how much time the QA analyst spends in an average week coaching agents.  While one-third indicated that this is not a role for their QA staff, 41 percent indicated that their analysts spend less than 4 hours per week on it.  The remaining 25 percent spend anywhere from 4 to more than 16 hours per week on coaching.  In some centers, coaching is a role assigned primarily to the supervisors/team leads, while in others it may be done by the QA staff or shared between the groups.  It is important that coaching follow the QA scoring process; otherwise the scoring effort is essentially wasted and may even be counter-productive.

Time Training Agents

Nearly half of the respondents (46 percent) indicated that their QA analysts have no role in training agents, while another 40 percent indicated that the time spent is less than 4 hours in an average week.  Only 15 percent spend more than 4 hours training agents.  This role is often separated from the QA function, but it is important for the trainers and QA staff to communicate effectively to identify the situations that require individual coaching and those that require a more systemic training response.

Number of Calls Evaluated

Respondents were asked how many calls are evaluated per month per agent in their centers.  More than half (54 percent) indicated that they score between 1 and 5 calls, while 34 percent evaluate between 6 and 10 per agent.  While the evaluated calls are important to the coaching effort, it is important to remember that such a small sample is not statistically significant, especially as a basis for disciplinary actions.  Only a few respondents (5 percent) indicated that they evaluate more than 15 calls per month per agent, and these centers may be utilizing automated evaluation tools to assist the QA staff.
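
To see why such samples are statistically weak, consider a standard margin-of-error calculation (an illustration assuming a 95 percent confidence level and worst-case variability of p = 0.5; these parameters are not from the survey):

margin of error = z x sqrt(p(1 - p) / n) = 1.96 x sqrt(0.5 x 0.5 / 5) ≈ 0.44

In other words, five evaluations can estimate an agent's true performance only to within roughly plus or minus 44 percentage points, which is why these scores are better treated as coaching input than as grounds for discipline.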

Tracking of Disputes

When asked if they track the number of disputes filed against each QA analyst, approximately half indicated that they do and half indicated that they do not.  All of the respondents indicated that disputes represent less than 15 percent of scored calls, with 60 percent indicating that less than 10 percent are disputed.  Identifying how often agents challenge their scores, and the sources of those disputes, may reveal opportunities to improve the calibration process and increase acceptance of the scores among the agent population.

(Survey charts: “Disputes Filed Against QA Analysts Tracked?” and “Percentage of Total Calls Scored Disputed”)

Disputes Considered in Error

Respondents were asked what percentage of disputes are considered errors on the part of the QA analyst.  Nearly half indicated that less than 5 percent are considered QA errors.  Where there are significant numbers of disputes but QA errors are rare, training the agents on the standards and expectations is indicated.  However, if higher levels of QA error are found, the QA team may need more training, or the standards documents may need to be defined more clearly.  The goal should be to bring the agents and the QA team to a common understanding of expectations, so that quality is improved and morale in the center is supported.

Conclusion

This survey provides insight into the roles and responsibilities of QA analysts.  While some analysts are totally focused on scoring calls/contacts, others may perform coaching, training, and other roles.  Calibration of the scoring may also be assigned to QA analysts, although it often involves supervisors, managers, and even agents in reaching consensus on the quality definitions.  Tracking the results in terms of disputes per QA analyst, agent, and type of issue can provide valuable information to improve the calibration of scores, minimize disputes and the time spent resolving them, and improve overall morale in the center.

We hope you will complete the next survey, which will be available online soon.