For many contact centers, the quality assurance (QA) scoring process is primarily focused on identifying opportunities for improvement in agent performance and on providing a foundation for coaching. This is a noble goal and one that serves customers and agents well. However, there is also a broader opportunity in the QA process: focusing the company on enterprise goals and helping the whole organization improve. Reviewing the areas of concern on the quality monitoring/scoring form, and the emphasis placed on each one, may suggest that it is time to reconsider them as the needs of the business evolve.
In terms of enterprise feedback, the QA process can provide valuable insights into what customers are thinking, how they react to the products and services, and even what they would like to see in upcoming releases. If this information stops with the agents and QA analysts, it does little good for the company. It needs to be passed on to marketing, product development, and others who can actually shape the offerings over time. Take a look at your QA forms and identify a place to capture this kind of feedback so it can be forwarded to others in the company. While you are at it, consider how you encourage agents to capture such data on any call they take and pass it along. It is important to capture excellence and positive feedback as well as negatives and requests for new things if you are to be taken seriously by the receiving departments.
As we consider the QA process as a useful tool for evaluating agent performance, there are a couple of major considerations. One is how the scoring process will work overall. In some cases, centers score on a 100-point scale with the points divided among various categories and individual items. In others, the score is relatively simple, such as Meets Expectations, Exceeds Expectations, or Needs Improvement. The latter can be particularly useful in newer QA operations and in those where new hires are being evaluated. After all, the focus is generally on finding excellence and opportunities to improve (the "coachable moments"), so a numerical score is not all that important. Even mature operations find this can be a better way to focus on the coaching rather than on the scoring.
Where the scoring is done on a numerical scale, there are some considerations regarding the emphasis. Let’s assume that your scoring has four major categories:
- Customer service
- Contact resolution
- Sales results
- Data gathering (for marketing and/or product development)
Handling some types of calls will focus primarily on the customer service aspect of the interactions. These might include billing inquiries, shipping issues, change of address, etc. There is probably little opportunity for sales or much data gathering, so the emphasis might well be on customer service and contact resolution, with specific questions about correct data entry, call control, speed, and accuracy of information given to the caller.
In another part of the center, customers are responding to marketing’s ads and the focus is on conversion of these inquiries into sales and revenue production. But that doesn’t take away the need to be accurate and efficient. In technical support, the emphasis is likely to be on contact resolution with sales as a lesser focus (although it is common today to see all contacts and agents have some expectation of sales efforts).
It makes a lot of sense to have weights on the scoring items so that they are not all considered of equal value to the organization. The chart below is an example of how this might look.
The QA score is the number of points out of 100 that each of the four areas received on that call. The Customer Service Weight is the value that element has in terms of the performance of an agent handling a customer service contact. Notice that this is a fairly balanced set of weights with emphasis on the customer delight and first call resolution elements.
The sales call puts the major focus on the sales attempt/results at 70% of the total weight and since this element scored poorly with a 40 out of 100 possible points, the sales call overall received a dismal 55 total score. However, in technical support the emphasis is on first call resolution and the call scored 90 points in that element so the agent received an overall score of 86.
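The weighted calculation behind these examples can be sketched in a few lines of Python. Note that the specific weights and per-category scores below are illustrative assumptions (the article's actual chart values are not reproduced here); they are simply chosen to be consistent with the 55 and 86 totals in the examples above.

```python
def weighted_score(scores, weights):
    """Combine per-category QA scores (each 0-100) using fractional
    weights that sum to 1.0, returning the overall call score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(scores[cat] * weights[cat] for cat in weights)

# Sales call: 70% of the total weight on sales attempt/results.
sales_weights = {"customer_service": 0.10, "contact_resolution": 0.10,
                 "sales_results": 0.70, "data_gathering": 0.10}
sales_scores = {"customer_service": 90, "contact_resolution": 90,
                "sales_results": 40, "data_gathering": 90}

# Technical support call: emphasis on first call resolution.
tech_weights = {"customer_service": 0.20, "contact_resolution": 0.60,
                "sales_results": 0.10, "data_gathering": 0.10}
tech_scores = {"customer_service": 80, "contact_resolution": 90,
               "sales_results": 80, "data_gathering": 80}

print(weighted_score(sales_scores, sales_weights))  # 55.0
print(weighted_score(tech_scores, tech_weights))    # 86.0
```

A 40 on the heavily weighted sales element drags the first call down to 55 overall, while a 90 on first call resolution lifts the technical support call to 86, even though the other elements are unremarkable on both calls.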
We can see that overall scores are heavily influenced by the weights of the various elements. By ensuring that the elements are properly weighted to reflect the broader concerns of the enterprise and the key performance indicators of the call center, performance results and coaching can also be better aligned with the organization as a whole.
Maggie Klenke has written numerous books and articles related to call center and WFM. A semi-retired industry consultant, Maggie serves as an Educational Advisor for QATC. She may be reached at Maggie.email@example.com.