Using Statistics to Find Outliers for QA
By Ric Kosiba, Sharpen Technologies
My First In-Person QATC!
I’ve been a huge Vicki Herrell fan for 20 years, having been a member of SWPP since its beginning. During COVID, I attended my first QATC event, though virtually. But this year I went in person to QATC in Nashville and had such a blast. It was a great event; I met so many people and learned a ton. For those who haven’t attended one of these in person (like me, until this year!), you must come next year. Please budget it now; it will be so worth your while!
Outliers are Worth Exploring
There are several ways to spot problems in an operation, and in contact centers, a new cottage industry has popped up to help. Touting advanced AI, companies are using analyses of conversations to categorize problematic exchanges and to automatically QA every call. This is very cool.
But I imagine there are calls that go smoothly, are polite, and follow the script exactly, yet still do not help the customer as desired. Would those conversations get flagged as an issue? Well, maybe. And maybe not.
Are there other markers, or other ways of identifying calls that may be problematic, without having to listen to them all or buy an AI system? Of course. Contact center data can be used to find outliers: contacts or agent behaviors that differ from the norm and point to a likely problem.
So, what metrics might point to a service issue?
Back to ACR
I hate to repeat ideas, but in our last two articles in The Connection, we discussed the metric Active Contact Resolution (ACR) as a great measure of whether our agents are doing their jobs well. It simply measures callbacks and assumes that (most) quick callbacks are service failures.
Increasing ACR scores has huge, measurable customer satisfaction and cost benefits, and we have found several ways to improve it. I believe this is one of those special metrics that will become one of the primary measures used by most of the contact center industry within a few years. It is that powerful.
ACR can help answer our hypothetical above. A calm, well-scripted conversation that just happened not to answer the customer’s question fully would likely show up as a repeat call. ACR is designed to find exactly those calls that might otherwise slip through QA or an automated QA system. Because all the elements of a successful call may exist in the transcript even when the call was unsatisfactory, the only marker of failure is the fact that the customer called back.
These are specifically the types of calls that should be queued for QA to investigate; they are expensive and lead to a poor experience, yet are otherwise indiscernible.
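To make that concrete, here is a minimal sketch in Python of what the callback check might look like. This is not Sharpen’s implementation; the table layout, the column names (customer_id, call_id, start_time), the file name, and the three-day “quick callback” window are all assumptions for illustration.

```python
# A rough sketch: flag quick callbacks in a call log and compute ACR.
# Column names, the file name, and the three-day window are assumptions.
import pandas as pd

CALLBACK_WINDOW = pd.Timedelta(days=3)  # assumed definition of a "quick" callback

def flag_repeat_contacts(calls: pd.DataFrame) -> pd.DataFrame:
    """Mark each call that is followed by another call from the same
    customer within CALLBACK_WINDOW -- the likely service failures."""
    calls = calls.sort_values(["customer_id", "start_time"]).copy()
    next_call = calls.groupby("customer_id")["start_time"].shift(-1)
    calls["callback"] = (next_call - calls["start_time"]) <= CALLBACK_WINDOW
    return calls

calls = flag_repeat_contacts(
    pd.read_csv("calls.csv", parse_dates=["start_time"]))  # hypothetical export
acr = 1 - calls["callback"].mean()   # share of contacts with no quick callback
flagged = calls[calls["callback"]]   # the calls worth a closer QA listen
print(f"ACR: {acr:.1%}; calls flagged for QA review: {len(flagged)}")
```

The aggregate number tells you how the operation is trending; the flagged rows are the individual conversations worth pulling for review.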
Other Metrics that are Also Important
In an environment where we must manage a remote or hybrid workforce, our only real choice is to manage through metrics. There are many contact center metrics, in addition to ACR, where outlying performance may suggest problems in agent execution (a simple flagging sketch follows this list). For example:
- Many short calls: Why would an agent have a disproportionate number of super short calls? Short calls point to a possible behavioral issue with an agent, and certainly to a poor customer experience. Are they hanging up on some customers, or pretending not to hear them, to keep AHT low?
- Long or sporadically long After Call Work: Are agents giving themselves additional breaks?
- Multiple transfers: Does your operation have a confusing customer issue, some sort of question that agents do not know how to handle?
- Excessive cumulative hold times, or multiple holds per call: Why do agents need to research specific issues? Are there types of contacts that some agents have a hard time with?
- Low CSAT scores: This is simply the customer telling you straight up that the interaction was substandard.
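Here is the simple flagging sketch mentioned above. Again, this is just an illustration: the per-agent stats file, the metric column names, and the three-standard-deviation cutoff are my own assumptions. The idea is simply to compare every agent against the norm on each metric and surface anyone who sits well outside it.

```python
# A minimal sketch: z-score each agent's metrics against the group and flag
# outliers. Column names, the file name, and the cutoff are assumptions.
import pandas as pd

METRICS = ["short_call_rate", "avg_acw_sec", "transfer_rate",
           "hold_sec_per_call", "csat"]
Z_CUTOFF = 3.0  # flag anything more than three standard deviations from the norm

def flag_outlier_agents(stats: pd.DataFrame) -> pd.DataFrame:
    """Return agents whose performance on any metric sits far from the norm,
    along with the metrics that tripped the flag."""
    z = (stats[METRICS] - stats[METRICS].mean()) / stats[METRICS].std()
    flags = z.abs() > Z_CUTOFF
    stats = stats.copy()
    stats["outlier_on"] = flags.apply(
        lambda row: [m for m in METRICS if row[m]], axis=1)
    return stats[stats["outlier_on"].map(len) > 0]

stats = pd.read_csv("agent_stats.csv")  # hypothetical per-agent export
for _, agent in flag_outlier_agents(stats).iterrows():
    print(agent["agent_id"], "is an outlier on:", ", ".join(agent["outlier_on"]))
```

A single z-score over one reporting period is the simplest version; in practice you would likely compare agents within the same queue and time frame, and use a more robust baseline (median and MAD, for instance) so one wild day does not mask everything else.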
These metrics can all be used to improve our CSAT through targeted QA. Every call that shows up as an outlier should be listened to, so we can detect both personal and systemic issues in our contact centers.
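And if you want to turn those flags into an actual listening list, one way (reusing the two sketches above, and assuming the call records also carry hypothetical agent_id and call_id columns) is simply to pool the flagged calls and remove duplicates:

```python
# Hypothetical follow-on to the sketches above: pool likely-unresolved calls
# with calls handled by outlier agents into a single QA listening queue.
outlier_agents = flag_outlier_agents(stats)["agent_id"]
qa_queue = pd.concat([
    calls[calls["callback"]],                       # quick callbacks (likely ACR misses)
    calls[calls["agent_id"].isin(outlier_agents)],  # calls from outlier agents
]).drop_duplicates(subset="call_id")
qa_queue.to_csv("qa_listen_queue.csv", index=False)  # hand this list to the QA team
```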
QA Can Find Systemic Problems in the Contact Center Operation!
Of course, whether you have an AI auto-QA system or you are using outliers to identify potentially problematic calls (or both!), the art of quality assurance becomes more important, because its purpose is elevated. While it is very important that agents know their calls are being graded, and that agents see their scores, it may be even more important to the organization that issues causing inefficiencies get surfaced to management. For example, a problematic call could mean a few things:
- The agent needs help: In the “old days,” an agent could raise their hand to ask a question or ask their neighbor. With agents working from home, it is more difficult to get a quick question answered.
- The agent needs training: Outlying performance may surface systemic training issues.
- There is a process that is broken: Confused customers and difficult calls may reflect confusing information on your website, or a business process that is broken.
- You have needy customers: This happens.
- Agents are gaming the system: One of my first jobs in call centers was to figure out how agents would manipulate our incentive structure. This happens everywhere.
Even AI Needs Complementary Data Analytics
In the last few weeks, I have spent a fair amount of time speaking with contact center folks about analytics and AI. There seems to be a common theme among contact center AI vendors that AI will solve most difficult contact center problems. I don’t believe this. There are certainly some great applications for AI in the contact center, but I do not believe AI is a panacea.
Covering 100% of interactions with automatic QA is surely helpful, and if QA professionals can use AI-based QA as a starting point for their analyses and calibration, then the process can be a whole lot more efficient and meaningful. But I expect that these new systems will start to add plain old data analytics to their platforms to help identify problems that may only surface outside of the call transcript.
There is some satisfaction in this: when you do something long enough, many of the old problems and solutions start to come back around. In contact centers, good old data analysis still has its place.
Ric Kosiba is the Chief Data Scientist at Sharpen Technologies. He can be reached at rkosiba@sharpencx.com or (410) 562-1217.
Sharpen Technologies builds the Agent First contact center platform, designed from the ground up with the agent’s experience in mind. A happy agent makes happy customers!