NPS, CSAT, FCR are Dead. Long Live ACR!

By Ric Kosiba, Ph.D., Chief Data Scientist, Sharpen Technologies

I’ve never liked Net Promoter Score (NPS). 

When NPS became fashionable in contact centers, I had an immediate negative and visceral reaction to the metric; it seemed kind of off to me.  We will get into the reasons in a bit, but I think a gut check — even in mathematics and measures — is highly underrated; we should always listen to our inner voice and try to understand why it might be whispering to us.  Similarly, although I tried to read up on it at the time, I was never quite convinced that the claims of the Net Promoter Score’s authors held up.

NPS was birthed into the world in 2003 by Bain and Company in a Harvard Business Review article (The One Number You Need to Grow).  I fully acknowledge a personal bias against high-priced consultants.  This bias came out of my early experience working on projects with immaculately dressed and fantastically articulate external consultants, who delivered far too little to justify their billing rates or our time.  Note: ironically, I am writing this while jetting home from the SCTC conference, where I hung out with extremely sharp, practical, and fun contact center consultants, a very different breed with nary a set of cufflinks.  But I digress.

My dislike of NPS has to do with several of its obvious attributes.

  • Selection Bias: NPS is a survey, with all the usual biases of customer surveys: low participation rates, and respondents who either have time on their hands or a beef with the company.  Phone surveys typically capture only about 10% of interactions (although text surveys do better).
  • How NPS is Scored: NPS asks respondents to score, on a 0-10 scale, how likely they are to recommend the company to family or friends.  A 9 or 10 marks the respondent as a promoter, a 7 or 8 is assumed neutral, and a 0-6 is counted as a detractor; the score is the percentage of promoters minus the percentage of detractors (see the sketch after this list).  But do respondents understand this?  Going from a 9 to an 8 means you are no longer a promoter.  Does this calibrate with the real world, and with all parts of the world equally?  A mathematical cliff is not a good attribute for a metric, and this seems sketchy.
  • Culture: Are there cultural differences in how people respond?  Does someone in England think the same way as someone in Japan?  Or does someone in New York City rate similarly to someone in Newport Beach, California?  I think not.
  • What's Being Measured: Are we measuring agent performance or brand loyalty?  We have all had the experience of being ticked off at a company or a process while the agent was so very nice that we answer as though the company were measuring the agent’s performance and not the company’s.  Or vice versa.
  • Looking for Outliers: Only in the extreme case, where an interaction seriously goes off the rails, can an agent damage customer loyalty.  I know the consultants’ words of wisdom say that a single agent can tarnish a company’s brand, but really?  Likewise, only when an agent turns an otherwise fraught situation around can they improve customer loyalty.  The vast majority of contacts are routine and uneventful.  Isn’t this NPS concept just consultants over-thinking and over-generalizing the rare edge case where an agent or a customer is belligerent?
  • Overuse of the Question: When Bain studied NPS, the promoter question was rarely asked; now it is ubiquitous.  For me, the “would you recommend” question has no meaning anymore, since it is used absolutely everywhere, and it is used to judge both individuals and transactions (which I expect was not NPS’ original purpose).  The specific (and wrongly used) question now has a different meaning, and its time has passed.
  • Forcing a Lie: NPS asks whether the respondent would recommend the company.  If we use NPS, are we sure we are really measuring what we seem to ask?  Who promotes their airline or their credit card?  Who promotes their health care processor, their big-box retailer, or their cell phone company?  Other than our cute and delicious hole-in-the-wall restaurant, do we ever truly recommend any company?  By giving the happy and earnest contact center agent a score of 10, we are forced to tell the company that we will recommend it to our friends when we know dang well that we won’t ever tell a soul.  It’s a false measure.
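
To make that scoring cliff concrete, here is a minimal sketch in Python.  The score bands and the promoters-minus-detractors formula are the standard published NPS definition; the sample responses are made up for illustration.

    def nps(scores):
        """Net Promoter Score from 0-10 survey responses.

        Promoters score 9-10, passives 7-8, detractors 0-6.
        NPS = %promoters - %detractors, ranging from -100 to +100.
        """
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # The cliff: one respondent sliding from a 9 to an 8 drops this
    # sample's NPS by 10 points, though satisfaction barely changed.
    print(nps([9, 9, 9, 9, 9, 8, 8, 7, 3, 2]))  # 30.0
    print(nps([9, 9, 9, 9, 8, 8, 8, 7, 3, 2]))  # 20.0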

Customer Satisfaction and First Call Resolution

I am much more comfortable with the CSAT and FCR measures, although both share a subset of NPS’s issues.  First, CSAT and FCR are usually measured via survey, with all the same survey population biases.  However, both attempt to measure something more honest: whether the customer is satisfied with the interaction (CSAT) or whether their problem was solved (FCR).  Second, low participation rates hamper the statistical significance of an individual agent’s score.  Some other thoughts:

FCR is sometimes measured by agent, via disposition codes, but agent self-reporting of resolution seems like a recipe for failure.

FCR has one other problem.  Some companies use FCR to measure agents’ performance, but FCR doesn’t measure an agent’s performance; it measures the contact center’s or department’s performance.  An agent cannot help whether they are the first or the third agent to interact with a customer about an issue.
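
To see how much low participation hurts, here is a minimal sketch of the uncertainty around one agent’s survey-based score.  It is Python, using the standard normal approximation to a binomial proportion; the response rate and counts are hypothetical.

    import math

    def csat_interval(satisfied, surveys, z=1.96):
        """95% confidence interval for CSAT (proportion satisfied),
        via the normal approximation to the binomial."""
        p = satisfied / surveys
        half = z * math.sqrt(p * (1 - p) / surveys)
        return p - half, p + half

    # Hypothetical agent: 1,000 handled calls, a 10% survey response
    # rate, and 85 of the 100 respondents satisfied.
    low, high = csat_interval(85, 100)
    print(f"CSAT 85%, but plausibly anywhere from {low:.0%} to {high:.0%}")

A plausible range of roughly 78% to 92% is far too wide to rank one agent against another, and that is the practical cost of 10% coverage.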

What are the qualities of a good agent performance metric?

  • The agent must have the power to directly affect what you are measuring.  CSAT meets this criterion, but NPS and FCR do not.
  • The measure must be important to the business.  CSAT and FCR clearly are important to most businesses, but NPS does not meet this test because it does not measure what it purports to measure: customers are answering a different question than the one being asked.
  • It should have broad interaction coverage, counting most if not all interactions.  Each of these metrics fails this test: survey coverage is low.
  • The measure should have low (or measurable and correctable) bias.  Each of our three measures fails this test, too.  In a perfect world, we should be able to automatically measure every interaction to remove bias and maximize coverage. 

Active Contact Resolution

In a recent issue of SWPP’s On Target newsletter, we discussed Active Contact Resolution (ACR), a metric that can be automated to cover 100% of each agent’s calls.  It measures whether an agent is able to answer the customer’s questions and anticipate further questions.  It is not survey based, so it carries none of the biases of survey questions.  In all the data I’ve analyzed, it correlates directly with customer satisfaction scores.  It has none of the hang-ups associated with our usual qualitative metrics (because it is not qualitative), yet it tracks well with quality scores.  It goes to the heart of what we are trying to measure, without the problems above.
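
The newsletter piece, not this column, holds the full definition, so the sketch below shows only one plausible way to automate a resolution metric over 100% of contacts: flag a contact as unresolved if the same customer reaches out again within a window.  The 7-day window, the field names, and the data are all hypothetical; this is an illustration, not Sharpen’s actual method.

    from datetime import datetime, timedelta

    # Hypothetical contact log: (customer_id, agent_id, timestamp).
    contacts = [
        ("cust1", "agent_a", datetime(2024, 3, 1, 9, 0)),
        ("cust2", "agent_a", datetime(2024, 3, 2, 10, 0)),
        ("cust1", "agent_b", datetime(2024, 3, 3, 14, 0)),  # repeat of cust1
        ("cust3", "agent_a", datetime(2024, 3, 4, 11, 0)),
    ]

    def acr_by_agent(log, window=timedelta(days=7)):
        """Per-agent share of contacts with no repeat contact from the
        same customer inside the window.  Every contact counts; no survey."""
        log = sorted(log, key=lambda c: c[2])
        handled, resolved = {}, {}
        for i, (cust, agent, when) in enumerate(log):
            handled[agent] = handled.get(agent, 0) + 1
            repeat = any(c == cust and when < t <= when + window
                         for c, _, t in log[i + 1:])
            if not repeat:
                resolved[agent] = resolved.get(agent, 0) + 1
        return {a: resolved.get(a, 0) / n for a, n in handled.items()}

    print(acr_by_agent(contacts))  # {'agent_a': 0.666..., 'agent_b': 1.0}

Note the design choice in this sketch: because resolution is inferred from the contact log itself rather than from a survey, coverage is 100% and the self-reporting and selection biases above simply never enter.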

But here is the fun part — our experience has shown that ACR is eminently coachable and improvable.  With management focus and just a few tools, agents can improve their ACR.  We have many fantastic success stories showing that agents improve their ACR, the company reduces its costs, and customers are happier for it.  And I have done the gut check, along with those who have seen it in action, and it passes.

Ric Kosiba is a charter member of SWPP and is the Chief Data Scientist at Sharpen Technologies. He can be reached at rkosiba@sharpencx.com or (410) 562-1217.

Sharpen Technologies builds the Agent First contact center platform, designed from the ground up with the agent’s experience in mind. A happy agent makes happy customers!