Unless we’re in a churn-and-burn business, customer retention is critical to achieving long-term profitability. Loyal customers tend to buy more each year, and the cost of maintaining the account drops as the relationship grows. Acquiring new customers is far more expensive. But how do we know what drives customer loyalty? How do we know who is a loyal customer? We may have lots of anecdotal data, but can we be scientific in identifying those customers who are very likely to be repeat purchasers and to give good word-of-mouth to prospective customers?
This summarizes the quest for the Holy Grail of customer measurements. Can we accurately and consistently identify truly loyal customers — so that we can then identify what makes them loyal? Can we identify those previously loyal customers who are now at risk of defecting?
The practice of surveying customer satisfaction is decades old, and the overall customer satisfaction question, known as CSAT in the parlance, has typically been asked as a summary question. It was assumed, and some research showed, that more satisfied customers were more likely to be repeat purchasers. Recently, other key metrics have arisen to capture customer loyalty sentiment. The claim is that these questions better identify the truly loyal and the at-risk customers.
The Net Promoter Score® (NPS®) is perhaps the best known, but the Customer Effort Score (CES) is also used in many companies. Both are controversial. Yet, many organizations are adopting these metrics without truly assessing whether they are valid measurements of customer loyalty for any business, let alone for their business.
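To make the metric concrete, here is a minimal sketch of how an NPS is typically computed from responses to the standard 0–10 “How likely are you to recommend us?” question, using the published scoring convention (promoters score 9–10, detractors 0–6, passives 7–8). The function name and sample data are illustrative, not from any particular product.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Published convention: promoters score 9-10, detractors 0-6,
    passives 7-8 (counted in the total but not in either group).
    NPS = %promoters - %detractors, so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Illustrative sample: 4 promoters, 3 passives, 3 detractors
sample = [10, 9, 9, 8, 8, 7, 6, 5, 3, 10]
print(net_promoter_score(sample))  # prints 10
```

Note that very different distributions can yield the same score — a point worth keeping in mind when the validity of the metric is debated below.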
A crucial factor in assessing the various customer metrics rests on the concepts of validity and reproducibility. We’ll turn to those now to set up the later discussion. A quick summary of validity and reproducibility is:
“Survey says…” doesn’t mean “Survey’s right.”
Validity is a key requirement for sound research. Simply put, validity means, “Are you measuring what you’re intending to measure with the instrument you are using?” That may seem downright dumb. How can I not measure what I’m intending to measure?
But imagine you have a glass bulb thermometer where the glass tube filled with alcohol has slipped from its original glued position against the degree markers with no way of knowing where the original position was. Will it provide valid measurements of temperature? Or imagine a household thermostat that is mounted next to a drafty window. Is the temperature reading by the window an accurate, valid measurement of the temperature in the household overall?
Try this simple experiment. Gather up all the thermometers you can and put them in the same place in your house. Wait 15 minutes for them to acclimate. Do they all read exactly the same? Probably not. Different technologies, different manufacturers, different ages, different levels of abuse.
Think of any instrument you use to measure something, be it a ruler to measure length, a speedometer to measure your car’s speed, or your bathroom scale to measure weight. You probably know someone — not yourself, of course — who skews the bathroom scale to feel lighter. To take action on the measurements we need to be confident that the readings are valid.
The same goes for surveys. We should be confident that what we’re measuring with our survey instrument truly reflects the views of those surveyed. The wording of our questions, the sequencing of those questions, the scales we choose, and even the statistics we use can misrepresent or distort what our respondents feel.
The differences you will find exemplify why it is dangerous to compare survey findings across companies where the instruments differ. The comparisons are not valid unless the instrument (and the administration practices) are identical. While true of surveys as a whole, this is equally true of summary customer insight metrics.
When we look at the summary customer insight metrics, we must ask if they are truly valid measures of customer loyalty.
A second key requirement of good research is that others can replicate a study and get the same results. This is known as reliability or reproducibility. Just because someone says, “We did a study that proves…” does not mean it’s true. For us to believe the findings and perhaps literally make million-dollar decisions based on the findings, we should want to know that others replicated the original study and reached the same conclusions.
The reliability requirement does more than simply catch the nefarious researchers who falsify data to support a conclusion. Yes, people falsify data — or exclude “wrong” data — more frequently than we’d like to believe. More importantly, the process of reproducing a study may surface factors that the original researcher didn’t recognize were important to the outcome.
The Wall Street Journal published a front-page article, “Scientists’ Elusive Goal: Reproducing Study Results.” The article focused on reproducibility of medical research studies and quotes Glenn Begley, vice president of research at Amgen: “More often than not, we are unable to reproduce findings.” Bruce Alberts, editor of Science, added, “It’s a very serious and disturbing issue because it obviously misleads people.”
The article includes a chart showing the results of 67 studies that Bayer tried to replicate. None of the studies were ones where data was fraudulent or findings had been retracted, yet 64% of them could not be replicated. Search the phrase “medical study retracted” and you’ll find how common it is for the findings of accepted studies to be found wanting upon further review.
You may be thinking, “But that’s medical research, where lives are at stake. I’m just doing a survey.” True, but if you’re running a survey program, you are, in fact, a researcher. (At your next performance review, you may want to make that point!) In service organizations we make service delivery design decisions and personnel decisions based in part on data from surveys. Wouldn’t you want to be sure that the survey data are legitimate? Before applying the Net Promoter Score or the Customer Effort Score, shouldn’t we know that the research that advanced those customer insight metrics as indicators of customer loyalty is valid and reproducible?
It’s easy to make claims. It’s harder to prove.