relationship quality. It tests for both the rational and the emotional dimensions.

      We don’t want to overstate the case. Though the “would recommend” question is far and away the best single-question predictor of customer behavior across a range of industries—and not just referrals but repeat and expanded purchases, along with willingness to provide constructive feedback—it’s not the best for every industry. In certain business-to-business settings, a question such as “How likely is it that you will continue to purchase products or services from Company X?” or “How likely is it that you would recommend that we do more of our business with Company X?” may work better. So companies need to do their homework. They need to validate the empirical link between survey answers and subsequent customer behavior for their own business. But once that link is established, as we will see in chapter 3, the effect is powerful: it provides the means for gauging performance, establishing accountability, and making investments. It shows the connection between this measure of customer centricity and profitable growth.

      Scoring the Answers

      Of course, finding the right question to ask was only the beginning. We now had to establish a good way of scoring the responses.

      To be useful, the scoring of responses must be as simple and unambiguous as the question itself. The scale must make sense to customers who are answering the question. The categorization of answers must make sense to the managers and employees responsible for interpreting the results and taking action. The right categorization will effectively divide customers into groups that deserve different attention and different responses from the company based on their behavior, their value to the company, and their differing needs. Ideally, the scale and categorization would be so easy to understand that even outsiders—investors, regulators, journalists—could grasp the basic messages without the need for a handbook and a course in statistics.

      For all these reasons we settled on a simple zero-to-ten scale, where ten means extremely likely to recommend and zero means not at all likely. When we mapped customers’ behaviors on this scale, we found—and have continued to find in our subsequent work with clients—three clusters corresponding to different patterns of behavior:

       One segment was the customers who gave a company a nine or ten rating. We called them promoters, because they behaved like promoters. They reported the highest repurchase rates by far, and they accounted for more than 80 percent of referrals.

       A second segment was the “passively satisfied” or passives; they rated the company seven or eight. This group’s repurchase and referral rates were a lot lower than those of promoters, often by 50 percent or more. Motivated more by inertia than by loyalty or enthusiasm, these customers typically stay on only until somebody offers them a better deal.

       Finally, we called the group who gave ratings from zero to six detractors. This group accounts for more than 80 percent of negative word-of-mouth comments. Some of these customers may appear profitable from an accounting standpoint, but their criticisms and attitudes diminish a company’s reputation, discourage new customers, and demotivate employees. They suck the life out of a firm.

      Grouping customers into these three categories—promoters, passives, and detractors—provides a simple, intuitive scheme that accurately predicts customer behavior. Most important, it’s a scheme that drives action. Frontline managers can grasp the idea of increasing the number of promoters and reducing the number of detractors a lot more readily than the idea of raising the customer-satisfaction index by one standard deviation. The ultimate test for any customer-relationship metric is whether it helps the organization act in a customer-centric manner, thereby tuning the growth engine to operate at peak efficiency. Does it help employees clarify and simplify the job of delighting customers? Does it allow employees to compare their performance from week to week and month to month? The notion of promoters, passives, and detractors does all this.

      We also found that what we began to call Net Promoter score, or NPS—the percentage of promoters minus the percentage of detractors—provided the easiest-to-understand, most effective summary of how a company was performing in this context.
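      The arithmetic behind the score is simple enough to express in a few lines of code. The following Python sketch, which uses made-up survey ratings rather than any company's actual data, classifies 0-to-10 ratings into promoters (9 or 10), passives (7 or 8), and detractors (0 through 6), then subtracts the percentage of detractors from the percentage of promoters:

def classify(score: int) -> str:
    """Map a 0-to-10 'would recommend' rating to its NPS segment."""
    if score >= 9:
        return "promoter"    # 9 or 10
    if score >= 7:
        return "passive"     # 7 or 8
    return "detractor"       # 0 through 6

def net_promoter_score(scores: list[int]) -> float:
    """NPS = percentage of promoters minus percentage of detractors."""
    segments = [classify(s) for s in scores]
    promoters = segments.count("promoter")
    detractors = segments.count("detractor")
    return 100 * (promoters - detractors) / len(segments)

# Ten illustrative responses: 40% promoters, 30% detractors -> NPS of 10.
print(net_promoter_score([10, 9, 9, 10, 8, 7, 6, 3, 0, 7]))

      Run on the ten sample ratings shown, the function reports an NPS of 10: 40 percent promoters minus 30 percent detractors.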

      We didn’t come to this language or this precise metric lightly. For example, we considered referring to the group scoring a company nine or ten as “delighted,” in keeping with the aspiration of so many companies to delight their customers. But the business goal here isn’t merely to delight customers; it’s to turn them into promoters—customers who buy more and who actively refer friends and colleagues. That’s the behavior that contributes to growth. We also wrestled with the idea of keeping it even simpler—measuring only the percentage of customers who are promoters. But as we’ll see in later chapters, a company seeking growth must increase the percentage of promoters and decrease the percentage of detractors. These are two distinct processes that are best managed separately. Companies that must serve a wide variety of customers in addition to their targeted core—retailers, banks, airlines, and so on—need to minimize detractors among noncore customers, since these customers’ negative word of mouth is just as destructive as anybody’s. But investing to delight customers other than those in the core may yield little economic return. Net Promoter scores provide the requisite information for fine-tuning customer management in this way.

      Individual customers, of course, can’t have an NPS; they can only be promoters, passives, or detractors. But companies can calculate their Net Promoter scores for particular segments of customers, for divisions or geographic regions, and for individual branches or stores. NPS is to customer relationships what a company’s net profit or net worth is to financial performance. It provides a bottom line that can drive learning and accountability. That is not to say this or any other bottom line is the only number you need to manage a business. Just as you need to know the details of revenues and costs to analyze that most famous of bottom lines, net profit, so too do you need detailed data on promoters, passives, and detractors to peel the onion of your Net Promoter score. But the clarity and focus that come from tracking a single number for loyalty—Net Promoter score—simplify communication and call attention to the instances that require deeper analysis.
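      Because every response already carries a 0-to-10 rating, computing NPS for a segment, division, or region is just a matter of grouping the responses before applying the same formula. The short sketch below reuses the net_promoter_score function from the previous example; the regions and ratings are invented purely for illustration:

from collections import defaultdict

# Illustrative (region, score) pairs; in practice these would come
# from the survey data tagged with whatever attribute you report on.
responses = [("West", 10), ("West", 6), ("West", 9),
             ("East", 8), ("East", 3), ("East", 10)]

by_region = defaultdict(list)
for region, score in responses:
    by_region[region].append(score)

for region, scores in by_region.items():
    print(region, net_promoter_score(scores))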

      Solving Intuit’s Problem

      Intuit—worried as it was about slipping customer relationships—jumped at the idea of measuring its NPS and began an implementation program in the spring of 2003. (“Just one number—it makes so much sense!” exclaimed Scott Cook when he learned of the idea.) The company’s experience shows some of what’s involved in measuring promoters and detractors. It also shows how this measurement can transform a company’s day-to-day priorities.

      Intuit’s first step was to determine the existing mix of promoters, passives, and detractors in each major business line. Cook suggested that this initial phone-survey process focus on only two questions. The team settled on these: first, What is the likelihood you would recommend (TurboTax, for example) to a friend or colleague? Second, What is the most important reason for the score you gave?

      Customer responses revealed initial Net Promoter scores for Intuit’s business lines ranging from 27 to 52 percent. That wasn’t bad, given that the average U.S. company has an NPS of 10 to 20 percent, but Intuit has never been interested in being average. In later years, the company’s leadership team came to understand that the most relevant NPS comparisons were with competitive alternatives in each market. At the time, though, the team was looking at absolute numbers—and the scores simply weren’t consistent with the company’s self-image as a firm that values doing right by its customers. There was, they believed, plenty of room for improvement.

      The initial research revealed something else as well: the telephone-survey process used by the company’s market-research vendor was woefully inadequate. First, there was no way to close the loop with customers who identified themselves as detractors—no way to apologize or probe for the root cause of the problem, no way to develop a solution for whatever was troubling them. Second, the open-ended responses the vendor reported were intriguing, but managers had a tendency to read into them whatever they already believed. Third,