Net Promoter Score (NPS) is a proprietary metric developed by (and a registered trademark of) Fred Reichheld, Bain & Company, and Satmetrix. It is widely used in market research as a measure of customer loyalty, often as an alternative to measuring customer satisfaction. NPS is based on one question:
How likely are you to recommend [our company/product/service] to your friends and colleagues?
The responses for this question range from 0 (not at all likely to recommend) to 10 (extremely likely to recommend), as the graphic below illustrates.
Those who give a score of 0 to 6 are detractors, those who give a score of 9 or 10 are promoters, and the remainder (7 and 8) are neutral. The NPS is calculated by subtracting the percentage of detractors from the percentage of promoters; although it is calculated from percentages, the result is usually quoted as a plain number between -100 and +100.
However, is this a useful metric, and does it provide usable information? The remainder of this post argues that the NPS is a flawed metric, as it is based upon flawed mathematics and flawed questioning.
The flawed mathematics
There are several ways to arrive at the same NPS. For example, 20% promoters / 80% neutral / 0% detractors and 60% promoters / 0% neutral / 40% detractors both give an NPS of 20, even though the responses are massively different (in the first case, 20% promoters minus 0% detractors = 20; in the second, 60% promoters minus 40% detractors = 20).
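The arithmetic is easy to check in a few lines of Python; this minimal sketch simply encodes the subtraction above, using the two hypothetical distributions as inputs:

```python
def nps(promoters_pct: float, detractors_pct: float) -> float:
    """NPS = % promoters minus % detractors (neutrals are ignored)."""
    return promoters_pct - detractors_pct

# Two very different response profiles, identical score:
print(nps(20, 0))    # 20% promoters, 0% detractors  -> 20
print(nps(60, 40))   # 60% promoters, 40% detractors -> 20
```

Because the neutral share drops out of the formula entirely, any pair of distributions with the same promoter-minus-detractor gap collapses to the same number.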
The score therefore masks massive differences in the views of respondents, which in my opinion is a fundamental problem for a metric of this kind. A company that has no detractors and 100% promoters or neutrals is doing something markedly different from a company that has 60% promoters and 40% detractors.
The flawed questioning
As I have already said, the question asked is ‘how likely are you to recommend [our company/product/service] to your friends and colleagues?’ If the respondent gives an answer of 0 to 6 they are counted as a detractor; however, just because someone is unlikely to recommend a company or product to friends, it does not automatically follow that they will recommend against it. Someone who gives an answer of 5 has picked the middle value and may feel that they would recommend the company or product to some extent. This, however, is not reflected in the NPS.
Only the top two responses (9 and 10) are counted as promoters; however, respondents can be very reluctant to give the highest scores, even if they are likely to recommend the company and are satisfied with it. From my experience with various customer satisfaction surveys, some respondents will never say that they are very satisfied (saying instead that they are quite satisfied) because they do not give the top mark on principle; there is always room for improvement.
If, as I have argued, the NPS is flawed, what should you use instead? I suggest that you stick to traditional measures of customer satisfaction (how satisfied or dissatisfied are you with the company/product?), using a 5-point scale. If you are interested in likelihood to recommend (which is important to understand), I suggest that you ask the question ‘how likely are you to recommend the company/product to a friend?’, but using the following scale (or something similar):
- Very likely to recommend
- Quite likely to recommend
- Neither likely to recommend nor to recommend against
- Quite likely to recommend against
- Very likely to recommend against
If you must use the conventional NPS question and answer scale, think about adding a ‘why do you say this?’ follow-up question to better understand people’s responses. In addition to calculating the metric in the standard way, I would also suggest looking at the averages (mean, mode and median) and the standard deviation. This will give you a much better idea of the spread of responses than the NPS on its own.
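As a sketch of that fuller analysis, the standard metric and the suggested summary statistics can be computed together from raw 0–10 responses using Python’s standard library (the response list here is invented purely for illustration):

```python
from statistics import mean, median, mode, stdev

# Hypothetical 0-10 ratings from ten respondents.
responses = [10, 9, 9, 8, 7, 7, 7, 5, 3, 10]

# Standard NPS classification: 9-10 promoters, 0-6 detractors.
promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps = 100 * (promoters - detractors) / len(responses)

print(f"NPS: {nps:.0f}")
print(f"mean {mean(responses):.1f}, median {median(responses)}, "
      f"mode {mode(responses)}, std dev {stdev(responses):.2f}")
```

Reporting the averages and the standard deviation alongside the headline score shows how tightly (or widely) the underlying responses are spread, which the single NPS figure hides.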