The history of behavioural science, and how it has led us to a richer, more accurate understanding of what drives our behaviour, is really a classic underdog story. It is the story of how the 300-year-old goliath of traditional economic theory, which sees humans as rational operators, has been felled by a group of determined academics – led by the ground-breaking work of Daniel Kahneman and Amos Tversky3 – using the slingshot of behavioural economics.
What is heartening is that this new way of thinking – that humans are not purely rational, utility-maximising calculators, as neoclassical economics often assumed – is fast becoming the norm. Forward-thinking governments around the world are embedding this approach into their policy. The world of business is actually lagging (badly) behind.
But times are changing. When running training courses in behavioural science, I like to gauge the knowledge in the room by asking which of the leading popular science books on the topic (like Thinking, Fast and Slow and Nudge) people have read. Ten years ago, only 10–20% of the people in the room had read one or more of those books. These days the proportion is usually over 50%, as the base level of knowledge has grown.
In addition to building that knowledge and understanding within a business – either by hiring those lucky enough to have studied it or by buying it in from outside – what can a business do to ensure it is always focused on changing behaviour? And what, structurally and strategically, does focusing on behaviour mean in a business?
That is what we shall explore in this part of the book. But first, let’s examine what is new about this way of thinking, and what it means for businesses in general.4
Why is behavioural science important for business?
After the 18th century,5 academic theory about how people behaved was mostly based on the traditional economic theory of utility maximisation. That is, every human decision was assumed to be a simple, rational weighing up of the pros and cons of a particular action. Professors Richard Thaler and Cass Sunstein, the authors of Nudge, call this straw-man version of humanity an ‘econ’, or homo economicus – someone who exists only in the pages of an economics textbook. Like the character Spock from Star Trek, econs are totally rational operators, ungoverned by emotion.
And, like Spock, they are not human.
Let’s take an example – why people commit crime – to show the flaws of this approach. The University of Chicago economist and Nobel laureate Gary Becker devised what is known as the Simple Model of Rational Crime (SMORC) to explain why crime happens. This states that, in any given situation, a potential criminal weighs up the benefits of the crime (e.g. the financial gain) versus the potential costs (e.g. likelihood of being caught and going to jail).
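In rough terms, SMORC boils down to a simple expected-value comparison. The sketch below is illustrative notation rather than Becker’s own formulation:

```latex
% Illustrative sketch of the SMORC decision rule (not Becker's original notation)
% G = gain from the crime, p = probability of being caught, C = cost of the punishment
\[
  \text{commit the crime} \iff G > p \cdot C
\]
```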
According to this theory, the Enron fraudsters, for example, made this cost-benefit analysis and (wrongly, with hindsight) decided that the money they made was worth the risk – jail time and the eventual insolvency of the business.
The problem with SMORC, like most neoclassical economic theory (and the financial models and management/marketing theory derived from it), is that it assumes the Spock model of humans: totally rational beings who act only out of self-interest.
If this were true, all of us would be committing certain low-risk crimes on a daily basis. As Dan Ariely, James B. Duke Professor of Psychology and Behavioural Economics at Duke University, writes:
“We wouldn’t make decisions based on emotions or trust, so we would most likely lock our wallets in a drawer when we stepped out of our office for a minute … There would be no value in shaking hands as a form of agreement; legal contracts would be necessary for any transaction … We might decide not to have kids because when they grew up, they, too, would try to steal everything we have, and living in our homes gives them plenty of opportunities to do so.”6
If you have ever tried to stop some form of dishonest behaviour in business – over-claiming on expenses, or stealing lunches from the communal fridge, for example – you will know this is not an accurate picture of how people behave. Increasing the chances of being caught (emailing the company to say you will be checking expenses more thoroughly in future, for example)7 will not solve the problem.
This is because emotions play a key part in decision-making: our behaviour is not governed by financial benefit alone. The murder detection rate in London is so high – although it fell to 72% in 2018 from its normal level of around 90%8 – that no one would ever rationally consider murder a wise course of action. Yet over 130 murders were committed in the capital that year, most of which had no obvious utility to the perpetrator. Clearly, there are other drivers of criminal behaviour.
Ariely’s work has shown that emotions can be used effectively to combat dishonest and illegal behaviour. In one case, he reduced over-claiming (cheating) on a simple insurance form by 15%.
How? Simply by moving the standard ‘I promise that the information I am providing is true’ declaration from the end of the form to the beginning. This made the honesty requirement more salient9 – and made no difference to the costs or benefits of the crime.
This is one example of how social psychology – the discipline that studies social interactions, i.e. how people behave in the real world as social beings – has given us insights into how external factors affect our behaviour. It gives us a better model of how people make decisions – in particular, that our behaviour is shaped by many behavioural biases and mental short-cuts that help us navigate the world around us.
The genius of the work of Kahneman and Tversky (and others) was to start testing and codifying some of these biases – coining the term ‘heuristics’ to cover some of the most prevalent decision-making short-cuts – and then to devise a coherent model explaining why these heuristics often lead us to make non-rational, counter-intuitive or erroneous decisions.
In short, they told us how humans actually behave. And businesses are run for, by, and with humans – for the time being at least.
Social norms and social proof
Have you ever been in an unfamiliar place, looking for somewhere good to eat? Imagine you see two restaurants – both look like reasonable, clean places, with good menus serving food you like.
One is busy, bustling and full of happy, laughing customers. The other has a sad-looking man in the window, eating alone. Which do you choose?
Most of us would choose the former. This is an example of how social proof (our behavioural bias to look to others like us to validate our behaviour) and social norms (our perception of what most other people like us are doing) powerfully influence our behaviour. If something is popular with our ‘in-group’, we desire it more – even though, in this case, the second restaurant would serve us more quickly and might well give us better service, being more grateful for the custom.
There is an evolutionary logic to this, as with most behavioural biases. Seeing others like us behaving in a certain way shows that it is a safe, validated and rewarding course of action. The restaurant must be good if all those other people are using it, right?
Most people are familiar with colloquial versions of this effect, like peer pressure and herd mentality, and the restaurant trade uses the effect better than most. A TripAdvisor certificate at the door showing a 5-star rating is leveraging social proof.10
Simply showing that something is popular can influence behaviour. In 2010, Facebook deployed an ‘I Voted’ button (below) showing how many users had voted, as part of a campaign to encourage turnout in the US Congressional elections. Versions of the button, or no button at all, were shown to 61m people in a joint study by the University of California, San Diego and Facebook data scientists, who used voting records to determine the button’s impact on real-world voting. It turns out the button’s call to action increased the total vote count by 340,000 votes.
But more interestingly, the version of the button which showed whether the individual user’s friends (i.e. people they actually knew) had voted was four times more