The Success Equation. Michael J. Mauboussin.

Author: Michael J. Mauboussin
Publisher: Ingram
Genre: Economics
ISBN: 9781422184240
makes for a great narrative and emphasizes that stories are vehicles for communicating how to act. We use stories, especially those about history, to learn what to do. “Narrative is deeply connected with ethics,” he notes, “and narratives tell us how we should and should not behave.” But when we try to learn from history, we naturally look for causes even when there may be none. Glavin adds, “For a story to work, someone has to be responsible.”12 History is a great teacher, but the lessons are often unreliable.

      Undersampling and Sony's Miraculous Failure

      The most common method for teaching a manager how to thrive in business is to find successful businesses, identify the common practices of those businesses, and recommend that the manager imitate them. Perhaps the best-known book about this method is Jim Collins's Good to Great. Collins and his team analyzed thousands of companies and isolated eleven whose performance went from good to great. They then identified the concepts that they believed had caused those companies to improve—these include leadership, people, a fact-based approach, focus, discipline, and the use of technology—and suggested that other companies adopt the same concepts to achieve the same sort of results. This formula is intuitive, includes some great narrative, and has sold millions of books for Collins.13

      No one questions that Collins has good intentions. He really is trying to figure out how to help executives. And if causality were clear, this approach would work. The trouble is that the performance of a company always depends on both skill and luck, which means that a given strategy will succeed only part of the time. So attributing success to any strategy may be wrong simply because you're sampling only the winners. The more important question is: How many of the companies that tried that strategy actually succeeded?

      Jerker Denrell, a professor of strategy at Oxford, calls this the undersampling of failure. He argues that one of the main ways that companies learn is by observing the performance and characteristics of successful organizations. The problem is that firms with poor performance are unlikely to survive, so they are inconspicuously absent from the group that any one person observes. Say two companies pursue the same strategy, and one succeeds because of luck while the other fails. Since we draw our sample from the outcome, not the strategy, we observe the successful company and assume that the strategy was good. In other words, we assume that the favorable outcome was the result of a skillful strategy and overlook the influence of luck. We connect cause and effect where there is no connection.14 We don't observe the unsuccessful company because it no longer exists. If we had observed it, we would have seen the same strategy failing rather than succeeding and realized that copying the strategy blindly might not work.

      Denrell illustrates the idea by offering a scenario in which firms that pursue risky strategies achieve either high or low performance, whereas those that choose low-risk strategies achieve average performance. A high-risk strategy might put all of a company's resources into one technology, while a low-risk strategy would spread resources across various alternatives. The best performers are those that bet on one option and happen to succeed, and the worst performers are those that make a similar bet but fail. As time passes, the successful firms thrive and the failed firms go out of business or get acquired.

      Someone attempting to draw lessons from this observation would therefore see only those companies that enjoyed good performance and would infer, incorrectly, that the risky strategies led to high performance. Denrell emphasizes that he is not judging the relative merits of a high- or low-risk strategy. He's saying that you need to consider a full sample of strategies and the results of those strategies in order to learn from the experiences of other organizations. When luck plays a part in determining the consequences of your actions, you don't want to study success to learn what strategy was used but rather study strategy to see whether it consistently led to success.
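Denrell's scenario is easy to make concrete with a small Monte Carlo sketch. The numbers below are illustrative assumptions, not Denrell's: both strategies have the same true average performance, the risky one simply has a wider spread, and firms whose performance falls below a survival cutoff disappear from view. Sampling only the survivors makes the risky strategy look far better than it is.

```python
import random

random.seed(42)

N = 100_000            # hypothetical number of firms per strategy
SURVIVAL_CUTOFF = 0.0  # firms performing below this level fail and vanish

# Risky strategy: one big bet, wide spread of outcomes.
risky = [random.gauss(0.0, 2.0) for _ in range(N)]
# Safe strategy: hedged bets, narrow spread around the same mean.
safe = [random.gauss(0.0, 0.5) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

# The full-population averages are (by construction) the same.
print(f"true mean, risky: {mean(risky):+.3f}")
print(f"true mean, safe:  {mean(safe):+.3f}")

# But an observer only ever sees the survivors.
risky_survivors = [x for x in risky if x > SURVIVAL_CUTOFF]
safe_survivors = [x for x in safe if x > SURVIVAL_CUTOFF]

print(f"observed mean, risky survivors: {mean(risky_survivors):+.3f}")
print(f"observed mean, safe survivors:  {mean(safe_survivors):+.3f}")
```

Run it and the surviving risky firms post an average several times higher than the surviving safe firms, even though the two strategies are identical in expectation. The apparent edge is created entirely by conditioning on survival, which is Denrell's point: sample strategies, not outcomes.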

      In chapter 1, we met Michael Raynor, a consultant at Deloitte. Raynor defines what he calls the strategy paradox—situations where “the same behaviors and characteristics that maximize a firm's probability of notable success also maximize its probability of failure.” To illustrate this paradox, he tells the story of Sony Betamax and MiniDiscs. At the time those products were launched, Sony was riding high on the success of its long string of winning products from the transistor radio to the Walkman and compact disc (CD) player. But when it came to Betamax and MiniDiscs, says Raynor, “the company's strategies failed not because they were bad strategies but because they were great strategies.”15

      The case of the MiniDisc is particularly instructive. Sony developed MiniDiscs to replace cassette tapes and compete with CDs. The discs were smaller and less prone to skip than CDs and had the added benefit of being able to record as well as play music. Announced in 1992, MiniDiscs were an ideal format to replace cassettes in the Walkman, allowing that device to remain the portable music player of choice.

      Sony made sure that the MiniDisc had a number of advantages that put it in a position to be a winner. For example, existing CD plants could produce MiniDiscs, allowing for a rapid reduction in the cost of each unit as sales grew. Furthermore, Sony owned CBS Records, so it could supply terrific music and make even more profit. The strategy behind the MiniDisc reflected the best use of Sony's vast resources and embodied all of the lessons that the company had learned from the successes and failures of past products.

      But just as the MiniDisc player was gaining a foothold, seemingly out of nowhere, everyone had tons of cheap computer memory and access to fast broadband networks, and could swap files of manageable size containing all their favorite music, essentially for free. Sony had been hard at work on a problem that vanished from beneath its feet. Suddenly, no one needed cassette tapes. No one needed discs either. And no one could possibly have foreseen that seismic shift in the world in the 1990s. In fact, much of it was unimaginable. But it happened. And it killed the MiniDisc. Raynor asserts, “Not only did everything that could go wrong for Sony actually go wrong, everything that went wrong had to go wrong in order to sink what was in fact a brilliantly conceived and executed strategy. In my view, it is a miracle that the MiniDisc did not succeed.”16

      One of the main reasons we are poor at untangling skill and luck is that we have a natural tendency to assume that success and failure are caused by skill on the one hand and a lack of skill on the other. But in activities where luck plays a role, such thinking is deeply misguided and leads to faulty conclusions.

      Most Research Is False

      In 2005, Dr. John Ioannidis published a paper, titled “Why Most Published Research Findings Are False,” that shook the foundation of the medical research community.17 Ioannidis, who has a PhD in biopathology, argues that the conclusions drawn from most research suffer from bias, whether because researchers hope to reach certain conclusions or because they run too many tests. Using simulations, he shows that a high percentage of the claims made by researchers are simply wrong. In a companion paper, he backed up his contention by analyzing forty-nine of the most highly regarded scientific papers of the prior thirteen years, based on the number of times those papers were cited. Three-quarters of the cases where researchers claimed an effective intervention (for example, vitamin E prevents heart attacks) were tested by other scientists. His analysis showed a stark difference between randomized trials and observational studies. In a randomized trial, subjects are assigned at random to one treatment or another (or none). These studies are considered the gold standard of research, because they do an effective job of finding genuine causes rather than simple correlations. They also eliminate bias in many cases, because the people running the experiment don't know who is getting which treatment. In an observational study, subjects volunteer for one treatment or another and researchers have to take what is available. Ioannidis found that more than 80 percent of the results from observational studies were either wrong or significantly exaggerated, while about three-quarters of the conclusions drawn from randomized studies proved to be true.18

      Ioannidis's work doesn't touch on skill as we have defined it, but it does address the essential issue of cause and effect. In matters of health, researchers want to understand what causes what. A randomized trial allows them to compare two groups of subjects who are similar but who receive different treatments to see whether the treatment makes a difference. By doing so, these trials make it less likely that the results derive from luck. But observational