The Failure of Risk Management. Douglas W. Hubbard

      Of course, a giant experiment is not usually very practical, at least for individual companies to conduct by themselves. Fortunately, we have some other ways to answer this question without necessarily conducting our own massive controlled experiments. For example, there are some situations in which the risk management method caught what obviously would have been a disaster, such as detecting a bomb in a suitcase, only because of the implementation of a new plastic explosives–sniffing device. Another example would be where an IT security audit uncovered an elaborate embezzling scheme. In those cases, we know it would have been extremely unlikely to have discovered—and addressed—the risk without that particular tool or procedure. Likewise, there are examples of disastrous events that obviously would have been avoidable if some prudent amount of risk management had been applied. For example, if a bank were overexposed on bad debts and reasonable procedures would never have allowed such an overexposure, then we can confidently blame the risk management procedures (or lack thereof) for the problem.

      Component Testing

      Lacking large controlled experiments, or obvious instances of cause and effect, we still have ways of evaluating the validity of a risk management method. The component testing approach looks at the gears of risk management instead of the entire machine. If the entire method has not been scientifically tested, we can at least look at how specific components of the method have fared under controlled experiments. Even if the data is from different industries or laboratory settings, consistent findings from several sources should give us some information about the problem. The following components are worth examining:

       The synthesis of data: One key component of risk management is how we synthesize historical experience. Where we rely on experts to synthesize data and draw conclusions, we should look at research into the relative performance of expert opinion versus statistical models.

       Known human errors and biases: If we rely on expert opinion to assess probabilities, we should be interested in reviewing the research on how well experts do at assessing the likelihood of events, their level of inconsistency, and common biases. We should consider research into how hidden or explicit incentives or irrelevant factors affect judgment. We should know how estimates can be improved by accounting for these issues.

       Aggregation of estimates: In many cases several experts will be consulted for estimates, and their estimates will be aggregated in some way. We should consider the research about the relative performance of different expert-aggregation methods; a simple simulation sketch follows this list.

       Behavioral research into qualitative scales: If we rely on various scoring or classification methods (e.g., a scale of 1 to 5 or high/medium/low), we should consider the results of empirical research on how these methods are actually used and how much arbitrary features of the scales affect how they are used.

       Decomposition: We can look into research on how estimates can be improved by the way we break a problem into pieces and assess our uncertainty about those pieces.

       Errors in quantitative models: If we are using more quantitative models and computer simulations, we should be aware of the most common known errors in such models. We also need to check to see whether the sources of the data in the model are based on methods that have proven track records of making realistic forecasts.
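      As a concrete (and purely hypothetical) illustration of the aggregation point above, the short simulation below compares the error of individual expert probability estimates with the error of their simple average. The true probability, the number of experts, and the noise level are all invented for the sketch; it demonstrates only one aggregation method, not the full range studied in the research.

```python
import random

random.seed(1)

TRUE_PROB = 0.12     # the "true" event frequency the experts try to estimate (invented)
N_EXPERTS = 5        # hypothetical panel size
N_TRIALS = 10_000    # repeated estimation exercises

individual_error = 0.0
averaged_error = 0.0

for _ in range(N_TRIALS):
    # Each expert's estimate is the true value plus independent noise,
    # clipped so it remains a valid probability.
    estimates = [min(max(random.gauss(TRUE_PROB, 0.05), 0.0), 1.0)
                 for _ in range(N_EXPERTS)]
    individual_error += sum(abs(e - TRUE_PROB) for e in estimates) / N_EXPERTS
    averaged_error += abs(sum(estimates) / N_EXPERTS - TRUE_PROB)

print(f"Mean error of a single expert:   {individual_error / N_TRIALS:.4f}")
print(f"Mean error of the group average: {averaged_error / N_TRIALS:.4f}")
```

      With independent errors, the averaged estimate is reliably closer to the true value than the typical individual expert, which is one reason aggregation methods deserve attention as a component in their own right.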

      Formal Errors

      Outright math errors should be the most obvious disqualifiers of a method, and we will find them in some cases. This isn't just a matter of making simplifying assumptions or using shortcut rules of thumb. Those can be useful as long as there is at least empirical evidence that they are helpful. But where we deviate from the math, empirical evidence is even more important. This is especially true when deviations from known mathematics provide no benefit in simplicity compared to perfectly valid mathematical solutions, since simplicity is usually the main argument for taking mathematical shortcuts in the first place.

      In some cases, it can be shown that mathematically irregular methods may actually lead to dangerously misguided decisions. For example, we shouldn't be adding and multiplying ordinal scales, as is done in many risk assessment methods. Later we will show, with some formal analysis, how such procedures lead to misguided conclusions.
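      As a preview of that kind of analysis, consider the following hypothetical example (the scale boundaries and the two risks are invented for illustration). It shows how multiplying an ordinal likelihood score by an ordinal impact score can rank a smaller risk above a larger one.

```python
def likelihood_score(p):
    """Map a probability to a 1-5 ordinal likelihood score (invented bins)."""
    upper_bounds = [0.05, 0.20, 0.40, 0.70]
    return sum(p > b for b in upper_bounds) + 1

def impact_score(loss):
    """Map a dollar loss to a 1-5 ordinal impact score (invented bins)."""
    upper_bounds = [100_000, 500_000, 2_000_000, 10_000_000]
    return sum(loss > b for b in upper_bounds) + 1

risks = {
    "A: frequent, moderate loss": (0.20, 1_000_000),
    "B: rare, severe loss":       (0.01, 40_000_000),
}

for name, (p, loss) in risks.items():
    score = likelihood_score(p) * impact_score(loss)
    print(f"{name}: ordinal score = {score}, expected loss = ${p * loss:,.0f}")
```

      Risk A receives an ordinal score of 6 and risk B a score of 5, yet B's expected loss ($400,000) is twice A's ($200,000). The ordinal arithmetic reverses the ranking because it discards the magnitudes the scores were supposed to represent.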

      A Check of Completeness

      Even if we use the best methods, we can't apply them to a risk if we don't even think to identify it as a risk. If a firm thinks of risk management as “enterprise risk management,” then it ought to be considering all the major risks of the enterprise—not just legal, not just investment portfolio, not just product liability, not just worker safety, not just business continuity, not just security, and so on. This criterion is not, however, the same as saying that risk management can succeed only if all possible risks are identified. Even the most prudent organization will exclude risks that nobody could conceivably have considered.

      The surveys previously mentioned and many “formal methodologies” have developed detailed taxonomies of risks to consider, and each taxonomy is different from the others. But completeness in risk management is a matter of degree. The use of a detailed taxonomy is helpful, but it is no guarantee that relevant risks will be identified.

      More important, risks should not be excluded simply because different parts of the organization speak about risks in completely different languages. For example, cyber risk, financial portfolio risk, safety risk, and project risk do not need to use fundamentally different languages when discussing risk. If project risks are 42, cyber risks are yellow, safety risks are moderate, portfolio risks have a Sharpe Ratio of 1.1, and there is a 5 percent chance a new product will fail to break even, what is the total risk? They can and should be using the same types of metrics so risks can be compared and aggregated across the enterprise.
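      To make the point concrete, the sketch below (all probabilities and loss figures are invented, and each risk is reduced to a simple “happens or not, costs X” event for brevity) expresses several very different risks in the same language: a probability of occurrence and a monetary loss. A simple Monte Carlo simulation can then answer the “total risk” question with quantities such as expected annual loss or the chance that total losses exceed some threshold.

```python
import random

random.seed(42)

# Each risk stated in the same terms: (annual probability, loss if it occurs).
# All figures below are invented for illustration.
risks = {
    "project overrun":   (0.30,  2_000_000),
    "cyber breach":      (0.05, 10_000_000),
    "safety incident":   (0.02,  5_000_000),
    "new product fails": (0.05,  8_000_000),
}

N_TRIALS = 100_000
losses = []
for _ in range(N_TRIALS):
    # In each simulated year, each risk either occurs or does not.
    total = sum(loss for p, loss in risks.values() if random.random() < p)
    losses.append(total)

losses.sort()
expected = sum(losses) / N_TRIALS
p95 = losses[int(0.95 * N_TRIALS)]
prob_over_10m = sum(l > 10_000_000 for l in losses) / N_TRIALS

print(f"Expected annual loss:           ${expected:,.0f}")
print(f"95th-percentile annual loss:    ${p95:,.0f}")
print(f"Chance total loss exceeds $10M: {prob_over_10m:.1%}")
```

      Once every risk is expressed as a probability distribution over losses, questions such as “what is the total risk?” have a meaningful, comparable answer rather than a mix of colors, scores, and ratios.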