
      John Maynard Keynes famously said: “Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist.” Forecasters, consciously or subconsciously, risk becoming the slaves of their intended audience of colleagues, employers, and clients. In other words, seers often fret about the reaction of their audience, especially when their proffered advice proves errant. How the forecaster frames these risks is known as the loss function.

      In some situations, such pressures can be constructive. The first trader I met on my first day working as a Wall Street economist had this greeting: “I like bulls and I like bears but I don't like chickens.” The message was clear: No one wants to hear anything from a two-handed economist. That was constructive pressure for a young forecaster embarking on a career.

      That said, audience pressures might not be so benign. Yet they are inescapable. The ability to deal with them in a field in which periodic costly errors are inevitable is the key to a long, successful career for anyone giving advice about the future.

      8. Statistics courses are not enough. It takes both math and experience to succeed.

      To be sure, many dedicated statistics educators are also scholars working to advance the science of statistics. However, teaching, with its attendant focus on academic research, inevitably leaves less time for building a considerable body of practical experience.

      No amount of schooling could have prepared me for what I experienced during my first week as a Wall Street economist in 1980. Neither a PhD in economics from Columbia University nor a half-dozen years as an economist at the Federal Reserve Bank of New York and the Bank for International Settlements in Basel, Switzerland, had given me the slightest clue as to how to handle my duties as PaineWebber's Chief Money Market Economist.

      At the New York Fed, my ability to digest freshly released labor market statistics, and to write a report about them before the close of business, helped trigger an early promotion. But on PaineWebber's New York fixed-income trading floor, I was expected to digest and opine on those same very important monthly data no more than five minutes after they hit the tape at 8:30 a.m.

      There were other surprises as well. In graduate school, for example, macroeconomics courses usually skipped national income accounting and measurement. These topics were regarded as simply descriptive and too elementary for a graduate-level academic curriculum. Instead, courses focused on the mathematical properties of macroeconomic mechanics and econometrics as the arbiters of economic “truth.” On Wall Street, however, the ability to understand and explain the accounting that underlies any important government or company data report is key to earning credibility with a firm's professional investor clients. In graduate school we did study more advanced statistical techniques. But they were mainly applied to testing hypotheses and studying statistical economic history, not to forecasting per se.

In short, when I first peered into my crystal ball, I was behind the eight ball! As in the game of pool, survival would depend on bank shots that combined skill, nerve, and good luck. Fortunately, experience pays: More seasoned forecasters generally do better. (See Figure 1.3. The methodology for calculating the illustrated forecaster scores is discussed in Chapter 2.)

Figure 1.3 More Experienced Forecasters Usually Fare Better

      *Number of surveys in which forecaster participated. Source: Andy Bauer, Robert A. Eisenbeis, Daniel F. Waggoner, and Tao Zha, “Forecast Evaluation with Cross-Sectional Data: The Blue Chip Survey,” Federal Reserve Bank of Atlanta, Second Quarter 2003.

      In summation, then, it is difficult to be prescient because:

      • Behavioral sciences are inevitably limited.

      • Interpreting current events and history is challenging.

      • Important causal factors may not be quantifiable.

      • Work environments and audiences can bias forecasts.

      • Experience counts more than statistics courses.

      Bad Forecasters: One-Hit Wonders, Perennial Outliers, and Copycats

      Some seers do much better than others in addressing the difficulties cited earlier. But what makes these individuals more accurate? The answer is critical for learning how to make better predictions and for selecting needed inputs from other forecasters. We first review some studies identifying characteristics of both successful and unsuccessful forecasters. That is followed in Chapter 2 by a discussion of my experience in striving for better forecasting accuracy throughout my career.

       What Is “Success” in Forecasting?

      A forecast is any statement regarding the future. With this broad definition in mind, there are several ways to evaluate success or failure. Statistics texts offer a number of conventional gauges for judging how close a forecaster comes to being right over a number of forecast periods. (See an explanation and examples of these measures in Chapter 2.) Sometimes, as in investing, where the direction of change matters more than its magnitude, success can be defined as being right more often than being wrong. Another criterion is whether a forecaster is correct about outcomes that are especially important in terms of the costs of being wrong and the benefits of being right (i.e., forecasting the big one).
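      By way of illustration only – Chapter 2 presents the book's own measures – here is a minimal Python sketch of three such gauges, computed on made-up forecast and outcome series: mean absolute error, root mean squared error, and a simple directional hit rate.

```python
# Illustrative only: the forecast and outcome series below are invented,
# not drawn from any survey discussed in the book.
forecasts = [2.1, 1.8, 3.0, -0.5, 2.4]   # e.g., predicted GDP growth, percent
outcomes  = [2.4, 1.2, 2.6,  0.3, 2.5]   # realized values

n = len(forecasts)
errors = [f - o for f, o in zip(forecasts, outcomes)]

# Mean absolute error: the average size of the misses, ignoring sign.
mae = sum(abs(e) for e in errors) / n

# Root mean squared error: like MAE, but it penalizes big misses more heavily.
rmse = (sum(e * e for e in errors) / n) ** 0.5

# Directional hit rate: how often the forecast at least got the sign of the
# outcome right -- the "right more often than wrong" standard of investing.
hits = sum(1 for f, o in zip(forecasts, outcomes) if (f >= 0) == (o >= 0))
hit_rate = hits / n

print(f"MAE: {mae:.2f}   RMSE: {rmse:.2f}   Hit rate: {hit_rate:.0%}")
```

      A forecaster can score well on one gauge and poorly on another – a point the next paragraph takes up – so no single number settles the question of success.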

      Over a forecaster's career, success will be judged by all three criteria – accuracy, frequency of being correct, and the ability to forecast the big one. And, as we will see, it's rare to be highly successful in addressing all of these challenges. The sometimes famous forecasters who nail the big one are often neither accurate nor even directionally correct most of the time. On the other hand, the most reliable forecasters are less likely to forecast rare and very important events.

       One-Hit Wonders

      Reputations often are based on an entrepreneur, marketer, or forecaster “being really right when it counted most.” Our society lauds and rewards such individuals. They may attain a guru status, with hordes of people seeking and following their advice after their “home run.” However, an impressive body of research suggests that these one-hit wonders are usually unreliable sources of advice and forecasts. In other words, they strike out a lot. There is much to learn about how to make and evaluate forecasts from this phenomenon.

      In the decade since it was published in 2005, Philip E. Tetlock's book Expert Political Judgment: How Good Is It? How Can We Know? has become a classic in the development of standards for evaluating political opinion.17 In assessing predictions from experts in different fields, Tetlock draws important conclusions for successful business and economic forecasting and for selecting appropriate decision-making/forecasting inputs. For instance:

       “Experts” successfully predicting rare events were often wrong both before and after their highly visible success. Tetlock reports that “When we pit experts against minimalist performance benchmarks – dilettantes, dart-throwing chimps, and assorted extrapolation algorithms – we find few signs that expertise translates into greater ability to make either ‘well-calibrated’ or ‘discriminating’ forecasts.”

       The one-hit wonders can be like broken clocks. They were more likely than most forecasters to call the occasional extreme event, but only because they made extreme forecasts more frequently. (A simple simulation sketched after this list illustrates the effect.)

       Tetlock's “hedgehogs” (generally inaccurate forecasters who manage to correctly forecast some hard-to-forecast rare event) have a very different approach to reasoning than his more reliable “foxes.” For example, hedgehogs often used one big idea or theme to explain a variety of occurrences. However, “the more eclectic foxes knew many little things and were content to improvise ad hoc solutions to keep pace with a rapidly changing world.”

       While hedgehogs are less reliable as forecasters, foxes may be less stimulating analysts. The former encourage out-of-the-box thinking; the latter tend to be less decisive, two-handed economists.
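      To make the broken-clock point concrete, here is a small, purely hypothetical Python simulation; the parameters are invented for illustration and are not Tetlock's data. Outcomes are usually mild but occasionally extreme, a “moderate” forecaster always predicts the typical value, and an “extremist” always predicts a large move.

```python
import random

random.seed(0)

# Hypothetical world: outcomes are usually mild (std dev 1), but 5 percent
# of the time they come from a wider, "crisis" distribution (std dev 3).
def draw_outcome():
    return random.gauss(0, 3) if random.random() < 0.05 else random.gauss(0, 1)

outcomes = [draw_outcome() for _ in range(10_000)]

# The moderate forecaster always predicts the typical value (0.0);
# the extremist (the "broken clock") always predicts a large move (+4.0).
moderate = [0.0] * len(outcomes)
extremist = [4.0] * len(outcomes)

def mean_abs_error(preds):
    return sum(abs(p - o) for p, o in zip(preds, outcomes)) / len(outcomes)

def big_calls(preds, threshold=3.0):
    # Extreme outcomes that the forecaster also called as extreme.
    return sum(1 for p, o in zip(preds, outcomes)
               if o > threshold and p > threshold)

print(f"moderate : MAE {mean_abs_error(moderate):.2f}, "
      f"big calls {big_calls(moderate)}")
print(f"extremist: MAE {mean_abs_error(extremist):.2f}, "
      f"big calls {big_calls(extremist)}")
```

      The extremist catches every big event, but only because he always forecasts extremes; the price is a routinely poor everyday forecast, which is exactly the broken-clock effect.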

      Tetlock's findings