and economic forecasts. Jerker Denrell and Christina Fang have provided such illustrations in their 2010 Management Science article titled “Predicting the Next Big Thing: Success as a Signal of Poor Judgment.”18 They conclude that “accurate predictions of an extreme event are likely to be an indication of poor overall forecasting ability, when judgment or forecasting ability is defined as the average level of forecast accuracy over a wide range of forecasts.”

      Denrell and Fang assessed the forecasting accuracy of professional forecasters participating in the Wall Street Journal's semi-annual forecasting surveys between July 2002 and July 2005. (Every six months, at the start of January and July, around 50 economists and analysts provided six-month-ahead forecasts of key economic variables, such as GNP, inflation, unemployment, interest rates, and exchange rates.) Their study focused on the overall accuracy of forecasters projecting extreme events, defined as results either 20 percent above or 20 percent below the average forecast. For each forecaster, they compared overall accuracy across all of the forecast variables with that forecaster's accuracy in projecting the defined extreme events.
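      To make the design concrete, here is a minimal sketch in Python of how one might flag “extreme” forecasts and compare each forecaster's overall accuracy with the accuracy of those extreme calls. The panel of numbers is hypothetical, and treating a forecast as extreme when it lies more than 20 percent above or below the average forecast is only one reading of the definition above; none of this is Denrell and Fang's code or data.

```python
import numpy as np

# Hypothetical panel in the spirit of the study design described above
# (illustrative numbers only). Rows are forecasters; columns are survey
# rounds for a single variable such as GDP growth.
forecasts = np.array([
    [2.0, 2.5, 1.8, 3.6],   # forecaster A
    [2.2, 2.4, 2.0, 2.9],   # forecaster B
    [1.1, 3.4, 2.6, 4.1],   # forecaster C
])
actuals = np.array([2.1, 2.6, 1.9, 3.0])

consensus = forecasts.mean(axis=0)  # average forecast in each round
# Flag a forecast as "extreme" when it sits more than 20 percent above or
# below the average forecast (an assumed operationalization of the 20% rule).
extreme = np.abs(forecasts - consensus) > 0.20 * np.abs(consensus)

abs_err = np.abs(forecasts - actuals)
for i, name in enumerate("ABC"):
    overall_mae = abs_err[i].mean()  # accuracy over ALL of this forecaster's calls
    extreme_mae = abs_err[i][extreme[i]].mean() if extreme[i].any() else float("nan")
    print(f"Forecaster {name}: overall MAE={overall_mae:.2f}, "
          f"MAE on own extreme calls={extreme_mae:.2f}, "
          f"share of extreme calls={extreme[i].mean():.0%}")
```

      With a real panel, the comparison of interest is simply whether the forecasters with the lowest error on extreme calls also have the lowest error overall; the study found they generally do not.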

      Forecasters who were more accurate than the average forecaster in predicting extreme outcomes were less accurate in predicting all outcomes. Also, for the prognosticators who were comparatively more accurate in predicting extreme outcomes, extreme forecasts made up a higher percentage of their overall forecasts. In the authors' assessment, “Forecasting ability should be based on all predictions, not only a selected subset of extreme predictions.”

      What do these results tell us about the characteristics of forecasters with these two different types of success? Denrell and Fang offer the following four observations:

      1. Extreme outcomes are, by definition, rarer than typical outcomes. Therefore, extreme outcomes are more likely to be successfully predicted by forecasters who rely on intuition or who emphasize a single determinant than by forecasters extrapolating from a more comprehensive reading of history.

      2. Forecasters who happen to be correct about extreme outcomes become overconfident in their judgment. The authors cite research indicating that securities analysts who have been relatively more accurate in predicting earnings over the previous four quarters tend to be less accurate compared to their peers in subsequent quarters.19

      3. Forecasters can be motivated by their past successes and failures. Forecasters whose bold past forecasts have missed the mark may be tempted to repair their tarnished reputations with more bold forecasts. (Denrell and Fang cite research by Chevalier and Ellison and by Leone and Wu.20,21) On the other hand, successful forecasters might subsequently move closer to consensus expectations in order to avoid the risk that incorrect bold forecasts would pose to their reputations and track records. (The authors cite research by Prendergast and Stole.22) In other words, in the world of forecasting, both successes and failures can feed on themselves.

      4. Some forecasters may be motivated to go out on a limb if the rewards for being either relatively right or highly visible with such forecasts outweigh the reputational risk of building up a bad track record with many long shots that don't pan out. This may be especially so if forecasters perceive that the attention garnered by comparatively unique projections will make them more commercially successful. However, Denrell and Fang cite research suggesting that securities analysts are more likely to be terminated if they make bold and inaccurate forecasts.23

       Who Is More Likely to Go Out on a Limb?

      It can be dangerous for a forecaster to go out on a limb too often, especially if the proverbial limb usually gets sawed off. But why do some forecasters make that choice? Do we know who they are, so that we can consider the source when hearing their forecasts?

      Researchers have examined popular, well-publicized forecast surveys to identify who is most likely to go against the grain and what the consequences are. Among the surveys studied are those appearing in Blue Chip Economic Indicators, the Wall Street Journal, and Business Week.

      Karlyn Mitchell and Douglas Pearce have examined six-month-ahead interest rate and foreign exchange rate forecasts appearing in past Wall Street Journal forecaster surveys.24 They asked whether a forecaster's employment influenced various forecast characteristics. Specifically, they compared forecasters employed by banks, securities firms, finance departments of corporations, econometric modelers, and independent firms.

      Their research indicated that economists with firms bearing their own name deviated more from the consensus than did other forecasters of interest rates and foreign exchange rates. In the authors' view, such behavior could have been motivated by the desire to gain publicity. (Note: Large sell-side firms employing economists and analysts have the financial means for large advertising budgets and are presumably not as hungry for free publicity.)

      While Mitchell and Pearce studied the Wall Street Journal surveys, Owen Lamont examined participants in Business Week's December surveys of forecasts for GDP growth, unemployment, and inflation in successive years.25 His statistical results indicated that economists who own their consulting firms are more likely to deviate from the consensus and are relatively less accurate in forecasting growth, unemployment, and inflation. His findings indicated to him that some forecasters are “optimizing the market value of their reputations” instead of “acting to minimize mean squared error.”

David Laster, Paul Bennett, and In Sun Geoum reviewed U.S. real GDP forecast behavior of economists participating in Blue Chip Economic Indicator surveys in the 1976 to 1995 period.26 They conclude that “forecasters' deviations from the consensus are related to the types of firms for which they work” and illustrate that “professional forecasting has a strong strategic component.” Studying mean absolute deviation (MAD) from the consensus, they report that “independent forecasters with firms bearing their own names tend to make unconventional forecasts.” (See Table 1.5.)

Table 1.5 Average Forecast Deviations from Consensus GDP Growth Forecasts 1977–1996

      Source: David Laster, Paul Bennett, and In Sun Geoum, “Rational Bias in Macroeconomic Forecasts,” Federal Reserve Bank of New York Research Papers, July 1996.
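      As a rough illustration of the deviation measure used in such studies, the following Python sketch computes each forecaster's mean absolute deviation (MAD) from the consensus. The numbers and forecaster labels are hypothetical and are not drawn from the Laster, Bennett, and Geoum data.

```python
import numpy as np

# Hypothetical panel: each row is a forecaster, each column is one survey's
# GDP growth forecast (illustrative numbers only).
forecasts = np.array([
    [2.4, 2.8, 1.9],   # bank economist
    [2.5, 2.7, 2.0],   # corporate economist
    [3.3, 1.8, 2.9],   # independent forecaster with own-name firm
])

consensus = forecasts.mean(axis=0)                   # consensus = average forecast per survey
mad = np.abs(forecasts - consensus).mean(axis=1)     # mean absolute deviation per forecaster

for value, label in zip(mad, ["bank", "corporate", "independent"]):
    print(f"{label}: MAD from consensus = {value:.2f} percentage points")
```

      In this made-up example the own-name independent shows the largest MAD, which is the pattern the study reports for the actual survey data.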

       Consensus Copycats

      Sticking with the consensus forecast is the opposite of going out on a limb. One motivation for doing so is that there's safety in numbers. Some forecasters might think that they'll never be fired for being wrong if so many others were also incorrect. But is it wise to copy the consensus?

      When the economics profession is criticized for inaccurate forecasts, the judgments reflect comparisons of actual outcomes to consensus forecasts. But the reason for copying consensus economic forecasts is that the consensus average of all forecasts usually outperforms any individual forecaster.27
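      A toy simulation, under an assumed error model in which each forecaster's prediction equals the true outcome plus a personal bias and idiosyncratic noise, illustrates this claim; the parameters are arbitrary and are not taken from the studies cited here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_rounds = 50, 200
truth = rng.normal(2.5, 1.0, n_rounds)                 # "actual" outcome each round

# Assumed model: forecast = truth + personal bias + idiosyncratic noise.
bias = rng.normal(0.0, 0.3, (n_forecasters, 1))
noise = rng.normal(0.0, 0.8, (n_forecasters, n_rounds))
forecasts = truth + bias + noise

consensus = forecasts.mean(axis=0)                     # average of all forecasts each round
individual_mae = np.abs(forecasts - truth).mean(axis=1)
consensus_mae = np.abs(consensus - truth).mean()

print(f"median individual MAE: {np.median(individual_mae):.2f}")
print(f"consensus MAE:         {consensus_mae:.2f}")
print(f"share of individuals beating the consensus: "
      f"{(individual_mae < consensus_mae).mean():.0%}")
```

      Under these assumptions the consensus error is far smaller than the typical individual error, because the personal biases and noise largely cancel in the average.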

      Why? Because when we consider all forecasts, we incorporate more input information than is represented in any individual pronouncement. Also, the averaging process can iron out individual forecasters' biases. In addition to their relative accuracy, consensus economic forecasts also have the advantage of often being free of charge and time-saving. However, relying on consensus forecasts does have some important drawbacks:

      • First, consensus economic growth forecasts almost always fail at critical turning points in the business cycle – especially when heading into a recession.28 For companies and investors, not being prepared for a recession can often turn success into failure.

      • Second, if one's objective is to outperform one's peers, sticking with the consensus


18. Jerker Denrell and Christina Fang, “Predicting the Next Big Thing: Success as a Signal of Poor Judgment,” Management Science 56, no. 10 (2010): 1653–1667.

19. G. Hilary and L. Menzly, “Does Past Success Lead Analysts to Become Overconfident?” Management Science 52, no. 4 (2006): 489–500.

20. J. Chevalier and G. Ellison, “Risk Taking by Mutual Funds in Response to Incentives,” Journal of Political Economy 105, no. 6 (1997): 1167–1200.

21. Leone and Wu, “What Does It Take?”

22. Canice Prendergast and Lars Stole, “Impetuous Youngsters and Jaded Old-Timers: Acquiring a Reputation for Learning,” Journal of Political Economy 104, no. 6 (1996): 1105–1134.

23. H. Hong, J. Kubik, and A. Solomon, “Securities Analysts' Career Concerns and Herding of Earnings Forecasts,” RAND Journal of Economics 31 (2000): 122–144.

24. Karlyn Mitchell and Douglas K. Pearce, “Professional Forecasts of Interest Rates and Exchange Rates: Evidence from the Wall Street Journal's Panels of Economists,” North Carolina State University Working Paper 004, March 2005.

25. Owen A. Lamont, “Macroeconomic Forecasts and Microeconomic Forecasters,” Journal of Economic Behavior & Organization 48 (2002): 265–280.

26. Laster, Bennett, and Geoum, “Rational Bias in Macroeconomic Forecasts.”

28. Clive W. J. Granger, “Can We Improve the Perceived Quality of Economic Forecasts?” Journal of Applied Econometrics 11 (1996): 455–473.