How likely is it that, someday, we will look back and pity today's economists? Will we empathize with social scientists who were groping their way through the dark, or will we instead resent the harm that was done? As with medicine in the Middle Ages, it is about time we faced a tough question: is our presumed knowledge bringing societies more harm than good?

First, let us define the nature of our problem: modern economies are notoriously complex machines, full of uncertainties, feedback loops and dynamic changes that render futile most attempts to find definitive or predictive knowledge. In contrast with the realm of physics, economists have to deal with decentralized, frequently “irrational” and self-reinforcing phenomena. Precisely because of this lack of fixed natural laws, our attempts to describe normatively, in models, the way economic agents will behave are generally a big leap of faith… Most of us acknowledge the limitations of such modelling and understand that it can, at best, yield some general intuition of where things should go, but almost never precise, predictive results.

Despite the immense difficulty of our subject, it is still extremely common to watch a very diverse crowd of economists and pundits who pretend to have what it takes to tame uncertainty and tell us the truth about the future. They routinely give us one-, five- or even ten-year forecasts of the price of oil, the USD/EUR exchange rate, GDP, inflation, and even minute predictions of the price of that trendy stock. Vanity of vanities, they often do not even make explicit the probabilistic nature of their expectations, preferring instead apparently objective wordings such as “fair price”, “our call” or “valuation”!

Alchemists of Modern Times?

It is now commonly accepted that, during the Middle Ages, the discipline of medicine lacked the basic knowledge and scientific background to provide consistent results, or even to avoid causing harm. Superstition abounded, and a dangerous mix of religion, astrology and anecdotal beliefs yielded ambiguous results at best. Many illnesses were attributed to an excess of blood in the body, and the most common treatment was bloodletting (the choice of a major or minor artery depended upon the amount of blood believed to be in excess). In such cases, it is a near certainty that those who were supposed to cure ended up causing more harm than good.

Could it be that some fields of a so-called “science” such as modern economics are as imprecise, and even as harmful, as alchemy was in the Middle Ages? You do not have to take my word for it; in fact, you should not, as that would be purely anecdotal or inductive “knowledge”, the very basis of some of the problems we face as economists. For the answer we can look to more robust, methodologically sound results.

Philip E. Tetlock

Philip Tetlock is an American-Canadian political scientist with more than 200 peer-reviewed articles on the subject of forecasting in the social sciences. In his earlier work, Tetlock ran forecasting tournaments that involved more than 200 experts (ranging from chief economists to government officials and professors) in order to assess the accuracy and consistency of their predictions.

Tetlock first eliminated the common imprecisions that make most “forecasts” unaccountable, such as predictions without deadlines or with vaguely specified outcomes. After collecting more than 28,000 predictions in total, his conclusions were staggering: as in the now famous and somewhat unjust metaphor he once used, the average expert is not significantly more accurate than a dart-throwing chimpanzee. In other words, the average expert does only slightly better than pure luck, which is much worse than what a simple extrapolation algorithm could accomplish.
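To make that benchmark concrete, here is a minimal sketch of what such a “simple extrapolation algorithm” could look like. This is only an illustration, not the baseline used in Tetlock's tournaments, and the data are invented:

```python
import numpy as np

def persistence_forecast(series):
    """Naive baseline: the next value simply equals the last observed one."""
    return series[-1]

def linear_extrapolation(series, horizon=1):
    """Fit a straight line to the observed series and extend it forward."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return intercept + slope * (len(series) - 1 + horizon)

# Invented quarterly GDP growth rates, in percent
history = np.array([0.5, 0.7, 0.6, 0.8, 0.7])
print(persistence_forecast(history))            # 0.7
print(round(linear_extrapolation(history), 2))  # ~0.81
```

Mindless as they are, baselines like these at least have an unambiguous record that can be checked, which is more than can be said for many published forecasts.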

Tetlock's dismal results were especially pronounced for the most famous and renowned forecasters: the more media coverage a particular expert gets, the more likely he or she is to be off the mark. But why would that be so?

Foxes and Hedgehogs

In his 2015 book “Superforecasting”, Philip Tetlock draws a much more optimistic, although cautious, picture of our predictive abilities and of our capacity to improve good judgement and accuracy. Surprisingly to some, the most famous experts on average fell prey to at least two basic mistakes that “superforecasters” avoid altogether: 1. they updated their predictions far too infrequently (perhaps because of a stronger confirmation bias and the social cost of admitting they were somewhat “wrong” in the first place), and 2. they tended to exhibit more of a “hedgehog” behaviour, as opposed to those who act more like “foxes”, according to Isaiah Berlin’s dichotomy in his classic essay “The Hedgehog and the Fox”.

In that now classic essay, Berlin draws inspiration from a line by the Greek poet Archilochus: “the fox knows many things, but the hedgehog knows one big thing”. From it, two types of public intellectuals can be distinguished: the hedgehogs, such as Plato, Nietzsche or Hegel, who tend to read the world through one defining idea or paradigm, and the foxes, such as Aristotle, Goethe or Shakespeare, who see the world through a multifaceted lens, drawing on varied and complex experience to make sense of the world around them.

What interests us here is that most famous experts very plausibly refrain from changing their minds and predictions not only because their “public image” is at stake, but also because they may have acquired fame and public exposure in the first place precisely thanks to the extreme nature of their “analysis”. The (financial) media is not so much a business of accurate forecasting as a business of portraying opinions that seem to eliminate the inextricable uncertainties of the world.

Think for a minute about your reaction when you hear that “there is a 71% probability that Apple won’t be able to repeat in 2016 its revenue growth of the last year”. How do you make sense of that 71%? How different would it be to hear that there is only a 70% chance? And what does it ultimately mean for the stock price, given that revenues might fall while profits rise due to cost decreases, among many other elements? There is a lot of uncertainty left to bake into the cake even if this prediction is right.

Now, think for a second about a different statement, by a different famous pundit, Mr. Confident, who states that “Apple stock is going to fall significantly because they simply can’t repeat these growth numbers forever”. Wow. Apparently a very powerful and precise statement. But really, what does the word “significantly” even mean? A 10% fall is significant, but he might mean 20 to 40%. And when speaking about growth, does he mean revenue, profit or volume growth? Finally, he clearly said that the company cannot grow these numbers forever, but eternity can be a pretty long time. What is the intended deadline of this forecast? Maybe none.

In the media, all this imprecision and hedgehog behaviour works very much in favour of Mr. Confident’s fame. It is also an almost sure recipe for inaccurate and inconsistent forecasts. It makes Mr. Confident a “king without clothes”, but who cares? Most people do not even remember forecasts after they are made, and media outlets never ask these pundits for their track records.

Towards consistent improvement

If the domain of economics is to escape the fate of doing more harm than good, we clearly need a better understanding of our forecasting abilities and limitations. If we are honest in analysing the data, we may be surprised that economists still have difficulty even “predicting” the present. So what can we say about the future?

Even the world’s policymakers, such as central bankers, have a hard time forming an opinion about the current state of an economy, despite their access to extensive privileged information. There is an emerging field in econometrics called “nowcasting”, developed, among others, by Mr Domenico Giannone, former professor at the SBS-EM, which devotes intricate statistical methods exclusively to making sense of the present state of the economy. Such is the complexity of economics in the real world: very different indeed from a weather forecast, whose accuracy one can check simply by looking out of the window.
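For a taste of what this means in practice, here is a deliberately naive sketch of a “bridge equation” nowcast: regressing quarterly GDP growth on monthly indicators already published for the current quarter. This is only an illustration of the idea, not the dynamic factor methods actually used in the nowcasting literature, and all data are invented:

```python
import numpy as np

# Past quarters: [industrial production growth, survey index] -> GDP growth
X = np.array([[0.4, 101.0], [0.6, 103.0], [0.2, 99.0], [0.5, 102.0]])
y = np.array([0.5, 0.7, 0.3, 0.6])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Indicators observed so far in the current, unfinished quarter
x_now = np.array([1.0, 0.3, 100.0])
print(f"Nowcast of current-quarter GDP growth: {x_now @ beta:.2f}%")
```

Even this toy version makes the point: estimating the present is itself a statistical inference problem, not a glance out of the window.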

Now that we can grasp the true challenges of forecasting and even, more modestly, of “predicting the present”, we might wonder how to avoid the pitiful fate of being a knowledgeable economist who knows a lot about many things but is only good at predicting the past (or of falling for the “narrative fallacy”, as Nassim Taleb would put it). After all, if we are not economic historians, we need to be able to say something about the future.

Some of the training routines for good judgement and forecasting are almost obvious (although frequently ignored) and already familiar to those acquainted with Bayesian methods. I will mention some of the conclusions (and leave others to be discovered by the curious reader) that emerge from Tetlock’s work and tournaments, all of which can be found in his 2015 book “Superforecasting”:

1. Triage: focus on questions in which hard work can pay off.

2. Break seemingly intractable problems into tractable, smaller sub-problems.

3. Understand and accept uncertainty: no complex, multi-actor phenomenon has a 100% chance of turning out this or that way. If you believe it does, try to get the “outside view” and find ways in which things could unfold differently.

4. Always try to falsify your beliefs: since Karl Popper, it is standard good scientific attitude to understand what kind of evidence would disprove your main working hypothesis.

5. Be humble where evidence does not allow for even a small confidence level: such is the case with long-term (i.e., more than three years down the road) forecasting. Knowing that you don’t know is better than operating on false beliefs.

6. Update frequently: good old Bayesian updating of your beliefs and their attributed probabilities is to good forecasting what brushing your teeth is to proper hygiene (see the first sketch after this list). There is nothing shameful in changing beliefs and positions. As John Maynard Keynes supposedly once said: “When the facts change, I change my mind. What do you do, sir?”

7. Finally, keep a record of your forecasts: our memories are terrible at… remembering our old beliefs and positions. Keeping track of our forecasts and understanding the reasoning behind our mistakes or accurate guesses is of paramount importance and the surest path towards improvement (see the second sketch below).
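A minimal sketch of the Bayesian updating mentioned in item 6, with invented numbers: you hold a 30% belief that an event will occur, and a piece of news arrives that you judge twice as likely in a world where the event is coming than in one where it is not:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of the event after the evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

prior = 0.30  # initial belief that, say, a rate hike is coming this year
posterior = bayes_update(prior, likelihood_if_true=0.6, likelihood_if_false=0.3)
print(f"Updated belief: {posterior:.0%}")  # ~46%: a measured, not wild, revision
```

Note the moral: the evidence moves the forecast from 30% to about 46%, a meaningful but disciplined shift, not a jump to certainty.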
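And for item 7, the standard scoring tool in Tetlock’s tournaments is the Brier score: the mean squared difference between the probabilities you gave and what actually happened (0 is perfect; lower is better). The forecast log below is, of course, hypothetical:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities and
    realized outcomes (1 if the event occurred, 0 if not)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A simple forecast log: (question, probability given, what happened)
log = [
    ("Oil above $60 by year end", 0.70, 1),
    ("ECB cuts rates in Q3",      0.20, 0),
    ("Apple revenue grows YoY",   0.71, 0),
]
probs    = [p for _, p, _ in log]
outcomes = [o for _, _, o in log]
print(f"Brier score: {brier_score(probs, outcomes):.3f}")
```

Kept over time, a log like this is exactly the track record that media pundits are never asked for.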

Putting all these habits into practice may be hard, even unnatural, for most of us, but it appears to be the only way to avoid the all-too-common pitfalls of our need to forecast and make sense of an uncertain world. On top of that, it would be good to add a drop of humility to our forecasts and university degrees, as advocated by the public intellectual and theologian Reinhold Niebuhr:

“I ask for the courage to change the things that can be changed, serenity to accept what can’t be changed and wisdom to know the difference”.

Photo credit: Dennis Jarvis;  http://bit.ly/1Q9uWEb

Photo credit: Todd Mecklem; http://bit.ly/1ShCf4p
