Efficient markets as normative systems

Recently, I came across this outstanding interview with Eugene Fama published by The Market / NZZ. Besides its main subject (the inability of central banks to control inflation), the interview is interspersed with gripping assertions about the limits of knowledge, such as the following:

Bubbles are things people see in hindsight. They don’t identify them in advance. Sure, you can look at the behavior of prices, and you may be able to identify cases where they are too high. But if you only look back and say: «Oh, stocks went down a lot, so that was a bubble», then that’s 20/20 hindsight. At the time, there was no evidence that there was a bubble.

I don’t say markets are completely efficient, but they’re efficient for most questions that I address. Models are never a 100 % true. If they were, we would call them reality, not models. But for almost all purposes, market efficiency is a very good approximation.

The real question is: How do you pick Warren Buffett? The way you pick him is after the fact, since he has done very well. Now, suppose I take 100,000 investors and say: Let’s let them run for 30 years and pick out the winner. Because you roll the dice so many times, even if none of them is a good or bad investor, many investors will do well and many will do poorly purely by chance. Statistically there is also going to be a big winner, but solely due to chance. In other words: There will be extremely good outcomes and extremely bad outcomes, but you just can’t tell who is successful because of luck and who because of skill.
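A quick way to see Fama's point is to simulate it. The sketch below is my own illustration, not something from the interview; the return distribution and the 20% volatility figure are arbitrary assumptions. It gives 100,000 zero-skill investors a purely random return each year for 30 years and then compares the worst, median, and best outcomes.

```python
import random

# Hypothetical illustration of Fama's thought experiment: 100,000 investors
# with zero skill, each earning a purely random return every year for 30 years.
# The return distribution (0% mean, 20% annual volatility) is an arbitrary assumption.

random.seed(42)

N_INVESTORS = 100_000
N_YEARS = 30

final_wealth = []
for _ in range(N_INVESTORS):
    wealth = 1.0
    for _ in range(N_YEARS):
        wealth *= 1.0 + random.gauss(0.0, 0.20)  # this year's random return
        wealth = max(wealth, 0.0)                # wealth cannot go below zero
    final_wealth.append(wealth)

final_wealth.sort()
print(f"worst:  {final_wealth[0]:.2f}")
print(f"median: {final_wealth[N_INVESTORS // 2]:.2f}")
print(f"best:   {final_wealth[-1]:.2f}")
```

Run enough investors through enough coin flips and, exactly as Fama says, the dispersion of outcomes alone guarantees some spectacular "winners", with no way to tell luck from skill by looking at the winner's record.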

Fama's remarks echo the distinction Friedrich Hayek drew between relative and absolute limits to explanation (The Sensory Order, 1952):

8.67. Apart from these practical limits to explanation, which we may hope continuously to push further back, there also exists, however, an absolute limit to what the human brain can ever accomplish by way of explanation -a limit which is determined by the nature of the instrument of explanation itself, and which is particularly relevant to any attempt  to explain particular mental processes.

8.68. If our account of the process of explanation is correct, it would appear that any apparatus or organism which is to perform such operations must possess certain properties determined by the properties of the events which it is to explain. If explanation involves that kind of joint classification of many elements which we have described as “model-building”, the relation between the explaining agent and the explained object must satisfy such formal relations as must exist between any apparatus of classification and the individual objects which it classifies (Cf. 5.77-5.91).

5.90. The model building by such an apparatus of classification simplifies the task and extends the scope of successful adaptation in two ways: it selects some elements from a complex environment as relevant for the prediction of events which are important for the persistence of the structure, and it treats them as instances of classes of events. But while in this way a model building apparatus  (and particularly one that can be constantly improved by learning) is of much greater efficiency than could be any more mechanical apparatus which contained, as it were, a few fixed models of typical situations, there will clearly still exist definite limits to the extent to which such a microcosm can contain an adequate reproduction of the significant factors of the macrocosm.

8.69. The proposition which we shall attempt to establish is that any apparatus of classification must possess a structure of a higher degree of complexity than is possessed by the objects which it classifies; and that, therefore, the capacity of any explaining agent must be limited to objects with a structure possessing a degree of complexity lower than its own. […]

Being confronted with an absolute limit to explanation does not mean that chaos lies beyond that limit. On the contrary, what lies beyond the scope of our models is a complex order (in this case, efficient markets): a kind of order whose “[…] existence need not manifest itself to our senses but may be based on purely abstract relations which we can only mentally reconstruct” (F. A. Hayek, “Law, Legislation, and Liberty”, Chapter II; 1973), and whose explanation therefore faces not merely practical limits but absolute ones. In this field, for example, “passive investing” would be homologous to law-abiding behaviour, or to following the moral maxim that honesty is the best policy. Of course, for such systems (economic, legal, or moral) to evolve, there have to be agents who set the “prices”: people who trade in the short term, or who engage in innovative behaviours that establish a new legal precedent or a new habit.

But for this innovation to happen, agents must be able to rely on a framework of stable regularities (usually called abstract or spontaneous orders) on which they can draw their own “maps”, form new expectations, and coordinate their plans with other agents. Perhaps, then, we have already spent enough ink on the economic way of looking at the law, and it is time to start thinking of markets as complex normative systems.

The Paradox of Prediction

In one of famous investor Howard Marks’s memos to the clients of Oaktree Capital, the eccentric and successful fund manager hits on an interesting aspect of market prediction and probability alike. In 1993, Marks wrote:

Being ‘right’ doesn’t lead to superior performance if the consensus forecast is also right. […] Extreme predictions are rarely right, but they’re the ones that make you big money.

Let’s unpack this.

In economics, the recent past is often a good guide to the near future: if GDP growth was 3% last quarter, it is likely to be around 3% this quarter as well. Similarly, since CPI growth was 2.4% last year and 2.1% the year before, a reasonable forecast for CPI growth in 2019 is north of 2%.

If you forecast by extrapolation like this, you will be right most of the time, but you won’t make any money, in betting markets or in financial markets. That is, Marks explains, because the consensus among forecasters also hovers around extrapolations from the recent past (give or take), and so buyers and sellers in these markets price assets accordingly. We don’t have to go as far as the semi-strong version of the Efficient Market Hypothesis, which holds that all publicly available information is already incorporated into security prices, but the tendency is the same. Consider two stylized cases:

  • If you forecasted 5% GDP growth when almost everyone else forecasted 3%, and the S&P500 then rose by, say, 50% when everyone expected +5%, you presumably made a lot more money than most, through higher S&P500 exposure or insane bullish leverage.
  • If you forecasted -5% GDP growth when almost everyone else forecasted 3%, and the S&P500 fell 40% when everyone expected +5%, you presumably made a lot more money than most, by staying out of the S&P500 entirely (holding cash, bonds, gold, etc.), as the rough payoff sketch after this list illustrates.
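To make the asymmetry concrete, here is a rough payoff sketch for the two cases above. All figures are the made-up numbers from the bullets, plus a hypothetical 2x leverage for the bullish contrarian; cash is assumed to return zero.

```python
def portfolio_return(market_return: float, equity_exposure: float) -> float:
    """Return of a portfolio with `equity_exposure` in the index and the rest
    in cash (cash assumed to return 0% for simplicity)."""
    return equity_exposure * market_return

# Case 1: the market rises 50% while the consensus expected roughly +5%.
consensus_bull  = portfolio_return(0.50, 1.0)   # ordinary long exposure
contrarian_bull = portfolio_return(0.50, 2.0)   # hypothetical 2x leveraged bull

# Case 2: the market falls 40% while the consensus expected roughly +5%.
consensus_bear  = portfolio_return(-0.40, 1.0)  # ordinary long exposure
contrarian_bear = portfolio_return(-0.40, 0.0)  # all cash, out of the market

print(f"bull case: consensus {consensus_bull:+.0%}, contrarian {contrarian_bull:+.0%}")
print(f"bear case: consensus {consensus_bear:+.0%}, contrarian {contrarian_bear:+.0%}")
```

The contrarian positions only pay off because the realized outcome deviated sharply from the consensus; in the far more common years when the extrapolated forecast is roughly right, the same positions underperform or merely match the crowd.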

But if you look at the full track record of people who predicted radically divergent outcomes, you’ll find that they predict radically divergent outcomes quite frequently, and so they are spectacularly wrong most of the time, since extrapolation is usually correct. Occasionally, though, they do get it right. Hammering the point home, Marks says:

the fact that he was right once doesn’t tell you anything. The views of that forecaster would not be of any value to you unless he was right consistently. And nobody is right consistently in making deviant forecasts.

The forecasts that do make you serious money are those that radically deviate from the extrapolated past and/or the current consensus. Once in a while (call it shocks, bubble mania, or creative destruction) something large happens, and real-world outcomes land far from the consensus predictions. If your forecast led you to act accordingly, and you happened to be right, you stand to make a lot of money.

Predicting the future development of markets thus puts us in an interesting position: high-probability forecasts that extrapolate the recent past are fairly useless, since they cannot make an investor any money; low-probability forecasts of radically deviant change can make you money, but there is no way to identify them in advance among the quacks, charlatans, and permabears. Indeed, the kind of people who accurately call radically deviant outcomes are the ones who make such radically deviant projections frequently, and whose hit rate in forecasting the future is therefore close to zero.

Provocatively enough, Marks concludes that forecasting is not valuable, but I think the lesson applies more widely, to everyone claiming to have predicted certain events (market collapses, financial crises, etc.).

No, you didn’t. You’re either a consistently bullish over-optimist, a consistent doomsayer, or you got lucky; correctly calling 1 outcome out of 647 attempts is not indicative of forecasting skill, and correctly calling 1 outcome on 1 attempt is called ‘luck’, even if it seems like an impressive feat. Indeed, once we realize that there are literally thousands of people making such calls all the time, ex post there will invariably be somebody who *predicted* it.
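The “somebody will have predicted it” point is just base-rate arithmetic. Here is a minimal sketch with entirely made-up numbers: assume each deviant forecaster has a 2% chance per year of correctly calling a crash by pure luck, and that forecasters are independent. The probability that at least one of them looks prescient after the fact rises towards certainty very quickly.

```python
def prob_someone_called_it(p_lucky_hit: float, n_forecasters: int) -> float:
    """Probability that at least one of n independent, zero-skill forecasters
    'called' the event by chance: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_lucky_hit) ** n_forecasters

# Assumed, purely illustrative number: 2% chance per forecaster of a lucky hit.
for n in (10, 100, 1_000, 5_000):
    print(f"{n:>5} forecasters: P(someone 'predicted' it) = {prob_someone_called_it(0.02, n):.1%}")
```

With a few thousand pundits making deviant calls every year, an ex post “correct prediction” is close to guaranteed, which is precisely why it carries so little information about skill.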

Stay skeptical.