Some implications of the uses of the terms “knowledge” and “information” in F. A. Hayek’s works.

In 1945, Friedrich A. Hayek published “The Use of Knowledge in Society” in The American Economic Review. It is one of his most celebrated essays, both at the time of its appearance and today, and, together with other studies later compiled in the volume Individualism and Economic Order (1948), probably one of those that earned him the Nobel Prize in Economics in 1974.

The essay generates certain perplexities about the meaning of the term “knowledge”, which the author himself would clear up years later, in the preface to the third volume of Law, Legislation and Liberty (1979). Since his native language was German, Hayek explains there that it would have been more appropriate to use the term “information”, since such was the prevailing sense of “knowledge” in the years in which those essays were written. Incidentally, he makes a similar clarification regarding the confusions raised by the phrase “spontaneous order”, which he later replaced with “abstract order”, with further subsequent replacements:

“Though I still like and occasionally use the term ‘spontaneous order’, I agree that ‘self-generating order’ or ‘self-organizing structures’ are sometimes more precise and unambiguous and therefore frequently use them instead of the former term. Similarly, instead of ‘order’, in conformity with today’s predominant usage, I occasionally now use ‘system’. Also ‘information’ is clearly often preferable to where I usually spoke of ‘knowledge’, since the former clearly refers to the knowledge of particular facts rather than theoretical knowledge to which plain ‘knowledge’ might be thought to refer.” (Hayek, F.A., Law, Legislation and Liberty, Volume 3, Preface to “The Political Order of a Free People”.)

Although it is by now impossible in current usage to substitute “information” for “knowledge” and “abstract” for “spontaneous”, it is worth always keeping in mind what ultimate meaning such concepts should carry, at least in order to respect the original intention of the author and to perform a consistent interpretation of his texts.

By “the use of knowledge in society”, then, we should understand the result of the use of the information available to each individual, who is inserted in a particular situation of time and place and who interacts directly or indirectly with countless other individuals, whose special circumstances of time and place differ from one another and who, therefore, also hold fragments of information that are in some respects compatible and in others divergent.

In the economic field, this is manifested in variations in the relative scarcity of the different goods exchanged in the market, expressed as variations in their relative prices. An increase in the market price of a good expresses an increase in its relative scarcity, although we do not know whether this is due to a drop in supply, an increase in demand, or a combined effect of both phenomena, which may vary jointly or separately. The same is true of a fall in the price of a given good. In turn, such variations in relative prices lead to changes in individual expectations and plans, since they may mean a change in the relationship between the prices of substitute or complementary goods, inputs or final products, factors of production, and so on. In a feedback process, such changes in plans will in turn generate new variations in relative prices. These bits of information available to each individual can be synthesized by the price system, which generates incentives at the individual level, but they could never be concentrated by a central committee of planners. In the same essay, Hayek emphasizes that such a process of spontaneous coordination is also manifested in other aspects of social interaction, beyond the exchange of economic goods. These are the spontaneous (or abstract) phenomena, such as language or behavioral norms, which structure the coordination of human interaction without the need for central direction.
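This aggregating role of prices can be sketched in a toy tâtonnement model (my own illustration, not Hayek’s; the numbers and function names are invented for the example): each trader knows only a private reservation value, yet a simple negative-feedback price adjustment arrives at a market-clearing price that no single participant could have computed alone.

```python
# Toy sketch: a market price as a negative-feedback summary of
# dispersed information. Each trader knows only a private value;
# no one ever observes the full list.
import random

random.seed(42)
values = [random.uniform(0, 100) for _ in range(1000)]  # private info

def excess_demand(price, values, supply=500):
    """Buyers are those whose private value exceeds the posted price."""
    demand = sum(1 for v in values if v > price)
    return demand - supply

# Tatonnement: the price rises when the good is relatively scarce
# (excess demand > 0) and falls otherwise.
price = 10.0
for _ in range(200):
    price += 0.01 * excess_demand(price, values)

# The price settles near the market-clearing level, although no
# individual participant knew the whole distribution of values.
print(round(price, 1))
```

The point of the sketch is Hayek’s: the final price summarizes a thousand private facts, while the adjustment rule itself never needed access to any of them individually.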

“The Use of Knowledge in Society” appears halfway through Friedrich Hayek’s life and in the middle of the dispute over economic calculation under socialism. Its implicit assumptions would be spelled out later in his book The Sensory Order (1952) and in the already mentioned Law, Legislation and Liberty (1973, 1976, and 1979). In the first of these, we find the distinction between relative and absolute limits of information / knowledge. The relative ones concern the instruments of measurement and exploration: better microscopes, better techniques, or better statistics push forward the frontiers of knowledge, making it more specific. However, as we move up in levels of classification, among which are the phenomena of coordination between various individual plans, explained by increasingly abstract patterns of behavior, we meet an insurmountable barrier to configuring a coherent and totalizing representation of the social order that results from these interactions. This is what Hayek would later call the theory of complex phenomena.

The latter was collected in Law, Legislation and Liberty, in which he applies the same principles enunciated incipiently in “The Use of Knowledge in Society” to the phenomena of spontaneous coordination of individual life plans on the plane of norms of conduct and of political organization. In the economic, legal, and political spheres alike, the issue of the impossibility of centralized planning, and of the need to trust the results of free interaction between individuals, appears again.

In this regard, the Marxist philosopher and economist Adolph Löwe argued that Hayek, John Maynard Keynes, and he himself all considered that such interaction between individuals generated a feedback process by itself: the data obtained from the environment by the agents generated a readjustment of individual plans, which in turn meant new data that would readjust those plans again. Löwe stressed that both he and Keynes understood that they were facing a positive feedback phenomenon (one deviation led to another, amplified deviation, which required state intervention), while Hayek argued that the dynamics of a society structured around values such as respect for property rights involved a negative feedback process, in which continuous endogenous readjustments maintained a stable order of events. Hayek’s own express references to such negative feedback processes and to the value of cybernetics confirm Löwe’s assessment.
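The contrast Löwe draws can be put as a toy difference equation (my own illustration, found in neither author): whether the same plan-revision loop stabilizes or destabilizes depends only on whether each round damps or amplifies the previous deviation.

```python
# Minimal sketch of positive vs negative feedback: the gain on each
# round of plan revision decides whether a deviation dies out
# (Hayek's reading) or amplifies until intervention (Keynes/Löwe's).

def run_feedback(gain, deviation=1.0, rounds=20):
    """Each period, the deviation is revised in proportion to the last."""
    path = [deviation]
    for _ in range(rounds):
        deviation = gain * deviation  # next period's deviation
        path.append(deviation)
    return path

negative = run_feedback(gain=0.5)   # damping: deviations vanish
positive = run_feedback(gain=1.5)   # amplifying: deviations explode

print(negative[-1], positive[-1])
```

The gain values are arbitrary; the qualitative point is that the dispute between the two camps is, in this framing, a dispute over whether the gain of real market feedback is below or above one.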

Today, the dispute over the possibility or impossibility of centralized planning returns to public debate with recent developments in the fields of Artificial Intelligence, the Internet of Things, and genetic engineering, in which the erstwhile committee of experts would be replaced by programmers, biologists, and other scientists. Surely the notions of spontaneous coordination, abstract orders, complex phenomena, and the relative and absolute limits of information / knowledge will allow fruitful contributions to be made on such matters.

It is appropriate to ask, then, how Hayek would have considered the phenomenon of Artificial Intelligence (A.I.), or rather: how he would have valued the estimates we make today about its possible consequences. But to answer such a question adequately, we must not only agree on what we understand by Artificial Intelligence; it is also essential to discuss, prior to that, how Hayek conceptualized the faculty of understanding.

Friedrich Hayek had been strongly influenced in his youth by the Empirio-criticism of his teacher Ernst Mach. Although in The Sensory Order he considers that his own philosophical position, which he called “pure empiricism”, overcomes the difficulties of Mach’s as well as of David Hume’s empiricism, it must be recognized that the critique of Cartesian Dualism inherited from his former teacher retained a central role for Hayek, even in his later works. Hayek characterizes Cartesian Dualism as the radical separation between the subject of knowledge and the object of knowledge, such that the former has the full capability to formulate a total and coherent representation of the reality external to it, a representation that at the same time comprises the whole world. This is because the representational synthesis carried out by the subject acts as a kind of mirror of reality: the res cogitans expresses the content of the res extensa, in a kind of transcendent duplication, in parallel.

On the contrary, Hayek considers that the subject is an inseparable part of experience. The subject of knowledge is also experience, integrating what is given. Hayek thus also relates his conception of the impossibility for a given mind to account for the totality of experience, since the mind itself is part of it, to Gödel’s Theorem, which concludes that it is impossible for a system of knowledge to be both complete and consistent in its representation of reality, thereby demolishing the Leibnizian project of the mechanization of thought.

It is in the essays “Degrees of Explanation” and “The Theory of Complex Phenomena”, later collected in the volume Studies in Philosophy, Politics, and Economics (1967), that Hayek expressly grounds in Gödel’s Theorem, and also in Ludwig Wittgenstein’s paradoxes about the impossibility of forming a “set of all sets”, his argument for the impossibility of a human mind knowing and controlling the totality of human events at the social, political, and legal levels.

In short, what Hayek was doing here was restating the arguments of his past debate on the impossibility of socialism in order to apply them, in a more sophisticated and refined way, to the problem of the deliberate construction and direction of a social order by a political body devoid of rules and endowed with pure political will.

However, such impossibility of the mechanization of thought does not in itself imply chaos; on the contrary, it implies the Kosmos. Hayek rescues the old Greek notion of an uncreated and stable order, which relentlessly punishes the hybris of those who seek to emulate and replace the cosmic order, as in the myth of Oedipus the King, who killed his father and married his mother, as if to create himself, and whose arrogance brought the plague upon Thebes. Like every negative feedback system, the old Greek Kosmos was an order that restored its lost inner equilibrium by itself, whose complexities humbled human reason and urged the replacement of calculus with virtue. Nevertheless, what we should understand by that “virtue” would be a subject to be discussed many centuries after the old Greeks and Romans, in the Northern Italy of the Renaissance.

Nightcap

  1. The miseducation of America’s elites Bari Weiss, City Journal
  2. Fear, loathing, and surrealism in Russia Emina Melonic, Law & Liberty
  3. Cancelling Adam Smith Brian Micklethwait, Samizdata
  4. Artificial Intelligence and humanity Kazuo Ishiguro (interview), Wired

Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcasts (in 2019 and 2020).

Hotz’s ideas bring into question the efficacy of any ethical strategy to address ‘scary’ innovations. For instance, based on his experience playing “Capture the Flag” in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities, and loses if he fails once. An attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile, and defense is fragile.
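The asymmetry can be put in back-of-the-envelope terms (my own numbers, not Hotz’s): if the defender must hold every one of n independent vulnerabilities, even a highly reliable per-hole defense decays geometrically with the size of the attack surface.

```python
# Sketch of the CTF asymmetry: the defender wins only if ALL holes
# hold; the attacker wins if ANY single hole gives way.

def p_defense_holds(p_patch_each, n_vulns):
    """Probability every one of n independent vulnerabilities is covered."""
    return p_patch_each ** n_vulns

# Even a 99%-reliable defense of each individual hole erodes quickly
# as the attack surface grows.
for n in (10, 100, 500):
    print(n, round(p_defense_holds(0.99, n), 3))
```

The independence assumption is of course a simplification, but it makes the fragile/anti-fragile contrast concrete: defense multiplies probabilities toward zero, while the attacker only needs the complement of that product.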

Hotz’s work centers on reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers intervening to protect themselves. Hotz basically dismisses any approach to “Level 5 automation” that is not learning-based and iterative, since driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have inherent vulnerabilities to error and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:

The Seldon fallacy: In short, 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable. See my other post for more details!

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity: a chance for an actual, direct application of a philosophical riddle! What if a car must choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice it, so the software alerts the driver whenever one may occur, just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs but inherent in driving; and if you really are concerned, you can choose yourself which people to run over.

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, the potential damage inflicted is magnified thousands-fold. That is not to say the cars should not connect to the internet (e.g. having Google Maps to inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err, such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists, whose only skin in the game is getting their voice heard over the other loud protesters, have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like the first pedestrian killed by an Uber test vehicle), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation’ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that even the idea of ethics-before-inventions contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself, making the ethicist the inventor! Since what we get instead is ethicists hypothesizing as to what the invention will be, and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus kills it. Two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are very focused on optimal strategies, and rather than just focusing on cost-benefit analysis, gamers have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic that was not related to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: one, they prepared for the war of the past, solving a problem that was no longer extant. Second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF, but what if that is your job? The answer is never easy, but it should include shifting some of the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options but double down on the one strategy that remains. Thomas Schelling won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the decisive engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also made another underappreciated innovation: the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up, thus enabling him to be reactive and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to focus on stabbing at their faces and thus disfiguring them. Indeed, Pompey’s cavalry would not flee from death, but did flee from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.
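The parable has a simple arithmetic core (my own toy formalization, not Hotz’s): a single true flag among n planted ones forces an attacker who checks flags in random order to dig, on average, through half of them, so the cost of the attack scales linearly with the number of decoys.

```python
# Sketch of the leprechaun-flag idea: decoys do not hide the true
# signal, they drown it. With one true flag among n, an attacker
# checking flags in uniformly random order hits the true one at an
# expected position of (n + 1) / 2.

def expected_digs(n_flags):
    """Expected number of flags tried before finding the single true one."""
    return (n_flags + 1) / 2

print(expected_digs(1), expected_digs(10_000))
```

The leprechaun’s single honest nugget of information is intact, yet the planting of ten thousand flags multiplies the attacker’s expected work by thousands, which is the sense in which false signals are an offensive form of privacy.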

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking a purpose that changes the world in a way you are willing to dedicate yourself to. I think the important part is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he is effectively saying the opposite: learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just to the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

Nightcap

  1. Public and private pleasures (the coffeehouse) Phil Withington, History Today
  2. The historical state and economic development in Vietnam Dell, Lane, & Querubin, Econometrica
  3. The liberal world order was built with blood Vincent Blevins, New York Times
  4. Bowling alone with robots Kori Schake, War on the Rocks

Nightcap

  1. The coming automation of propaganda Adkins & Hibbard, War on the Rocks
  2. It’s been 25 years since Apartheid ended Zeb Larson, Origins
  3. Protest is not enough to topple a dictator Jean-Baptiste Gallopin, Aeon
  4. In defense of 1980s British pop music Sophie Ratcliffe, 1843

Nightcap

  1. “The US does not have a hukou system. We have zoning. And border controls.” Scott Sumner, EconLog
  2. “We prefer bad decisions taken by humans to good ones taken by machines.” Chris Dillow, Stumbling & Mumbling
  3. “How the Fed might more perfectly fulfill its mandate” George Selgin, Alt-M
  4. “If I were in charge of Facebook, I would run it very differently.” Arnold Kling, askblog

Nightcap

  1. The year the singularity was cancelled Scott Alexander, Slate Star Codex
  2. The nature of sex Andrew Sullivan, Interesting Times
  3. We should firmly shut the open door William Ruger, Law & Liberty
  4. What’s different about the Sri Lanka attacks? Krishnadev Calamur, Atlantic

Nightcap

  1. The plight of the political convert Corey Robin, New Yorker
  2. Fine grain futarchy zoning via Harberger taxes Robin Hanson, Overcoming Bias
  3. What happens to cognitive diversity when everyone is more WEIRD? Kensy Cooperrider, Aeon
  4. StarCraft is a deep, complicated war strategy game. Google’s AlphaStar AI crushed it. Kelsey Piper, Vox

Nightcap

  1. Against the Politicisation of Museums Michael Savage, Quillette
  2. Tech’s many political problems Tyler Cowen, Marginal Revolution
  3. The robot paradox Chris Dillow, Stumbling and Mumbling
  4. Scientific abstraction and scientific historiography Nick Nielsen, Grand Strategy Annex

Nightcap

  1. How to decolonize a museum Sarah Jilani, Times Literary Supplement
  2. The American island that once belonged to Russia John Zada, BBC
  3. America still has a heartland (it’s just an artificial one) Venkatesh Rao, Aeon
  4. Why Westerners fear robots and the Japanese do not Joi Ito, Wired

Nightcap

  1. Artificial Intelligence: How the Enlightenment ends Henry Kissinger, the Atlantic
  2. What if we have already been ruled by an Intelligent Machine – and we are better off being so? Federico Sosa Valle, NOL
  3. We are in a very, very grave period for the world Henry Kissinger (interview), Financial Times
  4. What should universities do? Rick Weber, NOL

Nightcap

  1. The rule of robots in Stiglitz and Marx Branko Milanovic, globalinequality
  2. Freud and property rights Bill Rein, NOL
  3. China at its limits: beyond its borders Joshua Bird, Asian Review of Books
  4. Who’s Afraid of Tribalism Blake Smith, Quillette

Nightcap

  1. Gonzo philosophy Scott Bradfield, New Statesman
  2. Three contrarian opinions Scott Sumner, EconLog
  3. The dark, complicated reality of Tibetan Buddhism Mark Hay, Aeon
  4. AI and the limits of deep learning Robert Richbourg, War on the Rocks

Nightcap

  1. A Brief History of Tomorrow David Berlinski, Inference
  2. The Invention of World History S. Frederick Starr, History Today
  3. Actually, Western Progress Stems from Christianity Nick Spencer, Theos
  4. Correcting for the Historian’s Middle Eastern Biases Luma Simms, Law & Liberty

Deep Learning and Abstract Orders

It is well known that Friedrich Hayek once rejoiced at Noam Chomsky’s evolutionary theory of language, which stated that the faculty of speech depends upon a biological device with which human beings are endowed. There is no blank slate, and our experience of the world relies on structures that are prior to experience itself.

Hayek would be delighted now if he were told about the recent discoveries on the importance of background knowledge in the arms race between human beings and Artificial Intelligence. When decisions are to be taken by trial and error inside a feedback system, humans are still ahead because they apply a framework of abstract patterns to interpret the connections among the different elements of the system. These patterns are acquired from previous experiences in other closed systems and provide semantic meaning to the new one. Thus, humans outperform machines, which work as blank slates, since machines take information only from the closed system at hand.

The report of the cited study finishes with the commonplace question of what would happen if some day machines learned to handle abstract patterns of a higher degree of complexity and thereby caught up with that relative human advantage.

As we stated elsewhere, those abstract machines already exist: they are the legal codes and law systems that endow their users with a set of patterns with which to interpret controversies concerning human behaviour.

The question worth asking is not whether Artificial Intelligence will eventually surpass human beings, but which group of individuals will prevail over the other: the one that uses the technology or the one that refuses to do so.

The answer seems quite obvious when the term “technology” refers to concrete machines, but it is not so clear when we apply it to abstract devices. I tried to ponder the latter problem when I outlined an imaginary arms race between policy wonks and lawyers.

Now we can extend these concepts to whole populations. Which of these nations will prevail over the others: the countries whose citizens are endowed with a set of abstract rules on which to base their decisions (the rule of law), or the despotic countries, ruled by the whim of men?

The conclusion to be drawn is quite obvious when we are confronted with so polarised a question. Nevertheless, the problem becomes more subtle when the disjunction concerns the rule of law versus deliberate central planning.

The rule of law is the supplementary set of abstract patterns of conduct that gives sense to the events of social reality and allows us to interpret human social action, including that of the political authority.

In the case of central planning, those abstract patterns are replaced by a concrete model of society whose elements are defined by the authority (after all, that is the main function of Thomas Hobbes’ Leviathan).

Superficially considered, the former (the rule of law as an abstract machine) seems irrational, while the latter (the Leviathan’s central planning) seems to respond to a rational construction of society. Our approach states that, paradoxically, the more abstract the order of a society, the more rational the decisions and plans that individuals undertake, since they are based on the supplementary and general patterns provided by the law, whereas central planning offers individuals a poorer set of concrete information, which limits the scope of their decisions to those based on expediency.

That is why we like to say that law is spontaneous. Not because nobody created it (in fact, someone did), but because law stands the test of time by itself, as the result of an evolutionary process in which populations following the rule of law outperform their rivals.