Nightcap

  1. The miseducation of America’s elites Bari Weiss, City Journal
  2. Fear, loathing, and surrealism in Russia Emina Melonic, Law & Liberty
  3. Cancelling Adam Smith Brian Micklethwait, Samizdata
  4. Artificial Intelligence and humanity Kazuo Ishiguro (interview), Wired

Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcasts (in 2019 and 2020).

Hotz’s ideas call into question the efficacy of any ethical strategy for addressing ‘scary’ innovations. For instance, based on his experience playing “Capture the Flag” (CTF) in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities and loses if he fails even once, while an attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile and defense is fragile.
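
To put the asymmetry in numbers, here is a minimal sketch of my own (both figures are illustrative assumptions, not data from any real CTF): even a defender who covers each individual vulnerability 95% of the time loses most games once there are twenty holes to cover.

```python
# Toy model of the CTF asymmetry: the defender must hold every
# vulnerability at once, the attacker only needs one hole.
# Both numbers below are illustrative assumptions.
n_vulns = 20        # hypothetical number of vulnerabilities
p_covered = 0.95    # assumed chance the defender covers any single one

p_defense_holds = p_covered ** n_vulns   # all must hold simultaneously
p_attacker_wins = 1 - p_defense_holds

print(f"P(defense holds everywhere): {p_defense_holds:.3f}")  # ~0.358
print(f"P(attacker finds one hole):  {p_attacker_wins:.3f}")  # ~0.642
```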

Hotz’s work centers on reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers intervening to protect themselves. Hotz basically dismisses any system that claims to take an approach to “Level 5 automation” that is not learning-based and iterative, since driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems are naturally vulnerable to errors and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:

The Seldon fallacy: In short, the mistaken belief that 1) it is possible to model complex, chaotic systems with simplified, non-chaotic models; and 2) combining chaotic elements makes the whole more predictable. See my other post for more details!
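
A standard toy demonstration of why such simplified models fail is the logistic map: a one-line deterministic “society” whose future cannot be predicted from any slightly imperfect measurement. The sketch below (my illustration, not Hotz’s) starts two trajectories a billionth apart and watches them diverge completely.

```python
# The logistic map at r=4 is fully deterministic yet chaotic: two
# starting points one part in a billion apart end up uncorrelated.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.400000000, 0.400000001   # almost identical initial conditions
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```

By around step 30 the gap is of order one: any model built from a finitely precise measurement of the starting state has lost all predictive power.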

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.
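A crude way to see the point: a spec is a finite lookup, and the world is not. The toy classifier below (every label invented purely for illustration) handles exactly what its authors anticipated, and nothing else.

```python
# A 'perception spec' is a finite enumeration; the road is infinite.
# All labels here are hypothetical, for illustration only.
SPEC = {"pedestrian": "brake", "stop_sign": "stop", "green_light": "go"}

def spec_perception(obstacle: str) -> str:
    return SPEC.get(obstacle, "???")   # everything outside the spec

for thing in ["pedestrian", "mattress_on_highway", "kangaroo"]:
    print(f"{thing:20s} -> {spec_perception(thing)}")
```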

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for an actual, direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice it, so the software alerts the driver whenever one may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs but inherent in driving–and if you really are concerned, you can choose yourself which people to run over.

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, the potential damage inflicted is magnified thousands-fold. That is not to say cars should not connect to the internet (e.g. having Google Maps inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.
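
A minimal sketch of the failure mode, with message format and names invented for illustration: if every car naively merges every broadcast into its world model, a single forged message plants a phantom obstacle in every receiver’s map at once.

```python
# Naive V2V: every receiver believes every broadcast. One liar is
# enough to put a phantom car into everyone's world model.
honest = [{"car_id": i, "pos": (i * 10.0, 0.0)} for i in range(5)]
forged = {"car_id": 99, "pos": (25.0, 0.0)}   # attacker's phantom car

def world_model(broadcasts):
    # believe everything you hear, with no authentication or sanity check
    return {msg["car_id"]: msg["pos"] for msg in broadcasts}

# every car on the network now 'sees' (and brakes for) the phantom
print(world_model(honest + [forged]))
```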

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err–such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like the first pedestrian killed by an Uber test vehicle), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation’ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that even the idea of ethics-before-invention contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since what we get instead is ethicists hypothesizing as to what the invention will be, and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus kills it; two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are very focused on optimal strategies, and rather than just focusing on cost-benefit analysis, gamers have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic that was not related to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: first, they prepared for the war of the past, solving a problem that was no longer extant; second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but it should include some level of shifting the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options but double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the decisive engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also made another underappreciated innovation: the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up–thus enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to focus on stabbing at their faces and disfiguring them. Indeed, Pompey’s cavalry did not flee from death, but they did flee from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.
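
As a sketch of how this could look in software (a toy of my own, not anything Hotz has built): bury one real secret among thousands of decoys drawn from the same distribution, so even a complete exfiltration of the field leaves the attacker guessing.

```python
# Leprechaun flags: one real secret hidden among indistinguishable
# decoys. Even an attacker who steals the whole field gains little.
import secrets

def plant_flags(real_secret: str, n_decoys: int) -> list[str]:
    # decoys drawn from the same distribution as the real secret
    decoys = [secrets.token_hex(len(real_secret) // 2)
              for _ in range(n_decoys)]
    decoys.insert(secrets.randbelow(n_decoys + 1), real_secret)
    return decoys

field = plant_flags(secrets.token_hex(16), n_decoys=9999)
print(f"{len(field)} flags planted; odds per guess: 1/{len(field)}")
```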

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking a purpose that changes the world in a way you are willing to dedicate yourself to. I think the important part of this is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he is effectively saying the opposite: learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

Nightcap

  1. Public and private pleasures (the coffeehouse) Phil Withington, History Today
  2. The historical state and economic development in Vietnam Dell, Lane, & Querubin, Econometrica
  3. The liberal world order was built with blood Vincent Blevins, New York Times
  4. Bowling alone with robots Kori Schake, War on the Rocks

Nightcap

  1. The coming automation of propaganda Adkins & Hibbard, War on the Rocks
  2. It’s been 25 years since Apartheid ended Zeb Larson, Origins
  3. Protest is not enough to topple a dictator Jean-Baptiste Gallopin, Aeon
  4. In defense of 1980s British pop music Sophie Ratcliffe, 1843

Nightcap

  1. “The US does not have a hukou system. We have zoning. And border controls.” Scott Sumner, EconLog
  2. “We prefer bad decisions taken by humans to good ones taken by machines.” Chris Dillow, Stumbling & Mumbling
  3. “How the Fed might more perfectly fulfill its mandate” George Selgin, Alt-M
  4. “If I were in charge of Facebook, I would run it very differently.” Arnold Kling, askblog

Nightcap

  1. The year the singularity was cancelled Scott Alexander, Slate Star Codex
  2. The nature of sex Andrew Sullivan, Interesting Times
  3. We should firmly shut the open door William Ruger, Law & Liberty
  4. What’s different about the Sri Lanka attacks? Krishnadev Calamur, Atlantic

Nightcap

  1. The plight of the political convert Corey Robin, New Yorker
  2. Fine grain futarchy zoning via Harberger taxes Robin Hanson, Overcoming Bias
  3. What happens to cognitive diversity when everyone is more WEIRD? Kensy Cooperrider, Aeon
  4. StarCraft is a deep, complicated war strategy game. Google’s AlphaStar AI crushed it. Kelsey Piper, Vox

Nightcap

  1. Against the Politicisation of Museums Michael Savage, Quillette
  2. Tech’s many political problems Tyler Cowen, Marginal Revolution
  3. The robot paradox Chris Dillow, Stumbling and Mumbling
  4. Scientific abstraction and scientific historiography Nick Nielsen, Grand Strategy Annex

Nightcap

  1. How to decolonize a museum Sarah Jilani, Times Literary Supplement
  2. The American island that once belonged to Russia John Zada, BBC
  3. America still has a heartland (it’s just an artificial one) Venkatesh Rao, Aeon
  4. Why Westerners fear robots and the Japanese do not Joi Ito, Wired

Nightcap

  1. Artificial Intelligence: How the Enlightenment ends Henry Kissinger, the Atlantic
  2. What if we have already been ruled by an Intelligent Machine – and we are better off being so? Federico Sosa Valle, NOL
  3. We are in a very, very grave period for the world Henry Kissinger (interview), Financial Times
  4. What should universities do? Rick Weber, NOL

Nightcap

  1. The rule of robots in Stiglitz and Marx Branko Milanovic, globalinequality
  2. Freud and property rights Bill Rein, NOL
  3. China at its limits: beyond its borders Joshua Bird, Asian Review of Books
  4. Who’s Afraid of Tribalism Blake Smith, Quillette

Nightcap

  1. Gonzo philosophy Scott Bradfield, New Statesman
  2. Three contrarian opinions Scott Sumner, EconLog
  3. The dark, complicated reality of Tibetan Buddhism Mark Hay, Aeon
  4. AI and the limits of deep learning Robert Richbourg, War on the Rocks

Nightcap

  1. A Brief History of Tomorrow David Berlinski, Inference
  2. The Invention of World History S. Frederick Starr, History Today
  3. Actually, Western Progress Stems from Christianity Nick Spencer, Theos
  4. Correcting for the Historian’s Middle Eastern Biases Luma Simms, Law & Liberty

Deep Learning and Abstract Orders

It is well known that Friedrich Hayek once rejoiced at Noam Chomsky’s evolutionary theory of language, which stated that the faculty of speech depends upon a biological device with which human beings are endowed. There is no blank slate: our experience of the world relies on structures that do not come from experience itself.

Hayek would be delighted now if he were told about the recent discoveries on the importance of background knowledge in the arms race between human beings and Artificial Intelligence. When decisions must be taken by trial and error inside a feedback system, humans are still ahead because they apply a framework of abstract patterns to interpret the connections among the different elements of the system. These patterns are acquired from previous experiences in other closed systems and provide a semantic meaning to the new one. Thus, humans outperform machines, which work as blank slates, since machines take information only from the closed system itself.

The report of the cited study finishes with the commonplace question of what would happen if machines some day learned to handle abstract patterns of a higher degree of complexity and thus caught up with that relative human advantage.

As we stated elsewhere, those abstract machines already exist: they are the legal codes and law systems that equip their users with a set of patterns with which to interpret controversies concerning human behaviour.

What is worth asking is not whether Artificial Intelligence will eventually surpass human beings, but which group of individuals will overcome the other: the one which uses the technology or the one which refuses to do so.

The answer seems quite obvious when the term “technology” refers to concrete machines, but it is not so clear when we apply it to abstract devices. I tried to ponder the latter problem when I outlined an imaginary arms race between policy wonks and lawyers.

Now we can extend these concepts to whole populations. Which of these nations will prevail over the others: the countries whose citizens are equipped with a set of abstract rules on which to base their decisions (the rule of law), or the despotic countries, ruled by the whim of men?

The conclusion to be drawn is quite obvious when we are confronted with such a polarised question. Nevertheless, the problem becomes more subtle when the disjunction is between the rule of law and deliberate central planning.

The rule of law is the supplementary set of abstract patterns of conduct that gives sense to the events of social reality, allowing us to interpret human social action, including that of political authority.

In the case of central planning, those abstract patterns are replaced by a concrete model of society whose elements are defined by the authority (after all, that is the main function of Thomas Hobbes’ Leviathan).

Superficially considered, the former – the rule of law as an abstract machine – seems irrational, while the latter – the Leviathan’s central planning – seems to be a rational construction of society. Our approach states that, paradoxically, the more abstract the order of a society, the more rational the decisions and plans that individuals undertake, since they are based on the supplementary and general patterns provided by the law, whereas central planning offers individuals a poorer set of concrete information, which limits the scope of their decisions to those based on expediency.

That is why we like to say that law is spontaneous: not because nobody created it – in fact, someone did – but because law stands the test of time by itself, as the result of an evolutionary process in which populations following the rule of law outperform rival ones.

What if we have already been ruled by an Intelligent Machine – and we are better off being so?

Common people and even reputed scientists, such as Stephen Hawking, have worried about the menace of machines endowed with Artificial Intelligence that could come to rule the whole human race, to the detriment of our liberty and welfare. This fear has two components: the first, that Artificial Intelligence will outshine human intellectual capabilities; and the second, that the Intelligent Machines will be endowed with their own volition.

Obviously, it would be an evil volition or, at least, a very egotistic one. Or maybe the Intelligent Machines would not necessarily be evil or egotistic, but only as fearful of humans as humans are of machines – although more powerful. Moreover, since their morality would depend on a multiplicity of reasonings we cannot grasp, we could not ascertain whether their superior intelligence (with which we suppose the feared machines would be endowed) is good or evil, or just more complex than ours.

Nevertheless, there is a third assumption which accompanies all the warnings about the perils of thinking machines: that they are a physical shell inhabited by an Artificial Intelligence. Inspired by Gilbert Ryle’s critique of Cartesian Dualism, we can state that the belief in Intelligent Machines endowed with an autonomous volition rests upon this assumption of an intelligence independent of its physical body: a self-conscious being whose thoughts are fully independent of the sensory apparatus of its body and whose sensations are fully independent of the abstract classification by which its mind operates.

The word “machine” evokes a physical device. However, a machine might as well be an abstract one. Abstract machines are thought experiments composed of algorithms which deliver an output from an input of information, which, in turn, could be used as an input for another circuit. These algorithms can emulate a decision-making process, providing a set of consequences for a given set of antecedents.
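
A minimal sketch of that idea (the ‘norms’ below are hypothetical stand-ins, not any real legal code): each abstract machine maps an input case to an output, and the outputs chain into further circuits.

```python
# An abstract machine as the text describes it: an algorithm mapping
# inputs to outputs, whose output can feed the next circuit.
from typing import Callable

Machine = Callable[[dict], dict]

def compose(*machines: Machine) -> Machine:
    def pipeline(case: dict) -> dict:
        for m in machines:
            case = m(case)   # each output becomes the next input
        return case
    return pipeline

# two toy 'norms' acting as decision circuits (purely illustrative)
def classify(case): return {**case, "tort": case["harm"] > 0}
def remedy(case):   return {**case, "damages": 100 if case["tort"] else 0}

court = compose(classify, remedy)
print(court({"harm": 3}))   # {'harm': 3, 'tort': True, 'damages': 100}
```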

In fact, all recent cybernetic innovations are the result of merging abstract machines with physical ones: machines that play chess, drive cars, recognize faces, etc. Since they do not have an autonomous will, and since the sensory data they produce are determined by their algorithms, whose output in turn depends on the limitations of their hardware, people are reluctant to call their capabilities “real intelligence.” Perhaps the reason for that reluctance is that people are expecting automata which fulfil the Cartesian Dualist paradigm of a thinking being.

But what if an automaton endowed with an intelligence superior to ours has already existed and is ruling at least part of our lives? We do not know of any being of that kind, if by a ruling intelligent machine we mean a self-conscious and will-driven one. But those who are acquainted with the notion of law as a spontaneous and abstract order will not find any major difficulty in grasping the analogy between the algorithms that form an abstract machine and the general and abstract laws that compound a legal system.

The first volume of Law, Legislation and Liberty by Friedrich A. Hayek, subtitled “Rules and Order” (1973), is to this day the most complete account of the law seen as an autonomous system: one which adapts itself to changes in its environment through a process of negative feedback that brings about marginal changes in its structure. Abstract and general notions of rights and duties are well known to the agents of the system, and that allows everyone to form expectations about one another’s behaviour. When a conflict between two agents arises, a judge establishes the correct content of the law to be applied to the given case.
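
That feedback process can be caricatured in a few lines (an illustration of negative feedback in general, not of any actual legal system; all numbers are invented): each disappointed expectation nudges the rule marginally toward a better fit, with no central mind ever holding the whole.

```python
# Negative feedback by marginal correction: no single step is designed
# with knowledge of the whole, yet the rule converges on its environment.
rule, fit = 0.0, 1.0          # hypothetical rule parameter and best fit
for generation in range(20):
    error = fit - rule        # conflicts reveal the mismatch
    rule += 0.3 * error       # judges apply a small marginal correction
print(f"rule after 20 generations: {rule:.4f}")   # ~0.9992, near the fit
```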

Although our human intelligence – using its knowledge of the law – is capable of determining the right decision in each concrete controversy between two given agents, the system of law as a whole achieves a higher degree of complexity than any human mind might reach. Whereas our knowledge of a given case depends on acquiring more and more concrete data, our knowledge of the law as a whole is related to more and more abstract degrees of classification. Thus, we cannot fully predict the complete chain of consequences of a single decision upon the legal system as a whole. This last characteristic of the law does not mean its power of coercion is arbitrary: as individuals, we have enough information about the legal system to design our own plans and to form correct expectations about other people’s behaviour. Thus, legal constraints do not interfere with individual liberty.

On the other hand, the absolute boundary to our knowledge of the legal system as a whole works as a limitation on political power over the law and, thence, over individuals. But, after all, that is what the concept of the rule of law is about: we are much better off being ruled by an abstract and impersonal entity, more complex than the human mind, than by the self-conscious – but discretionary – rule of man. Perhaps law is not at all an automaton which rules our lives, but we can ascertain that law – as a spontaneous order – prevents other men from doing so.