Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations are his appearances on Lex Fridman’s podcast (in 2019 and 2020).

Hotz’s ideas call into question the efficacy of any ethical strategy to address ‘scary’ innovations. For instance, drawing on his experience playing “Capture the Flag” in hacking competitions, he noted that he never plays defense: a defender must cover every vulnerability, and loses if he fails even once, while an attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile, and defense is fragile.
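
To see just how lopsided this is, here is a toy model of my own (not Hotz’s math): suppose a system has n independent vulnerabilities and the defender manages to close each one with probability p. The defender holds only if every single one is closed; the attacker wins if any one stays open.

```python
# Toy model of the attack/defense asymmetry in CTF (an illustration of my own, not Hotz's).
# A system has n vulnerabilities; the defender closes each one with probability p.
# The defender survives only if ALL are closed; the attacker wins if ANY stays open.

def defender_survival(p: float, n: int) -> float:
    """Probability that every one of n independent vulnerabilities is closed."""
    return p ** n

p = 0.99  # even a 99%-reliable defense per vulnerability...
for n in (1, 10, 100):
    print(f"n={n:3d}: defender holds {defender_survival(p, n):.1%} of the time, "
          f"attacker wins {1 - defender_survival(p, n):.1%}")
# ...collapses as n grows: at n=100 the defender holds only ~36.6% of the time.
```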

Hotz’s work centers on reinforcement learning systems, which learn from the AI’s errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers to intervene to protect themselves. Hotz basically dismisses any system that claims to take an approach to “Level 5 automation” that is not learning-based and iterative, since driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have natural vulnerabilities to errors and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:
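
As a rough sketch of what ‘learn from errors and iterate’ looks like in practice–this is my own simplification, not Comma.ai’s actual pipeline, and every name in it is a hypothetical stand-in–the loop is: drive, log each disengagement where the human had to take over, fold those corrections back into the training data, retrain, and redeploy.

```python
# A deliberately simplified iterate-on-errors loop. This is an illustration of the idea
# only, not any company's real pipeline; drive_and_log, retrain, and deploy are
# hypothetical callables supplied by the caller.

def improvement_loop(model, drive_and_log, retrain, deploy, rounds=10):
    """Each round: drive, log human interventions, retrain on them, redeploy."""
    corrections = []
    for _ in range(rounds):
        # Disengagements are the moments a human driver had to take over; the human's
        # corrective action serves as the label for that situation.
        corrections.extend(drive_and_log(model))
        model = retrain(model, corrections)
        deploy(model)
    return model
```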

The Seldon fallacy: In short, the twin (false) beliefs that 1) it is possible to model complex, chaotic systems with simplified, non-chaotic models; and 2) combining chaotic elements makes the whole more predictable. See my other post for more details!

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem.” No set of plans–no matter how extensive–can exhaustively cover every potential obstacle the world might present.

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for an actual, direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice one, so the software alerts the driver whenever such a situation may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs but inherent in driving–and if you really are concerned, you can choose for yourself which people to run over.
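
In code, the ‘trolling’ amounts to treating an unresolvable dilemma as just one more disengagement trigger. The sketch below is hypothetical–none of these names correspond to real Comma.ai or openpilot APIs–but it captures the move: detect, alert, hand back control.

```python
# Hypothetical sketch only: a moral dilemma is handled like any other disengagement.

def plan_next_action(scene, planner, alert_driver):
    if scene.detects_unavoidable_harm():
        # The software does not pick a victim; it hands the choice back to the human,
        # exactly as it would for fog, sensor failure, or an unmapped construction zone.
        alert_driver("TAKE OVER IMMEDIATELY")
        return "disengage"
    return planner.normal_trajectory(scene)
```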

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, it magnifies the potential damage thousands-fold. That is not to say the cars should not connect to the internet (e.g. having Google Maps to inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.
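
One way to phrase Hotz’s caution as a design rule (my own gloss, with made-up names): peer messages may only make the car more cautious, never less, and the car’s own sensors remain the sole safety authority.

```python
# Hypothetical design rule, not a description of any real V2V stack: broadcasts from
# other cars are advisory. They may tighten the speed our own sensors allow, but never
# loosen it -- a lying or glitching peer can slow us down, but cannot talk us into a crash.

def safe_speed(local_sensor_limit_mps: float, v2v_messages: list[dict]) -> float:
    limit = local_sensor_limit_mps              # ground truth: our own perception
    for msg in v2v_messages:                    # untrusted hints from other vehicles
        claimed = msg.get("suggested_max_speed_mps", limit)
        limit = min(limit, claimed)             # hints may only reduce the limit
    return max(limit, 0.0)                      # never negative, even if a peer lies
```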

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err–such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like Uber’s first pedestrian fatality), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation’ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that even the idea of ethics-before-invention contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since what we get instead is ethicists hypothesizing as to what the invention will be, and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control and thus killing it; two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are obsessed with optimal strategies, but beyond simple cost-benefit analysis, they have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent; it is about occupying his mind. While he is occupied with low-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic unrelated to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: first, they prepared for the war of the past, solving a problem that was no longer extant; second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.
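
To put a number on the “shoot down all 100” demand–my own back-of-the-envelope arithmetic, not Hotz’s, and it assumes each interception is independent–consider how reliable a single interception must be for the shield as a whole to hold.

```python
# How reliable must each interception be for the whole shield to hold?
# (Illustrative arithmetic only; assumes independent interceptions.)
warheads = 100
for overall_goal in (0.50, 0.90, 0.99):
    per_warhead = overall_goal ** (1 / warheads)
    print(f"to stop all {warheads} warheads {overall_goal:.0%} of the time, "
          f"each interception must succeed {per_warhead:.2%} of the time")
# Even a coin-flip shield (50% overall) demands ~99.31% reliability per warhead;
# the attacker, meanwhile, only needs one warhead to slip through.
```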

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but it should include some way of shifting the exposure to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options but double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the decisive engagement at Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also made another underappreciated innovation: the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up–thus enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to stab at their faces and disfigure them. Indeed, Pompey’s cavalry did not flee from death, but it did flee from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.
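
Here is a minimal sketch of the leprechaun-flag idea (a toy of my own, not a production honeytoken scheme): hide the one real secret among thousands of decoys that look identical, so that the attacker’s ‘true’ information is drowned in noise.

```python
import random
import secrets

# Toy sketch of leprechaun-flag deception: one real credential hidden among decoys
# that are indistinguishable in format. Illustrative only.

def plant_flags(real_secret: str, n_decoys: int = 9999) -> list[str]:
    decoys = [secrets.token_hex(len(real_secret) // 2) for _ in range(n_decoys)]
    field = decoys + [real_secret]
    random.shuffle(field)          # the attacker sees 10,000 identical-looking flags
    return field

flags = plant_flags(secrets.token_hex(16))
# An attacker who must test each flag (say, against a rate-limited service) now needs
# ~5,000 attempts on average instead of 1 -- the nugget of truth buried in falsehood.
print(f"{len(flags)} flags planted; expected attempts to find the real one: {len(flags) / 2:.0f}")
```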

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking a purpose that changes the world in a way you are willing to dedicate yourself to. I think the important part of this is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he effectively says the opposite: learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

Nightcap

  1. Can Francis change the Church? Nancy Dallavalle, Commonweal
  2. A Catholic debate over liberalism Park MacDougald, City Journal
  3. Why Hari Seldon was part of the problem Nick Nielsen, Grand Strategy Annex
  4. How not to die (soon) Robin Hanson, Overcoming Bias

The Case Against Galactic Government?

Samuel Hammond, a friend of a friend, has recently written a blog post musing about whether trade between Mars and Earth should be discouraged. The basic premise is that the case for colonizing Mars is to decrease the likelihood that a catastrophe–a black swan event–leads to the extinction of humanity.

Inter-planetary trade would allow both planets to minimize the harms of minor-to-moderate events, and Samuel seems to acknowledge this. This is analogous to how international trade today helps nations minimize the harms of localized events: the harm of the ongoing drought in California has been lessened by consumers’ ability to tap into markets elsewhere to meet their demands. What Samuel is concerned about is those events whose danger increases in proportion to the inter-connectivity of markets. Samuel gives the example of financial markets, but allow me to introduce another, similar danger: the Mule.

The Mule, primary antagonist of Isaac Asimov’s Foundation and Empire

In Isaac Asimov’s sci-fi universe, much of the known galaxy in the distant future comes under the rule of the Foundation Federation. The Foundation is a liberal galactic government that promotes intra-galactic trade but grants each planet wide freedom to settle its internal matters. It has waged wars of defense, but it is notable in that it has never waged a war of conquest, and its members have all joined voluntarily. I would go so far as to say it is an ideal form of galactic government. However, the Foundation’s promotion of galactic inter-connectivity backfires when the Mule, a mutant human with the ability to influence minds, takes control of the governing elite. The Mule is a single man but, due to the hyper-connectivity of the Foundation, can assume control with a few well-placed followers. Almost overnight the Mule transforms the liberal Foundation into his personal dictatorship. The last bastions of freedom are those regions of space controlled by pirates–er, free traders.

Eventually the Mule is defeated and liberal government restored, but only because of the efforts of those polities outside the Foundation’s control. If the Foundation had been a monopolis–a single government that controlled all of humanity–it is doubtful the Mule would have been defeated. Inter-connectivity can yield significant benefits, but as outlined above, it can also magnify the damage of black swan events.

Does this mean that Samuel is correct and that any further space colonies must be separated from Earth in terms of trade and governance? Not quite. Although I think Samuel’s concerns serve as an argument against extreme inter-connectivity between worlds, I do not think they are sufficient to justify actively building barriers between worlds. Rather, I interpret black swan events as arguments in favor of tolerating the existence of rogue nations, such as North Korea, Somalia, and other contemporary nations that exist outside the primary world system.

As space exploration becomes a reality, I think all efforts should be made to promote inter-connectivity between the various worlds. We should promote Earth-Mars relations. We should not, however, oppose those who wish to live in the asteroid belt and minimize their contact with the rest of us. These breakaway colonies will arise naturally and need not be actively created, only tolerated. They will be founded by an assortment of pirates, religious zealots, political dissidents, and other outcasts. By tolerating their existence we will reap the benefits of space exploration while minimizing the likelihood that a black swan event destroys all of humanity.

What is the proper role of government? Galactic Edition

Mordanicus of Fascinating Future, a sci-fi blog, is musing over the purpose of galactic government. As Mordanicus points out, galactic empires are a staple of science fiction. They can be found in the Star Wars, Star Trek, Dune, Firefly and Foundation universes.

…the feasibility of a galactic empire is questionable.

In Asimov’s description of the galactic empire, it consists of 25 million inhabited planets and 500 quadrillion people–20 billion per planet on average. It is hard to even imagine a planetary empire, and no such thing has ever existed in human history, let alone such an enormous empire.

The fundamental issue with an empire of this size is effective control by the central government. Its sheer size makes it inevitable that many administrative powers are delegated to “local” planetary officials. But the more power is transferred to individual planets, the less power remains with the central government. The question is then: what is the proper function of the imperial government?

What is the purpose of these empires, though? In those sci-fi universes with aliens, the empires serve some defensive role for our Milky Way galaxy, but in many sci-fi universes there is no clear external threat. What, then, is the empire for? Or is it simply a vehicle for wealth redistribution by those living in the Saturn beltway?

I personally see merit in a galactic empire if it were able to maintain internal peace. I have no doubt that in a space-faring civilization there will be pirates, and I believe there are economies of scale in policing galactic trade routes.

There is also merit in an empire that can keep rogue planetary governments in check. A galactic empire would be constrained in its ability to govern directly, given the vastness of space, and would need to delegate many functions to lower layers of government. An empire would, however, still serve as a final layer of appeal for those petitioning against their planetary government.

What about NOL readers? Are you convinced that space piracy warrants an empire? Or would a space-faring civilization be better governed by planetary or sub-planetary governments?

Read the full post from Mordanicus here.