- Public and private pleasures (the coffeehouse) Phil Withington, History Today
- The historical state and economic development in Vietnam Dell, Lane, & Querubin, Econometrica
- The liberal world order was built with blood Vincent Bevins, New York Times
- Bowling alone with robots Kori Schake, War on the Rocks
- “The US does not have a hukou system. We have zoning. And border controls.” Scott Sumner, EconLog
- “We prefer bad decisions taken by humans to good ones taken by machines.” Chris Dillow, Stumbling & Mumbling
- “How the Fed might more perfectly fulfill its mandate” George Selgin, Alt-M
- “If I were in charge of Facebook, I would run it very differently.” Arnold Kling, askblog
- The plight of the political convert Corey Robin, New Yorker
- Fine grain futarchy zoning via Harberger taxes Robin Hanson, Overcoming Bias
- What happens to cognitive diversity when everyone is more WEIRD? Kensy Cooperrider, Aeon
- StarCraft is a deep, complicated war strategy game. Google’s AlphaStar AI crushed it. Kelsey Piper, Vox
- Against the Politicisation of Museums Michael Savage, Quillette
- Tech’s many political problems Tyler Cowen, Marginal Revolution
- The robot paradox Chris Dillow, Stumbling and Mumbling
- Scientific abstraction and scientific historiography Nick Nielsen, Grand Strategy Annex
- How to decolonize a museum Sarah Jilani, Times Literary Supplement
- The American island that once belonged to Russia John Zada, BBC
- America still has a heartland (it’s just an artificial one) Venkatesh Rao, Aeon
- Why Westerners fear robots and the Japanese do not Joi Ito, Wired
- Artificial Intelligence: How the Enlightenment ends Henry Kissinger, the Atlantic
- What if we have already been ruled by an Intelligent Machine – and we are better off being so? Federico Sosa Valle, NOL
- We are in a very, very grave period for the world Henry Kissinger (interview), Financial Times
- What should universities do? Rick Weber, NOL
- A Brief History of Tomorrow David Berlinski, Inference
- The Invention of World History S. Frederick Starr, History Today
- Actually, Western Progress Stems from Christianity Nick Spencer, Theos
- Correcting for the Historian’s Middle Eastern Biases Luma Simms, Law & Liberty
It is well known that Friedrich Hayek welcomed Noam Chomsky’s theory of language, which holds that the faculty of speech depends upon a biological endowment innate to human beings. There is no blank slate: our experience of the world relies on structures that do not come from experience itself.
Hayek would be delighted today to learn of recent findings on the importance of background knowledge in the contest between human beings and Artificial Intelligence. When decisions must be made by trial and error inside a feedback system, humans remain ahead because they apply a framework of abstract patterns to interpret the connections among the system’s elements. These patterns, acquired from previous experience in other closed systems, give semantic meaning to the new one. Thus humans outperform machines, which work as blank slates and take their information only from the closed system itself.
The report of the cited study ends with the commonplace question of what would happen if machines one day learned to handle abstract patterns of a higher order of complexity and so caught up with this human advantage.
As we have stated elsewhere, such abstract machines already exist: they are the legal codes and systems of law that equip their users with a set of patterns for interpreting controversies over human behaviour.
What is worth asking is not whether Artificial Intelligence will eventually surpass human beings, but which group of individuals will overcome the other: the one that uses the technology, or the one that refuses to do so.
The answer seems obvious when “technology” refers to concrete machines, but it is less clear when we apply the term to abstract devices. I tried to weigh the latter problem when I outlined an imaginary arms race between policy wonks and lawyers.
Now we can extend these concepts to whole populations. Which nations will prevail over the others: countries whose citizens are equipped with a set of abstract rules on which to base their decisions (the rule of law), or despotic countries ruled by the whim of men?
The conclusion is obvious when the question is posed so starkly. The problem becomes subtler, however, when the disjunction is between the rule of law and deliberate central planning.
The rule of law is the supplementary set of abstract patterns of conduct that gives sense to the events of social reality, enabling us to interpret human social action, including that of political authority.
Under central planning, those abstract patterns are replaced by a concrete model of society whose elements are defined by the authority (that, after all, is the main function of Thomas Hobbes’ Leviathan).
Superficially considered, the former – the rule of law as an abstract machine – looks irrational, while the latter – the Leviathan’s central planning – seems a rational construction of society. Our approach holds that, paradoxically, the more abstract the order of a society, the more rational the decisions and plans its individuals undertake, since these rest on the supplementary and general patterns provided by the law; central planning, by contrast, offers individuals a poorer set of concrete information, which limits their decisions to those based on expediency.
That is why we like to say that law is spontaneous: not because nobody created it – in fact, someone did – but because law stands the test of time on its own, as the result of an evolutionary process in which populations following the rule of law outperform their rivals.
Ordinary people and even renowned scientists, such as Stephen Hawking, have worried about the menace of machines endowed with Artificial Intelligence that could come to rule the whole human race, to the detriment of our liberty and welfare. This fear has two components: first, that Artificial Intelligence will outshine human intellectual capabilities; and second, that Intelligent Machines will be endowed with their own volition.
Presumably it would be an evil volition or, at least, a very egoistic one. Or perhaps the Intelligent Machines would not be evil or egoistic at all, but merely as fearful of humans as humans are of machines – only more powerful. Moreover, if their morality rested on a multiplicity of reasonings we cannot grasp, we could not ascertain whether their superior intelligence (with which we suppose the feared machines would be endowed) is good or evil, or simply more complex than ours.
Nevertheless, there is a third assumption that accompanies all the warnings about the perils of thinking machines: that they are a physical shell inhabited by an Artificial Intelligence. Following Gilbert Ryle’s critique of Cartesian Dualism, we can say that the belief in Intelligent Machines endowed with autonomous volition rests on this assumption of an intelligence independent of its physical body: a self-conscious being whose thoughts are fully independent of its body’s sensory apparatus, and whose sensations are fully independent of the abstract classifications by which its mind operates.
The word “machine” evokes a physical device, but a machine might just as well be an abstract one. Abstract machines are thought experiments composed of algorithms that deliver an output from an input of information, which in turn can serve as the input for another circuit. These algorithms can emulate a decision-making process, providing a set of consequences for a given set of antecedents.
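The idea of chained circuits can be sketched in a few lines. The sketch below is mine, not the post’s, and the rule names (`contract_rule`, `remedy_rule`) are hypothetical: each rule maps antecedent facts to consequences, and one circuit’s output becomes another’s input.

```python
def contract_rule(facts):
    """First circuit: given antecedent facts, derive a legal consequence."""
    if facts.get("offer") and facts.get("acceptance"):
        return {"contract": True, **facts}
    return {"contract": False, **facts}

def remedy_rule(facts):
    """Second circuit: takes the first circuit's output as its input."""
    if facts.get("contract") and facts.get("breach"):
        return {"damages_owed": True}
    return {"damages_owed": False}

# Chaining the two circuits: consequences follow from antecedents.
result = remedy_rule(contract_rule({"offer": True, "acceptance": True, "breach": True}))
print(result)  # {'damages_owed': True}
```

No single circuit “knows” the whole outcome; the result emerges from composing abstract, general rules – the analogy to a legal system the essay goes on to draw.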
In fact, all recent cybernetic innovations result from merging abstract machines with physical ones: machines that play chess, drive cars, recognize faces, and so on. Since these machines have no autonomous will, and the sensory data they produce are determined by their algorithms – whose output, in turn, depends on the limitations of their hardware – people are reluctant to call their capabilities “real intelligence.” Perhaps that reluctance stems from the expectation of automata that fulfil the Cartesian Dualist paradigm of a thinking being.
But what if an automaton endowed with an intelligence superior to ours already existed and were ruling at least part of our lives? We know of no such being, if by a ruling intelligent machine we mean a self-conscious and will-driven one. But those acquainted with the notion of law as a spontaneous and abstract order will have no great difficulty grasping the analogy between the algorithms that form an abstract machine and the general and abstract laws that compose a legal system.
The first volume of Friedrich A. Hayek’s Law, Legislation and Liberty, subtitled “Rules and Order” (1973), remains to this day the most complete account of law seen as an autonomous system, one that adapts to changes in its environment through a process of negative feedback producing marginal changes in its structure. Abstract and general notions of rights and duties are well known to the system’s agents, allowing everyone to form expectations about one another’s behaviour. When a conflict between two agents arises, a judge establishes the correct content of the law to be applied to the given case.
Although human intelligence – using its knowledge of the law – can determine the right decision in each concrete controversy between two given agents, the legal system as a whole achieves a degree of complexity higher than any human mind can reach. Whereas our knowledge of a given case depends on acquiring ever more concrete data, our knowledge of the law as a whole involves ever more abstract levels of classification. Thus we cannot fully predict the complete chain of consequences that a single decision will have on the legal system as a whole. This does not mean the law’s power of coercion is arbitrary. As individuals, we have enough information about the legal system to design our own plans and to form correct expectations about other people’s behaviour, so legal constraints do not interfere with individual liberty.
On the other hand, the absolute limit to knowledge of the legal system as a whole restricts political power over the law and, hence, over individuals. That, after all, is what the concept of the rule of law is about: we are much better off ruled by an abstract and impersonal entity, more complex than the human mind, than by the self-conscious but discretionary rule of man. Law may not literally be an automaton that rules our lives, but we can say that law – as a spontaneous order – prevents other men from doing so.
“Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle” – Reason.com via @nuzzel
(disclaimer: I haven’t dug any deeper than reading the above linked article.)
Apparently Elon Musk is worried enough about the potential downsides of artificial intelligence to declare it “a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late.”
Like literally everything else, AI does have downsides. And, like anything that touches so many areas of our lives, those downsides could be significant (even catastrophic). But the most likely outcome of regulating AI is that people already investing in that space (i.e. Elon Musk) would set the rules of competition in the biggest markets. (A more insidious possible outcome is that those who would use AI for bad would be left alone.) To me this looks like a classic Bootleggers and Baptists story.
When teaching the machine, the team had to take some care with the images. Thrun hoped that people could one day simply submit smartphone pictures of their worrisome lesions, and that meant that the system had to be undaunted by a wide range of angles and lighting conditions. But, he recalled, “In some pictures, the melanomas had been marked with yellow disks. We had to crop them out—otherwise, we might teach the computer to pick out a yellow disk as a sign of cancer.”
It was an old conundrum: a century ago, the German public became entranced by Clever Hans, a horse that could supposedly add and subtract, and would relay the answer by tapping its hoof. As it turns out, Clever Hans was actually sensing its handler’s bearing. As the horse’s hoof-taps approached the correct answer, the handler’s expression and posture relaxed. The animal’s neural network had not learned arithmetic; it had learned to detect changes in human body language. “That’s the bizarre thing about neural networks,” Thrun said. “You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious.”
The “black box” problem is endemic in deep learning. The system isn’t guided by an explicit store of medical knowledge and a list of diagnostic rules; it has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments—something analogous to strengthening and weakening synaptic connections in the brain. Exactly how did it determine that a lesion was a melanoma? We can’t know, and it can’t tell us.
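The shortcut-learning failure the excerpt describes can be reproduced in a toy example. This sketch is mine, not the study’s code, and the feature names are invented: in the training set a spurious marker (the “yellow disk”) correlates perfectly with the diagnosis, so a learner that simply picks the single best-predicting feature latches onto the marker – and collapses on unmarked smartphone photos.

```python
import random
random.seed(0)

def make_case(melanoma, marked):
    """Synthetic image summary: one noisy real signal, one spurious marker."""
    return {
        "irregular_border": melanoma if random.random() < 0.8 else not melanoma,
        "yellow_disk": melanoma and marked,  # clinicians marked only melanomas
        "melanoma": melanoma,
    }

train = [make_case(m, marked=True) for m in [True, False] * 200]   # clinic photos
test = [make_case(m, marked=False) for m in [True, False] * 200]   # smartphone photos

def accuracy(feature, data):
    return sum(case[feature] == case["melanoma"] for case in data) / len(data)

# "Training": choose whichever single feature best predicts the labels.
best = max(["irregular_border", "yellow_disk"], key=lambda f: accuracy(f, train))
print(best)                   # 'yellow_disk' – the spurious shortcut wins
print(accuracy(best, train))  # 1.0 on marked clinic photos
print(accuracy(best, test))   # 0.5 on unmarked photos: no better than chance
```

The toy learner is transparent, so we can see exactly which cue it exploited; in a deep network the same failure hides inside millions of weights, which is the “black box” problem the excerpt names.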
And, in the same vein, here are some thoughts on terrorism.