What if we are already being ruled by an Intelligent Machine – and we are better off for it?

Laypeople and even renowned scientists, such as Stephen Hawking, have worried about the menace of machines endowed with Artificial Intelligence that could rule the whole human race to the detriment of our liberty and welfare. This fear has two components: first, that Artificial Intelligence will outshine human intellectual capabilities; and second, that Intelligent Machines will be endowed with their own volition.

Obviously, it would be an evil volition or, at least, a very egotistic one. Or perhaps the Intelligent Machines would not necessarily be evil or egotistic, but merely as fearful of humans as humans are of machines – though more powerful. Moreover, since their morality would rest on a multiplicity of reasonings we cannot grasp, we could not ascertain whether their superior intelligence (which we suppose the feared machines would possess) is good or evil, or simply more complex than ours.

Nevertheless, there is a third assumption that accompanies all the warnings about the perils of thinking machines: that they are a physical shell inhabited by an Artificial Intelligence. Inspired by Gilbert Ryle’s critique of Cartesian Dualism, we can state that the belief in Intelligent Machines endowed with an autonomous volition rests upon this assumption of an intelligence independent of its physical body: a self-conscious being whose thoughts are fully independent of the sensory apparatus of its body and whose sensations are fully independent of the abstract classifications by which its mind operates.

The word “machine” evokes a physical device. However, a machine might as well be an abstract one. Abstract machines are thought experiments composed of algorithms that deliver an output from an input of information, which, in turn, can serve as the input for another circuit. These algorithms can emulate a decision-making process, providing a set of consequences for a given set of antecedents.
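To make the idea concrete, here is a minimal sketch in Python of such an abstract machine. The rule names and contents are hypothetical, purely illustrative: each algorithm maps a set of antecedents to a set of consequences, and the output of one circuit can be fed as input to another.

```python
# A minimal sketch of an "abstract machine": decision procedures
# that map sets of antecedents to sets of consequences.
# The rule contents here are hypothetical, for illustration only.

def traffic_rule(antecedents):
    """One algorithmic circuit: antecedents in, consequences out."""
    if "light_is_red" in antecedents:
        return {"must_stop"}
    return {"may_proceed"}

def liability_rule(antecedents):
    """Another circuit, which can consume the previous one's output."""
    if "must_stop" in antecedents and "driver_did_not_stop" in antecedents:
        return {"driver_is_liable"}
    return {"no_liability"}

# Compose the circuits: the output of one becomes input for the next.
facts = {"light_is_red", "driver_did_not_stop"}
consequences = liability_rule(facts | traffic_rule(facts))
print(consequences)  # {'driver_is_liable'}
```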

In fact, all recent cybernetic innovations are the result of merging abstract machines with physical ones: machines that play chess, drive cars, recognize faces, and so on. Since they do not have an autonomous will, and the sensory data they produce are determined by their algorithms, whose output in turn depends on the limitations of their hardware, people are reluctant to call their capabilities “real intelligence.” Perhaps the reason for that reluctance is that people expect automata that fulfill the Cartesian Dualist paradigm of a thinking being.

But what if an automaton endowed with an intelligence superior to ours has already existed and is ruling at least part of our lives? We do not know of any being of that kind, if by a ruling intelligent machine we mean a self-conscious and will-driven one. But those who are acquainted with the notion of law as a spontaneous and abstract order will have no major difficulty grasping the analogy between the algorithms that form an abstract machine and the general and abstract laws that constitute a legal system.

The first volume of Law, Legislation and Liberty by Friedrich A. Hayek, subtitled “Rules and Order” (1973), remains to this day the most complete account of law seen as an autonomous system, one that adapts to changes in its environment through a process of negative feedback that brings about marginal changes in its structure. Abstract and general notions of rights and duties are well known to the agents of the system, allowing everyone to form expectations about one another’s behaviour. When a conflict between two agents arises, a judge establishes the correct content of the law to be applied to the given case.
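The feedback mechanism can be pictured with a deliberately crude toy model (my own illustration, not Hayek’s): each adjudication nudges the articulated rule marginally toward the accumulated precedent, damping the divergence that produced the conflict rather than redesigning the system wholesale.

```python
# A deliberately crude toy model of negative feedback in a rule system:
# when a conflict reveals a divergence between the prevailing rule and
# precedent, the "judge" corrects the rule marginally, damping the error.
# All numbers, including the 10% adjustment rate, are illustrative assumptions.

rule = 0.0          # the prevailing articulation of the rule
precedent = 1.0     # where accumulated case law points
for case in range(1, 6):
    error = precedent - rule   # divergence revealed by a conflict
    rule += 0.1 * error        # marginal, not wholesale, correction
    print(f"case {case}: rule = {rule:.3f}, remaining error = {precedent - rule:.3f}")
```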

Although our human intelligence, using its knowledge of the law, is capable of determining the right decision for each concrete controversy between two given agents, the system of law as a whole achieves a higher degree of complexity than any human mind can reach. Whereas our knowledge of a given case depends on acquiring more and more concrete data, our knowledge of the law as a whole relies on more and more abstract degrees of classification. Thus, we cannot fully predict the complete chain of consequences of a single decision upon the legal system as a whole. This last characteristic of the law does not mean its power of coercion is arbitrary. As individuals, we are endowed with enough information about the legal system to design our own plans and to form correct expectations about other people’s behaviour. Thus, legal constraints do not interfere with individual liberty.

On the other hand, the absolute boundary to knowledge of the legal system as a whole works as a limitation on political power over the law and, hence, over individuals. But, after all, that is what the concept of the rule of law is about: we are much better off being ruled by an abstract and impersonal entity, more complex than the human mind, than by the self-conscious but discretionary rule of man. Perhaps law is not an automaton that rules our lives after all, but we can ascertain that law, as a spontaneous order, prevents other men from doing so.


AI: Bootleggers and Baptists Edition

“Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle” – Reason.com via @nuzzel

(disclaimer: I haven’t dug any deeper than reading the above-linked article.)

Apparently Elon Musk is worried enough about the potential downsides of artificial intelligence to declare it “a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late.”

Like literally everything else, AI does have downsides. And, like anything that touches so many areas of our lives, those downsides could be significant (even catastrophic). But the most likely outcome of regulating AI is that people already investing in that space (e.g., Elon Musk) would set the rules of competition in the biggest markets. (A more insidious possible outcome is that those who would use AI for ill would be left alone.) To me this looks like a classic Bootleggers and Baptists story.

What is the optimal investment in quantitative skills?

As I map out my summer, I am debating how to allocate my time among skill investments. The general advice I have received is to increase my quantitative skills and pick up as much about coding as possible. However, I am skeptical that I should really invest too much in quantitative skills. For starters, there are diminishing returns.

More importantly, though, artificial intelligence and computing power are advancing every day. When my older professors were trained, they had to use IBM punch cards to run simple regressions. Today my phone has many times that computing power, not to mention my PC. I would not be surprised if quantitative analysis were taken over entirely by AI within a decade or two. Even if it isn’t, it will surely become easier and require minimal knowledge of what is happening under the hood. In that case I should invest more heavily in skills that AI cannot perform.
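To illustrate how low the bar has already fallen since the punch-card era, a simple regression today takes only a few lines of Python (the data points below are made up for illustration):

```python
# A simple linear regression in a few lines of Python, to illustrate
# how trivial the punch-card-era workflow has become.
# The data points below are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)  # ordinary least squares fit
print(f"y ≈ {slope:.2f} * x + {intercept:.2f}")
```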

I am thinking, for example, of research design or substantive knowledge of research areas. AI can beat humans at chess, but I can’t think of any AI that has written a half-decent history text.

Mind you, I cannot abandon learning a base level of quantitative knowledge. AI may take over in the next decade, but I will be on the job market and seeking tenure before then (hopefully!).