What if we are already being ruled by an Intelligent Machine – and are better off for it?

Ordinary people and even renowned scientists, such as Stephen Hawking, have been worrying about the menace of machines endowed with Artificial Intelligence that could rule the whole human race to the detriment of our liberty and welfare. This fear has two components: first, that Artificial Intelligence will outshine human intellectual capabilities; and second, that Intelligent Machines will be endowed with a volition of their own.

Obviously, it would be an evil volition or, at least, a very egotistic one. Or perhaps the Intelligent Machines would not necessarily be evil or egotistic, but merely as fearful of humans as humans are of machines, only more powerful. Moreover, if their morality rested on a multiplicity of reasonings we cannot grasp, we could not ascertain whether their superior intelligence (with which we suppose the feared machines would be endowed) is good or evil, or simply more complex than ours.

Nevertheless, there is a third assumption accompanying all the warnings about the perils of thinking machines: that they are a physical shell inhabited by an Artificial Intelligence. Inspired by Gilbert Ryle’s critique of Cartesian Dualism, we can say that the belief in Intelligent Machines endowed with an autonomous volition rests upon this assumption of an intelligence independent from its physical body: a self-conscious being whose thoughts are fully independent from the sensory apparatus of its body and whose sensations are fully independent from the abstract classifications by which its mind operates.

The word “machine” evokes a physical device. However, a machine might just as well be an abstract one. Abstract machines are thought experiments composed of algorithms that deliver an output from an input of information, an output which, in turn, can serve as the input for another circuit. These algorithms can emulate a decision-making process, providing a set of consequences for a given set of antecedents.
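A toy illustration may help here. The sketch below is not any particular formal model of computation; the rules, names, and numbers are invented for the example. It simply shows an “abstract machine” as an algorithm mapping antecedents to a consequence, with one machine’s output feeding another circuit.

```python
# Illustrative sketch only: two tiny "abstract machines" chained together.
# All rules, thresholds, and names below are hypothetical.

def credit_check(income: float, debt: float) -> bool:
    """A toy decision rule: antecedents in, consequence out."""
    return income > 2 * debt

def set_interest_rate(approved: bool) -> float:
    """A second circuit that consumes the first machine's output."""
    return 0.05 if approved else 0.15

# Chaining the two machines emulates a simple decision-making process.
rate = set_interest_rate(credit_check(income=50_000.0, debt=10_000.0))
print(rate)  # 0.05
```

The point is only that nothing in this chain requires a physical body or a will: the “machine” is the set of rules itself.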

In fact, all recent cybernetic innovations result from the merging of abstract machines with physical ones: machines that play chess, drive cars, recognize faces, and so on. Since they do not have an autonomous will, and since the sensory data they produce are determined by their algorithms, whose output in turn depends on the limitations of their hardware, people are reluctant to call their capabilities “real intelligence.” Perhaps the reason for that reluctance is that people expect automata that fulfill the Cartesian Dualist paradigm of a thinking being.

But what if an automaton endowed with an intelligence superior to ours already exists and rules at least part of our lives? We do not know of any being of that kind, if by a ruling intelligent machine we mean a self-conscious and will-driven one. But those acquainted with the notion of law as a spontaneous and abstract order will have no great difficulty grasping the analogy between the algorithms that form an abstract machine and the general and abstract rules that compose a legal system.

The first volume of Law, Legislation and Liberty by Friedrich A. Hayek, subtitled “Rules and Order” (1973), remains the most complete account of the law seen as an autonomous system, one that adapts itself to changes in its environment through a process of negative feedback that brings about marginal changes in its structure. Abstract and general notions of rights and duties are well known to the agents of the system, which allows everyone to form expectations about one another’s behaviour. When a conflict between two agents arises, a judge establishes the correct content of the law to be applied to the given case.

Although our human intelligence, using its knowledge of the law, is capable of determining the right decision for each concrete controversy between two given agents, the legal system as a whole achieves a degree of complexity higher than any human mind can reach. Whereas our knowledge of a given case depends on acquiring ever more concrete data, our knowledge of the law as a whole relies on ever more abstract degrees of classification. Thus, we cannot fully predict the complete chain of consequences of a single decision upon the legal system as a whole. This last characteristic of the law does not mean its power of coercion is arbitrary. As individuals, we are provided with enough information about the legal system to design our own plans and to form correct expectations about other people’s behaviour. Thus, legal constraints do not interfere with individual liberty.

On the other hand, the absolute boundary to our knowledge of the legal system as a whole works as a limitation on political power over the law and, hence, over individuals. But, after all, that is what the concept of the rule of law is about: we are much better off being ruled by an abstract and impersonal entity, more complex than the human mind, than by the self-conscious but discretionary rule of man. Perhaps law is not an automaton that rules our lives at all, but we can say that law, as a spontaneous order, prevents other men from doing so.


AI: Bootleggers and Baptists Edition

“Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle” – Reason.com via @nuzzel

(disclaimer: I haven’t dug any deeper than reading the article linked above.)

Apparently Elon Musk is afraid enough of the potential downsides of artificial intelligence to declare it “a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late.”

Like literally everything else, AI does have downsides. And, like anything that touches so many areas of our lives, those downsides could be significant (even catastrophic). But the most likely outcome of regulating AI is that people already investing in that space (i.e. Elon Musk) would set the rules of competition in the biggest markets. (A more insidious possible outcome is that those who would use AI for bad would be left alone.) To me this looks like a classic Bootleggers and Baptists story.

Artificial Intelligence and Medicine

When teaching the machine, the team had to take some care with the images. Thrun hoped that people could one day simply submit smartphone pictures of their worrisome lesions, and that meant that the system had to be undaunted by a wide range of angles and lighting conditions. But, he recalled, “In some pictures, the melanomas had been marked with yellow disks. We had to crop them out—otherwise, we might teach the computer to pick out a yellow disk as a sign of cancer.”

It was an old conundrum: a century ago, the German public became entranced by Clever Hans, a horse that could supposedly add and subtract, and would relay the answer by tapping its hoof. As it turns out, Clever Hans was actually sensing its handler’s bearing. As the horse’s hoof-taps approached the correct answer, the handler’s expression and posture relaxed. The animal’s neural network had not learned arithmetic; it had learned to detect changes in human body language. “That’s the bizarre thing about neural networks,” Thrun said. “You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious.”

The “black box” problem is endemic in deep learning. The system isn’t guided by an explicit store of medical knowledge and a list of diagnostic rules; it has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments—something analogous to strengthening and weakening synaptic connections in the brain. Exactly how did it determine that a lesion was a melanoma? We can’t know, and it can’t tell us.

More here, from Siddhartha Mukherjee in the New Yorker (h/t Azra Raza).
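As a purely illustrative aside, and emphatically not the researchers’ actual pipeline, the kind of preprocessing Thrun describes, removing the yellow marker disks so the network cannot learn them as a shortcut for “melanoma,” might look roughly like this sketch. The file names, colour thresholds, and fill strategy are all assumptions.

```python
# Hypothetical sketch: scrub bright-yellow marker pixels from a lesion photo
# so a classifier cannot use "yellow disk" as a proxy label.
import numpy as np
from PIL import Image

def mask_yellow_markers(path: str) -> Image.Image:
    """Replace bright-yellow pixels with the image's mean colour."""
    img = np.array(Image.open(path).convert("RGB"), dtype=np.int32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Crude "yellow" test: strong red and green, weak blue (thresholds assumed).
    yellow = (r > 180) & (g > 180) & (b < 120)
    img[yellow] = img.reshape(-1, 3).mean(axis=0).astype(np.int32)
    return Image.fromarray(img.astype(np.uint8))

# Hypothetical file names; the actual dataset is not public here.
mask_yellow_markers("lesion_0001.jpg").save("lesion_0001_masked.jpg")
```

The judgment calls, deciding what counts as a marker and what to fill the hole with, are exactly the kind of thing the black box cannot explain for itself.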

And, in the same vein, here are some thoughts on terrorism.

Christianity, socialism, heresy, and the uncanny valley

I am a YouTube addict. One of the things I most enjoy doing in my free time is watching videos, and over the years I have learned many new things that way. One of my favorite channels is Vsauce, a science-popularization channel, or a version of O Mundo de Beakman (Beakman’s World) for teenagers and adults. It was in a Vsauce video called “Why Are Things Creepy?” that I learned the concept of the uncanny valley. Creepy is an English word that is hard to translate into Portuguese. Some render it as assustador (frightening) or arrepiante (chilling), but I do not think that captures its exact meaning. Creepy is something that causes an unpleasant sensation of fear or discomfort. A gun pointed at you is frightening, because it is a clear threat to your safety. Creepy describes things that are not obvious threats but that still cause discomfort. A good example is the uncanny valley.

Uncanny valley is likewise a concept that is hard to translate. The Portuguese Wikipedia article renders it as vale da estranheza (valley of strangeness). It is probably a false cognate, but canny reminds me of canonical, so when I hear or read uncanny valley I think of a non-canonical valley, a valley outside the standard. It may just be my own confusion between English and Portuguese, but it helps me grasp the concept. The uncanny valley is a concept coined by the robotics professor Masahiro Mori and used today in robotics and 3D animation to describe how human beings react to human replicas that behave very much like, but not identically to, real human beings. From the concept derives the hypothesis that “as the robot’s appearance becomes more human, the human observer’s emotional response to the robot becomes more positive and empathetic, up to a point at which the response quickly turns into strong revulsion.” In other words, almost-real human replicas are very creepy: they cause a certain revulsion, even though the reason for it is not clear. The fact is that we know instinctively that a robot or a 3D-animated character is not a real human being, however great the resemblance.

The concepts of creepy and the uncanny valley came to mind while I was thinking about socialism and Christianity. In my view, socialism is a heresy of Christianity. But a more popular way of putting it, I thought, is to say that socialism is a deformed clone of Christianity that produces that creepy feeling. It is a robot or a 3D character that tries to copy the real thing, but I know instinctively that it is not the same thing. The difference is that Masahiro Mori believes the uncanny valley can be overcome, which leads to the interesting hypothesis that we may one day be unable to distinguish a natural human being from an artificial one. Socialism, however, will never catch up with Christianity in that way. On the contrary: at an early stage socialism resembles Christianity and may elicit some empathy, but the deeper socialism goes, the more its artificial character repels anyone who knows Christianity well.

To be completely honest, I am aware that there are varieties of socialism, and I do not want to commit the straw man fallacy. The socialism I have in mind consists of a concern for the poorest and a desire for greater economic and social equality. Judging by what I hear from the people around me, this is the socialism in circulation, not Marxism. Most people have not read Marx and do not really know his definition of socialism. It would be interesting to know what would happen if they did. Be that as it may: this concern for the poor and this longing for greater economic and social equality are also present in Christianity. In fact, if you have no special concern for the poor, you cannot be called a Christian. The similarities, however, are superficial. Christianity has a density and depth absent from the socialism I have described. Christianity is the real thing. Socialism is the unhappy copy that causes revulsion.

From the Christian perspective, the causes of poverty can be many, ranging from injustice to laziness. The solutions are equally varied, running from some government action to charity or simply discipline. Christian anthropology is extremely dense, marked above all by the concept of original sin. We are created in the image and likeness of a perfect God, yet we are also corrupted by sin. In the Calvinist conception, totally depraved; in the Lutheran conception, even when converted to Christianity and saved, simultaneously righteous and sinners. Another profound concept in Christianity, especially in Calvinism, is the dynamic relationship between God’s sovereignty and human responsibility. This discussion usually makes people’s eyes glaze over, but it is just one demonstration of how deeply Christianity treats our condition as rational individuals, making decisions while confronted with circumstances beyond our control.

Even non-Christian thinkers have benefited over time from classic authors such as Augustine, Thomas Aquinas, Pascal, and John Calvin. Their insights into human nature and the fragility of our existence are dense as lead. By comparison, socialism, whether sophisticated academic Marxism or the more popular version, is only a shallow copy without the same essence.

If you have no special concern for the poor and no desire for social justice, you cannot be called a Christian. Even if you are not a Christian, the philosophy produced by Christians over two thousand years can be a rich source of reflection on our life as individuals and in society. If you consider yourself both a Christian and a socialist, you certainly do not yet really know one of those two things. Or both. If you consider yourself a socialist because you care about the poor and desire social justice, your ideas and your actions can improve greatly if you turn your eyes away from the clone and look at the real thing.

On Robots and Personal Identity

When I came across this documentary on robots and their ability to carry on a conversation with each other, the well-known ideas on the spontaneous emergence of language inevitably crossed my mind. The resemblance to Hayek’s Sensory Order is obvious as well, as are his later remarks on negative feedback processes, notions of spontaneous order borrowed, precisely, from cybernetics. But what grabbed my attention the most was the importance attached to the fact that the robots have a body. According to the documentary, the shape of a robot’s body allows it to develop certain patterns of classification for facts and behavior, patterns that would be different if its body were different as well. In this sense, “having a body” is a prerequisite for the robots to develop artificial intelligence, that is, to evolve through a process of negative feedback.

That brought me back to the works of Peter Geach on personal identity. He challenged John Locke’s notion of personal identity as mere memory and argued, instead, that the body is essential to that concept. Memory and the human body are, in the development of an individual personality, inherent to each other.

This is relevant to our discussions about the definition of individual freedom. If the body is inherent to our personal identity, there is not much room left for Spinoza’s freedom of thought, or inner liberty, as the ultimate definition of individual liberty. Besides freedom of thought, we need freedom to move in order to be regarded as free individuals, and our sphere of individual autonomy should extend to our body and its surroundings. Moreover, it would be impossible to exercise any freedom of thought and expression if those bodily liberties were not protected.

Transhumanism via Libertarianism

Contemporary libertarianism has to consider the fundamental elements and quandaries of transhumanism as it relates to freedom in the next robotic revolution. The two schools share many of the same philosophic principles, and if you identify with one, chances are you can find solace in the other.

Transhumanism, in its most inclusive capacity, is best described as an intellectual reaction to the burgeoning advancements in biotech. To proclaim transhumanism a movement (its common definition) would be to overestimate the current technological landscape, particularly since its more pivotal concerns involve neural uploading and the alloying of the robotic and the organic. (Ray Kurzweil predicts mind uploading will not be possible until the late 2030s. I’m personally skeptical of anything remotely demonstrative before the 2040s.) The ideology has numerous cultural and online manifestations vying for the scientific alteration of human beings, now and in the future, but for its presidential party and organizational campaigners to receive national attention, the still-foreign world of future robotics will need to start materializing in everyday life.

But when has a momentary lack of observable material to ruminate on ever stopped, or even hampered, philosophy? In the 21st century, an in-depth discussion of libertarianism cannot progress to a presently or futuristically valuable extent without at least contemplating transhumanism, and in turn transhumanism’s most natural political philosophy is libertarianism. Many transhumanist libertarians argue that the free market would best protect the idiosyncratic “right to human advancement,” as most introductions to the ideology simplistically put it. The goal, then, is to greatly enhance the human condition and protect that enhancement while overseeing technology’s influence on liberty and individuality, an individuality that should be able to reach unprecedented levels once posthuman modifications go commercial.

The transhumanist party has more responsibilities than the simple advocacy of body augmentation. It asks about the economics of a post-scarcity society and explores philosophy often parallel to Objectivism. One of its most burdensome issues is its implicit alignment with robotic growth of all ostensible varieties. To paraphrase Matt Gaylord: when Stephen Hawking, Marc Goodman of Singularity University, and Bill Gates raise concerns about the existential threats posed by scientific advancements leading to strong artificial intelligence, virus bioengineering, and similarly prophetic consequences, it is time to pay attention. When transhumanism addresses these concerns, it is progressing cautiously and intelligently. When it does not, it is being too optimistic and entirely neglectful.

This critical factor in all futuristic anticipation has led to branch-offs along the transhumanist ideological lineage, with distinct schools of thought focusing, essentially, on preventing the loss of human life. Libertarianism, itself concerned with realizing the fullest potential of human dignity through individual choices, cooperates smoothly with transhumanism where freedom and potential meet. Caring about humans, or indeed allowing humans to care for themselves unimpeded, is the common principle. Transhumanism deserves libertarian attention, and may in fact be libertarian in nature.