1. A Brief History of Tomorrow (David Berlinski, Inference)
2. The Invention of World History (S. Frederick Starr, History Today)
3. Actually, Western Progress Stems from Christianity (Nick Spencer, Theos)
4. Correcting for the Historian’s Middle Eastern Biases (Luma Simms, Law & Liberty)

Deep Learning and Abstract Orders

It is well known that Friedrich Hayek welcomed Noam Chomsky’s nativist theory of language, which holds that the faculty of speech depends on an innate biological endowment. There is no blank slate: our experience of the world relies on structures that are not themselves derived from experience.

Hayek would be delighted today to learn about recent findings on the importance of background knowledge in the contest between human beings and Artificial Intelligence. When decisions must be made by trial and error inside a feedback system, humans are still ahead because they apply a framework of abstract patterns to interpret the connections among the different elements of the system. These patterns, acquired from previous experience in other closed systems, lend semantic meaning to the new one. Thus humans outperform machines, which work as blank slates and take their information only from the closed system at hand.

The report on the cited study closes with the commonplace question of what would happen if machines someday learned to handle abstract patterns of a higher degree of complexity and thereby caught up with this relative human advantage.

As we have stated elsewhere, such abstract machines already exist: they are the legal codes and legal systems that equip their users with a set of patterns for interpreting controversies over human behaviour.

The question worth asking is not whether Artificial Intelligence will eventually surpass human beings, but which group of individuals will prevail over the other: the one that uses the technology or the one that refuses to do so.

The answer seems quite obvious when the term “technology” refers to concrete machines, but it is not so clear when we apply it to abstract devices. I tried to weigh the latter problem when I outlined an imaginary arms race between policy wonks and lawyers.

Now we can extend these concepts to whole populations. Which nations will prevail: the countries whose citizens are equipped with a set of abstract rules on which to base their decisions (the rule of law), or the despotic countries ruled by the whim of men?

The conclusion is quite obvious when the question is posed in such polarised terms. The problem becomes more subtle, however, when the disjunction is between the rule of law and deliberate central planning.

The rule of law is the supplementary set of abstract patterns of conduct that gives sense to the events of social reality and allows us to interpret human social action, including that of political authority.

In the case of central planning, those abstract patterns are replaced by a concrete model of society whose elements are defined by the authority (after all, that is the main function of Thomas Hobbes’ Leviathan).

Superficially considered, the former (the rule of law as an abstract machine) looks irrational, while the latter (the Leviathan’s central planning) seems to answer to a rational construction of society. Our approach holds that, paradoxically, the more abstract the order of a society, the more rational are the decisions and plans that individuals undertake, since those decisions are based on the supplementary and general patterns provided by the law. Central planning, by contrast, offers individuals a poorer set of concrete information, which limits their decisions to matters of expediency.

That is why we like to say that law is spontaneous: not because nobody created it (in fact, someone did), but because law stands the test of time on its own, as the result of an evolutionary process in which populations that follow the rule of law outperform their rivals.

What if we are already being ruled by an Intelligent Machine – and are better off for it?

Ordinary people and even renowned scientists, such as Stephen Hawking, have worried about the menace of machines endowed with Artificial Intelligence that could rule the whole human race to the detriment of our liberty and welfare. This fear has two components: first, that Artificial Intelligence will outshine human intellectual capabilities; and second, that Intelligent Machines will be endowed with a volition of their own.

Obviously, it would be an evil volition or, at least, a very egotistic one. Or perhaps the Intelligent Machines would not necessarily be evil or egotistic, but merely as fearful of humans as humans are of machines, only more powerful. Moreover, if their morality depended on a multiplicity of reasonings we cannot grasp, we could not ascertain whether their superior intelligence (with which we suppose the feared machines would be endowed) is good or evil, or simply more complex than ours.

Nevertheless, there is a third assumption that accompanies all the warnings about the perils of thinking machines: that they are a physical shell inhabited by an Artificial Intelligence. Inspired by Gilbert Ryle’s critique of Cartesian Dualism, we can say that the belief in Intelligent Machines endowed with an autonomous volition rests upon this assumption of an intelligence independent of its physical body: a self-conscious being whose thoughts are fully independent of its body’s sensory apparatus and whose sensations are fully independent of the abstract classifications by which its mind operates.

The word “machine” evokes a physical device, but a machine may just as well be an abstract one. Abstract machines are thought experiments composed of algorithms that deliver an output from an input of information, and that output can in turn serve as the input for another circuit. These algorithms can emulate a decision-making process, yielding a set of consequences for a given set of antecedents.
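To make the idea concrete, here is a minimal sketch in Python of such a chain of algorithms, each mapping antecedents to a consequence and feeding its output to the next; the rules and facts are hypothetical illustrations of my own, not taken from any actual code or legal system.

```python
# A minimal sketch (my own illustration, not from the text) of an "abstract
# machine": each rule maps a set of antecedents to a consequence, and the
# output of one rule feeds the next. Rule and fact names are hypothetical.

def rule_offer_acceptance(facts):
    # Antecedents: an offer and an acceptance; consequence: a contract exists.
    facts["contract"] = bool(facts.get("offer") and facts.get("acceptance"))
    return facts

def rule_breach(facts):
    # Antecedents: a contract and non-performance; consequence: liability.
    facts["liable"] = bool(facts.get("contract") and not facts.get("performed"))
    return facts

def abstract_machine(facts, rules):
    # Chain the algorithms: each consumes the facts enriched by the previous one.
    for rule in rules:
        facts = rule(facts)
    return facts

case = {"offer": True, "acceptance": True, "performed": False}
print(abstract_machine(case, [rule_offer_acceptance, rule_breach]))
# {'offer': True, 'acceptance': True, 'performed': False, 'contract': True, 'liable': True}
```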

In fact, all recent cybernetic innovations result from merging abstract machines with physical ones: machines that play chess, drive cars, recognize faces, and so on. Since they do not have an autonomous will, and since the sensory data they produce are determined by their algorithms, whose output in turn depends on the limitations of their hardware, people are reluctant to call their capabilities “real intelligence.” Perhaps the reason for that reluctance is that people expect automata that fulfil the Cartesian Dualist paradigm of a thinking being.

But what if an automaton endowed with an intelligence superior to ours already exists and rules at least part of our lives? We know of no such being, if by a ruling intelligent machine we mean a self-conscious and will-driven one. But those acquainted with the notion of law as a spontaneous and abstract order will have no great difficulty grasping the analogy between the algorithms that form an abstract machine and the general and abstract rules that compose a legal system.

The first volume of Law, Legislation and Liberty by Friedrich A. Hayek, subtitled “Rules and Order” (1973), remains to this day the most complete account of the law seen as an autonomous system, one that adapts to changes in its environment through a process of negative feedback that brings about marginal changes in its structure. Abstract and general notions of rights and duties are well known to the agents of the system, which allows everyone to form expectations about one another’s behaviour. When a conflict between two agents arises, a judge establishes the correct content of the law to be applied to the given case.
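For readers unfamiliar with the cybernetic vocabulary, the toy loop below (my own illustration, not anything from Hayek’s text) shows what a negative feedback process of marginal corrections amounts to: deviations from an expected state are nudged back step by step, rather than the system being redesigned from the top down.

```python
# Toy illustration (not from Hayek) of negative feedback: small, marginal
# corrections of each deviation, rather than a wholesale redesign.

def negative_feedback(state, target, gain=0.2, steps=25):
    trajectory = [state]
    for _ in range(steps):
        error = target - state        # measure the deviation
        state = state + gain * error  # apply only a marginal correction
        trajectory.append(state)
    return trajectory

# Starting far from the target, the small corrections accumulate into order.
print(round(negative_feedback(10.0, 0.0)[-1], 4))
```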

Although human intelligence, using its knowledge of the law, is capable of determining the right decision in each concrete controversy between two given agents, the system of law as a whole reaches a higher degree of complexity than any human mind can. Whereas our knowledge of a given case depends on acquiring ever more concrete data, our knowledge of the law as a whole involves ever more abstract levels of classification. Thus we cannot fully predict the complete chain of consequences that a single decision will have on the legal system as a whole. This characteristic of the law does not mean that its power of coercion is arbitrary. As individuals, we have enough information about the legal system to design our own plans and to form correct expectations about other people’s behaviour. Legal constraints therefore do not interfere with individual liberty.

On the other hand, the absolute limit to our knowledge of the legal system as a whole works as a limitation on political power over the law and, hence, over individuals. But that, after all, is what the concept of the rule of law is about: we are much better off being ruled by an abstract and impersonal entity, more complex than the human mind, than by the self-conscious but discretionary rule of man. Perhaps law is not an automaton that rules our lives at all; but we can say that law, as a spontaneous order, prevents other men from doing so.

AI: Bootleggers and Baptists Edition

“Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle” – via @nuzzel

(disclaimer: I haven’t dug any deeper than reading the above linked article.)

Apparently Elon Musk is worried enough about the potential downsides of artificial intelligence to declare it “a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late.”

Like literally everything else, AI does have downsides. And, like anything that touches so many areas of our lives, those downsides could be significant (even catastrophic). But the most likely outcome of regulating AI is that people already investing in that space (i.e. Elon Musk) would set the rules of competition in the biggest markets. (A more insidious possible outcome is that those who would use AI for bad would be left alone.) To me this looks like a classic Bootleggers and Baptists story.

Artificial Intelligence and Medicine

When teaching the machine, the team had to take some care with the images. Thrun hoped that people could one day simply submit smartphone pictures of their worrisome lesions, and that meant that the system had to be undaunted by a wide range of angles and lighting conditions. But, he recalled, “In some pictures, the melanomas had been marked with yellow disks. We had to crop them out—otherwise, we might teach the computer to pick out a yellow disk as a sign of cancer.”

It was an old conundrum: a century ago, the German public became entranced by Clever Hans, a horse that could supposedly add and subtract, and would relay the answer by tapping its hoof. As it turns out, Clever Hans was actually sensing its handler’s bearing. As the horse’s hoof-taps approached the correct answer, the handler’s expression and posture relaxed. The animal’s neural network had not learned arithmetic; it had learned to detect changes in human body language. “That’s the bizarre thing about neural networks,” Thrun said. “You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious.”

The “black box” problem is endemic in deep learning. The system isn’t guided by an explicit store of medical knowledge and a list of diagnostic rules; it has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments—something analogous to strengthening and weakening synaptic connections in the brain. Exactly how did it determine that a lesion was a melanoma? We can’t know, and it can’t tell us.

More here, from Siddhartha Mukherjee in the New Yorker (h/t Azra Raza).
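As an aside on the yellow-disk problem Thrun describes: one hypothetical way to keep a model from learning that Clever Hans-style shortcut is to remove the markers before training. The sketch below is purely illustrative; the file name, colour threshold, and masking strategy are my own assumptions, not details from the study (which cropped the disks rather than masking them).

```python
# Hypothetical preprocessing sketch: hide yellow marker disks so a classifier
# cannot learn "yellow disk => melanoma" as a shortcut.
# File name and colour thresholds are illustrative assumptions only.
import numpy as np
from PIL import Image

def hide_yellow_markers(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Crude "yellow" test: strong red and green, weak blue.
    marker = (r > 180) & (g > 180) & (b < 120)
    if marker.any() and not marker.all():
        # Replace marker pixels with the image's average non-marker colour.
        img[marker] = img[~marker].mean(axis=0)
    return Image.fromarray(img.astype(np.uint8))

hide_yellow_markers("lesion_0001.jpg").save("lesion_0001_clean.jpg")
```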

And, in the same vein, here are some thoughts on terrorism.

Christianity, Socialism, Heresy, and the Uncanny Valley

I am addicted to YouTube. One of the things I most enjoy doing in my free time is watching videos, and over the years I have learned many new things that way. One of my favourite channels is Vsauce, a science-popularization channel, or a version of O Mundo de Beakman (Beakman’s World) for teenagers and adults. It was in a Vsauce video called “Why Are Things Creepy?” that I learned the concept of the uncanny valley. “Creepy” is an English word that is hard to translate into Portuguese. Some render it as assustador (frightening) or arrepiante (chilling), but I don’t think that captures the exact meaning. Creepy is something that causes an unpleasant sensation of fear or discomfort. A gun pointed at you is frightening, because it is a clear threat to your safety. Creepy is used for things that are not obvious threats but that still cause discomfort. A good example is the uncanny valley.

“Uncanny valley” is likewise a concept that is hard to translate. The Portuguese Wikipedia article renders it as vale da estranheza (valley of strangeness). It is probably a false cognate, but “canny” reminds me of “canonical,” so when I hear or read “uncanny valley” I think of a non-canonical valley, a valley outside the pattern. Perhaps that is just my confusion between English and Portuguese, but it helps me understand the concept better. The uncanny valley is a concept created by the robotics professor Masahiro Mori and used today in robotics and 3D animation to describe the reaction of human beings to human replicas that behave very similarly, but not identically, to real human beings. Derived from the concept is the hypothesis that “as the robot’s appearance becomes more human, the human observer’s emotional response to the robot becomes more positive and empathetic, up to a point at which the response quickly turns into strong revulsion.” In other words, almost-real human replicas are very creepy: they cause some revulsion, even though the reason for the revulsion is not clear. The fact is that we know instinctively that a robot or a 3D-animated character is not a real human being, however great the resemblance.

The concepts of creepy and the uncanny valley came to mind while I was thinking about socialism and Christianity. In my view, socialism is a heresy of Christianity. But a more popular way I have found of putting it is to say that socialism is a deformed clone of Christianity, one that produces that creepy feeling. It is a robot or a 3D character that tries to copy the real thing, but I know instinctively that it is not the same thing. The difference is that Masahiro Mori believes the uncanny valley can be overcome, which leads to the interesting hypothesis that one day we might no longer be able to distinguish a natural human being from an artificial one. Socialism, however, will never match Christianity in this way. On the contrary: at an early stage socialism resembles Christianity and may evoke some empathy. But the deeper socialism goes, the more its artificial character repels anyone who knows Christianity well.

To be completely honest, I am aware that there are varieties of socialism, and I do not want to commit the straw-man fallacy. The socialism I have in mind consists of a concern for the poorest and a desire for greater economic and social equality. Judging by what I hear from people around me, this is the socialism in circulation, not Marxism. Most people have not read Marx and do not really know his definition of socialism. It would be interesting to know what would happen if they did. Be that as it may: this concern for the poor and this longing for greater economic and social equality are also present in Christianity. In fact, if you do not have a special concern for the poor, you cannot be called a Christian. The similarities, however, are superficial. Christianity has a density and depth absent from the socialism I have described. Christianity is the real thing. Socialism is the unhappy copy that causes revulsion.

From the Christian perspective, the causes of poverty can be many, ranging from injustice to laziness. The solutions are also varied, from some government action to charity or simply discipline. Christian anthropology is extremely dense, marked above all by the concept of original sin. We are created in the image and likeness of a perfect God, but we are also corrupted by sin. In the Calvinist conception, totally depraved. In the Lutheran conception, even when converted to Christianity and saved, we are at once righteous and sinners. Another profound concept of Christianity, especially of Calvinism, is the dynamic relation between God’s sovereignty and human responsibility. This discussion usually makes people’s eyes glaze over, but it is just one demonstration of how deeply Christianity treats our condition as rational individuals, making decisions yet confronted with situations beyond our control.

Even non-Christian thinkers have benefited over time from classic authors such as Augustine, Thomas Aquinas, Pascal, and John Calvin. Their insights into human nature and the fragility of our existence are dense as lead. By comparison, socialism, whether the sophisticated academic Marxism or the more popular version, is only a superficial copy without the same essence.

If you do not have a special concern for the poor and a desire for social justice, you cannot be called a Christian. Even if you are not a Christian, the philosophy produced by Christians over two thousand years can be a rich source of reflection on our lives as individuals and in society. If you consider yourself both a Christian and a socialist, you certainly do not yet really know one of those two things. Or both. If you consider yourself a socialist because you care about the poor and desire social justice, your ideas and your actions can improve a great deal if you turn your gaze away from the clone and look at the real thing.

On Robots and Personal Identity

When I came across this documentary on robots and their ability to carry on a conversation with each other, the well-known ideas on the spontaneous emergence of language inevitably crossed my mind. The resemblances to Hayek’s Sensory Order are obvious as well, to say nothing of his later remarks on negative feedback processes, notions about spontaneous orders borrowed precisely from cybernetics. But what grabbed my attention most was the importance attached to the fact that the robots had a body. According to the documentary, the shape of the robots’ bodies allows them to develop certain patterns of classification for facts and behaviour that would be different if their bodies were different. In this sense, “having a body” is a characteristic the robots require in order to make artificial intelligence possible, that is, to evolve through a process of negative feedback.

That brought me back to the works of Peter Geach on personal identity. He challenged John Locke’s notion of personal identity as mere memory and held, instead, that the body is essential to that concept. Memory and the human body, in the development of an individual personality, are inseparable from each other.

This is relevant to our discussions about the definition of individual freedom. If the body is inherent to our personal identity, there is not much room left for Spinoza’s freedom of thought, or inner liberty, as the ultimate definition of individual liberty. Besides freedom of thought, we need freedom of movement in order to be regarded as free individuals, and our sphere of individual autonomy should extend to our body and its surroundings. Moreover, it would be impossible to exercise any freedom of thought and expression if such bodily liberties were not protected.