The Eyes Have It (Quillette)
Kinship Is a Verb (Orion)
Vishnu used a similar play of words here.
How the University of Austin Can Change the History Profession (Law & Liberty)
New York, plus ça change: Chinatown under threat (Crimereads)
This is a travel story of sorts, a travel through time, to an extent. Be patient.
Directly to the east of Marseille, the second-largest city in France, lies a series of beautiful, narrow coves, like fjords, set in a sort of desert. They are called “calanques” in French. They are accessible only by sea or by a long walk over hot, rocky ground. Although they constitute a separate world, the calanques are close to Marseille as the crow flies. They used to be a major fishing resource for the city. You can be sure they were never forgotten during the 2,600 years of the city’s existence. Also, the city was founded by Greeks and thus always had a literate population, one that kept records.
Marseille and its environs are where SCUBA was invented, the first practical solution to the problem of men breathing underwater. Accordingly, the calanques were thoroughly explored after 1950. In 1985, a professional diver discovered a deep cave in one of the calanques. He couldn’t resist temptation and swam into it until he reached a large chamber emerging above the water level, I mean a cave where he could stand and breathe regular air. The explorer’s name was Cosquer.
Cosquer visited several times without saying a word about his discovery. Soon, he observed dozens of beautiful wall paintings belonging to two distinct periods on the upper walls of his cave. The art of the first period was mostly hand imprints or stencils. The art of the second, distinct period comprised 170-plus beautiful animals, including many horses, ibex, and other mammals, as well as fish, seals, and other sea creatures. Archeologists believe the paintings of the first period were done in about 25,000 BC and those of the latter period in about 18,000 BC.
Today, the entrance to the cave is about 125 feet below sea level. We know that paleolithic men did not have SCUBA. They simply walked into the cave for their own reasons, with their own purposes in mind. Thus, the sea level was at least 125 feet lower then than it is today. The people of Marseille never saw the cave. They would have written about it. There would be records. They would not have forgotten it. They simply did not know of its existence during the past 2600 years or so, since the foundation of their city.
Sometime in the past 20,000 years, the sea rose 125 feet or more. That’s an amplitude several times greater than any of the direst predictions of the official United Nations Intergovernmental Panel on Climate Change for the next century. The IPCC squarely blames a future ocean rise (one that has not been observed at all, yet) on abnormal emissions of several gases, especially CO2. These abnormal emissions, in turn, the IPCC affirms, are traceable to human activities such as driving cars and producing many useful things by burning fossil fuels.
It seems to me that basic good science requires that causal analysis begin with a baseline. In this case, it would mean something like this: In the absence of any burning of fossil fuels, the ocean rose 125 feet sometime during the past 20,000 years. Let’s see if we can find evidence of the ocean rising above and beyond this order of magnitude since humanity began burning fossil fuels in large quantities.
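The baseline arithmetic is simple enough to check for yourself. A minimal sketch (the 125-foot and 20,000-year figures are the ones used above; the unit conversion is standard):

```python
# Back-of-the-envelope baseline: total rise and average rate of sea-level
# rise implied by the Cosquer cave figures quoted in the text.
FEET_TO_MM = 304.8       # 1 foot = 304.8 mm, by definition

rise_feet = 125          # minimum rise implied by the submerged cave entrance
years = 20_000           # rough span since the cave was last entered on foot

rise_mm = rise_feet * FEET_TO_MM
avg_rate_mm_per_year = rise_mm / years

print(f"Total rise: {rise_mm / 1000:.1f} m")          # about 38.1 m
print(f"Average rate: {avg_rate_mm_per_year:.2f} mm/year")
```

That works out to roughly 38 meters in total, entirely before industrial emissions; that is the baseline against which any modern rise would have to be compared.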
The conclusion will likely be that nothing out of the ordinary happened. Hence, fossil fuel emissions are probably irrelevant to this particular issue. (This leaves open the possibility that such emissions are odious for some other reason. I mean CO2 is plant food. Too much CO2 may promote weed growth in our fields and gardens.)
If the ocean is currently rising at all, the existence of the Cosquer cave suggests that it’s rising to a minuscule degree. Let’s keep things in perspective. Let’s discard, openly and loudly, every part of the edifice of a complex hypothesis that does not work. Those who don’t take these obvious cleansing measures simply have a lot of explaining to do. They should not be allowed to wrap themselves in the mantle of science while violating Science 101 principles.
One of the conceits of the Warmist movement (re-branded “Climate Change” something or other) is that you don’t have a right to an opinion unless you possess a doctorate in Atmospheric Science. By this dictate, anybody who has to keep a job, raise children, or pay a mortgage is out of the discussion. This is the typical posturing of intellectual totalitarianism. Note what’s missing in the story above: It says nothing about what did cause the ocean to rise between 18,000 years ago and today. It’s enough to know that whatever it was, it was not the massive burning of fossil fuels. And, if factors other than burning fossil fuels explain large rises in sea level, they should first be applied to tiny rises in sea level before other explanations are tried. That’s just good practice.
The Cosquer cave story is now complete as is. Yes, that simple.
Every great civilization has simultaneously made breakthroughs in the natural sciences, mathematics, and in the investigation of that which penetrates beyond the mundane, beyond the external stimuli, beyond the world of solid, separate objects, names, and forms to peer into something changeless. When written down, these esoteric percepts have the natural tendency to decay over time because people tend to accept them too passively and literally. Consequently, people then value the conclusions of others over clarity and self-knowledge.
Talking about esoteric percepts decaying over time, I recently read about the 1981 Act the state of Arkansas passed, which required that public school teachers give “equal treatment” to “creation science” and “evolution science” in the biology classroom. Why? The Act held that teaching evolution alone could violate the separation between church and state, to the extent that this would be hostile to “theistic religions.” Therefore, the curriculum had to concentrate on the “scientific evidence” for creation science.
As far as I can see, industrialism, rather than Darwinism, has led to the decay of virtues historically protected by religions in the urban working class. Besides, every great tradition has its own equally fascinating religious cosmogony—for instance, the Indic tradition has an allegorical account of evolution apart from a creation story—but creationism is not defending all theistic religions, just one theistic cosmogony. This means there isn’t any “theological liberalism” in this assertion; it is a matter of one hegemon confronting what it regards as another hegemon—Darwinism.
So, why does creationism oppose Darwinism? Contrary to my earlier understanding from the scientific standpoint, I now think creationism looks at Darwin’s theory of evolution by natural selection not as a ‘scientific theory’ that infringes on the domain of a religion but as an unusual ‘religion’ that oversteps an established religion’s doctrinal province. Creationism, therefore, looks to invade and challenge the doctrinal province of this “other religion.” In doing so, creation science, strangely, is a crude, proselytizing version of what it seeks to oppose.
In its attempt to approximate a purely metaphysical proposition in practical terms or exoterically prove every esoteric percept, this kind of religious literalism takes away from both the purity of esotericism and the virtues of scientific falsification. Literalism forgets that esoteric writings enable us to cross the mind’s tempestuous sea; one does not have to sink in this sea to prove anything.
In contrast to the virtues of science and popular belief, esotericism forces us to be self-reliant. We don’t necessarily have to stand on the shoulders of others and thus within a history of progress, but on our own two feet, we seek with the light of our inner experience. In this way, both science and the esoteric flourish in separate ecosystems but within one giant sphere of human experience — like prose and poetry.
In a delightful confluence of prose and poetry, Erasmus Darwin, the grandfather of Charles Darwin, wrote about the evolution of life in poetry in The Temple of Nature well before his grandson contemplated the same subject in elegant prose:
Organic life beneath the shoreless waves
Was born and nurs’d in Ocean’s pearly caves;
First forms minute, unseen by spheric glass,
Move on the mud, or pierce the watery mass;
These, as successive generations bloom,
New powers acquire, and larger limbs assume;
Whence countless groups of vegetation spring
And breathing realms of fin, and feet, and wing.
The prose and poetry of creation — science and the esoteric; empirical and the allegorical—make the familiar strange and the strange familiar.
Are you a racist?
Anyone can feel free to answer this question any way it/she/he wishes; any way they wish. And that’s the problem. In this short essay, I aim first to do a little vocabulary house-keeping. Second, I try to trace three distinct origins of racism. I operate from thin authority. My main sources are sundry un-methodical readings, especially on slavery, spread over fifty years, and my amazingly clear recollection of lectures by my late teacher at Stanford, St. Clair Drake, in the sixties. (He was the author of Black Metropolis, among other major contributions.) I also rely on equally vivid memories of casual conversations with that master storyteller. There you have it: I am trying to plagiarize the pioneer St. Clair Drake. I believe the attempt would please him, though possibly not the results.
Feel free to reject everything I say below. If nothing else, it might make you feel good. If you are one of the few liberals still reading me, be my guest and get exercised. Besides, I am an old white man! Why grant me any credence?
That’s on the one hand. On the other hand, in these days (2020) obsessed with racism, I never see or hear the basic ideas about racism set down below expressed in the media, in reviews or on-line although they are substantially more productive than what’s actually around. I mean that they help arrive at a clearer and richer understanding of racism.
If you find this brief essay even a little useful, think of sharing it. Thank you.
“Racism” is a poor word because today it refers at once to thoughts, attitudes, and feelings, and also to actions and policies. Among the latter, it concerns both individual and collective actions, and even policies. Some of the policies may be considered part of so-called “systemic racism,” about which I wrote in my essay “Systemic Racism: a Rationalist Take.”
The mishmash between what’s in the heads of people and what they actually do is regrettable on two grounds. First, the path from individual belief, individual thoughts, individual attitudes, on the one hand, to individual action, on the other is not straightforward. My beliefs are not always a great predictor of my actions because reality tends to interfere with pure intent.
Second, collective action and, a fortiori, policies rarely look like the simple addition of individual actions. People act differently in the presence of others than they do alone. Groups (loosely defined) are capable of greater invention than are individuals. Individuals in a group both inspire and censor one another; they even complete one another’s thoughts; the ones often give the others courage to proceed further.
This piece is about racism, the understanding, the attitudes, the collection of beliefs which predispose individuals and groups to thinking of others as inferior and/or unlikable on the basis of some physical characteristics. As I said, racism so defined can be held individually or collectively. Thus, this essay is deliberately not about actions, programs, or failures to act inspired by racism, the attitude. That’s another topic others can write about.
Fear and loathing of the unknown
Many people seem to assume that racial prejudice is a natural condition that can be fought in simple ways. Others, on the contrary, see it as ineradicable. Perhaps it all depends on the source of racism. The word means prejudgment about a person’s character and abilities based on persistent physical traits that are genetically transmitted. Thus, dislike of that other guy wearing a ridiculous blue hat does not count; neither does hostility toward one sex or the other (or the other?). I think both assumptions above – racism as natural and as ineradicable – are partly but only partly true. My teacher St. Clair Drake explained to me once, standing in the aisle of a Palo Alto bookstore, that there are three separate kinds of racial prejudice, of racism, with distinct sources.
The first kind of racism is rooted in fear of the unknown or of the unfamiliar. This is probably hard-wired; it’s human nature. It would be a good asset to have for the naked, fairly slow apes that we were for a long time. Unfamiliar creature? Move away; grab a rock. After all, those who look like you are usually not dangerous enemies; those who don’t, you don’t know and why take a risk?
Anecdote: A long time ago, I was acting the discreet tourist in a big Senegalese fishing village. I met a local guy about my age (then). We had tea together, talked about fishing. He asked me if I wanted to see his nearby house. We walked for about five minutes to a round adobe construction covered in thatch. He motioned me inside, where it was quite dark. A small child was taking a nap on a stack of blankets in the back. Sensing a presence, the toddler woke up, opened his eyes, and began screaming at the top of his lungs. The man picked him up and said, very embarrassed, “I am sorry, my son has never seen a toubab before.” (“Toubab” is the local, not unfriendly, word for light-skinned people from elsewhere.)
Similarly, Jared Diamond recounts (and shows corresponding pictures in his book The World Until Yesterday: What Can We Learn from Traditional Societies? New York: Viking) how central New Guinea natives became disfigured by fear at their first sight of a white person. Some explained later that they thought they might be seeing ghosts.
The second distinctive form of racism simply comes from fear of the dark, rooted in dread of the night. It’s common to all people, including dark-skinned people, of course. It’s easy to understand once you remember that beings who were clearly our direct ancestors, people whose genes are in our cells, lived in fear of the darkness night after night for several hundred thousand years. Most of their fears were justified because the darkness concealed lions, leopards, hyenas, bears, tigers, saber-toothed cats, wolves, wild dogs, and other predators, themselves with no fear of humans. The fact that the darkness of night also encouraged speculation about other hostile beings – varied spirits – that did not really exist does not diminish the impact of this incomplete zoological list.
As is easy to observe, the association dark = bad is practically universal. Many languages have an expression equivalent to “the forces of darkness.” I doubt that any (but I can’t prove it right now) says “the forces of lightness” to designate something sinister. The same observation holds for “black magic” and disappearing into a “black hole.” Similarly, nearly everywhere, uneducated people, and some of their educated betters, express some degree of hostility, mixed with contempt, for those in their midst or nearby who are darker than themselves. This is common among African Americans, for example. (Yes, I know, it may have other sources among them specifically.)
This negative attitude is especially evident in the Indian subcontinent. On a lazy day, thirty years ago in Mumbai, I read several pages of conjugal want ads in a major newspaper. I noticed that 90% of the ads for would-be brides mentioned skin color in parallel with education and mastery of the domestic arts. (The men’s didn’t.) A common description was “wheatish,” which, I was told by Indian relatives, means not quite white but pretty close. (You can’t lie too shamelessly about skin tone because, if all goes well, your daughter will meet the other side in person; you need wiggle room.) In fact, the association between skin color and likability runs so deep in India that the same Sanskrit word, “varna,” designates both caste and color (meaning skin complexion). And, of course, there is a reason why children everywhere turn off the light to tell scary stories.
In a similar vein, the ancient Chinese seem to have believed that aristocrats were made from yellow soil while commoners were made from ordinary brown mud. (Cited in Harari, Yuval N. 2015. Sapiens: A Brief History of Humankind. New York: Harper.)
Some would argue that these examples represent ancestral fears mostly left behind by civilized, urban (same thing) people. My own limited evidence, both personal and observational, is that it’s not so. It seems to me that fear of the dark is the first or second page of the book of which our daily street-lit, TV-illuminated bravado is the cover. Allow a couple of total power stoppages (as Californians experienced recently) and it’s right there, drilling into our vulnerable minds.
Both of these first two kinds of negative feelings about that which is dark can be minimized, the first through experience and education: No, that pale man will not hurt you. He might even give you candy, or a metal ax. The second source of distaste for darkness has simply been moved to a kind of secondary relevance by the fact that today most people live most of the time in places where some form of artificial lighting is commonplace. It persists nevertheless where it is shored up by a vast and sturdy institutional scaffolding, as with the caste system of largely Hindu India. And it may always be present somewhere in the back of our minds, but mostly, we don’t have a chance to find out.
The third source of hostility toward and contempt for a dark appearance is both more difficult to understand and harder to eliminate or even to tamp down. Explaining it requires a significant detour. Bear with me, please.
The origins of useful racism
Suppose you believe in a God who demands unambiguously that you love your “neighbor,” that is, every human being, including those who are not of your tribe, even those you don’t know at all. Suppose further that you are strongly inclined toward a political philosophy that considers all human beings, or at least some large subcategory of them, as fundamentally equal, or at least equal in rights. Or imagine rather that you are indifferent to one or both ideas but that you live among neighbors 90% of whom profess one, and 80% both beliefs. They manifest and celebrate these beliefs in numerous and frequent public exercises, such as church services, elections, and civic meetings where important decisions are launched.
Now a second effort of imagination is required. Suppose also that you or your ancestors came to America from the British Isles, perhaps in the 1600s, perhaps later. You have somehow acquired a nice piece of fertile land, directly from the Crown or from a landed proprietor, or by small incremental purchases. You grow tobacco, or indigo, or rice, or (later) cotton. Fortune does not yet smile on you because you confront a seemingly intractable labor problem. Almost everyone else around you owns land and thus is not eager to work for anyone else. Just about your only recourse is temporarily un-free young men who arrive periodically from old Britain, indentured servants (sometimes also called “apprentices”). Many of them are somewhat alien because they are Irish, although most of them speak English, or some English. Moreover, a good many are sickly when they land. Even the comparatively healthy young men do not adjust well to the hot climate. They have little resistance to local tropical diseases such as malaria and yellow fever. Most don’t last in the fields. You often think they are not worth the trouble. In addition, by contract or by custom, you have to set them free after seven years. With land being so attainable, few wish to stick around and earn a wage from you.
One day you hear that somewhere, not too far, new, different kinds of workers are available that are able to work long days in the heat and under the sun and who don’t succumb easily to disease. You take a trip to find out. The newcomers are chained together. They are a strange dark color, darker than any man you have seen, English, Irish, or Indian. Aside from this, they look really good as field hands go. They are muscular, youngish men in the flower of health. (They are all survivors of the terrible Atlantic passage and, before that, of some sort of long walk on the continent of Africa to the embarkation point at Goree, Senegal, or such. Only the strong and healthy survived such ordeals, as a rule.) There are a few women of the same hue with them, mostly also young.
Those people are from Africa, you are told. They are for outright sale. You gamble on buying two of them to find out more. You carry them to your farmstead and soon put them to work. After some confusion because they don’t understand any English, you and your other servants show them what to do. You are soon dazzled by their physical prowess. You calculate that one of them easily accomplishes the tasks of two of your indentured Irish apprentices. As soon as you can afford it, you go and buy three more Africans.
Soon, your neighbors are imitating you. All the dark-skinned servants are snapped up as fast as they are landed. Prices rise. Those people are costly but still well worth the investment because of their superior productivity. Farmers plant new labor-intensive, high-yield crops – such as cotton – that they would not have dared invest in with the old kind of labor. To make the new labor even more attractive, you and your neighbors quickly figure out that it’s also capital because it can be made to be self-reproducing. The black female servants can both work part of the time and bear children who are themselves servants belonging to you by right. (This actually took some time to work out legally.)
Instrumental severity and cruelty
You are now becoming rich, amassing both tools and utensils and more land. All is still not completely rosy on your plantation though. One problem is that not all of your new African servants are docile. Some are warriors who were captured on the battlefield in Africa and they are not resigned to their subjection. A few rebel or try to run away. Mostly, they fail but their doomed attempts become the stuff of legend among other black servants thus feeding a chronic spirit of rebelliousness. Even in the second and third generation away from Africa, some black servants are born restive or sullen. And insubordination is contagious. At any rate, there are enough free white workers in your vicinity for some astute observers among your African servants to realize that they and their companions are treated comparatively badly, that a better fate is possible. Soon, there are even free black people around to whom they unavoidably compare themselves. (This fact deserves a full essay in its own right.)
To make a complex issue simple: Severity is necessary to keep your workforce at work. Such severity sometimes involves brutal public punishment for repeat offenders, such as whippings. There is a belief abroad that mere severity undermines the usefulness of the workforce without snuffing out its rebelliousness. Downright cruelty is sometimes necessary, the more public, the better. Public punishment is useful to encourage more timid souls to keep toeing the line.
And then, there is the issue of escape. After the second generation, black slaves are relatively at home where they work. Your physical environment is also their home where some think they can fend for themselves. The wilderness is not very far. The slaves also know somehow that relatively close by are areas where slavery is prohibited or not actively enforced by authorities. It’s almost a mathematical certainty that at any time, some slaves, a few slaves, will attempt escape. Each escape is a serious economic matter because, aside from providing labor, each slave constitutes live capital. Most owners have only a few slaves. A single escape constitutes for them a significant form of impoverishment. Slaves have to be terrorized into not even wanting to escape.
Soon, it’s well understood that slaves are best kept in a state of more or less constant terror. It’s so well understood that local government will hang your expensive slave for rebellion whether you like it or not.
In brief, whatever their natural inclination, whatever their personal preference, slave owners have to be systematically cruel. And it’s helpful for them also to possess a reputation for cruelty. This reputation has to be maintained and reinforced periodically by sensationally brutal action. One big problem arises from such a policy of obligatory and vigilant viciousness: It’s in stark contradiction with both your religious and your political ideas, which proclaim that one must love others and that all humans are at least potentially equal (before God, if nowhere else). And if you don’t hold such beliefs deeply yourself, you live among people who do, or who profess to. And, by a strange twist of fate, the richest, best-educated, probably most influential strata of your society are also those most committed to those ideals. (They are the class that would eventually produce George Washington and Thomas Jefferson.)
The personal psychological tension between the actual and highly visible brutal treatment of black slaves and prevailing moral values is technically a form of “dissonance.” It’s also a social tension; it expresses itself collectively. Those actively involved in mistreating slaves are numerous. In vast regions of the English colonies, and later, of the United States, the contrast between action and beliefs is thus highly visible to everyone, obvious to many who are not themselves actively involved. It becomes increasingly difficult over time to dismiss slavery as a private economic affair because, more and more, political entities make laws actively supporting slavery. There are soon laws about sheltering fugitives, laws regulating the punishment of rebellious slaves, laws about slave marriage, and laws restricting the freeing of slaves (“manumission”). Slavery thus soon enters the public arena. There are even laws to control the behavior of free blacks, those who merely used to be slaves.
Race as legal status
Special rules governing free blacks constitute an important step because, for the first time, they replace legal status (“slave,” “chattel”) with race (dark skin, certain facial features, African ancestry). So, with the advent of legislation supporting slavery, an important symbolic boundary is crossed. The laws don’t concern only those defined by their legal condition as chattel property but also others, defined mostly or largely by their physical appearance and by their putative ancestry in Africa. At this point, every white subject, then every white citizen, has become a participant in a struggle that depends on frankly racial categories by virtue of his belonging to the polity. Soon the social racial category “white” comes to stand for the legal status “free person,” “non-slave.”
Then, at this juncture, potentially every white adult becomes a party to the enforcement of slavery. For almost all of them, this participation, however passive, is in stark contradiction with both religious and political values. But ordinary human beings can live with only so much personal duplicity. Some whites will reject black slavery, in part or in whole. Accordingly, it’s notable that abolitionists always existed and were vocal in their opposition to slavery in the English colonies, and then in the United States, even in the deepest South. Their numbers and visibility never flagged until the Civil War.
How to reduce tension between beliefs and deeds
There are three main paths out of this personal moral predicament. They offer different degrees of resistance. The first path is to renounce one’s beliefs, those that are in contradiction with the treatment of one’s slaves. A slave owner could adjust by becoming indifferent to the Christian message, or skeptical of democratic aspiration, or both. No belief in the fraternity of Man or in any sort of equality between persons? Problem solved. This may be relatively feasible for an individual alone. In this case, though, the individuals concerned, the slave owners and their slave drivers, exist within a social matrix that frequently, possibly daily, reinforces the dual religious command to treat others decently and the political view that all men are more or less equal. Churches, political organizations, charity concerns, and gentlemen’s clubs stand in the way. To renounce both sets of beliefs – however attractive this might be from an individual standpoint – would turn one into a social pariah. Aside from the personal unpleasantness of such a condition, it would surely have adverse economic repercussions.
The second way to free oneself from the tension associated with the contrast between humane beliefs, on the one hand, and harsh behavior, on the other hand, is simply to desist from the latter. Southern American chronicles show that a surprisingly large number of slave owners chose that path at any one time. Some tried more compassionate slave driving, with varying degrees of economic success. Others – who left major traces, for documentary reasons – took the more radical step of simply freeing some of their slaves when they could, or when it was convenient. Sometimes, they freed all of their slaves, usually at their death, through their wills, for example. The freeing of slaves – manumission – was so common that the rising number of free blacks was perceived as a social problem in much of the South. Several states actually tried to eliminate the problem by passing legislation forbidding the practice.
Of course, the fact that so many engaged in such an uneconomic practice demonstrates in itself the validity of the idea that the incompatibility between moral convictions and slave driving behavior generated strong tensions. One should not take this evidence too far however because there may have been several reasons to free slaves, not all rooted in this tension. (I address this issue briefly in “Systemic Racism….”)
The easy way out
The third way to reduce the same tension, the most extreme and possibly the least costly, took two steps. Step one consisted in recognizing consciously this incompatibility; step two was to begin mentally to separate the black slaves from humanity. This would work because all your bothersome beliefs – religious and political – applied explicitly to other human beings. The less human the objects of your bad treatment, the less the treatment contravened your beliefs. After all, while it may be good business to treat farm animals well, there is not much moral judgment involved there. In fact, not immediately but not long after the first Africans landed in the English colonies of North America, there began a collective endeavor aiming at their conceptual de-humanization. It was very much a collective project, addressing ordinary people, including many who had no contact with black slaves or with free blacks. It involved the universities and intellectual milieus in general, with a vengeance (more on this later).
Some churches also lent a hand by placing the sanction of the Bible in the service of the general idea that God himself wanted slaves to submit absolutely to the authority of their masters. To begin with, there was always the story of Noah’s three sons. The disrespectful one, Ham, cursed by Noah, was said to be the father of the black race, on the thin ground that his name means something like “burnt.” However, it’s notable that the tension never disappeared, because other churches, even in the Deep South, continued their opposition to slavery on religious grounds. The Quakers, for example, seldom relented.
Their unusual appearance and the fact that the white colonists could not initially understand their non-European languages (plural) were instrumental in the collective denial of full humanity to black slaves. In fact, the arriving slaves themselves often did not understand one another. This is but one step from believing that they did not actually possess the power of speech. Later, as the proportion of America-born slaves increased, they developed what is known technically as a creole language to communicate with one another. It was recognizably a form of English but probably not understood by whites unless they tried hard. Most had few reasons to try at all. Language was not the only factor contributing to the ease with which whites, troubled by their ethical beliefs, denied full humanity to black slaves. Paradoxically, the degrading conditions in which the slaves were held must also have contributed to the impression of their sub-humanity.
The effort to deny full humanity to people of African descent continued for two centuries. As the Enlightenment reached American shores, the focus shifted from Scripture to Science (pseudo-science, sometimes but not always). Explorers’ first reports from sub-tropical Africa seemed to confirm the soundness of the view that black Africans were not completely human: There were no real cities there, little by way of written literature, no search for knowledge recognizable as science, seemingly no schools. What art-conscious visitors reported on did not seem sufficiently realistic to count as art by 18th- and 19th-century standards. I think that no one really paid attention to the plentiful African artistic creativity – this unmixed expression of humanity if there ever was one – until the early 1900s. Instead, African art was dismissed as crude stammering in the service of inarticulate superstitions.
The effort to harness science in service of the proposition of African un-humanity easily outlasted the Civil War and even the emancipation of slaves in North America. After he published On the Origin of Species in 1859, Darwin spent much of the balance of his life – curiously allied with Christians – combating the widespread idea that there had been more than one creation of humanoids, possibly one for each race. The point most strongly argued by those holding to this view was that Africans could not possibly be the brothers, or other close relatives, of the triumphant Anglo-Saxons. The viewpoint was not limited to the semi-educated by any means. The great naturalist Louis Agassiz himself believed that the races of men were pretty much species. In support, he presented the imaginary fact that the mating of different races – like mating between horses and donkeys – seldom produced fertile offspring. (All recounted in: Desmond, Adrian, and James Moore. 2009. Darwin’s Sacred Cause: How a Hatred of Slavery Shaped Darwin’s Views on Human Evolution. Houghton Mifflin Harcourt: New York.)
Those three main roads to racism are unequal in their persistence. Dislike for strangers tends to disappear of its own accord. Either the frightening contact ceases or it is repeated. In the first case, dislike turns irrelevant and accordingly becomes blurred. In the second case, repeated experience will often demonstrate that the strangers are not dangerous, and the negative feelings subside of their own accord. If the strangers turn out to be dangerous overall, it seems to me that negative feelings toward them do not constitute racism. This, in spite of the fact that the negativity may occasionally be unfair to specific, individual strangers.
Racial prejudice anchored in atavistic fear of the night may persist in the depths of one’s mind, but it, too, does not survive experience well. Exposed to the fact that dark people are not especially threatening, many will let the link between darkness and fear or distaste subside in their minds. For this reason, it seems to me that the great American experiment in racial integration of the past sixty years was largely successful. Many more white Americans today personally know African Americans than was the case in 1960, for example. The black man whose desk is next to yours, the black woman who attends the same gym as you week after week, the black restaurant goers at your favored eating place, all lose their aura of dangerousness through habituation. Habituation works both ways though. The continued over-representation of black men in violent crime must necessarily perpetuate in the minds of all (including African Americans) the association between danger and a dark complexion.
The road to racism based on reducing the tension between behavior and beliefs via conceptual de-humanization of the victims has proved especially tenacious. Views of people of African descent, but also of other people of color, as less than fully human persist or re-emerge frequently because they have proved useful. This approach may have preserved the important part of the American economy based on slavery until war freed the slaves, without removing the de-humanization itself. As many leftists claim (usually without evidence), this was important to the later fast development of the American economy, because cotton production in the South was at its highest in the years right preceding the Civil War. In the next phase, the view of black Americans as less than human served well to justify segregation for the next hundred years. It was thus instrumental in protecting poor whites from wage competition with even poorer African Americans.
In the second half of the 19th century and well into the 20th, the opinion that Africans – and other people of color – were not quite human also strengthened the European colonial enterprise in many places. (The de-humanization of colonial people was not inevitable though. The French justification of colonialism – “France’s civilizing mission” – is incompatible with this view. It treated the annexed people instead as immature, as infantile, rather than as subhuman.)
This third road to racism tends to last because it’s a collective response to a difficult situation that soon builds its own supporting institutions. For a long time, in America and in the West in general, it received some assistance from the new, post-religious ideology, science. Above all, it’s of continuing usefulness in a variety of situations. This explanation reverses the naive, unexamined explanation of much racism: That people act in cruel ways toward others who are unlike them because they are racist. It claims, rather, that they become racist in order to continue acting in cruel ways toward others, contrary to their own pre-existing beliefs that enjoin them to treat others with respect. If this perspective is correct, we should find that racism is the more widespread and the more tenacious, the more egalitarian and the more charitable the dominant culture in which it emerges.
Last week, I sat down with Scott Johnson of the Device Alliance to discuss how medical research is communicated only through archaic and disorganized methods, and how the root of this is the “economy” of Impact Factor, citations, and tenure-seeking as opposed to an exercise in scientific communication.
We also discussed a vision of the future of medical publishing, where the basic method of communicating knowledge was no longer uploading a PDF but contributing structured data to a living, growing database.
You can listen here: https://www.devicealliance.org/medtech_radio_podcast/
As background, I recommend the recent work by Patrick Collison and Tyler Cowen on broken incentives in medical research funding (as opposed to publishing), as I think their research on funding shows that a great slow-down in medical innovation has resulted from systematic errors in organizing knowledge gathering. Mark Zuckerberg actually interviewed them about it here: https://conversationswithtyler.com/episodes/mark-zuckerberg-interviews-patrick-collison-and-tyler-cowen/.
A well-known Latin adage reads “de gustibus non est disputandum”, roughly translated as “about tastes there should be no dispute”. In English, we usually render the maxim as “over tastes there is no argument”, indicating the economist’s fundamental creed that tastes and preferences may very well come from somewhere but are useless to argue over. We can’t prove them. We can’t disprove them. Ultimately, they just are and we have to live with that.
In November last year, ridiculing a prominent Swedish politician, I used the example of ice-cream flavours to illustrate the point:
“I like ice-cream” is an innocent and unobjectionable opinion to have. Innocent because hey, who doesn’t like ice-cream, and unobjectionable because there is no way we can verify whether you actually like ice-cream. We can’t effortlessly observe the reactions in your brain from eating ice-cream or even criticize such a position.
Over tastes there is no dispute. You like what you like. We can theorize all we want over sociological or cultural impacts, or perhaps attempt to trace biological reasons that may explain why some people like what they like – but ultimately we must act in the world (Proposition #1) and so we shrug our shoulders and get on with life. We accept that people believe, like, and prefer different things and that’s that.
Being strange rationalising creatures, you don’t have to scratch humans very deeply before you encounter convictions or beliefs that make no sense whatsoever. Most of the time we’re talking plainly irrational or internally inconsistent beliefs, but, like most tastes and political opinions, they are very cheap to hold – you are generally not taxed for, and suffer no noticeable disadvantages from, holding erroneous or contradictory beliefs. Sometimes the cost of holding an erroneous belief might even be negative: openly professing it gives us benefits with our in-group. (Yes, we’re all Caplanites now.)
When I make a decision in the world (as I must to stay alive, Proposition #1), I occasionally feel the urge to explain that choice to others – because they ask or because I submit to the internalised pressure. I might say “eating ice-cream is good for me” (Proposition #2a).
Now, most people would probably consider that statement obviously incorrect (ice-cream is a sweet, a dessert; desserts make you fat and unhealthy, i.e. not good for you). The trouble is, of course, that I didn’t specify what I meant by “good for me”. It’s really unclear what that exactly means, since we don’t know what I have in mind and what I value as “good” (taste? Longevity? Complete vitamins? How it makes me feel? Social considerations?).
This version of Proposition 2a therefore essentially reverts back to a Proposition 1 claim; you can like whatever you want and you happen to like what ice-cream does to you in that dimension (taste, feeling, social consideration). Anything still goes.
I might also offer a slightly different version (Proposition #2b) where I say “eating ice-cream is good for me because it cures cancer”.
Aha! Now I’ve not only given you a clear metric of what I mean by ‘good’ (curing cancer), I’ve also established a causal mechanism about the world: ice-cream cures cancer.
By now, we’ve completely left the domain of “everything goes” and “over tastes there is no argument”. I’m making a statement about the world, and this statement is ludicrous. Admittedly, there might be some revolutionary science that shows the beneficial impacts of ice-cream on cancer, but I seriously doubt it – let’s say the causal claim here is as incorrect and refuted as a claim can possibly be.
Am I still justified in staying with my conviction and eating ice-cream? No, of course not! I gave a measure of what I meant by ‘good’ and clear causal criteria (“cure cancer”) for how ice-cream fits into that – and it’s completely wrong! I must change my beliefs, accordingly – I am no longer free to merely believe whatever I want.
If I don’t change my behaviour and keep enjoying my delicious chocolate-flavoured ice-cream, two options remain: I can surrender my outrageous claim and revert back to Proposition 1. That’s fine. Or I can amend Proposition 2b into something more believable – like “eating ice-cream makes me happy, and I like being happy”.
If we substitute ice-cream for – I posit with zero evidence – the vast majority of people’s beliefs (about causality in the world, about health and nutrition, about politics, about economics and religion), we’re in essentially the same position. All those convictions, ranging from what food is good for you, to how that spiritual omnipotent power you revere helps your life, to what the government should do with taxes or regulations to reduce poverty, are most likely completely wrong.
Sharing my own experiences or telling stories about how I solved some problem is how we socially interact as humans – that’s fine and wonderful, and essentially amounts to Proposition 1-style statements. If you and I are sufficiently alike, you might benefit from those experiences.
Making statements about the world, however, particularly causal relations about the world, subjects me to a much higher level of proof. Now my experiences or beliefs or tastes are not enough. Indeed, it doesn’t even matter if I invoke the subjective and anecdotal stories of a few friends or this or that family member. I’m still doing sh*t science, making claims about the world on seriously fragile grounds. It’s not quite Frankfurt’s “Bullshit” yet, since we haven’t presumed that I don’t care about the truth, but as a statement about the world, what I’m saying is at least garbage.
I am entitled to my own beliefs and tastes and political “opinions”, whatever that means. I am not, however, entitled to my own facts and my own causal mechanisms of the world.
Keeping these spheres separate – or at least being clear about moving from one to the other – ought to rank among the highest virtues of peaceful human co-existence. We should be more humble and realise that on most topics, most of the time, we really don’t know. But that doesn’t mean anything goes.
I’ve been working on a paper — since I’ve long tabled the idea of a future in academia, or scholarship, I have only a few projects I want to get done in substitution — to expand the work of Paul Feyerabend into a political philosophy. Feyerabend’s primary discipline was the philosophy of science and epistemology, where he considered his central thesis to be “methodological” or “epistemological anarchism.”
His dialogues, essays, and lengthier expositions of what is sometimes called “epistemological Dadaism” can be roughly summed up as:
For any scientific conclusion C, there is no one route from empirical premises P.
“Scientific” here being widely inclusive and contemporary with social standards, as a function of Feyerabend’s opposition to positivism. The hubbub of observation statements, empirical tests, auxiliary hypotheses, inferences, axioms, etc. that govern a research programme is only one of multiple possible sets that have historically yielded similarly sanctified discoveries. For any B, there is no single route from A. Describing the scientific method as a route of Popperian falsification, for instance, cuts out Galileo, or cuts out Einstein, he would argue.
Feyerabend swore off the doctrine of political anarchism as a cruel system, although he was often inspired by revolutionary anarchists like Bakunin. Still, his philosophy lends support to social power decentralization in general — even with sometimes grotesque deviations like his support for government suppression of academic inquiry.
I’ll be working on the paper on this, but in lieu of that, I think the primary connection between Feyerabend’s work on epistemology and a potential work in political science is the support of his epistemological thesis — for any scientific conclusion C, there is no one route from empirical premises P — for a broader methodological statement, namely, that for any outcome C, there is no one route from starting point A. For politics, this could mean:
For any social-organizational outcome O, there is no one route from given state of nature N.
Where “route” can clearly apply to ranges of government involvement, or zero government involvement. Feyerabend’s writings do not support so liberal a reading in general, but in a constrained domain of social organization and especially knowledge-sharing (he was keen on dissolving hierarchies for their disruption of information), there might be a lot of connection to unearth.
This is, again, part of a larger project to bring Feyerabend more into the liberty spectrum — his writings are hosted on marxists.org, after all — or at least on the radar for inspiration. I’ll be posting more, and hopefully defending it, in the future.
Today, when a terrorist attack happens, the press too often avoids naming the perpetrators and instead retreats to noncommittal phrases like “car hits people.” But not long ago, the press usually blamed fundamentalists for terrorist attacks.
The name fundamentalist originated, interestingly enough, in Protestant circles in the US. Only much later was it applied to other religions, and then mostly to Muslims. Among Protestants, the name fundamentalist designated people opposed to theological liberalism. Let me explain. With the Enlightenment, an understanding grew in theological circles that modern man could not believe in the supernatural aspects of the Bible anymore. The answer was theological liberalism, a theology that tried to maintain the “historical Jesus” while stripping him of anything science couldn’t explain. Fundamentalism was an answer to this. Fundamentalists believed that some things are, well… fundamental! You can’t have Jesus without the virgin birth, the many miracles, the resurrection, and the ascension. That would not be Jesus at all! In other words, it is a matter of principia: either science comes first and faith must submit, or faith comes before science.
The great observation made by fundamentalist theologian Cornelius Van Til is that fundamentalist Protestants are not the only fundamentalists! Everybody has fundamentals. Everybody has basic principles that are themselves not negotiable. If you start asking people “why” eventually they will answer “because it is so.”
If everybody has starting points that are themselves not open to further explanation, that means that our problem (and the problem with terrorism) is not fundamentalism per se. Everybody has fundamentals. The question is what kind of fundamentals do you have. Fundamentals that tell you about the holiness of human life, or fundamentals that tell you that somehow assassinating people is ok or even commendable?
Steven Pinker, the Harvard professor, recently published Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.
It is a fine book that basically sets out to do what its subtitle promises. It does so covering a wide range of ideas and topics, and discusses and rejects most arguments often used against Enlightenment thought, which Pinker equates with classical liberalism.
Those who know the work of Johan Norberg of the Cato Institute, the late Julian Simon’s writings, Jagdish Bhagwati’s magisterial In Defense of Globalization, or last but not least, Deirdre McCloskey’s Bourgeois Trilogy will be updated on the latest figures, but will not learn much in terms of arguments.
Those new to the debate, or searching for material to defend classical liberal ideas and values, will find this a very helpful book.
“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”
Thomas Kuhn, The Structure of Scientific Revolutions p. 111
I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.
My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”
And that seems to be the sentiment of the times: bias encompasses the human experience, it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies or that phenomenon of constructive memory; or inductive, like selectivity or ignoring base probability; or, as has been common to think, ingrained into experience itself.
The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance, and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions of the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside of their singular ethnic group. More clinical methodology seeks to transform mental functions from automatic to controlled, and thereby enter reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.
If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking goes out the window. Not only do we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as essentially “higher truths” disappear from human lexicon.
The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell, and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to those of the Continentals.
Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique noted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, the word patriarchy hit an all-time high on Google search.) It is not a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.
Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes, and the broader academic community, too, now holds that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation-bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.
Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.
But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal power with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”
Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for philosophical Dadaist. In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of church and state as well as a separation of state and science. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still queries for anyone who loathes the hegemony of science in the Western world.
For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study on contemporary legal science to identify community needs, evaluating laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in its list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.
For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it entirely disagrees with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.
So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4. This is, however, specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures — not only do we not know what to do; when we try to do things, we lose elements of our civilization that everyone agrees are valuable.
Although Aristotle separated pathos, ethos and logos (adding that all informed each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with history and connotations like any other human activity, falling far short of sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.
From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance and tolerance for each others’ ideas.
This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city of the 1960s Free Speech Movement. It seems to be the worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently Evergreen. The names each side uses to attack each other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide with groups that increasingly refuse to argue their own side and prefer silencing their opposition.
There is not a tolerant left or tolerant right any longer, in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.
Moral (cultural) relativism in particular only promises rejecting our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killing, caste, the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers in Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…
In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”
Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.
Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.
Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, races: that different groups of humans simply cannot get along together. The Final Solution is not about extermination anymore but about segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and who despise globalism.
The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one-percent. Our minds are not free; fortunately, they will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.
Without the commitment to reasoned debate, tribalism has deepened the polarization and the lack of humility. Each side also accepts science selectively, if it does not question science’s very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics: one side or the other rejects it outright, without studying the material deeply enough to engage with what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.
Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which one can change the other, both “parts of a single dialectical process.”
“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” (p. 233)
Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple for the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and helping to defeat the tyranny of Voldemort foretold by Socrates.
Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.
When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and serves to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.
The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) science is irrational, and can thus be countered by lived experience (or whatnot) or (b) science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle-logic: the signal of a very strong resistance. We see, though, that we need neither accept nor deny these claims and lose anything. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.
Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand, during an encounter with Officer Darren Wilson. The second, independent report commissioned by the family concluded the first shot had not in fact been at close range. After these two conflicting reports, the Department of Justice released the final investigation report, and determined that material in the hand wound was consistent with gun residue from an up-close encounter.
Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and testimonies from the officer and Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route will lead to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis County grand jury half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.
Introducing ad hoc hypotheses, e.g., the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.
If we don’t posit a baseline-rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis in the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay; if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer-review functionally untenable? Should we start embracing papers written without any intention of making sense, to look at knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is treated as perfectly normal there.
There were other attacks on the hoax, from Slate, Salon and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.
Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.
The modern infatuation with human biases, as well as Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories, and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures to compare belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? Kant’s system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”
Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgias’s nihilism about communication, and the Presocratic relativism that traps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex, gender — allows open debate across paradigms.
In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.
However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement, and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course where it has been applied, and treat those results as a rhetorical indication of which path to take.
A few days ago I asked whether the social sciences could benefit from being unified. The post was not meant to make an argument for or against unification, although I myself favor a form of unification. The post was merely me thinking out loud and asking for feedback from others. In this follow-up post I argue that the social sciences are already in the process of unification, and that a better question is what type of unification this will be.
What is a social science?
First, though, allow me to define my terms, as commentator Irfan Khawaja suggested. By social sciences I mean those fields whose subjects are acting individuals. For the time being the social sciences deal with human beings, but I see no particular reason why artificial intelligence (e.g. robots in the mold of Isaac Asimov’s fiction) or other sentient beings (e.g. extraterrestrials) could not be studied under the social sciences.
The chief social sciences are:
Economics: The study of acting individuals in the marketplace.
Sociology: The study of acting individuals and the wider society they make up.
Anthropology: The study of the human race in particular.
Political Science: The study of acting individuals in political organizations.
There are of course other social sciences (e.g. Demography, Geography, Criminology) but I believe the above four are those with the strongest traditions and distinctive methodologies. Commentators are more than encouraged to propose their own listings.
In review the social sciences study acting individuals. A social science (in the singular) is an intellectual tradition that has a differentiating methodology. Arguably the different social sciences are not sciences as much as they are different intellectual schools.
Why do I believe the social sciences will be unified?
On paper the social sciences have boundaries among themselves.
In practice, though, the boundaries between the social sciences blur quickly. Economists in particular are infamous for crossing the line, so much so that the term ‘economics imperialism’ has been coined to refer to the application of economic theory to non-market subjects. This imperialism has arguably been successful, with economists winning the Nobel prize for applying their theory to sociology (Gary Becker), history (Douglass North, Robert Fogel), law (Ronald Coase), and political science (James M. Buchanan). The social sciences are in the process of being unified via economic imperialism.
Imperialism is a surprisingly apt term for the phenomenon taking place. Economists are applying their tools to subjects outside the marketplace, but little exchange is occurring in the other direction. As the “Superiority of Economists” paper discusses, the other social sciences read and cite economics journals, but the economics profession itself is very insular. The other social sciences are being treated as imperial subjects who must be taught by economists how to conduct research in their own domains.
To an extent this reflects the fact that the economics profession managed to build a rigorous methodology that can be exported abroad and, with minimal changes, be used in new applications. I think the world is richer insofar as public choice theory has been exported to political science or price theory introduced to sociology. The problem lies in the fact that this exchange has been so unequal that the other social sciences are not taken seriously by economists.
Sociologists, Political Scientists, and Anthropologists might have good ideas that economics could benefit from, but it is only through great difficulty that these ideas are even heard. It is harder still for these ideas to be adopted.
Towards Federalizing the Social Sciences
My answer to economic imperialism is to propose ‘federalizing’ the social sciences, that is to say to give the social sciences a common set of methodologies so that they can better communicate with one another as equals but still specialize in their respective domains.
In practice this would mean reforming undergraduate education so that social science students take, at minimum, principles courses in each other’s fields before taking upper-division courses in their specializations. These classes would serve the dual purpose of providing a common language for communication and encouraging social interaction between the students. Hopefully social interaction with one another will lead students to respect the work of their peers and discourage any one field from creating a barrier around itself. A common language (in the sense of methodology) meanwhile should better allow students to read each other’s work without the barriers that jargon and other technical tools create. It is awful when a debate devolves into a semantics fight.
Supplementary methodologies will no doubt be introduced in upper division and graduate study, reflecting the different needs that occur from specialization, but the common methodology learned early on should still form the basis.
The unification of the social sciences need not mean the elimination of specialization. I do however fear that unless some attempt is made at ‘federalizing’ the social sciences we will see economics swallow up its sister sciences through imperialism.
As always I more than encourage thoughts from others and am all too happy to defer to better constructed opinions.
This past month a paper by Marion Fourcade, Etienne Ollion, and Yann Algan on the ‘Superiority of Economists’ has made the rounds around the web. Our own Brandon has made note of it before. I have given the paper some thought and cannot help but wonder whether the social sciences could benefit from being synthesized into a unified discipline.
Some background: I have been studying economics for a little under half a decade now. By all means I’m a new-born chicken, but I have been around long enough to have grown a distaste for certain elements of the dismal science. In particular I am disturbed by the insular nature of economists; relatively few seem interested in dropping by the History or Political Science departments next door to see what they’re working on. I cannot help but feel this insularity will be economics’ undoing.
It should be no surprise that I hope to enter CalTech’s Social Science program for my PhD studies. The university is famed for its interdisciplinary nature and its social science program is no different. Its students are steeped in a core composed of microeconomics, statistics, and the other social sciences. For a while the New School in New York City offered a similar program.
I am sure there would be those who would object to synthesizing the social sciences into a unified discipline. Sociology and Economics might be more easily combined (as they were by folks such as Gary Becker) than Economics and Anthropology.
I am eager to hear other’s thoughts on this. Is the gap between the social sciences too large for them to be unified? Is unification even desirable? Should we content ourselves with an annual holiday dinner where we make fun of our common enemy?
In Part One of Scholarly Conspiracies, Scholarly Corruption and Global Warming, I drew on my own experience as a scholar to describe how the scientific enterprise can easily become corrupted for anodyne, innocent reasons, for reasons that are not especially cynical. I argued, of course, that this can especially happen in connection with such big, societal issues as climate change. I concluded that the findings of scientists do not, as a matter of principle, merit the quasi-religious status they are often granted. It follows from this that the Left’s attempt to stop any debate on the ground that science has spoken is grotesque.
I should have added in Part One that at different times in my career, I may have benefited by the kind of corruption I describe as well as having been hurt by it. Of course, one thing does not compensate for the other. Corruption is corruption; it constitutes more or less wide steps away from the truth whether I profit by it or whether it harms me. These things just add up, they don’t balance each other out.
Once you open your eyes, it’s not difficult to find gross derailments of the scientific enterprise. To be more precise, the transformation of limited scientific results into policy often gives rise to abuses. Sometimes, they are gross abuses verging on the criminal.
A recent book describes in detail how the slim results of 1950s studies that were obviously flawed both in their design and with respect to data collection were adopted by the American scientific establishment as policy. They resulted in a couple of generations of Americans being intellectually terrorized into adopting a restrictive, sad, un-enjoyable diet that may even have undermined their health. The book is The Big Fat Surprise by Nina Teicholz .
For most of my adult life, I limited my own intake of meats because saturated fats were supposed to give me cardiac illness and, ultimately, heart attacks. I often thought something was fishy about the American Heart Association’s severity concerning saturated fats because of my frequent stays in France. There, I contemplated men of all ages feasting on pork chops fried in butter followed by five different kinds of cheese also eaten with butter. Then, they would have a post-prandial cigarette or two, of course. None of the men I knew exercised beyond walking to shop for pâtés, sausages, and croissants sweating butter (of course). Every time I checked – and I checked often – Frenchmen had a longer life expectancy than American men (right now, it’s two and a half years longer).
Yet, such was the strength of my confidence (of our confidence) in the official medical-scientific establishment that I bravely followed my stern semi-macrobiotic diet even while in France. In my fifties, I developed Type II diabetes. None of my four siblings who lived and ate in France did. I understand well the weakness of such anecdotal evidence. And I know I could have been the one of the five who hit the wrong number in the genetic lottery. (That would have been the inheritance from my grandfather who died at 26 of a worse illness than diabetes – a German bullet, in his case.) Yet, if there are quite a few cases like mine where siblings constitute a natural control for genetic factors, it would seem worth investigating the possibility that a diet high in carbohydrates is an actual cause of what is often described as an “epidemic” of Type II diabetes. If there are many more cases than there were before the anti-fat campaign, controlling for age, something must have changed in American society. The diet low in saturated fats pretty much forced on us since the fifties could be that societal change.
I am not saying that it is. I am saying it’s worth investigating, with proper design and normal rules of data selection. I am not holding my breath. I think the scientific establishment will not turn itself around until its biggest honchos of the relevant period pass away. Teicholz’s book may turn out to have many defects because she is more a journalist than a scientist. I am awaiting with great attention the rebuttals from the scientific establishment, or – you never know – their apologies.
And then, there is the old story of how it took twenty years for the American Medical Association to change its recommendation on how to treat the common duodenal ulcer after an obscure Australian researcher showed that it was almost always caused by a bacterium. (The story was told about twenty years ago, in the Atlantic Monthly, I think I remember. You can look it up.)
The de facto scientific establishment is not infallible but it usually wants to pretend that it is. It’s aided in its stubbornness by the religiously inspired passivity of ordinary people who were raised with misplaced all-around reverence for science and anything that appears, rightly or wrongly, “scientific.”
The climate change lobby, wrapped in a pseudo-scientific mantle, still thrives in several policy areas in spite of most Americans’ relative indifference to the issue. Two of its main assets are these: first, it is well served by irresponsible repetition of a simplified form of its message, which amounts to constant, uncritical amplification; second, even well-educated people usually don’t pay a lot of attention to detail, and don’t read critically, because they are busy.
Now, I am not going to spend any time denouncing the myriad airheads with short skirts who add their own climate change sage commentary to their presentation of ordinary weather reports. (I am a man of vast culture, I listen to the same tripe in three different languages!) As I keep saying, I don’t beat on kindergartners. Let’s take National Geographic, instead, that justifiably respected monument to good information since 1888.
The October 2013 issue presents another striking photographic documentary intended to illustrate fast climate change. One of the photographic essays in the issue concerns, predictably, the alleged abnormal melting of glaciers. The talented photographer, James Balog, contributes his own completely superfluous, judgmental written commentary:
We know the climate is changing…. I never expected to see such huge changes in such a short period of time.
The guy is a photographer, for God’s sake! He has an undergraduate degree in communications. His credentials to pronounce on long-term climate change are…? Even the National Geographic, generally so careful about its assertions, couldn’t resist, couldn’t bring itself to tell him, “This is outside your area of competence, STFU!” Why not let the janitor also give his judgment in the pages of National Geographic? This is a free country after all. Most people simply don’t have the energy to notice thousands of such violations of good scientific practice.
Now to inattention, still with the venerated National Geographic. The September 2013 issue, entitled “Rising Seas,” presents a truly apocalyptic future in case global warming is not controlled. As is usually the case with N.G., the article is chock-full of facts from studies. The article is also tightly argued. N.G. is normally careful about what it asserts. To make things even clearer, it offers a graph on pp. 40-41 purporting to demonstrate a disastrous future for the earth starting very soon.
Being a leisurely retired man endowed with an unusually contrary personality, being furthermore well schooled in elementary data handling, I did the obvious with the graph, the obvious not one educated person in 10,000 would think of doing, or care to do. I took my desk ruler to the graph itself. Here is what I discovered:
Between 1880 and 2013, there was less than a one-foot rise in the level of the oceans, according to National Geographic. Of course, those 133 years cover the period of most rapid rise in the emission of alleged greenhouse gases. Imagine if National Geographic had run an article entitled:
"Less Than a Foot of Ocean Rise in Spite of More Than 120 Years of Greenhouse Emissions"
Many citizens would respond by thinking that maybe, possibly, there is global warming, but it's not an urgent problem. Let's take our time looking into the phenomenon more carefully, they would say. Let's try to eliminate alternative explanations to greenhouse gases if we find that there is indeed abnormal warming. After all, how much of a rush would I be in even if I were convinced that the water in my basement rises by almost one tenth of an inch each year, on average?
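For what it's worth, the back-of-envelope rate behind the basement analogy checks out. The figures below are the author's ruler reading of the magazine's graph, not an official dataset:

```python
# Sanity-check of the ruler-reading arithmetic (assumed figures:
# less than 12 inches of sea-level rise between 1880 and 2013).
total_rise_inches = 12.0           # upper bound: "less than one foot"
years = 2013 - 1880                # 133 years
rate = total_rise_inches / years   # inches per year
print(round(rate, 3))              # → 0.09, i.e. almost a tenth of an inch
```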
This is not an absurd mental exercise. The business of science is to try to falsify, and falsify again. When you get interesting results, the scientific establishment (if not the individual scientist who authored the findings) is supposed to jump on them with both feet to see if they stand up. Instead, in connection with global warming, scientists have allowed the policy establishment, and those in their midst who influence it, to do exactly the reverse: if you see anything you like in a scientific study, try hard for more of the same; if you find something that contradicts your cause, bury it if you can, ignore it otherwise. You will get plenty of help in doing either.
Scientists have collectively become complicit in a massive anti-scientific endeavor with many religious features.
97% of scientists, blah, blah…. Ridiculous, pathetic.
Thus challenged, some people I actually like throw reading assignments at me. Some are assignments in scholarly journals; some, sort of. Apparently, I have to keep my mouth shut until I reach a high degree of technical competence in climate science (or something). I don't need to do these absurd assignments. I am not blind and I am not deaf. I see what I see; I hear what I hear; it all sounds familiar. Been there, done that!
A long time ago, I accepted a good job in France in urban planning after receiving my little BA in sociology from Stanford. I was a slightly older graduate and I had no illusions that I knew much of anything then. I had some clear concepts in my mind and I had learned the basics of the logic of scientific inquiry from old Prof. Joseph Berger and from Prof. Bernard Cohen. I had also done some reading in the "excerpts" department, including the classic trio of Max Weber, Emile Durkheim, and Karl Marx. Only a couple of weeks after I took my job, my boss sent me to a conference of urban sociologists in Paris. Having been intellectually spoiled by several years in the US, and conscious of my limited knowledge of urban planning, I asked many questions, of course.
In the weeks following the meeting, I became aware of a circulating rumor that presented me as an impostor. This guy coming out of nowhere – the USA – cannot possibly have studied sociology because he does not know anything, French sociologists thought. I had to ask how the rumor started. I was aware that I knew little, but I did not think it was exactly "nothing." Besides, most of my questions at the conference had not been answered in an intelligible manner, so I was not convinced that my comparison set – French sociologists working in city planning – knew much more than I did.
Soon afterward, I wrote a "white paper." It was about the eastern region where I had been tasked, as part of a multidisciplinary team, to plan for the future until 2005 (the year was 1967). The white paper gave a list of social issues city planners had to face at this point, the starting point of the planning endeavor. As young men will do, I had allowed myself short flights of speculation in the white paper, flights I would not have indulged in a few years later. My direct supervisor, an older French woman who was supposed to be a sociologist, read the whole ambitious product, or said she had, and made no comments except one. She took exception to one of my speculative flights in which I made reference to the idea that much societal culture rises up from the street. It was almost an off-hand remark. Had that part been left out, the white paper would have been pretty much the same. The supervisor insisted I had to remove that comment because, she said exactly, "Marx asserts clearly that culture comes from the ruling class." She told me she would not allow the white paper to be presented until I extirpated the offending statement.
In summary: The woman had nothing to say about the many parts of the report that were instrumental to the endeavor our team was supposed to complete, about that for which she and I were explicitly being paid. She had nothing to say about the likely mistakes the report exhibited because of my short experience. Her self-defined role was strictly to protect what she took to be Marxist orthodoxy, even when it was irrelevant. There was a double irony there. First, the government that employed us was explicitly not in sympathy with any form of Marxism. The woman was engaging in petty sedition. Second, Karl Marx himself was no lover of orthodoxies. He would have abhorred her role. (Marx is said to have declared before his death, "I am not a Marxist"!)
In any event, I was soon rid of the ideological harridan and I was able to do my job after a fashion. For those who like closure: I went back to the US to attend graduate school, at Stanford again. Two years later, my old boss called me back. He had come up in the world. He was in charge of a big Paris metropolitan area urban research institute. He begged me, begged me on the phone, to go back to France and take charge of the institute's sociology cell. He said that he understood not a word of what the "sociologists" there said to him. He added that I was the only sociologist he had ever understood. I yielded to his entreaties and I promised him a single year of my life. I interrupted my graduate studies and flew to Paris. In the event, I gave the sociologists at the institute one month's warning. Then, I summoned each one of them to explain to me orally how his work contributed to Paris city and regional planning. ("What will it change about the way this is currently being done?" I asked.) They did not respond to my satisfaction and I fired all six of them. I replaced them with people who could keep their Marxism under control. My boss was grateful. I could have had a great career in France. I chose to return to my studies instead.
Three years later, having completed my doctorate, I found myself at a critical juncture common to all those who go that course. You have to turn your doctoral thesis into papers published in double-blind refereed journals. (Here is what this means: "What's Peer Review and Why It Matters")
That's a lot like leaving kindergarten: no more cozy relationships, no more friends assuring you that your work is just wonderful; the real world hits you in the face. The review process in good journals is often downright brutal. Anyone who does not feel a little vulnerable at that point is probably also a little silly. To make matters worse, the more respected the journal, the harder it is to get in, and the better for your academic career if you do. As a rule, if you have not achieved publication in a first-rate journal within the first three or four years after completing your doctorate, you will be consigned forever to second-tier universities or worse.
Be patient, I am just setting the stage for what’s coming.
Much of my early scholarly work happened to take place within a school of research dominated by "neo-Marxists." It was not my choice. I was interested in problems of economic development that happened to be largely in the hands of those people. My choice was between abandoning my interests or buckling up and taking my chances. I buckled up, of course. My first article to be published was innovative but a little esoteric. (Delacroix, Jacques. "The permeability of information boundaries and economic growth: a cross-national study." Studies in Comparative International Development 12-1:3-28. 1977.) I presented it to a specialized journal, and therefore not one that could be called "first tier." It happened to contain nothing that would offend the neo-Marxists. It took less than six months to have it accepted for publication.
The second published paper out of my dissertation struck at the heart of neo-Marxist convictions. It demonstrated – using their methods – that the parlous condition of the Third World – allegedly caused by capitalist exploitation – could be remedied through one aspect of ordinary good governance. I submitted it to one of the two most respected journals in the discipline (the American Sociological Review). All the reviewers who had the technical skills to review my submission were also neo-Marxists or sympathetic to their doctrine. The paper reported on a study conducted according to methods that were by then common. Having the paper accepted for publication took more than three years. It also took a rare personal intervention by the journal's editor, whom I somehow managed to convince that the reviewers he had chosen were acting unreasonably. (The paper: Delacroix, Jacques. "The export of raw materials and economic growth: a cross-national study." American Sociological Review 42:795-808. 1977.) No need to read either paper.
Am I telling you here a story of conspiracy or a story of academic corruption? Yes, I faced a conspiracy, but it was not a conspiracy against me personally and it was mostly not conscious. The only people besides me who had the skills to pass judgment on my paper were not numerous. They were a small group that shared a common understanding of the reality of the world. It was not a cold, cerebral understanding. Those people formed a community of sentiment. They believed their work would contribute to the righting of a worldwide injustice, a "global" injustice committed against the defenseless people of underdeveloped countries. Is it possible that their ethical faith influenced their judgment? To ask the question is to answer it, I think. Did their faith induce them to close their eyes when others from their own camp cut some research corners here and there? Conversely, were their eyes wide open when they were reviewing for a journal a submission whose conclusion impaired their representation of the world? In that situation, did they overreact to an uncrossed "t" or an undotted "i" in a paper that undermined their beliefs? Might be. Could be. Probably was. Other things being equal, they may have just thought it would be better if these annoying Delacroix findings were not publicized in a prime journal. Delacroix could always try elsewhere anyway.
So, yes, I faced corruption. It was not conscious, above-board corruption. It was not cynical. It was a corruption of blindness, much of it deliberate blindness. The blindness was all the sturdier because it was seldom called into question. Those who would have cared did not understand the relevant techniques. Those who knew them shared in the blindness. This is a long way from cynical, deliberate lying. It's just as destructive, though. And it's not only destructive for the lives of the likes of me who don't belong to the relevant tribe. It's destructive of what ordinary people think of as the truth. That is so because – however unlikely that sounds – the productions of elite and abstruse journals usually find their way into textbooks, even if it takes twenty years.
Are the all-powerful editors of important journals part of the conspiracy? Mine were not, but they tended to adhere to imperfect rules of behavior that made them objective accomplices of conspiracies. Here is the proof that the editor of the particular journal tried to be impartial. Only a month after he accepted my dissenting paper, the editor assigned me to review a submission from the same neo-Marxist school of thought that trumpeted another empirical finding proving that, blah, blah…. After one reading of the paper, my intuition smelled a rat. I spent days in the basement of the university library, literally days, taking apart the empirical foundation of the paper. I found the rat deep in its bowels. To put it briefly, if you switched one little thing from one category to another, all the conclusions were reversed. There was no compelling argument to put that one thing in one category rather than in the other. The author had chosen the coding that put his labor of love in line with the loves of his neo-Marxist cozy-buddies. If he had not done it, his pluses would have become minuses, his professional success anathema. In the event, the editor agreed with my critique and dinged the paper for good. Nothing worse happened to the author. No one could tell whether he was a cheat. Or, no one would. No one was eager to. The editor had no appetite for a fight. He let the whole matter go.
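The sensitivity described here is easy to reproduce with a toy example. The numbers below are invented for illustration and have nothing to do with the actual paper: the point is only that reclassifying a single borderline case can flip the sign of a group comparison.

```python
# Toy illustration: one reclassified case reverses the conclusion.
# Invented growth rates for six hypothetical countries.
growth = {"A": 1.0, "B": 1.5, "C": 2.0, "D": 2.5, "E": 3.0, "F": 10.0}

def mean_gap(exporters, others):
    """Mean growth of 'others' minus mean growth of 'exporters'."""
    avg = lambda names: sum(growth[n] for n in names) / len(names)
    return avg(others) - avg(exporters)

# Original coding: the ambiguous case F counted among the non-exporters.
print(mean_gap(["A", "B", "C"], ["D", "E", "F"]))   # positive (about +3.67)

# Recode the single ambiguous case F as an exporter.
print(mean_gap(["A", "B", "C", "F"], ["D", "E"]))   # negative (-0.875)
```

One borderline country, two defensible codings, two opposite conclusions: exactly the kind of choice a reviewer must dig into the data to catch.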
Myself, I came out of this experience convinced that it was likely no one else in the whole wide world had both the skills and the motivation to dive into the depths of the paper to find that rat. It's likely that no one else would have smelled a rat. It's possible that if I had not still been smarting from three years of rejection of my own work, I would not have smelled the rat myself. The editor had the smarts, the intuition fed by experience I would say, to put my unique positioning, my combination of competence and contrariness, to work. He put it to work in defense of the truth. That fact is enough to exonerate him from complicity in the conspiracy I described. To answer my own question: Do I think that powerful scientific journal editors are often part of a conspiracy of the right-thinking, of an orthodox cabal? I think not. Do they sometimes or often fall for one? Yes.
For those who like closure: My interests switched later to other topics. (See vita, linked to this blog’s “About me.”) I think the neo-Marxist school of thought to which I refer above gradually sank into irrelevance.
After that experience, and several others of the same kind, do I have something better to propose? I don't, but I think the current system of scholarly publication does not deserve anything close to religious reverence. Even if there were anything close to a "consensus" of scientists on anything, that should not mean that the book is closed. Individual rationalism also matters. It matters more, in my book.
What does this story of reminiscences have to do with global warming, climate change, climate disruption, you might ask? Everything, I would say. More on the connection in Part Two. [Update: Here is part 2, as promised! – BC]