The Sad Retreat

Do not go gentle into that good night / Rage, rage against the dying of the light.


~ Dylan Thomas

Thomas’ villanelle to his dying father is one of the most iconic English poems of the 20th century. It is also curiously relevant today, though not in a literal sense. The traditional American zeitgeist is the willingness to step forward fearlessly into the unknown, and in doing so to illuminate it and to dispel infantile terror of the dark. Sadly, the contemporary spirit is one of moribund confidence and the acceptance of stagnation.

There was nothing more indicative of the American spirit than the opening lines of Star Trek: The Original Series: “To boldly go where no man has gone before.” The root of the phrase’s power lies in action, an acceptance that man is capable of seizing control and carrying himself into space. Yet, despite the television series’ timing, the American people were not going boldly into the night, or the darkness of space, but rather were starting a retreat that continues into the present.

The retreat is not an apparent one. To all appearances there has been no pell-mell flight from a battlefield, no abandoned weaponry and protective gear to signal a loss of confidence. To quote Kevin D. Williamson, “Nothing happened.” It is the lack of action, the stagnation, that signals that a retreat occurred.

In his book Slouching Towards Gomorrah (1996), the late Robert Bork (1927 – 2012) catalogued the ways he believed America had declined since his youth. Buried among the musings are some anecdotes about Bork’s time at Yale as a young law professor in the late 1960s. In that decade, according to Bork, the university had quietly relaxed its admissions criteria and admitted applicants who would not have qualified under the previous standard. From a position of authority, Bork observed as these young people – mostly men, as the problem was concentrated at the undergraduate level and the only women at Yale at the time were graduate students – struggled academically for lack of adequate preparation. Many of these new students began to flunk as individual faculty refused to dilute their syllabi or grading standards.

As he explained, the stakes at the time were particularly high: in 1968, the period Bork described, the draft and deployment to Vietnam would immediately and ruthlessly punish academic failure. Additionally, the students in question largely came from middle America and bore the full weight of their families’ and local communities’ expectations, which made failure particularly humiliating. Although Bork rejected the Marxist and anti-intellectual aspects of the 1968 student protests, he presented a hidden facet of the protests: young people angry at a system they felt had betrayed them and doomed them to failure. The message these students extracted from a well-intentioned policy change was that they were people who couldn’t – they couldn’t keep up with their peers, they couldn’t succeed, and all the indicators of their time pointed toward a truncated future. In short, they didn’t matter; they were not strictly necessary for broader society. Aside from the property destruction, the turn toward Marxism and anti-intellectualism was a retreat, a flight from reality. With the rout – entirely self-imposed, since the simple solution was to go to the library and catch up rather than to burn the books, which is the choice the students made – the United States unknowingly set off on the path to becoming a nation of “cannots.”

To present an analogy: in the training of thoroughbred racehorses, a promising colt is raced against another, less able one in order to build the former’s confidence. The second horse is not only expected to lose, he is rewarded for doing so – but very often at the cost of his spirit and willingness to compete. The analogy is somewhat limited, since the Yale student protestors of the 1960s chose the role of second horse themselves, but the results – anger, wanton destruction, and futile rage in the face of their inadequacies – are human indicators of broken spirit and lost competitive edge. These two traits have, since the ’60s, trickled down through all echelons of American society, accompanied by all the symptoms of anger and unnecessary misery. The American people have become the second horse.

All statistical evidence indicates that the quality of life in America is higher than ever before; we have record-low rates of crime, better healthcare, and unparalleled access to consumer goods and luxury technologies. Under such circumstances, we should possess an equally high level of national confidence and happiness. But this is not the case. A Gallup poll from late 2017 shows that, despite the country’s increased prosperity under the new administration, subjective, or perceived, happiness has declined since the 2016 election. In other words, middle America is no happier now than it was pre-November 2016: it is less happy. Combined with the rise of “deaths of despair” (the term for deaths from addiction or suicide among middle-aged or younger people) and the mediatized claims of lost opportunity due to – O tempora, o mores – technology and the new economy, the story is one of surrender, nothing else.

In the narrative, especially the one surrounding the 2016 election, the story is one of a middle America neglected and in need of special favors and treatment. As part of this picture, its authors and advocates on both sides of the political aisle sneer at the idea of self-determination, in the way of Michael Brendan Dougherty in the piece that Williamson rebutted with his “Nothing happened.” Technology is a particular target of Dougherty’s ire as somehow destructive to a utopian version of American community and family – his most recent article, from May 1, 2018, was an apology for using the internet as a work medium – yet he ignores the path to financial independence and economic integration that it provides. Hatred of technology and change – a desire to return to the “good old days” – is a symptom of the retreat.

Today the logical and economic fallacy identified by AEI’s Arthur Brooks – “helping” poor people instead of “needing” them – is dominant. Yet it is a betrayal of the American ethos at every level. Protectionism, insularity, and, above all else, a desire to justify the degraded state of the American worker by pinning the fault on a wide range of people and things are all signs of the willful betrayal of the American spirit. Although there are individual Americans who are leaders in technology and new industry, the American people are collectively falling behind, and our policy-makers are rewarding us for becoming the second horse through protectionism and populist speeches that reinforce the notion that there is a wronged group of “left behind.” We are no longer going “boldly where no man has gone before;” instead we are docilely being led into the “good night,” all while thinking that we are raging against it. Without a change, the pasture of irrelevance awaits us.

Eye Candy: the languages of Brazil

[Map: the languages of Brazil]

My only question: no Spanish, anywhere? Not even along the borders?

The Intolerance of Tolerance

Just recently I read The Intolerance of Tolerance, by D. A. Carson. Carson is one of the best New Testament scholars around, but what he writes in this book (although written from a Christian perspective) has more to do with contemporary politics. His main point is that the concept of tolerance evolved over time to become something impossible to work with.

Being a New Testament scholar, Carson knows very well how language works, and especially how vocabulary accommodates different concepts over time. Not long ago, tolerance was meant to be a secondary virtue. We tolerate things that we don’t like, but that at the same time we don’t believe we should destroy. That was the case between different Protestant denominations early in American history: Baptists, Presbyterians, and so on decided to “agree to disagree,” even though they didn’t agree with one another completely. “I don’t agree with what you are saying, but I don’t think I have to shut you up. I’ll let you say and do the things I don’t like.” Eventually, the boundaries of tolerance were expanded to include Catholics, Jews, and all kinds of people and behaviors that we don’t agree with, but that we decide to tolerate.

The problem with tolerance today is that people want to make it a central value. You have to be tolerant. Period. But is that possible? Should we be tolerant of pedophilia? Murder? Genocide? Can the contemporary tolerant be tolerant of those who are not tolerant?

Postmodernism really made a mess. Postmodernism is very good as a necessary critique of the extremes of rationalism present in modernity. But in and of itself it only leads to nonsense. Once you don’t have anything firm to stand on, anything goes. Everything is relative. Except that “everything is relative” – that’s absolute. Tolerance today goes the same way: I will not tolerate the fact that you are not tolerant.

The news cycle vs. current events

The other day, while badgering my fellow Notewriters to blog more often, I mentioned that current events are different from the news cycle, and are still important to dissect and blog about. This distinction between the news cycle and current events was sparked by economist Arnold Kling’s recent post on where he gets the news (found in one of last week’s Nightcaps). Basically, the news cycle is terrible. I rarely pay any attention to it. CNN, a left-of-center media outlet that almost everybody has heard of, has on its front page today (the wee morning hours of 4-23-18) a great example of the news cycle:

  • Kellyanne Conway to Dana Bash: OK, you went there
  • Conway says asking about her husband’s anti-Trump tweets is a ‘double standard’
  • Analysis: Trump’s score-settling creates jarring contrast
  • WSJ: Trump to ask North Korea to dismantle nuclear arsenal before talking sanctions relief
  • Opinion: Macron’s bromance with Trump will come at a price
  • Biographer: Trump has lied since youth
  • Melania Trump plans state dinner on her own
  • Stelter: One Trump lie is crystal clear

You get the idea, and remember that CNN is a well-established, long-running media outlet. Other media outlets that focus on the news cycle are just as bad, if not worse (at least CNN pretends, most of the time, to wrap its clear bias in a cloth of objectivity). This is a far cry from the concept of “current events.” Current events, in my view, are arguments about ideas, events, or even people that take place between at least two sides in a specific time frame. Most of the time, “current events” involve using events (usually) or people (rarely) to defend or attack an idea. You see the difference? Have a better definition?

The news cycle is largely garbage, but it can still be useful, especially for international news. I never visit RealClearPolitics, for example, because it focuses on the news cycle, but I stop by RealClearWorld, which usually conveys the news cycles of other countries, once or twice a day. Even though I’m consuming a news cycle, I’m still learning something because it’s a news cycle about a place very different from my own.

Five or ten years from now, the bullet points from CNN will be useless and forgotten, but the arguments put forth into the stream of current events will be useful and maybe even prized. What baffles me is that the news cycle, while almost universally loathed, is far more popular in terms of consumption than current events. Doesn’t everybody know how to use Google by now?

Eye Candy: Computer games, worldwide

[Map: the most-owned computer games, worldwide]

These are the most-owned games on Steam, a digital distribution platform (wiki). This was fascinating to me for a bunch of different reasons. You can come up with your own, I’m sure. Here are the wikis for the games:

I have played none of these games…

The problem with Brazil (and it’s not socialism)

The problem with Brazil is not Luiz Inácio Lula da Silva. It’s not the Workers’ Party. It’s not socialism.

Certainly one of the most important politicians in Brazilian history was Getulio Vargas. Vargas came to power in a coup (which, symptomatically, most Brazilian historians call a revolution) in 1930. He ended up staying in power, without ever being elected by popular vote, until 1945. Then he peacefully resigned, though not before securing the election of his chosen successor, Eurico Gaspar Dutra. Vargas came back to power immediately after Dutra, and committed suicide while in office. Almost all Brazilian presidents from 1945 to 1964 were from Vargas’ close circle.

Brazilians to this day are still taught that Vargas was a hero, persecuted by an evil opposition. Initially, Vargas was some kind of Brazilian positivist: liberalism, in his view, was weak and slow; what the country needed was a strong technical government, able to identify problems and come up with solutions fast. While in office, however, he became “the father of the poor,” a defender of the lower classes. Nothing could be farther from the truth, of course, but that’s how Vargas is remembered by many.

One of my favorite interpretations of Brazil comes from Sergio Buarque de Holanda. According to Holanda, the problem with Brazil is that Brazilians are cordial. What he means is this: using Weber’s models of authority, he identified that Brazilians were never able to support a legal-rational authority. Vargas was seen as “a father,” not a president. The country is seen as a big family. Lula used a very similar vocabulary and tried to reenact Vargas’ populism.

As I mentioned, Holanda’s interpretation is Weberian. Weber’s most famous book is The Protestant Ethic and the Spirit of Capitalism. The problem with Brazil, on this reading, is that it never went through a Protestant Reformation, and because of that it never developed the “spirit of capitalism” that Weber describes. Brazil is still, to a great degree, stuck with traditional and charismatic forms of authority.

To be sure, Brazil has many features of a modern liberal state. Since the late 18th century, Portugal tried to copy these from more advanced nations, especially England, and Brazil followed suit. But you can’t have the accidents without the substance. Unless Brazil actually goes through a transformation in its soul, it will never become the modern liberal state many want it to be. Quoting Domingo Faustino Sarmiento: “An ignorant people will always choose Rosas.”

Eye Candy: Kurdistan

[Map: Kurdistan, courtesy of the excellent Decolonial Atlas]

Countries with significant Kurdish populations in the Near East: Turkey, Syria, Iraq, and Iran.

Countries with significant Kurdish populations in the Near East that the United States has bombed or put boots on the ground in: Iraq and Syria.

Countries with significant Kurdish populations in the Near East that the United States has threatened to bomb and possibly invade: Iran.

Countries with significant Kurdish populations in the Near East that the United States is allied with: Turkey.

Three of the four countries with significant Kurdish populations in the Near East are (or was, in the case of Iraq) considered hostile to the US government, so using the Kurds to further American Realpolitik in the region seems almost obvious – until you consider that Turkey has been a longtime ally of Washington.

Suppose you’re a big-time Washington foreign policy player. Do you arm Kurdish militias in Syria, encourage continued political autonomy in Kurdish Iraq, finance Kurdish discontent in Iran, and shrug your shoulders at Ankara? Seriously, what do you do in this situation?

From Petty Crime to Terrorism

I grew up in France. I know the French language inside out. I follow the French media. In that country, France, people with a Muslim first name are 5% or maybe 7% of the population. No one estimates that they are close to 10%. I use this name designation because French government agencies are forbidden to cooperate in the collection of religious (or ethnic, or racial) data. Moreover, I don’t want to be in the theological business of deciding who is a “real Muslim.” Yet common sense leads me to suspect that French people who are born Muslims are mostly religiously indifferent or lukewarm, like their nominally Christian neighbors. I am not so sure, though, about recent immigrants from rural areas bathed in a jihadist atmosphere, as occurs in Algeria and Morocco, for example.

Eye Candy: Poo-Pooing in Public, Percentages

[Map: open defecation worldwide, 2015]

My only question is: how do they compile data on this? The World Bank put this thing together, but I can’t see any economists doing fieldwork on this. Maybe the Bank hired anthropologists to do the poo-poo stuff…

The President’s commission on opioids (2/2)

Here’s the second half of an abridged essay I wrote for a public policy course. First half is here, and next week I’ll write about the FDA’s new enemy, kratom.



Epidemic status

The DEA’s 2015 declaration of an opioid epidemic was the first sign of large-scale federal attention to prescription analgesics, to my knowledge. In the CDC’s official glossary, “outbreak” and “epidemic” are interchangeable: “the occurrence of more cases of disease than expected in a given area or among a specific group of people over a particular period of time.”

The classification of addiction as a disease is sometimes controversial. (See also Adam Alter’s Irresistible for a popularized form of the psychological takes on addiction.) For the opioid problem to be an epidemic, the focus must be the addiction rate, and not the overdose or death rate alone. The federal government usually refers to the opioid situation as an epidemic or emergency (which presupposes a value judgment), and when the media have covered it (as with the deaths of Philip Seymour Hoffman, Heath Ledger, and Prince) they have used the same language. One definitive media moment might have been last year, when John Oliver announced to a young progressive crowd that “America is facing an epidemic of addiction to opioids.”

Oliver was referring specifically to addiction — criticizing companies like Purdue Pharma (creator of OxyContin) for misleading or misinformative advertising about addictiveness. But usually it does not seem like the focus is on addiction. As stated, nonmedical usage of opioids is generally down or stabilized over the last couple of years, and the problem is mostly overdoses. (True, these are intimately connected.) This might indicate that cutting the pills with other drugs, or general inexperienced use, is a greater problem than general addiction. So there is an epidemic in the colloquial usage — extensive usage of something which can be harmful — but only questionably under the CDC’s medical definition: usage rates were expected to rise as synthesized morphine-, codeine-, or thebaine-based pain relievers diversified, and they have mostly stabilized, except for heroin (often thought of as beyond opioid status) and the fully synthetic derivatives that get less attention (fentanyl, tramadol).

Why the standard of abuse fails

John Oliver — worth talking about because much of the public plausibly started paying attention after his episode — noted that the pills are assigned to patients and then, even if the patient doesn’t develop an addiction, they end up in the “wrong hands.” What happens at this point? The Commission recommends that companies design their prescription drugs with “abuse-deterrent” formulations (ADFs). After spikes in opioid abuse, Purdue Pharma and other companies began researching mechanisms to prevent abusers from easily obtaining a recreational high by tampering with the pill or capsule. In a public statement, FDA commissioner Scott Gottlieb asserted that the administration’s focus is on “decreasing unnecessary exposure to opioids,” but, recognizing the real role that prescription opioids play in pain relief, Gottlieb continued that “until we’re able to find new nonopioid forms of pain management … it’s critical that we also continue to promote the development of opioids that are harder to manipulate and abuse, and take steps to encourage their use over opioids that don’t offer any form of deterrence.” Some of these abuse-deterrent options include crush resistance and wax coatings that make dissolving more difficult.

However, opioid abuse comes in two forms, which are conflated by the legal language. The first is when a patient takes more than their recommended allocation or takes it in the wrong way. The second is when someone with or without a prescription consumes the drugs purely for recreation. Many drug-savvy abusers of the second variety have adapted methods to get a recreational high while avoiding potential health risks, the most popular being “cold water extraction” (CWE). Most opioid pills contain both a synthesized opium alkaloid (derived from morphine, codeine, or thebaine) and acetaminophen: Percocet contains oxycodone and paracetamol; Vicodin contains hydrocodone and paracetamol. The acetaminophen or APAP has no recreational benefits (it is a pure pain reliever/fever reducer) and can cause severe liver problems in large quantities, so recreational users will extract the opium alkaloid by crushing the pill, dissolving it in distilled water, chilling it to just above freezing, and filtering out the crystallized APAP from the dissolved opioid. This way a greater quantity of the opioid can be ingested without needlessly consuming acetaminophen. Other recreational users who want less of the opium derivative can proceed without CWE and insufflate or orally ingest the pills as-is.

ADFs might be able to dent the number of abusers of the second variety. If the pills are harder to crush (the route of Purdue’s 2010 OxyContin release) or, for capsules, the interior balls are harder to dissolve (the route of Collegium Pharmaceutical’s Xtampza), amateur or moderately determined oxycodone enthusiasts may find the buzz is not worth the labor. As the Commission observes, more than 50% of prescription analgesic misusers get them from friends and family (p. 41) — these are not hardcore aficionados, but opportunists who might be dissuaded by simple anti-abuse mechanisms. Abusers of the first variety, though, are unaffected: at least in the short term, their abuse rests on slightly-over-the-recommended doses or a natural tendency to develop an addiction or non-medical physical dependency. And if the political core of the opioid emergency is patients who develop an addiction accidentally (those who stay addicted to pharmaceuticals and those who graduate to heroin), an abuse-deterrence focus is unlikely to create real change in addiction rates. It could even have the unintended consequence of more overdose deaths among amateur narcotics recreationalists, who aren’t skilled enough to perform extractions and opt to consume more pills in one sitting instead.

Furthermore, ADFs can be incorporated into the naturals and synthetics that are usually bonded with APAP, like codeine, oxycodone, hydrocodone, and tramadol, but they cannot be incorporated into drugs that come in pure form, like heroin or fentanyl. And those are the problem drugs. The NIDA research on drugs involved in overdose deaths, for one, shows that overdose deaths are on the rise across the board (except for methadone), and also that the synthetic opioids are much more deadly than the naturals and semi-synthetics: fentanyl is the biggest prescription analgesic killer (it’s much more potent than morphine, and tramadol is not very good for recreation).

[Chart: drug overdose deaths by opioid category (NIDA)]

(This graph also shows, however, that the natural/semi-synthetic death rate was possibly leveling out but rebounded in 2015.) So ADFs are useless for the drugs chiefly driving the “opioid epidemic.” Making pills harder to abuse only dents the second category of abuser, and does not limit their addictiveness for those prescribed them for postsurgical pain or otherwise.

Moreover, from a libertarian standpoint, the second category of abuser does not really belong in the “public health crisis” discussion. Those who knowingly consume opioids for recreation are not a problem; they are participants in a pleasure-seeking activity that doesn’t tread on others. So long as their costs are not imposed on other people, it might be better to separate them from the “epidemic” status. Blurring the lines between the groups that fall under “abusive” means that those with a side interest in OxyContin on Friday nights are lumped in with addicts suffering from physical dependency. Someone who has a glass of wine each night is not “abusing” alcohol, but we can recognize someone who is an alcoholic; the same distinction should be applied medically to opioid users. By painting all consumers outside of direct medical usage as “abusers,” there can be no standard for misuse, and thus no way for a recreationalist to know how much is too much, when health problems might set in, whether they are really trapped in their recreation, etc. Research and knowledge are threatened by the legal treatment and classification.

Conclusion

To summarize, the government terminology of “abuse” obscures a legitimate distinction that is justified on both medical-political and civil liberty grounds. Some of the approaches in the Commission report, like the market-based CMS package recommendation, will likely succeed at quelling opioid exposure (and thus addiction and overdoses), while other maneuvers like an education campaign or ADFs should be treated with cautious skepticism. The trends show that heroin and fentanyl are actually the biggest contributors to the opioid epidemic, although semi-synthetics are climbing again in overdose deaths after leaning toward stabilization two years ago. Evidence that prescription abuse and street use are linked, as well as testimony from former addicts, indicates that drug users easily swing between the legal and black market.

English and Math

These thoughts by David Henderson over at EconLog have stuck with me since the day I read them (in 2012):

At the end, one of two hosts [of a radio program on which he was being interviewed] asked me, “If you were giving a 12-year-old American kid advice on what languages to learn, what advice would you give?” I think he was expecting me to say “English and Chinese.” I answered, “Two languages: English and math.”

I think of this insight often, mostly because it confirms my own anecdotal experiences travelling abroad. Everybody in Ghana spoke English, and in Europe only rural Iberians and Slovenians had trouble with it. Non-native French speakers seem to be found only in parts of France’s old empire, where old customs – learning the language of the conqueror to get ahead in the rat race – still prevail. English is learned because it’s necessary for communication these days.

Check out this excerpt from a piece on Swiss language borders in the BBC:

There are four official Swiss languages: German, French, Italian and Romansh, an indigenous language with limited status that’s similar to Latin and spoken today by only a handful of Swiss. A fifth language, English, is increasingly used to bridge the linguistic divide. In a recent survey by Pro Linguis, three quarters of those queried said they use English at least three times per week.

Read the rest. That’s a lot of English used in a country that’s sandwiched between Germany, France, and Italy. I think the power of English, at least in Europe, has to do with the fact that it’s a mish-mash of Germanic and Latin; it’s a “bastard tongue,” in the words of John McWhorter, a linguist at Columbia. Let’s hear it for the bastards of the world!

Are voting ages still democratic?

Rather par for the course, our current gun debate, initiated after the school shooting in Parkland, has been dominated by children — only this time, literally.

I’m using “children” only in the sense that they are not legally adults, hovering just under the age of eighteen. They are not children in the sense of being necessarily mentally underdeveloped, or necessarily inexperienced, or even very young. They are, from a semantics standpoint, still teenagers, but they are not necessarily short-sighted or reckless or uneducated.

Our category “children” is somewhat fuzzy. And so are our judgments about their political participation. For instance, we consider ourselves, roughly, a democracy, but we do not let children vote. Is restricting children from voting still democratic?

With this new group of Marjory Stoneman Douglas high school students organizing for political change (rapidly accelerated to the upper echelons of media coverage and interviews), there has been widespread discussion about letting children vote. A lot of this is so much motivated reasoning: extending suffrage to the younger demographic would counter the current proliferation of older folks, who often vote on the opposite side of the aisle for different values. Young people tend to be more progressive; change the demographics, change the regime. Yet the conversation clearly need not be partisan, since there exist Republican- and Democrat-minded children, and suffrage doesn’t discriminate. (Moreover, conservative religious groups that favor large families, like Mormons, could simply start pumping out more kids to compete.)

A plethora of arguments propose pushing the voting age lower — some to 13, and quite a few to 16 (e.g., Jason Brennan) — while avoiding partisanship. My gripe about these arguments is that, in acknowledging the logic or utility of a lowered voting age, they fail to validate a voting age at all. Which is not to say that there should not be a voting age in place (I am unconvinced in either direction); it’s just to say that we might want to start thinking of ourselves as rather undemocratic so long as we have one.

An interesting thing to observe when looking at suffrage for children is that Americans do not consider a voting age incompatible with democracy. When Americans do not think of America as a democracy, it is because our office of the President is not directly elected by majority vote (or because they think of it as an oligarchy or something); it is never because children cannot vote. The fact that we deny under-eighteen-year-olds the vote does not even cross their minds when criticizing what many see as an unequal political playing field. For instance, in eminent political scientist Robert Dahl’s work How Democratic is the American Constitution? the loci of criticism are primarily the electoral college and the bicameral legislature. In popular parlance these are considered undemocratic, conflicting with the equal representation of voters.

Dahl notes that systems with unequal representation contrast with the principle of “one person, one vote.” Those with suffrage have one or more votes (as in nineteenth-century Prussia, where voters were classified by their property taxes) while those without have less than one. Beginning his attack on the Senate, he states: “As the American democratic credo continued episodically to exert its effects on political life, the most blatant forms of unequal representation were in due time rejected. Yet, one monumental though largely unnoticed form of unequal representation continues today and may well continue indefinitely. This results from the famous Connecticut Compromise that guarantees two senators from each state” (p. 48).

I quote Dahl because his book is zealously committed to majoritarian rule, rejecting Tocqueville’s qualms about the tyranny of the majority. Indeed, Dahl says he believes “that the legitimacy of the Constitution ought to derive solely from its utility as an instrument of democratic government” (39). And yet, in the middle of criticizing undemocratic American federal law, Dahl does not once bring up the voting age or the status of children. These factors appear to be invisible. In ordinary life, when the voting age is brought up, it is nearly always in juxtaposition to other laws, e.g., “We let eighteen-year-olds vote and smoke, but they have to be 21 to buy a beer,” or, on the topic of gun control, “If you can serve in the military at 18, and you can vote at 18, then what is the problem, exactly, with buying a gun?”

What is the explanation for this? We include the march for democracy as one progressive aspect of modernity. We see ourselves as more democratic than our origin story, having extended suffrage to non-whites, women and people without property. We see America under the Constitution as a more developed rule-of-the-people than Athens under Cleisthenes. So, we admit to degrees of political democracy — have we really reached the end of the road? Isn’t it more accurate that we are but one law away from its full realization? And of course, even if we are more of a representative republic, this is still under the banner of democracy — we still think of ourselves as abiding by “one person, one vote” (Dahl, 179-183).

In response, it is said that children are not properly citizens. This allows us to consider ourselves democratic even while withholding direct political power from a huge subset of the population and inflicting our laws on them.

This line of thought doesn’t cut it. The arguments for children as non- or only partial-citizens are riddled with imprecisely-targeted elitism. “Children can be brainwashed. Children do not understand their own best interests. Children are uninterested in politics. Children are not informed enough. Children are not rational. Children are not smart enough to make decisions that affect the entire planet.”

Although these all might apply, on average, to some age group — one much younger than seventeen, I would think — they also apply to all sorts of individuals distributed throughout every age. A man gets into a car wreck and severely damages his frontal lobe. In most states there is no law prohibiting him from dropping a name in the ballot box, even though his judgment is dramatically impaired, perhaps to a degree analogous to an infant’s. A nomad who eschews modern industrial living for the happy life of travel and pleasure is allowed to vote in his country of citizenship — even though his knowledge of political life may be no greater than that of someone from the 16th century. Similarly, adults can be brainwashed, adults can be stupid, and adults can be totally clueless about which means will lead to the satisfaction of their preferred ends.

I venture that Americans, by and large, do not want uninformed, short-sighted, dumb, or brainwashable people voting, but they will not admit to it on their own. Children are a proxy group through which to limit the number of such people allowed into our political process. And is banning people based on any of these criteria compatible with democracy and equality?

Preventing “stupid” people from voting is subjective and elitist; preventing “brainwashable” people from voting is arbitrary; preventing “short-sighted” people from voting is subjective and elitist, and the same goes for “uninformed” people. Then we come to the category of persons with severe mental handicaps, whether their brains are underdeveloped from the normal process of youth, damaged by injury, or shaped by various congenital neurodiversities. Regrettably, at first glance it seems reasonable to prevent people with severe mental defects from voting: it is thought that they really can’t know their interests, and that if they are to have a voting right, it should be placed in a benefactor who is familiar with their genuine interests. But this still feels like elitism, and it doesn’t even touch on the problem of how to gauge this mental defect — it seems all too easy for tests to impose a sort of subjective bias.

Indeed, there is evidence that this is what happens. Laws which assign voting rights to guardians are too crude to discriminate between mental disabilities which prevent voting and other miscellaneous mental problems, and they make it overly burdensome to exercise voting rights even if one is competent. It is hard to see how disenfranchising populations can be done on objective grounds. Given that we extended suffrage from its initial minority group to all other human beings above the age of eighteen, the fact that we still withhold it from children is only a function of elitism, and consequently it is undemocratic.

To clarify, I don’t think it is “ageist” to oppose extending the vote to children, in the way that it is sexist to restrict the vote for women. Just because the categories are blurry doesn’t mean there aren’t substantial differences, on average, between children and adults. But our reasoning is crude. We are not anti-children’s suffrage because of the category “children,” but because of the collective disjunction of characteristics we associate underneath this umbrella. It seems like Americans would just as easily disenfranchise even larger portions of the population, were we able to pass it off as democratic in the way that it has been normalized for children.

Further, it is not impossible to extend absolute suffrage. Children so young that they literally cannot vote — infants — could have their new voting rights bestowed upon their caretakers, since, insofar as infants have interests, those interests almost certainly align with their daily providers’. This results in parents having an additional vote per child, which transfers to the child whenever he or she requests it in court. (Again, I’m not endorsing this policy, just pointing out that it is possible.) The undemocratic and elitist nature of a voting age cannot be dismissed on the grounds that universal suffrage is “impossible.”

It is still perfectly fine to say “Well, I don’t want the boobgeoisie voting about what I can do anyway, so a fortiori I oppose children’s suffrage,” because this argument asserts some autocracy anyway (so long as we assume voting as an institutional background). The point is that the reason Americans oppose enfranchising children is because of elitism, and that the disenfranchising of children is undemocratic.

In How Democratic is the American Constitution? the closest Robert Dahl gets to discussing children is adding the Twenty-Sixth Amendment to the march for democratic progress, stating that lowering the voting age to eighteen made our electorate more inclusive (p. 28). I fail to see why lowering it even further would not also qualify as making us more inclusive.

In conclusion, our system is not democratic at all,
Because a person’s a person no matter how small.


A short note on Klimt and Schiele

I hope y’all have been enjoying my new “Nightcap” series. Many of the articles eventually end up at RealClearHistory (my bad ass editor has the final say-so), so I thought I’d be doing y’all a favor by sharing them here, in smaller doses, first.

This BBC article on Gustav Klimt and Egon Schiele, a couple of Austrian artists, won’t make the cut (RCH‘s readers don’t really enjoy art history), but I thought you’d love it. In the late 19th and early 20th centuries, Vienna was the center of intellectual life not only for economists and philosophers, but for artists, academics, and critics as well.

Klimt (bio) is my favorite painter, ranking just above Picasso, Chagall, Bosch, Hokusai, and Dalí. Check this out:

[…] a decision was made to permanently display the paintings in a gallery rather than on the ceiling [because they were so scandalous]. Klimt was furious and insisted on returning his advances and keeping the paintings. The request was refused but after a dramatic standoff in which Klimt allegedly held off removal men with a shotgun, the Ministry eventually capitulated.

Tragically the paintings were destroyed by retreating SS forces in 1945 and all that remains are hazy black and white photographs.

How could you not like the guy?

PS: I’ve heard, through the grapevine, that Lode and Derrill have posts on the way. Stay tuned!

Tech’s Ethical Dark Side

An article at the NY Times opens:

The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.

Far be it from me to tell people to avoid spending time considering ethics. But something seems a bit silly to me about all this. The “experts” are trying to teach students the consequences of the complex interactions between the services they haven’t yet created and the world as it doesn’t yet exist.

My inner cynic sees this “ethics of tech” movement as a push to have software engineers become nanny-state-like social engineers. “First, do no harm” is not the right standard for tech (which isn’t to say “do harm” is). Before 2016, Facebook and Twitter were praised for their positive contribution to the Arab Spring. After our dumb election, the educated western elite threw up their hands and said, “it’s an ethical breach to reduce our power!” Freedom is messy, and “do no harm” privileges the status quo.

The root problem is that computer services interact with the public in complex ways. Recognizing this is important, and an ethics class ought to grapple with that complexity and the resulting uncertainty in how our decisions (including design decisions) can affect the well-being of others. My worry is that a sensible call to think about these issues will be co-opted by power-hungry bureaucrats. (There really ought to be ethics classes on the “Dark Side of Ethical Judgments of Others and Education Policy.”)

I don’t doubt that the motivations of the people involved are basically good, but I’m deeply skeptical of their ability to do much more than offer retrospective analysis as particular events become less relevant. History is important, but let’s not trick ourselves into thinking the lessons of 2016 Facebook will apply neatly to whatever network we’re on in 2026.

It hardly seems reasonable to insist that Facebook be put in charge of what we get to see. Some argue that’s already the world we live in, and they aren’t completely wrong. But that authority is still determined by the voluntary individual decisions of users with access to plenty of alternatives. People aren’t always as thoughtful and deliberate as I’d like, but that doesn’t mean I should step in and be a thoughtful and deliberate Orwellian figure on their behalf.

On the popularity of economic history

I recently engaged in a discussion (a twittercussion) with Leah Boustan of Princeton over the “popularity” of economic history within economics (depicted below). As one can see from the purple section, it is as popular as those hard candies that grandparents give out on Halloween (to be fair, I like those candies, just as I like economic history). More importantly, the share seems to be smaller than at its 1980s peak. It also seems that the Nobel Prize going to Fogel and North had literally no effect on the subfield’s popularity. Yet I keep hearing that “economic history is back.” After all, the John Bates Clark Medal went to Donaldson of Stanford this year, which should confirm that economic history is a big deal. How can this be reconciled with the figure depicted below?

[Chart: the popularity of economic history within economics]

As I explained in my twittercussion with Leah, I think what is popular is the use of historical data. Economists have realized that if some time is spent in archives collecting historical data, great datasets can be assembled. However, they do not necessarily consider themselves “economic historians,” and as such they do not use the JEL code associated with history. This is an improvement over a field where Arthur Burns (former Fed Chair) supposedly said during the 1970s that we needed to look at history to better shape monetary policy – and by history, he meant the 1950s. However, while there are advantages, there is an important danger which is left aside.

The creation of a good dataset has several advantages. The main one is that it increases time coverage. By increasing the time coverage, you can “tackle” the big questions and go for the “big answers” through the generation of stylized facts. Another advantage (and this is the one that summarizes my whole approach) is that historical episodes can provide neat testing grounds that give us a window onto important economic issues. My favorite example of this is the work of Petra Moser at NYU-Stern. Without going into too much detail (her work was my big discovery of 2017), she used a few historical examples, which she painstakingly detailed, in order to analyze the effect of copyright laws. Her results have important ramifications for debates regarding “science as a public good” and “science as a contribution good” (see the debates between Paul David and Terence Kealey in Research Policy on this point).

But these two advantages must be weighed against an important disadvantage, which Robert Margo has warned against in a recent piece in Cliometrica. When one studies economic history, two things must be accomplished simultaneously: to explain history through theory and to bring theory to life through history (this is not my phrase, but rather Douglass North’s). To do so, one must study a painstaking amount of detail to ascertain the quality and reliability of the sources used. In considering so many details, one can easily get lost or even fall prey to one’s own priors (i.e., I expect to see one thing and, upon seeing it, ask no questions). To avoid this trap, there must be a “northern star” to act as a guide. That star, as I explained in an earlier piece, is a strong and general understanding of theory (or a strong intuition for economics). To give attention to details while keeping that star in view is an incredibly hard task, which is why I have argued in the past that “great” economic historians (Douglass North, Deirdre McCloskey, Robert Fogel, Nathan Rosenberg, Joel Mokyr, Ronald Coase (because of the lighthouse piece), Stephen Broadberry, Gregory Clark, etc.) take a longer time to mature. In other words, good economic historians are projects with a long “time to build” problem (sorry, bad economics joke). The downside is that when this is not the case, there are risks of ending up with invalid results that are costly and hard to contest.

Just think about the debate between Daron Acemoglu and David Albouy on the colonial origins of development. It took Albouy more than five years to get the results that threw doubt on Acemoglu, Johnson, and Robinson’s 2001 paper. Albouy clearly expended valuable resources to get at the “details” behind the variables. There was miscoding of Niger and Nigeria, and there were misunderstandings of what types of mortality rates were used. This was hard work, and it was probably only deemed a valuable undertaking because Acemoglu’s paper was such a big deal (i.e., the net gains were pretty big if the effort paid off). Yet, to this day, many people are entirely unaware of the Albouy rebuttal. This can be seen very well in the image below, showing the annual number of citations of the Acemoglu-Johnson-Robinson paper. There seems to have been no effect from Albouy’s massive rebuttal (disclaimer: Albouy convinced me that he was right).

[Chart: annual citations of the Acemoglu-Johnson-Robinson paper]

And it really does come down to small details like those underlined by Albouy. Let me give you another example, taken from my work. Within Canada, the French minority is significantly poorer than the rest of Canada. From my cliometric work, we now know that they were poorer than the rest of Canada and North America as far back as the colonial era. This is a stylized fact underlying a crucial question today (i.e., why are French-Canadians relatively poor?). That stylized fact requires an explanation. Obviously, institutions are a great place to look. One of the most interesting institutions is seigneurial tenure, basically a “lite” version of feudalism in North America that was present only in the French-settled colonies. Some historians and economic historians argued that the institution had no effect on variables like farm efficiency. However, some historians noticed that in censuses the French reported different units than the English settlers within the colony of Quebec. To correct for this metrological problem, historians made county-level corrections. With those corrections, the aforementioned institution has no statistically significant effect on yields or output per farm. However, as I note in this piece that got a revise-and-resubmit from Social Science Quarterly (revised version not yet online), county-level corrections (i.e., the same correction for the whole county) mask the fact that the French were more willing to move to predominantly English areas than the English were to move to predominantly French areas. In short, there was a skewed distribution. Once you correct the data on an ethnic-composition basis rather than at the county level, you end up with a statistically significant negative effect on both output per farm and yields per acre. In short, we were “measuring away” the effect of institutions. All from a very small detail about distributions. Yet that small detail had supported a stylized fact that the institution did not matter.
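Since the aggregation issue may be easier to see in code, here is a minimal simulation sketch in Python. Every number in it is invented – the unit sizes, yields, and county compositions are illustrative assumptions chosen only to reproduce the attenuation pattern just described, and are not the actual census figures or the corrections used in the paper:

```python
import random

random.seed(42)

# Hypothetical numbers throughout -- not the historical units, yields,
# or population shares; chosen only to illustrate the aggregation problem.
TRUE_YIELD = {"french": 0.8, "english": 1.0}      # true output per acre
UNIT_IN_ACRES = {"french": 1.18, "english": 1.0}  # acres per reported area unit

# Skewed settlement: the French appear in the English-majority county far
# more often than the English appear in the French-majority one.
counties = [("french_majority", 0.9), ("english_majority", 0.3)]  # (name, French share)

farms = []
for county, french_share in counties:
    for _ in range(5000):
        group = "french" if random.random() < french_share else "english"
        acres = random.uniform(40, 120)               # true farm size in acres
        output = TRUE_YIELD[group] * acres            # true output
        reported_area = acres / UNIT_IN_ACRES[group]  # what the census records
        farms.append((county, group, output, reported_area))

def yield_gap(correction):
    """French-minus-English mean yield, converting areas with `correction`."""
    totals = {"french": [0.0, 0], "english": [0.0, 0]}
    for county, group, output, reported_area in farms:
        estimated_yield = output / (reported_area * correction(county, group))
        totals[group][0] += estimated_yield
        totals[group][1] += 1
    (f_sum, f_n), (e_sum, e_n) = totals["french"], totals["english"]
    return f_sum / f_n - e_sum / e_n

# County-level correction: one average conversion factor for the whole county,
# analogous to the historians' corrections.
county_factor = {
    county: share * UNIT_IN_ACRES["french"] + (1 - share) * UNIT_IN_ACRES["english"]
    for county, share in counties
}

print("gap, county-level correction:", round(yield_gap(lambda c, g: county_factor[c]), 3))
print("gap, group-level correction: ", round(yield_gap(lambda c, g: UNIT_IN_ACRES[g]), 3))
# The group-level correction recovers the true gap (-0.2); the county-level
# correction attenuates it, because farms in mixed counties are mis-converted.
```

In this toy setup the county-level correction cuts the measured gap roughly in half; with other parameter choices it can erase it entirely. The level at which a metrological correction is applied changes the answer, which is exactly the “very small detail about distributions” at work.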

This is the risk that Margo speaks about, illustrated in two examples. Economists who use history merely as a tool may end up making dramatic mistakes that lead to incorrect conclusions. I take this “juicy” quote from Margo (which Pseudoerasmus highlighted for me):

[EH] could become subsumed entirely into other fields… the demand for specialists in economic history might dry up, to the point where obscure but critical knowledge becomes difficult to access or is even lost. In this case, it becomes harder to ‘get the history right’

Indeed, unfortunately.