Are voting ages still democratic?

Rather par for the course, our current gun debate, initiated after the school shooting in Parkland, has been dominated by children — only this time, literally.

I’m using “children” only in the sense that they are not legally adults, hovering just under the age of eighteen. They are not children in the sense of being necessarily mentally underdeveloped, necessarily inexperienced, or even very young. They are, from a semantic standpoint, still teenagers, but they are not necessarily short-sighted or reckless or uneducated.

Our category “children” is somewhat fuzzy. And so are our judgments about their political participation. For instance, we consider ourselves, roughly, a democracy, but we do not let children vote. Is restricting children from voting still democratic?

With this new group of Marjory Stoneman Douglas high school students organizing for political change (rapidly accelerated to the upper echelons of media coverage and interviews), there has been widespread discussion about letting children vote. A lot of this is so much motivated reasoning: extending suffrage to the younger demographic would counter the current proliferation of older folks, who often vote on the opposite side of the aisle for different values. Young people tend to be more progressive; change the demographics, change the regime. Yet the conversation clearly need not be partisan, since there exist Republican- and Democrat-minded children, and suffrage doesn’t discriminate. (Moreover, conservative religious groups that favor large families, like Mormons, could simply start pumping out more kids to compete.)

A plethora of arguments propose pushing the voting age lower — to 13, and quite often to 16 (e.g., Jason Brennan) — while avoiding partisanship. My gripe about these arguments is that, in making the case for a lower voting age, they never validate having a voting age at all. Which is not to say that there should not be a voting age in place (I am unconvinced in either direction); it’s just to say that we might want to start thinking of ourselves as rather undemocratic so long as we have one.

An interesting thing to observe when looking at suffrage for children is that Americans do not consider a voting age incompatible with democracy. If Americans do not think of America as a democracy, it is because our office of the President is not directly elected by majority vote (or because they think of it as an oligarchy or something); it is not undemocratic just because children cannot vote. The fact that we deny under-eighteen-year-olds the vote does not even cross their minds when criticizing what many see as an unequal political playing field. For instance, in eminent political scientist Robert Dahl’s work How Democratic is the American Constitution? the loci of criticism are primarily the electoral college and the bicameral legislature. In popular parlance these are considered undemocratic, conflicting with the equal representation of voters.

Dahl notes that systems with unequal representation conflict with the principle of “one person, one vote.” Those with suffrage have one or more votes (as in nineteenth-century Prussia, where voters were classified by their property taxes) while those without have less than one. Beginning his attack on the Senate, he states: “As the American democratic credo continued episodically to exert its effects on political life, the most blatant forms of unequal representation were in due time rejected. Yet, one monumental though largely unnoticed form of unequal representation continues today and may well continue indefinitely. This results from the famous Connecticut Compromise that guarantees two senators from each state” (p. 48).

I quote Dahl because his book is zealously committed to majoritarian rule, rejecting Tocqueville’s qualms about the tyranny of the majority. Indeed, Dahl says he believes “that the legitimacy of the Constitution ought to derive solely from its utility as an instrument of democratic government” (39). And yet, in the middle of criticizing undemocratic American federal law, the voting age and status of children are not once brought up. These factors appear to be invisible. In our ordinary life, when the voting age is brought up, it is nearly always in juxtaposition to other laws, e.g., “We let eighteen year olds vote and smoke, but they have to be 21 to buy a beer,” or, on the topic of gun control, “If you can serve in the military at 18, and you can vote at 18, then what is the problem, exactly, with buying a gun?”

What is the explanation for this? We include the march for democracy as one progressive aspect of modernity. We see ourselves as more democratic than our origin story, having extended suffrage to non-whites, women and people without property. We see America under the Constitution as a more developed rule-of-the-people than Athens under Cleisthenes. So, we admit to degrees of political democracy — have we really reached the end of the road? Isn’t it more accurate that we are but one law away from its full realization? And of course, even if we are more of a representative republic, this is still under the banner of democracy — we still think of ourselves as abiding by “one person, one vote” (Dahl, 179-183).

In response, it is said that children are not properly citizens. This allows us to consider ourselves democratic, even while withholding direct political power from a huge subset of the population on whom we still inflict our laws.

This line of thought doesn’t cut it. The arguments for children as non- or only partial-citizens are riddled with imprecisely-targeted elitism. “Children can be brainwashed. Children do not understand their own best interests. Children are uninterested in politics. Children are not informed enough. Children are not rational. Children are not smart enough to make decisions that affect the entire planet.”

Although these all might apply, on the average, to some age group — one which is much younger than seventeen, I would think — they also apply to all sorts of individuals distributed throughout every age. A man gets into a car wreck and severely damages his frontal lobe. In most states there is no law prohibiting him from dropping a name in the ballot box, even though his judgment is dramatically impaired, perhaps analogous to an infant. A nomad, who eschews modern industrial living for the happy life of travel and pleasure, is allowed to vote in his country of citizenship — even though his knowledge of political life may be no greater than that of someone from the 16th century. Similarly, adults can be brainwashed, adults can be stupid, adults can be totally clueless about which means will lead to the satisfaction of their preferred ends.

I venture that Americans in general do not want uninformed, short-sighted, dumb, or brainwashable people voting, but they will not admit to it on their own. Children are a proxy group for trying to limit the number of such people allowed into our political process. And is banning people based on any of these criteria compatible with democracy and equality?

Preventing “stupid” people from voting is subjective and elitist; preventing “brainwashable” people from voting is arbitrary; preventing “short-sighted” people from voting is subjective and elitist, and the same goes for “uninformed” people. Then we come to the category of persons with severe mental handicaps, whether their brains are underdeveloped through the normal course of youth, through injury, or through various congenital neurodiversities. Regrettably, at first glance it seems reasonable to prevent people with severe mental defects from voting. Because, it is thought, they really can’t know their interests, and if they are to have a voting right, it should be placed in the hands of a guardian who is familiar with their genuine interests. But this still feels like elitism, and it doesn’t even touch on the problem of how to gauge this mental defect — it seems all too easy for tests to impose a sort of subjective bias.

Indeed, there is evidence that this is what happens. Laws which assign voting rights to guardians are too crude to discriminate between mental disabilities that actually preclude voting and other miscellaneous mental problems, and they make it overly burdensome to exercise voting rights even for those who are competent. It is hard to see how disenfranchising populations can be done on objective grounds. Having extended suffrage from its initial minority group to all other human beings above the age of eighteen, the fact that we still withhold it from children is only a function of elitism, and consequently it is undemocratic.

To clarify, I don’t think it is “ageist” to oppose extending the vote to children, in the way that it is sexist to restrict the vote for women. Just because the categories are blurry doesn’t mean there aren’t substantial differences, on average, between children and adults. But our reasoning is crude. We are not anti-children’s suffrage because of the category “children,” but because of the collective disjunction of characteristics we associate underneath this umbrella. It seems like Americans would just as easily disenfranchise even larger portions of the population, were we able to pass it off as democratic in the way that it has been normalized for children.

Further, it is not impossible to extend absolute suffrage. Children so young that they literally cannot vote — infants — could have their new voting rights bestowed upon their caretakers, since insofar as infants have interests, they almost certainly align with those of their daily providers. This results in parents having an additional vote per child, which transfers to the child whenever he or she requests it in court. (Again, I’m not endorsing this policy, just pointing out that it is possible.) The undemocratic and elitist nature of a voting age cannot be dismissed on the grounds that universal suffrage is “impossible.”

It is still perfectly fine to say “Well, I don’t want the boobgeoisie voting about what I can do anyway, so a fortiori I oppose children’s suffrage,” because this argument asserts some autocracy anyway (so long as we assume voting as an institutional background). The point is that Americans oppose enfranchising children out of elitism, and that the disenfranchising of children is undemocratic.

In How Democratic is the American Constitution? the closest Robert Dahl gets to discussing children is adding the Twenty-Sixth Amendment to the march for democratic progress, stating that lowering the voting age to eighteen made our electorate more inclusive (p. 28). I fail to see why lowering it even further would not also qualify as making us more inclusive.

In conclusion, our system is not democratic at all,
Because a person’s a person no matter how small.

 

A short note on Klimt and Schiele

I hope y’all have been enjoying my new “Nightcap” series. Many of the articles eventually end up at RealClearHistory (my bad ass editor has the final say-so), so I thought I’d be doing y’all a favor by sharing them here, in smaller doses, first.

This BBC article on Gustav Klimt and Egon Schiele, a couple of Austrian artists, won’t make the cut (RCH‘s readers don’t really enjoy art history), but I thought you’d love it. In the late 19th and early 20th centuries, Vienna was the center of intellectual life not only for economists and philosophers, but for artists and other academics and critics as well.

Klimt (bio) is my favorite painter, ranking just above Picasso, Chagall, Bosch, Hokusai, and Dalí. Check this out:

[…] a decision was made to permanently display the paintings in a gallery rather than on the ceiling [because they were so scandalous]. Klimt was furious and insisted on returning his advances and keeping the paintings. The request was refused but after a dramatic standoff in which Klimt allegedly held off removal men with a shotgun, the Ministry eventually capitulated.

Tragically the paintings were destroyed by retreating SS forces in 1945 and all that remains are hazy black and white photographs.

How could you not like the guy?

PS: I’ve heard, through the grapevine, that Lode and Derrill have posts on the way. Stay tuned!

Tech’s Ethical Dark Side

An article at the NY Times opens:

The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.

Far be it from me to tell people to avoid spending time considering ethics. But something seems a bit silly to me about all this. The “experts” are trying to teach students the consequences of the complex interactions between the services they haven’t yet created and the world as it doesn’t yet exist.

My inner cynic sees this “ethics of tech” movement as a push to have software engineers become nanny-state-like social engineers. “First do no harm” is not the right standard for tech (which isn’t to say “do harm” is). Before 2016 Facebook and Twitter were praised for their positive contribution to the Arab Spring. After our dumb election the educated western elite threw up our hands and said, “it’s an ethical breach to reduce our power!” Freedom is messy, and “do no harm” privileges the status quo.

The root problem is that computer services interact with the public in complex ways. Recognizing this is important and an ethics class ought to grapple with that complexity and the resulting uncertainty in how our decisions (including design decisions) can affect the well being of others. My worry is that a sensible call to think about these issues will be co-opted by power-hungry bureaucrats. (There really ought to be ethics classes on the “Dark Side of Ethical Judgments of Others and Education Policy”.)

I don’t doubt that the motivations of the people involved are basically good, but I’m deeply skeptical of their ability to do much more than offer retrospective analysis as particular events become less relevant. History is important, but let’s not trick ourselves into thinking the lessons of 2016 Facebook will apply neatly to whatever network we’re on in 2026.

It hardly seems reasonable to insist that Facebook be put in charge of what we get to see. Some argue that’s already the world we live in, and they aren’t completely wrong. But that authority is still determined by the voluntary individual decision of users with access to plenty of alternatives. People aren’t always as thoughtful and deliberate as I’d like, but that doesn’t mean I should step in and be a thoughtful and deliberate Orwellian figure on their behalf.

On the popularity of economic history

I recently engaged in a discussion (a twittercussion) with Leah Boustan of Princeton over the “popularity” of economic history within economics (depicted below). As one can see from the purple section, it is as popular as those hard candies that grandparents give out on Halloween (to be fair, I like those candies just like I do economic history). More importantly, the share seems to be smaller than at the peak of the 1980s. It also seems like the Nobel prize going to Fogel and North had literally no effect on the subfield’s popularity. Yet, I keep hearing that “economic history is back”. After all, the Bates Clark medal went to Donaldson of Stanford this year, which should confirm that economic history is a big deal. How can this be reconciled with the figure depicted below?

[Figure: the share of economic history publications within economics over time]

As I explained in my twittercussion with Leah, I think that what is popular is the use of historical data. Economists have realized that if some time is spent in archives collecting historical data, great datasets can be assembled. However, they do not necessarily consider themselves “economic historians,” and as such they do not use the JEL code associated with history. This is an improvement over a field where Arthur Burns (former Fed Chair) supposedly said during the 1970s that we needed to look at history to better shape monetary policy. And by history, he meant the 1950s. However, while there are advantages, there is an important danger that is too often left aside.

The creation of a good dataset has several advantages. The main one is that it increases time coverage. By increasing the time coverage, you can “tackle” the big questions and go for the “big answers” through the generation of stylized facts. Another advantage (and this is the one that summarizes my whole approach) is that historical episodes can provide neat testing grounds that give us a window onto important economic issues. My favorite example of that is the work of Petra Moser at NYU-Stern. Without going into too much detail (because her work was my big discovery of 2017), she used a few historical examples, which she painstakingly detailed, in order to analyze the effect of copyright laws. Her results have important ramifications for debates regarding “science as a public good” and “science as a contribution good” (see the debates between Paul David and Terence Kealey in Research Policy on this point).

But these two advantages must be weighed against an important disadvantage which Robert Margo has warned against in a recent piece in Cliometrica. When one studies economic history, one must keep in mind that two things must be accomplished simultaneously: to explain history through theory and to bring theory to life through history (this is not my phrase, but rather that of Douglass North). To do so, one must study a painstaking amount of detail to ascertain the quality of the sources used and their reliability. In considering so many details, one can easily get lost or even fall prey to one’s own priors (i.e., I expect to see one thing and, upon seeing it, I ask no questions). To avoid this trap, there must be a “northern star” to act as a guide. That star, as I explained in an earlier piece, is a strong and general understanding of theory (or a strong intuition for economics). To create that star and give attention to details is an incredibly hard task, which is why I argued in the past that “great” economic historians (Douglass North, Deirdre McCloskey, Robert Fogel, Nathan Rosenberg, Joel Mokyr, Ronald Coase (because of the lighthouse piece), Stephen Broadberry, Gregory Clark etc.) take a longer time to mature. In other words, good economic historians are projects that have a long “time to build” problem (sorry, bad economics joke). However, the downside is that when this is not the case, there are risks of ending up with invalid results that are costly and hard to contest.

Just think about the debate between Daron Acemoglu and David Albouy on the colonial origins of development. It took Albouy more than five years to get the results that threw doubt on Acemoglu’s 2001 paper. Albouy clearly expended valuable resources to get the “details” behind the variables. There was miscoding of Niger and Nigeria, and there were misunderstandings of what types of mortality were used. This was hard work, and it was probably only deemed a valuable undertaking because Acemoglu’s paper was such a big deal (i.e., the net gains were pretty big if they paid off). Yet, to this day, many people are entirely unaware of the Albouy rebuttal. This can be seen very well in the image below, which shows the number of cites of the Acemoglu-Johnson-Robinson paper on an annual basis. There seems to be no effect from Albouy’s massive rebuttal (disclaimer: Albouy convinced me that he was right).

[Figure: annual citation counts of the Acemoglu-Johnson-Robinson colonial origins paper]

And it really does come down to small details like those underlined by Albouy. Let me give you another example taken from my work. Within Canada, the French minority is significantly poorer than the rest of Canada. From my cliometric work, we now know that they were poorer than the rest of Canada and North America as far back as the colonial era. This is a stylized fact underlying a crucial question today (i.e., why are French-Canadians relatively poor?). That stylized fact requires an explanation. Obviously, institutions are a great place to look. One of the most interesting institutions is seigneurial tenure, which was basically a “lite” version of feudalism in North America, present only in the French-settled colonies. Some historians and economic historians argued that the institution had no effect on variables like farm efficiency. However, some historians noticed that, in censuses, the French reported different units than the English settlers within the colony of Quebec. To correct for this metrological problem, historians made county-level corrections. With those corrections, the institution has no statistically significant effect on yields or output per farm. However, as I note in this piece that got a revise and resubmit from Social Science Quarterly (revised version not yet online), county-level corrections mask the fact that the French were more willing to move to predominantly English areas than the English were willing to move to predominantly French areas. In short, there was a skewed distribution. However, once you correct the data on an ethnic-composition basis rather than at the county level (i.e., the same correction for the whole county), you end up with a statistically significant negative effect on both output per farm and yields per acre. In short, we were “measuring away” the effect of institutions. All from a very small detail about distributions. Yet, that small detail has supported a stylized fact that the institution did not matter.
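
To make the “measuring away” mechanism concrete, here is a minimal toy simulation in Python. Everything in it is made up for illustration: the 0.77 unit-conversion factor, the county shares, and the two-bushel yield gap are hypothetical numbers, not the actual census data or the corrections from the paper. The point is only that applying one blended correction factor to a whole county, instead of converting each farm according to the units its owner actually reported, attenuates the measured French-English gap when the two groups are unevenly spread across counties.

    # Toy illustration (hypothetical numbers): how the unit of correction,
    # county-wide vs. per-farm by reporter ethnicity, can hide a real yield gap.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_per_county = 200
    # Hypothetical county ethnic shares: French farmers are spread into
    # English-majority counties more than the reverse (the skewed distribution).
    county_french_share = np.array([0.9, 0.6, 0.3, 0.1])

    rows = []
    for c, share in enumerate(county_french_share):
        is_french = rng.random(n_per_county) < share
        # Assumed true "institutional" gap: French farms yield 2 bushels/acre less.
        true_yield = 12.0 - 2.0 * is_french + rng.normal(0, 2.0, n_per_county)
        # French farmers report in a smaller unit (a made-up 0.77 conversion
        # factor), so their raw reported numbers look inflated.
        reported = np.where(is_french, true_yield / 0.77, true_yield)
        rows.append((np.full(n_per_county, c), is_french, reported))

    county = np.concatenate([r[0] for r in rows])
    french = np.concatenate([r[1] for r in rows])
    reported = np.concatenate([r[2] for r in rows])

    # County-level correction: deflate every farm in a county by the same
    # blended factor, based only on the county's ethnic share.
    county_factor = np.array([1 / (1 + s * (1 / 0.77 - 1)) for s in county_french_share])
    county_corrected = reported * county_factor[county]

    # Farm-level correction: convert only the French-reported observations.
    farm_corrected = np.where(french, reported * 0.77, reported)

    for label, y in [("county-level correction", county_corrected),
                     ("ethnicity-level correction", farm_corrected)]:
        gap = y[french].mean() - y[~french].mean()
        t, p = stats.ttest_ind(y[french], y[~french], equal_var=False)
        print(f"{label}: estimated French-English yield gap = {gap:.2f}, p = {p:.3f}")

With made-up numbers like these, the per-farm (ethnic-composition) correction recovers a gap close to the assumed two bushels, while the county-wide correction pushes the estimate toward zero; that is the sense in which an innocuous-looking aggregation choice can “measure away” an institution’s effect.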

This is the risk that Margo speaks about, illustrated in two examples. Economists who use history merely as a tool may end up making dramatic mistakes that lead to incorrect conclusions. I take this “juicy” quote from Margo (which Pseudoerasmus highlighted for me):

[EH] could become subsumed entirely into other fields… the demand for specialists in economic history might dry up, to the point where obscure but critical knowledge becomes difficult to access or is even lost. In this case, it becomes harder to ‘get the history right’

Indeed, unfortunately.

What should universities do?

The new semester is here so it’s time for me to figure out what the hell I’m supposed to be doing in the weird world of modern American university life. Roughly speaking, the answer is going to be “do the stuff that professors do to help universities do what universities do.” So what do universities do? What are they supposed to do?

Universities occupy a few different niches in society. I’m usually tempted to think of universities like a business. And in that framework, I justify my salary by providing something of value to the students. At my school, something like 90% of the operating budget comes out of students’ pockets.

But that’s an overly narrow view. Students pay to go to school because they expect they’ll get value from it, but they also go to school because them’s the rules–if you want to enter adult society, university is the front gate. In this framework, I justify my salary by serving as a gatekeeper. Even though it’s students paying, the (nebulous) principal I’m obliged to is the collection of people already inside the walls.

But wait! There’s more! Universities are (in no particular order):

  • A repository of knowledge,
  • A generator of new knowledge,
  • A place people go to learn,
  • A place people go to prove themselves,
  • A place people make friends and have fun (in a way that may be hard to replicate),
  • A business (engaging in mutually beneficial exchange),
  • A special interest group,
  • An institution that holds a particular (privileged) position in a wider cultural landscape.

Any one of these functions is a can of worms in its own right. When we start to consider tradeoffs between each function (and the many less visible functions I’ve surely missed), it gets downright intractable. I’m going to focus on the student-focused aspects of university life.

The mainstream view:

University is a place students get educated. This education helps them get jobs because employers value it. Students might also learn things that help them be better citizens.

The mainstream view doesn’t seem far off from what I’ve got in mind until you get your hands dirty and start disentangling what that view says. Here are three big problems inherent in that mainstream view:

  1. The education-for-job myth.
  2. The definition problem.
  3. The one-size-fits-all problem.

The Job-training myth

We’re told that students go to school to learn valuable skills. I think that’s true, but not in the usual way. Any specific skills students learn in school are a) incredibly general, or b) out of date. My students might learn some interesting ways of looking at the world (general knowledge), but a lot of what I teach is completely useless in the workforce (“Johnson, draw me a demand curve, stat!”). But students do learn valuable skills incidentally. They learn to manage their time (ideally), how to be conscientious, and in general they’re socialized so that they can fit in with adult society.

Lately I’ve been thinking of college as a form of upfront consulting. Instead of going to school when you’ve got a specific problem to solve, go when you’re young and have nothing better to do. Since you’re getting the consulting before you know what sort of problems you’ll face in the future, we couldn’t possibly give you exactly the right bundle of knowledge.

College exposes students to lots of different ideas that might combine in unexpected ways. Your class in underwater basket weaving might seem like a waste of time until some day 30 years later you are trying to solve some problem that turns out to make a lot more sense if you think of it like wet wicker (I’m looking at you civil engineers!).

Some of what I (and my colleagues) do helps prepare students for their careers, but mostly I’m trying to help them be better–better thinkers, better able to understand and appreciate, better able to enjoy life.

The definition problem

The word “Education” means a lot of things to a lot of people. More often than not, people use the word without being clear about what they mean. Often it means “job training.” Sometimes it means “enlightening.” Other times it means “making you agree with me.” In practice, it means surviving enough classes that you get a piece of paper indicating as much.

It should be recognized as a vague and nebulous word instead of being pigeonholed. It isn’t a binary state (I was ignorant, now I’m educated). It’s helpful to think of people as being more or less educated, but the state of your education isn’t something we can really objectively compare to my state of education.

There are lots of important but nebulous things in our lives: health, happiness, moral worth. Their vagueness makes them difficult, but it isn’t going away.

It isn’t hard to convince people that education is nebulous, but it is hard to get people to behave as though they really understand that.

Homogenization and commodification

Once people start thinking of education as some objective thing we can pull off a shelf and give to someone, we run into the real problems. This unexamined view leads to bureaucracies that attempt to standardize and commodify education.

Don’t get me wrong, I get why people would try to do this. We want everyone to get education (and moral training, and good health, and…). And as long as we’re worried about that, we’re going to worry about making sure everyone gets the best education possible. But “the best” gives the false impression that there’s one right answer.

A top-down approach isn’t the right way to achieve the goal of widespread education. Attempting to systematically scale up education provision kills the goose that lays the golden eggs. We should fight against attempts to commodify university (which currently happens via accreditation-as-gateway-to-subsidy and the general expansion of bureaucracy through administration).

So what should universities do?

There are different margins on which we can justify our existence, but it’s not obvious how to balance our tasks: teaching, researching, advocating, etc. Given the high degree of uncertainty, I’d argue for pluralism… different schools (and professors) should be trying different things. As universities adapt to the future, it’s important that they don’t all try to adapt in the same way at the same time.

I think a big part of the problem is that we’ve been too successful at rent seeking. All money/privilege/goodies come with strings attached, and more money comes with more strings. We’re always going to get a little tangled up in those strings, but in the last couple of generations we’ve hamstrung ourselves. Accreditation and assessment have become the most important things a modern university does, which distracts from our more fundamental goals.

A bottom up approach doesn’t mean less education, just different education. A more modest education system would change the mix of costs and benefits faced by stakeholders. Employers might rely less on degree signalling, which means hiring managers and potential employees exercising more judgement in sending and evaluating quality signals. I don’t know exactly what would happen, but flexibility is valuable for the nebulous goals universities are supposed to be pursuing.

But at the moment we seem to be in an equilibrium. Students are expected to go to school, schools are expected to deliver on promises they can’t really fulfill, and we go through the motions of keeping schools accountable in a way that basically misses the point.

So what will I do this semester? I’m going to keep talking about interesting stuff to students. I’m going to keep working towards getting tenure. But I’m also going to quietly subvert attempts to commodify university.

 

In memory of Christie Davies: defender of the right to joke

[Photo: Christie Davies]

As we approach the end of 2017, I remember this year’s sad passing of Christie Davies. Davies was a rare academic beast: a classical liberal sociologist. Despite representing a minority perspective in his discipline, he was able to thrive and leave a mark that will continue to influence scholars for generations.


What classes are worth subsidizing?

A friend of mine has a great phrase that captures what’s wrong with about 1/3 of the population. They’re “people who would make good Nazis.” These are the obedient people who are ready to follow orders without thinking, and without sufficiently high standards for whose orders they’ll follow.

My question is “what can I do to help my students not fall into this category?” The trouble is that these folks aren’t drawn to intellectual arguments. I need to get them in a more visceral way. I think the answer is art. These kids need to be watching TV shows and movies like Donnie Darko and The Handmaid’s Tale (tell me your favorites in the comments!). They need something that will grab them by the lapels, shake them, and shout “question authority!”

Pragmatic folks (the sort of people who are normally excited to hear what a libertarian economist has to say) would usually think that schools should focus on pragmatic things. It’s certainly a good idea for kids to leave school with some ideas about how to manage their personal finances and research important issues. But the economic justification for subsidies rarely favors such pragmatic topics. Petroleum engineers don’t need their schooling subsidized because they’ll end up getting paid enough to pay off their student loans.

I like to think of myself as a pragmatic person*, but I’m increasingly coming around to the idea that art is worth subsidizing. Even (perhaps) if that means giving money to ridiculous people who argue incoherently against freedom.

[Image: Ned Flanders’ parents]

As subsidies go, art is cheap. You don’t need to build anything as complicated as a particle accelerator. You just need to grab some kid out of the nearest Starbucks and give them a few bucks to make something.

You’ll end up paying for a lot of garbage, but life is full of waste (I’d bet good money that any good project involves a good amount of unrealized potential savings that are only obvious after the fact). For that matter, you’ll almost certainly do some degree of harm. If we threw money at art departments (the way we did with STEM departments during the cold war) the money spent on Che shirts could easily fuel the western hemisphere by attaching a generator to Guevara’s spinning corpse.

And the benefits will be vague and arguable. I’m not in the business of selling bonafide snake oil; this idea isn’t a magical cure-all.

In fact, the more you try to measure the benefits, the less benefit you might get. If we could measure important but intangible things like decency and thoughtfulness, we’d already have those things. But when we try to come up with proxies for important things, those proxies quickly become bad proxies. All the more so if we try to reward people for measurably achieving our goals. Funding should be unconditional (and focused on production rather than selection) if we want to avoid funding propaganda. We almost certainly will be funding propaganda, but if humanity is really worth saving, it’s a baby/bathwater tradeoff.

I’ve just spent three paragraphs convincing you that this might be a terrible idea. But I still think the net benefits could justify the cost. Imagine some imperfect estimate of the impact and an even more imperfect measure of the value created. What will we get out of this (on average)? In a word: more. More novels, web comics, paintings, podcasts, and films.

And with more art, there’s more cream to rise to the top. The best art typically encourages thoughtfulness and empathy. This is a “let a thousand flowers bloom” approach that would (at relatively low cost) saturate the public sphere with enough semi-thoughtful stuff to force usually-thoughtless people to think more clearly about the world around them.

If** we can subsidize free thought, this is how we’ll do it. And if it’s possible, it’s worth doing in a world where we clearly have too many people who would make good Nazis.


*If that was really true I’d work in a bank.

**Do I really think this would work? I’m not remotely sure. But I think it’s an idea worth discussing. I teach economics because I hope the marginal return (in terms of improving the “civic quality” of my students) is high. But it feels Sisyphean at times, and some students are clearly not ready to get it. I worry that they’ll go out into the world ready to follow any maniac’s orders. In terms of the stability of a free and peaceful society, doing something about those people seems important.

Words on the Move

I just listened to a recent(ish) episode of Econ Talk: John McWhorter on the Evolution of Language and Words on the Move.

I particularly enjoyed this episode because:

  1. Emergent order (duh!).
  2. It shed new light (for me) on a category of words that serve a function but don’t really mean anything. “Well” doesn’t really mean anything. Well, sometimes it means a hole filled with water, but in this sentence I’m using it as a “pragmatic.” Other pragmatics, like eh and huh, feel like filler, but they’re really a part of oral communication where the speaker can casually and non-disruptively check in with the listener. Pretty cool, huh?
  3. And the discussion of accents was interesting in light of an experience I had just the other day. I’ll get to that at the bottom, but let me set the stage…

I’ve been particularly aware of my own accent since a young age because kids have always been quick to point out how different I’ve always sounded. At around age 7 I moved from the prairies to southern Ontario and I remember some kid asking me if I was British. They might have been picking up on regional variation in the Canadian accent, or it might be that my accent was affected by the movies and TV shows I had watched to that point (I suspect watching Monty Python at a young age deeply affected me).

Later (aged 17) I moved from Canada to Texas where I worked very hard to ditch my Canadian accent and gain some Southern drawl. When I moved to California I kept trying to lose the Canadian parts of my accent but gave up on trying to gain the drawl. When I moved to Boston I picked up some affectations that now make me stand out on Long Island. I drink kahfee instead of quofee, but since I never did like the mwahll, my pronunciation of “mall” is probably the slightly-off version I would have picked up in my youth.

The other day I was talking to a student and noticed something especially bizarre–as our conversation moved from seafood (note to self: soak calamari in buttermilk for 3 days) to boar hunting I found myself involuntarily moving back into my Texas voice! (You’ve probably already guessed that this was an econometrics student.) I have zero experience with hunting, but I had to suppress this reflexive change in my accent. Somehow, all the automatic processes in my brain have lined up in a way that makes it clear that not only do I have a lot of tacit knowledge, but I even have unseen triggers for how I communicate.

Is it Thanksgiving without the turkey?

I was recently talking to a friend about Thanksgiving dinner. He was complaining about the difficulty of cooking turkey and asked how my household dealt with the issue. My response? We just serve chicken instead. It’s cheaper, easier to make, and frankly turkey isn’t significantly better.

That begs the question though: can you have Thanksgiving without the turkey? What makes Thanksgiving, Thanksgiving? Being around loved ones is necessary, but not sufficient. We’re, presumably, around loved ones for most holidays. What distinguishes today from other days?

I think it’s the pumpkin pie, but what about my fellow note writers?

#microblogging

Why Immigrants Are Superior

I am endlessly interested in issues of emigration/immigration. In part, this is because it’s the place where my personal experience, and my wife’s, intersect with my training and with my professional life as a sociologist. There is a deeper reason I try to explain below a little circuitously; bear with me.

I think that how humans form into groups is the central question about our species. The question arises because every adult individual without exception is simultaneously a member of several groups and categories. Thus, I am a husband (member of a very small group, at least under monogamous conditions), a member of the sociological discipline/ profession, a member of the teaching professions broadly defined (but never an “educator”!), a small-time member of a local radio station (KSCO Santa Cruz, 1080AM), a Republican but nevertheless, a libertarian (with a small “l”), and an American. Yet, as a former Frenchman I am also a member, though somewhat passive, of a culture group, roughly the francophone group.

All the above memberships are in groups. I also belong to several categories that don’t qualify as groups because they never meet and because they have little sense of themselves as belonging together. So, I am a male (decidedly so), a moderately overweight person past middle age (but athletic!), a parent, a tax-payer, and I also belong to the secret, vast, worldwide category of humans who lack hair on the second phalanx of their index finger. In America, I am also a white man. The latter category is a little problematic because it’s ill-defined, like all matters that have to do with race. It means that most Americans on looking at me would guess, probably correctly, that all or most of my ancestors lived in Europe ten thousand years ago. Do the count of your own memberships for yourself and you will be amazed.

Memberships are not all equal, especially at a given time. Some memberships become activated while others lie dormant. Individuals activate one membership over the others depending on circumstances and often, depending on their stage in the life cycle. The presence of others frequently triggers the activation of long-dormant membership, as when a thirty-something bumps into a couple of old high-school buddies. Finally, sometimes, individuals are forced to activate one membership to the near-exclusion of others. This happens most often in connection with the nation and religious denominations, including secular religions such as Communism in the old days. The penultimate sentence is a description of totalitarianism, political, religious, and other. It’s the most parsimonious definition I know.

Emigration matters because every act of emigration implies a reasonably conscious decision to de-activate a group membership that is salient in much, but not all, of the world: nationality. Emigrants may not be completely clear about how definitive their decision to move is, but they always know that it entails an abrupt shutting off of whatever comfort one derives from being inside that particular group.

Emigration (immigration, after one begins to live in another country) typically remains emotionally costly for a long time. Besides the frequent distance from others one loves, there are subtle issues of self-worth I cannot discuss here (but that I will discuss at some future time, especially if asked). In the classical age of worldwide, and American, emigration, it tended to be final. Travel was slow and expensive. If you did not like it in the new place, often, you just had to suck it up. (This is broadly true although turn-of-the-century American records show that surprising numbers of recent European immigrants left the US every year.) Today, the extraordinarily low cost of air travel means that nearly every dissatisfied immigrant may go home. In 2009, there were very few parts of the world for which a one-way ticket cost more than a thousand dollars. That would be under seven weeks’ worth of after-tax minimum wage at worst. Tickets from the US to Europe, for example, cost less than one third that amount off-season. In the same year, the average US wage was about $20 per hour. Estimating deductions of 25%, the net hourly wage was thus around $15. Hence, there were few if any parts of the world that could not be reached at the cost of net savings amounting to 70 hours of average wages (less than two weeks of work).

For emigrants to contiguous countries or proximate countries, such as Turks to Germany, Romanians to France, or Mexicans to the US, the option of going home is even more open, of course.

What I am trying to establish here is that emigration is normally a doubly voluntary act. Immigrants first volunteer to be in the country of immigration. Then, they keep volunteering by not taking up the option of moving back home, of re-emigrating.

Two things should follow from this doubly voluntary condition: First, if they don’t like the country to which they have moved, immigrants have no one to blame but themselves. I know I am repeating myself but the imagery is so attractive, I can’t resist: If you come to the party, especially if you come uninvited (99% of immigrants, I would guess), don’t criticize the food, or the interior decoration, or the guests’ intellectual level.

Although there is a widespread impression to the contrary, it seems to me that few immigrants break this simple rule of their own accord. Rather, more commonly, they fall under the sway of organizations that presume to speak on their behalf. These organizations are often political in nature. They seek to exploit the voting power of people unfamiliar with the national political customs. It’s in their interest to create and inflame feelings of deprivation. Moreover, since immigrants more often than not enter the host social structure near the bottom, they are frequently taken over by labor unions who do the same. In the US, specifically, recent immigrants are sometimes annexed by radical organizations with a long history of America-hatred. These influences confuse some immigrants, putting them in mental contradiction with their own choices. They do a great deal of damage by retarding immigrants’ emotional integration into American society. Note that I refer to integration rather than to “assimilation,” a cultural construct. Societies differ in the extent to which they expect immigrants to fit into the national culture: Canada little, France a great deal, with the US somewhere between the two.

Finally, organizations led by native-born who pretend to speak for immigrants also do the latter a great deal of harm by creating false impressions in the general public. The main false impression is that immigrants are more difficult to integrate than they really are. In the US, you seldom hear about the millions of immigrants who think that everything is just peachy or better.

The second consequence from the voluntary nature of the status of immigrants is seldom discussed: Immigrants make better citizens than the average native-born. Over 90% of Americans, for example, only took the trouble to be born in the right country. That’s akin to choosing your parents carefully. There is not much merit in it, first because it just happened. Secondly, most native-born citizens of any country would not have enough information to choose the land of their birth over others if the thought crossed their minds. Here again, the exception proves (“tests”) the rule: It’s possible to make such a choice since millions do, by emigrating, precisely. This would include the tens of thousands of Americans who live abroad more or less permanently.

Immigrants, by contrast, choose and keep choosing merely by staying put. Their choice is deliberate, conscious and informed. Their appreciation for the country of their immigration is a form of adult love. It should be superior to the baby-love of many of the native-born who only know one mother. If I were still a scholar, I would have a topic for a good study here. I would begin to endeavor to find data to test what is now a compelling hypothesis. But I am not, so it will remain a mere hypothesis as far as I am concerned.

Immigrants into the US, specifically, possess superior qualities, whatever their national origin. First, they are usually hard workers, because this country offers some of the least generous social benefits (“welfare”) in the developed world. To sponge off “the system” in this country takes a great deal of skill. (A pregnant idea but I won’t go there in this essay.) Immigrants into this country must also comprise a large proportion of enterprising people, for the same reason. (There, I think data exist that demonstrate the validity of this claim.) Moreover, immigrants into the US have to be more adventurous, braver, than the native-born, on average. To change one’s living conditions drastically takes more courage, more tolerance of risk, and more imagination than moving to the next suburb.

I believe, accordingly, that American exceptionalism is rooted in exceptional institutions but that it is fertilized by wave after wave of immigration of a superior kind.

The following link will take you to an article about illegal immigration specifically that I published in the Independent Review with Russian immigrant Sergey Nikiforov: “If Mexicans and Americans could Cross the Border…“.

“The Impossibility of a University”

I was just reading David Friedman’s The Machinery of Freedom. He published the first edition in 1973. Amidst the wild ride of the contemporary American university (Evergreen State College being the most heinous single episode), one passage seems especially prescient.

From chapter twelve in the third edition:

The modern corporate university, public or private, contains an implicit contradiction: it cannot take positions, but it must take positions [sides]. The second makes the demand for a responsible university appealing, intellectually as well as emotionally. The first makes not merely the acceptance of that demand but its very consideration something fundamentally subversive of the university’s proper ends.

It cannot take positions because if it does, the efforts of its members will be diverted from the search for truth to the attempt to control the decision-making process. If it takes a public position on an important matter of controversy, those on each side of the controversy will be tempted to try to keep out new faculty members who hold the other position, in order to be sure that the university makes what they consider the right decision. To hire an incompetent supporter of the other side would be undesirable; to hire a competent one, who might persuade enough faculty members to reverse the university’s stand, catastrophic. Departments in a university that reaches corporate decisions in important matters will tend to become groups of true believers, closed to all who do not share the proper orthodoxy. They so forfeit one of the principal tools in the pursuit of truth — intellectual conflict.

A university must take positions. It is a large corporation with expenditures of tens of millions of dollars and an endowment of hundreds of millions. It must act, and to act it must decide what is true. What causes high crime rates? Should it protect its members by hiring university police or by spending money on neighborhood relations or community organizing? What effect will certain fiscal policies have on the stock market, and thus the university’s endowment? Should the university argue for them? These are issues of professional controversy within the academic community.

A university may proclaim its neutrality, but neutrality as the left quite properly argues, is also a position. If one believes that the election of Ronald Reagan or Teddy Kennedy would be a national tragedy, a tragedy in particular for the university, how can one justify letting the university, with its vast resources of wealth and influence, remain neutral?

The best possible solution within the present university structure has been not neutrality but the ignorance or impotence of the university community. As long as students and faculty do not know that the university is bribing politicians, investing in countries with dictatorial regimes, or whatever, and as long as they have no way of influencing the university’s actions, those acts will not hinder the university in its proper function of pursuing truth, however much good or damage they may do in the outside world. Once the university community realizes that the university does, or can, take actions substantially affecting the outside world and that students and faculty can influence those actions, the game is up.

There is no satisfactory solution to this dilemma within the structure of the present corporate university. In most of the better universities, the faculty has ultimate control. A university run from the outside, by a state government or a self-perpetuating board of trustees, has its own problems. A university can pretend to make no decisions or can pretend that the faculty has no control over them, for a while. Eventually someone will point out exactly what the emperor is wearing.

With an activist culture in place, the university endures more and more blows to its truth-seeking abilities. UC Berkeley spent an estimated $600,000 on security for Ben Shapiro a couple months ago, after the chaos and protests of the past year. Staff cut seating in half, worried that protesters would dismantle chairs and throw them onto the audience on the bottom floor. Now, so I hear, student clubs are having difficulty hosting evening meetings on campus, as the administration makes up for the expenses by cutting down on electricity usage and janitorial services. Club stipends, of course, are down. All of this damages the educational environment.

My friends went to see Ben, and watched a woman with a “Support the First Amendment! Shalom Shapiro!” sign get dragged into a crowd and beaten up. (Not reported by major media; falsely reported as a knifing by right-wing media.) David identified the internal problem of the corporate university, which I believe we see escalating; the external problem is when outsiders — most of the violent rioters in Berkeley since the beginning of 2017 — understand the political power of the university and the speech that goes on there, and seek to control the process of intellectual conflict through physical force. Both problems advance in step with the political involvement of the students as well as the teachers.

Minarchism, Anarchism, and Democracy: A Shared Challenge

Minarchism–basically as small a government as we can get away with–is probably the most economically efficient possible way to organize society. A night watchman state providing courts of last resort and just enough military to keep someone worse from taking over.

The trouble (argues my inner anarchist) is that if we’ve got a government–an organization allowed to force/forbid behaviors–we’re already on the slippery slope to abuse of powers through political trading. Without an entrenched culture that takes minarchism seriously it’s only a matter of time before a) the state grows out of control and you’re no longer in a minarchist Utopia, or b) a populace unwilling to do their part allows violent gangs to fill the power vacuum.

Having a government at all is a risky proposition from the perspective of someone worried about the abuse of that power. Better not to risk it at all.

Anarchism relies on the right culture in a similar way. This is clear to critics of anarchism (basically it’s just the minarchists who are willing to take anarchists seriously at all) and is the crux of an important argument against anarchism. Without the right culture, what’s to stop people from just creating some new government? Nothing at all.

In fact, we face the same problem in the military-industrial-nanny-state complex of our imperfect real world. For any government–or lack of government–to work, the ideological framework of the people living in that society has to line up properly. To the extent people are ignorant, distracted, short-sighted, biased, or mean-spirited, we get governance that reflects those flaws.

If we want to live in a better world, we can argue all day about what sort of government we do or don’t want. But ultimately we have to work on improving the culture, because the median voter is still in charge.

The language of the economy: prices

50 Things That Made the Modern Economy is looking for a 51st thing. Below is the email I sent them.


Cycling in Amsterdam

I just got back from a week in London and a week in Amsterdam. Probably the most striking thing I encountered was the wonderful Dutch cycling culture. Any transit system involves some implicit negotiation between motorists, pedestrians, and others. On Long Island the motorists won. In Amsterdam, cyclists won.

I’m on a bit of a Dutch cycling high, despite only spending about 2 hours on 2 wheels while in Amsterdam. The Dutch take their bicycles seriously and they shape their environment to that end. The Airbnb I stayed at had frontage on a bicycle road but no direct access to a motorway. I’m not 100% on this, but I think the Netherlands’ liability laws make the faster vehicle strictly liable for accidents, which serves as an implicit subsidy for bikes.

[Photo: A typical Dutch cycle path]

Here are some things I like about this culture:

  • The engineering. I really like the way they do bike locks… nearly every bike has a built-in lock that disables the rear wheel. Most of these locks also have a chain to lock the bike to a fence, and the chain uses the same key as the rear-wheel lock.
  • It encourages enough density to get people interacting with each other, but still expands your plausible travel distance. They’ve got a nice balance between closeness and congestion.
  • It’s easier on the environment (excluding the costs of building bikes and bike roads).
  • Light physical exercise feels great.
  • The infrastructure involved in managing bike traffic is pretty minimal. Speeds are slow enough that human judgement works well outside of the busiest areas.

Why should libertarians care? Well, most of them probably have better things to focus on. But for those of us living in or near dense cities, this is an example of a way of life that fits nicely with our broader goal of a peaceful, prosperous, liberal order. If Manhattan tried to be more like Amsterdam it could be a huge boon (I think… based on my preferences and zero scientific analysis) to human flourishing.

Social noble lies

In the Republic, Socrates introduced the “noble lie”: governmental officials may, on occasion, be justified in propagating lies to their constituents in order to advance a more just society. Dishonesty is one tool of the political class (or even pre-political — the planning class) to secure order. This maxim is at the center of the debate about transparency in government.

Then, in the 20th century, when academic Marxism was in its prime, the French Marxist philosopher Louis Althusser became concerned with the issue of social reproduction. How does a society survive from one generation to the next, with most of its mores, morals and economic model still in place? This question was of particular interest to the Orthodox Marxists: their conflict theory of history doesn’t illuminate how a society is held together, since competing groups are always struggling for power. Althusser came up with “Ideological State Apparatuses”: institutions, coercive or purely ideological, that reinforce societal beliefs across generations. This necessarily includes all the intelligence agencies, like the CIA and FBI, and state thugs, like the Gestapo and NKVD, but it also includes the family unit (authorized by a marriage contract), public education and the political party system. “ISAs” also include traditions in the private sector, since for Althusser, the state exists primarily to protect these interests.

It’s rarely easy to point to a country and say, “This is the dominant ideology.” However, and here the Marxists are right, it can be useful to observe the material trends of citizens, and what sorts of interests people (of any class) save up money for, teach their children to admire, etc. In the United States, there is a conditional diversity of philosophies: many different strains abound, but most within the small notecard of acceptable opinion. Someone like Althusser might say there is a single philosophy in effect — liberal capitalism — getting reproduced across apparatuses; a political careerist might recognize antagonists across the board vying for their own particular interests. In any case, the theory of ISAs is one answer to conflict theory’s deficiencies.

There is no reason, at any time, to think that most of the ideas spreading through a given society are true. Plenty of people could point to a lesson taught in a fifth grade classroom and find something they disagree with, and not just because elementary school lessons are often simplified to the point of distortion. Although ideas often spread naturally, they can also be thrust upon a people, like agitprop or Uncle Sam, and their influence can be more or less deleterious.

Those outlooks thrust upon a people might take the form of a noble lie. I can give qualified support for noble lies, but not for the government. (The idea that noble lies are a right of government implies some sort of unique power for government actors.) There are currently two social lies which carry a lot of weight in the States. The first one comes from the political right, and it says: anyone can work their way to financial security. Anyone can come from the bottom and make a name for themselves. Sentiment like this is typically derided as pulling oneself up by the bootstraps, and in the 21st century we find this narrative is losing force.

The second lie comes from the left, and it says: the system is rigged for xyz privileged classes, and it’s necessarily easier for members of these groups to succeed than it is for non-members. White people, specifically white men, all possess better opportunities in society than others. This theory, on the other hand, is increasingly popular, and continues to spawn vicious spinoffs.

Of the two, neither is true. That said, it’s clear which is the more “socially useful” lie. A lie which encourages more personal responsibility is clearly healthier than one which blames one’s ills all on society and others. If you tell someone long enough that their position is out of their hands because the game is rigged, they will grow frustrated and hateful, and lose touch with their own creative power, opting to seek rent instead. Therefore one lie promotes individualism, the other tribalism.

Althusser wrote before the good old-fashioned class struggle of Marxism died out, before the postmodernists splintered the left into undialectical identity politics. God knows what he would think of intersectionality, the ninth circle in the Dante’s Inferno of progressivism. These ideas are being spread regardless of what anyone does, are incorporated into “apparatuses” of some sort, and are both false. If we had to choose one lie to tell, though, it’s obvious to me which is preferable: the one which doesn’t imply collectivism in politics and tribalism in culture.