The Admissions Scandal Will Improve Universities

I would have been annoyed, I would have felt frustrated, if my alma mater, Stanford, had been left out of the university admissions scandal. After all, what does it say about your school if it’s not worth bribing anyone to get your child admitted to it? Fortunately, it’s right in the mix.

I spent ten years in American universities as a student, and thirty as a professor. You might say that they are my milieu, that I am close to being an expert on them, or, perhaps, just a native informant. Accordingly, reactions to the March 2019 admissions scandal seem a bit overwrought to me. Except, that is, for the delight of encountering among the line cutters the names of famous and successful people one usually associates with a good deal of sanctimoniousness. The main concern seems to be that the cheating is a violation of the meritocratic character of universities.

In fact, American universities have never been frankly or unambiguously meritocratic. They have always fulfilled several social functions simultaneously and served different, only partially overlapping constituencies. Sure enough, there is some transmission of knowledge taking place in almost all of them. I don’t mean to belittle this. I am even persuaded that there is a palpable difference between intelligent people who have attended college and those who have not. In addition, it should be obvious that some of the knowledge transmitted in higher education organizations is directly instrumental to obtaining a job (most engineering courses of study, accounting). Even so, in general, procuring employment was never expressly the primary role of undergraduate education in the US.

The best of universities also contribute to the production of new knowledge to a considerable extent. University research probably accounts for the bulk of the considerable body of American research in all fields. (Incidentally, I believe that the dual function of American faculty members as both researchers and teachers largely accounts for the superior international reputation of American higher education. More on this on demand.) The remainder of schools of higher education imitate the big guys and pretend to be engaged in research or in other scholarly pursuits. Many succeed some of the time. Some fail completely in that area. In fact, most university professors are well aware of the degree to which each individual college or university offers conditions propitious to the conduct of research, and of the degree to which it demands it. But teaching and research are not the whole story of American academia by a long shot. Those in the general public who think otherwise are deluded or largely misinformed.

Most American universities are obviously superb sports venues; a few are world-class in that area. In some schools, football financially supports learning rather than being an adjunct activity. Some, such as Indiana University where I taught, make do with basketball, which can also be quite lucrative. It’s obvious too that residential universities – which include almost all the top names – are reasonably good adolescent-sitting services: yes, the students get drunk there, but there is a fair chance they will do it on campus and not drive afterwards. If they do too much of anything else that’s objectionable – at least this was true until quite recently – there is a fair chance the story will get squashed on campus and remain there forever.

And, of course, of course, the big universities, especially the residential version but not only it, are incomparable devices to channel lust. They take young people at approximately mating age and maximize the chance that they will come out four, or more likely five, years later either suitably matched or appropriately unmatched. It’s a big relief for the parents to know that if their darling daughter becomes pregnant out of wedlock, it will at least be through the deeds of a young person from their own social class. For some parents, universities would be well worth the cost if they limited themselves to staving off what the French call “mésalliances.” (Go ahead, don’t be shy; you know more French than you think.)

Naturally, universities could not have been better designed to promote networking, offering at once numerous opportunities to meet new people (but not too new), and plenty of leisure time to take advantage of them, all in a conveniently limited space. As is well known, the results of this networking often last a lifetime. For some, campus networking constitutes an investment that keeps paying dividends forever.

And I kept the most important university function for last. I think that from the earliest times in America, universities have served the purpose of certifying upper-class, then middle-class, status. This credentialing function usually comes in two parts. The young person gets social points for being accepted at whatever college or university the parents consider prestigious enough, nationally, internationally, or even locally. The student gets more points for actually graduating from the same school or one equivalent to it.

This idea that higher education organizations publicly certify social status is so attractive that it has spread downward in my lifetime, from the best-known schools, Ivy League and better (such as Stanford), down to all state universities, then to state colleges with lower admission standards, and even down to two-year community colleges. In my corner of California, possessors of a community college Associate of Arts degree are considered sort of upper lower-class. This small degree influences marriage choices, for example. I used to know a man of a sort of hillbilly extraction who was very intelligent and extremely eager to learn and who attended community college for pretty much twenty years. He kept faithful to his origins by never even earning an AA degree. (True story. Some other time, of course.)

Merit recruitment of faculty and students

I and the academics I know are not very troubled by the cheating news, only by the crudeness involved, especially in the raw exchange of cash for illicit help. I suppose most of us realized, even if in a sort of subliminal way, that admission was never thoroughly or even mainly based on merit as measured, for example, by high school achievement and by test results. My own undergraduate experience is limited but varied. I spent two years in a good community college where pretty much everyone who could read was accepted. Then, I transferred to Stanford with a full tuition scholarship. Academic merit did not loom very large in either school; if anything, it counted for a bit more in the community college than it did at Stanford.

In order to preserve a reputation for intellectual excellence that contributes to their ability to credentialize without subsuming it at all, universities and colleges must actively recruit. They have first to attract faculty with a sufficient supply of their own (academic) credentials in relation to the status the universities seek to achieve or to keep. Often – regularly, for many – they also reach down to recruit as students promising young people from outside their regular socioeconomic catchment area. The motives of those who make the corresponding decisions are not always clear even to themselves. One is do-gooding, of course, completely in line with the great charitable American tradition (which this immigrant personally admires).

At the same time, colleges and universities don’t select scholarship recipients for their moral merit but for their grades and for other desirable features. The latter include, of course, high athletic performance. Additionally, in my observation, many, or at least some, also recruit poor undergraduates the way a good hostess composes a menu. When Stanford plucked me out of my young single immigrant poverty, it was not only for my good community college GPA; I was also an interesting case, an interesting story. (There were no French undergrads at all on campus at the time. Being French does not have cachet only for foolish young women.) Another transfer student they recruited at the same time was a Turkish Jew whose mother tongue was 16th-century Spanish (Ladino). How is that for interesting? I am speaking about diversity before this excellent word was kidnapped by an unlovable crowd.

Attendance, grades and merit

At Stanford, I realized after a couple of quarters that many undergraduates did not care to go to class. I discovered a little later (I never claimed to be the sharpest knife in the drawer!) that few were preoccupied with grades either. That was because it was quite difficult to get a really bad grade so long as you went through the motions.

I was puzzled that several professors took an instant liking to me. I realized later, when I was a teacher myself, that it was largely because I was afraid of bad grades, greedy for good grades, and displayed corresponding diligence. I thought later that many of the relaxed students were legacy admissions (I did not know the term then) who had good things coming to them pretty much irrespective of their GPA. Soon, I perceived my own poor-boy conventional academic striving as possibly a tad vulgar in context. I did not resent my relaxed fellow students, however. I kind of knew they paid the freight, including mine. Incidentally, I am reporting here, not complaining. I received a great education at Stanford, which changed my life. I was taught by professors – including a Nobel Prize winner – that I richly did not deserve. The experience transformed and improved my brain architecture.

About ten years after graduating, I became a university teacher myself, in several interesting places. One was a denominational university that was also pricey. I remember that there were always well-dressed young women around, smiley, with good manners, and vacant eyes. (I don’t recall any males of the same breed; I don’t know why.) They would do little of the modest work required. Come pop-quiz time, they would just write their name neatly on a piece of blank paper. I gave them the lowest grade locally possible, a C, of course. Same grade I gave, without comment, to a bright-faced, likable black athlete who turned in the best written essay I had ever seen in my life. There were no protests, from any party. We had a tacit understanding. I speculate the young women and the star athletes had the same understanding with all the other faculty members. I don’t know this for a fact, but I don’t see how else they could have remained enrolled.

And then, there were always cohorts of students bearing a big sticker on their forehead that said, “I am not here because of my grades but in spite of my grades.” OK, it was not on their forehead but on their skin. That was damned unfair to those minority students who had gained admission under their own power, if you ask me. Nobody asked me. And then, especially in California, there has long been the tiny issue of the many students whose parents come from countries where they eat rice with chopsticks. Many of those couldn’t gain admission to the school of their choice if they had invented a universal cure for cancer before age eighteen. As I write, this issue is still being litigated. I doubt there is anyone in academia who believes the plaintiffs don’t have a case.

Meritocracy!

Virtue out of evil

The mid-March 2019 admissions scandal might paradoxically make universities better, from a meritocratic standpoint. By throwing a crude light on their admission process and turning part of the public cynical about it, the scandal may seriously undermine their credentialing function. It will be transformed, or at least it may well be watered down. I mean that if you can no longer trust that Johnny’s admission to UnivX is proof of Johnny’s worth, then you might develop a greater interest in what Johnny actually accomplished while he was attending UnivX. You might become curious about Johnny’s course of study, his choice of classes, even his grades, for example. That wouldn’t be all bad.

Some schools, possibly many schools because universities are like sheep, may well respond by strengthening their knowledge-transmission function, advertising the fact loudly and, with luck, becoming trapped in their own virtuous snare. Some universities, possibly those that are now second-tier rather than the famous ones (those could well prove immune to any scandal, indestructible), may actually become more of the learning centers they have long pretended to be.

I can envision a scenario where the US has a first kind of good university: good for intellectual reasons to an extent, but mostly good for continued social credentialing. Next to the first kind would be higher education establishments mainly dedicated to studying and learning. The latter, if they were successful, would unavoidably and eventually grow a credentialing function of sorts. That would be fine. The two categories might compete for students. That would be fine too. It would be good for recruiters to have a clear choice of qualities. I think that university professors, or some of them, many of them, would move easily between the two categories of schools. There would be a single labor market but different vocations, perhaps serialized in time. Above all, students would have more choices and more sharply defined choices. Everyone could stop pretending. Actual intellectual merit and grit would find a bigger place in the higher education enterprise.

This is all wool-gathering, of course. It depends on one of my big predictions being false. I mean that none of the above matters if American universities are committing suicide before our eyes. I refer to unjustified and unjustifiable tuition increases over thirty years, to their collaboration in the moral horror that student loans have become; I am thinking of their capture by a monolithic tribe of ideologues clinging to an old, defeated utopianism. I refer even more to their current inability or unwillingness to protect free speech and the spirit of inquiry.

Libertarianism and Neoliberalism – A difference that matters?

I recently saw a thoroughgoing Twitter conversation about the differences between libertarianism and neoliberalism between Caleb Brown, whom most of you presumably know from the Cato Daily Podcast, and the Neoliberal Project, an American project founded to promote the ideas of neoliberalism. For those who follow the debate, it is nothing new that the core of this contention goes well beyond etymology – it concerns one of the most crucial topics in liberal scholarship: the relationship between government and free markets.

Arbitrary categories?

I can understand the aim of further structuring the liberal movement into subcategories which represent different types of liberalism. Indeed, I often use these subcategories myself to distance my political ideology from liberal schools I do not associate with, such as paleo-libertarianism or anarcho-capitalism. However, I do not see such a distinct line between neoliberalism and libertarianism in practice.

As described by Caleb Brown (and agreed on by the Neoliberal Project), neoliberalism wants to aim the wealth generated by markets at specific social goals using some government mechanism, whilst libertarianism focuses on letting the wealth created by free markets flow where it pleases, so to speak. In my opinion, the “difference” between these schools is rather a spectrum of trust in government measures, with libertarianism on one side and neoliberalism on the other.

I’ve often reached a certain point in the same discussion with fellow liberals:

Neoliberal: I agree that free markets are the most efficient tool to create wealth. They are just not very good at distributing it. By implementing policy X, we could help to correct market failure Y.

Libertarian: Yeah, I agree with you. Markets do not distribute wealth efficiently. However, the government has also done a poor job of trying to alleviate the effects of market failures, especially when we look at case Z… (Of course, libertarians bring forth arguments other than public choice, but it is a suitable example.)

After reaching this point, advocating for governmental measures to fix market failures often becomes a moral and personal matter. My favourite example is emissions trading. I am deeply intrigued by the theoretical foundation of the Coase theorem and by how market participants can still find a Pareto-efficient equilibrium just by negotiating. Based on this theoretical framework, I would love to see a global market for carbon emissions trading.
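To make that intuition concrete, here is a minimal numeric sketch of Coase-style bargaining in Python. The dollar figures and the two-party setup are my own illustrative assumptions, not data from any real market:

```python
# Minimal Coase-theorem sketch (illustrative numbers only): a factory's
# emissions cause $100 of damage to a neighbor, and the factory could
# abate them for $60. Abatement is the efficient outcome, and bargaining
# reaches it regardless of who initially holds the property right.

DAMAGE = 100     # harm to the neighbor if the factory keeps emitting
ABATEMENT = 60   # cost for the factory to stop emitting

def outcome(polluter_has_right: bool) -> str:
    if polluter_has_right:
        # The neighbor gladly pays anything between $60 and $100 to get
        # the factory to abate; both parties come out ahead.
        return f"neighbor pays between ${ABATEMENT} and ${DAMAGE}; factory abates"
    # The factory owes $100 in damages if it emits, so it abates for $60.
    return f"factory abates for ${ABATEMENT} instead of paying ${DAMAGE} in damages"

for right_holder, has_right in (("polluter", True), ("neighbor", False)):
    print(f"right held by {right_holder}: {outcome(has_right)}")
```

Either way the right is assigned, negotiation lands on the cheap abatement – the efficient allocation an emissions-trading market tries to reproduce at scale, with allowance prices doing the negotiating.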

However, various mistakes were made during the implementation of emission allowances. First, there were far too many allowances on the market, which caused the price to drop dangerously low. Additionally, important markets such as air and ship transportation were initially left out. All in all, a policy buttressed by solid theory had a more than rough start due to bad implementation.

At this point, neoliberals and libertarians diverge in their responses. A libertarian sees another failure of the government to implement a well-intended policy, whereas a neoliberal sees a generally good policy which just needs a bit of further improvement. In such cases, the line between neoliberals and libertarians becomes very thin. And from my point of view, we make further judgments based on our trust in the government as well as on our subjective moral relation to the topic.

I have too often seen government fail in areas (e.g. industrial policy) that should be left nearly entirely to free markets. However, I have also seen the same government struggling to find an adequate response to climate change. Still, I believe that officials should carry on with their endeavours to counteract climate change, whereas they should stay out of industrial policy.

Furthermore, in the recent past, a tremendous number of libertarian policy proposals have been put forth which remodel the role of government in a free society: A libertarian case for mandatory vaccination? Alright. A libertarian case for UBI? Not bad. A libertarian case for a border wall? I am not so sure about that one.

Although these examples may define libertarianism in their own context, the general message remains clear to me: libertarians are prone to support governmental measures if they rank the value of a specific end higher than the risk of a failed policy. Since an article like this is not the right framework for gathering enough data to prove my point empirically, I rely on the conjecture that the core question of where the government must interfere is heavily driven by subjective moral judgements.

Summary

Neoliberals and Libertarians diverge on the issue of government involvement in the economy. That’s fine.

Governmental policies often do not fully reach their intended goals. That’s also fine.

The distinction between neoliberals and libertarians is merely a threshold of how much trust one puts in the government’s ability to cope with problems. Neither school should value this distinction too much, since it is an incredibly subjective one.

Are Swedish University Tuition Fees Really Free?

University tuition fees are always popular talking points in politics, media, and over family dinner tables: higher education is some kind of right; it’s life-changing for the individual and super-beneficial for society, thus governments ought to pay for it on economic as well as equity grounds (please read with sarcasm). In general, the argument for entirely government-funded universities is popular way beyond the Bernie Sanders wing of American politics. It’s a heated debate in the UK and Australia, whose universities typically charge students tuition fees, and a no-brainer in most Scandinavian countries, whose universities have long had up-front tuition fees of zero.

Many people in the English-speaking world idolize Scandinavia, always selectively and always for the wrong reasons. One example is the university-aged cohort enviously drooling over Sweden’s generous support for students in higher education and, naturally, its tradition of not charging tuition fees even at top universities. These people are seldom well informed about what that actually means – or that the cost of attending university is probably lower in both England and Australia. Let me show you some vital differences between these three countries, and thereby shed some much-needed light on the shallow debate over tuition fees:

The entire idea of university education is that it pays off – not just socially, but economically – from the individual’s point of view: better jobs, higher lifetime earnings, or lower risks of unemployment (there’s some dispute here, and insofar as it ever existed, the wage premium from a university degree has definitely shrunk over the last decades). The bottom line remains: if a university education increases your lifetime earnings and thus acts as an investment that yields individual benefits down the line, then individuals can appropriately and equitably finance that investment with debt. As an individual, you have the financial means to pay back your loan with interest; as a lender, you have a market in which to earn money – neither of which is much different from, say, a small business borrowing money to invest in and build up its business. This is not controversial, and indeed follows naturally from the very common-sense principle that those who enjoy the benefits ought to at least contribute towards their costs.
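As a toy illustration of that investment logic, here is a back-of-the-envelope net-present-value check in Python; every number in it is a hypothetical of mine, chosen only to show the mechanics:

```python
# Toy NPV check of the "degree as a debt-financed investment" logic.
# All figures are hypothetical, for illustration only.

def npv(cashflows, rate):
    """Discount a list of yearly cashflows (year 0 first) to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

tuition_debt = -30_000   # assumed up-front cost, financed with debt
wage_premium = 8_000     # assumed extra yearly earnings from the degree
working_years = 40
discount_rate = 0.04     # assumed rate, roughly the cost of the loan

cashflows = [tuition_debt] + [wage_premium] * working_years
print(f"NPV of the degree: {npv(cashflows, discount_rate):,.0f}")
# A positive NPV is the precise sense in which the graduate can service
# the debt out of the wage premium and still come out ahead.
```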

Another general reason why we wouldn’t want to artificially price a service such as university education at zero is strictly economic; doing so bumps demand up above what is economically warranted. University educations are scarce economic goods with all the properties we’re normally concerned about (they have an opportunity cost in their use of rivalrous resources, and their benefits accrue primarily to the individuals involved in the transaction), the use and distribution of which need to be subject to the same market test as every other good. Prices serve a socially beneficial purpose, and that mechanism applies even in sectors people mistakenly believe to be public or social, access to which supposedly forms some kind of special “human right.”

From a political or social-justice point of view, such arguments tend to carry very little weight, which is why the funding side matters so much. Because of debt aversion or cultural reasons, lower socioeconomic strata tend not to go to university as much as progressives want them to – scrapping tuition fees thus seems like a benefit to those sectors of society. When the financing of those fees comes out of general taxation, however, the policy can easily turn regressive in the correct economic meaning of the word, disproportionately benefiting the well off rather than the poor and under-privileged it was intended to help:

The idea that graduates should make no contribution towards the tertiary education they will significantly benefit from, while expecting the minimum wage hairdresser in Hull, or waiter in Wokingham to pick up the bill by paying higher taxes (or that their unborn children and grandchildren should have to pay them due to higher borrowing) is highly regressive.

Although not nearly enough people say it, university is not for everyone. The price tag confronts students, who perhaps would go to university to fulfill an expectation rather than for any wider economic or societal benefit, with the cost, as well as the benefit, of attending university.

Having said that, I suggest that attending university is probably more expensive in your utopian Sweden than in England or Australia. The models these three countries have set up look very different at first: in Sweden the government pays the tuition and subsidises your studies; in England and Australia you have to take on debt in order to cover tuition fees. A cost is always bigger than no cost – so how can I claim the reverse?

With the following proviso: Australian and English students don’t have to pay back their debts until they earn above a certain income level (UK: £18,330; Australia: $55,874). That is, those students whose yearly earnings never reach these levels will have their university degree paid for by the government regardless. That means the Scandinavian and Anglophone models are almost identical: no or low costs accrue to students today, in exchange for higher costs in the future provided you earn enough income. Clearly, paying additional income taxes on high incomes but not on low incomes (Sweden), or paying back my student debt to the government only if I earn a high income rather than a low one (England, Australia), amounts to the same thing. Changing the label of a financial transfer from the individual to the government from “debt repayment” to “tax” has very little meaning in reality.

In one way, the Aussie-English system is somewhat more efficient, since it internalises costs to only those who benefited from the service rather than blanket-taxing everyone above a certain income threshold: it allows high-income earners who did not reach their financial success by going to university to avoid paying the general penalty tax on high incomes that Swedish high-earners do.

Let me show the more technical aspect. In England, earning above £18,330 places you in the 54th percentile, higher than the majority of income-earners. Similarly, in Australia, $55,874 places you above 52% of Aussie income-earners. For Sweden, with the highest marginal income taxes in the world, a similar statistic is trickier to estimate, since there is no official cut-off point above which repayment starts. Instead, I have to estimate the line at which you “start paying” the relevant tax. Which line is the correct one? Sweden has something like 14 different steps in its effective marginal tax schedule, ranging from 0% for monthly incomes below 18,900 SEK (~$2,070) to 69.8% for incomes above 660,000 SEK (~$72,350), or even 75% in estimations that add sales taxes on top of the marginal income taxes.


If we placed on that schedule the income levels at which Australian and English students start paying back the cost of their university education, both would find themselves in the middle range, facing a 45.8% effective marginal tax – suggesting that they would have greatly exceeded the income level at which Swedish students start “paying back” their tuition fees. Moreover, the Australian threshold converts to 367,092 SEK as of today, for a position in the 81st percentile – that is, higher than 81% of Swedish income-earners. The UK threshold, being somewhat lower, converts to 217,577 SEK and lands in the 48th percentile, earning more than 48% of Swedish income-earners – we’re clearly not talking about very poor people here.
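For transparency, the arithmetic behind those two conversions can be reproduced in a few lines of Python; the exchange rates below are simply backed out of the article’s own figures (“as of today” at the time of writing), so treat them as assumptions that have since drifted:

```python
# Reproducing the threshold conversions above. The exchange rates are
# implied by the article's own numbers and are assumptions, not quotes.

SEK_PER_AUD = 6.57    # implied by 55,874 AUD -> ~367,092 SEK
SEK_PER_GBP = 11.87   # implied by 18,330 GBP -> ~217,577 SEK

repayment_thresholds_sek = {
    "Australia": 55_874 * SEK_PER_AUD,
    "UK": 18_330 * SEK_PER_GBP,
}

for country, sek in repayment_thresholds_sek.items():
    print(f"{country}: repayment starts at ~{sek:,.0f} SEK per year")
# Per the article, these land at roughly the 81st (Australia) and 48th
# (UK) percentiles of the Swedish income distribution.
```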

The much-elevated marginal tax schedule facing Swedish income-earners, together with the simplified calculations above, indicates that despite tuition fees of zero, it is more expensive to attend university in Sweden than in England or Australia. Since Australia’s pay-back threshold is so high relative to the income distribution of Sweden (81st percentile), it is conceivably much cheaper for Australian students to attend university than it is for Swedish students, even though the tuition list prices differ (the American debate is much exaggerated precisely because so few people pay the universities’ official list prices).

Having governments completely fund universities out of general taxation is a regressive measure that probably helps the rich more than it helps the poor. The solution is not some quota-and-scholarship scheme to encourage certain groups, but rather to a) reinstate and increase tuition fees where applicable, b) cut government funding to universities, or, ideally, get government out of the sector entirely.

That’s a progressive policy with respect to universities. Accepting it, however, would be anathema to most people in politics, left and right.

The State and education – Part IV: Conclusion

On August 17, 2018, the BBC published an article titled “Behind the exodus from US state schools.” After taking the usual swipes at religion and political conservatism, the article made the real reason for the haemorrhage evident in the personal testimony of a mother who had withdrawn her children from public school in favour of a charter school:

I once asked our public school music teacher, “Why introduce Britney Spears when you could introduce Beethoven?” says Ms. Helmi, who vouches for the benefits to her daughters of a more classical education.

“One of my favourite scenes at the school is seeing a high-schooler playing with a younger sibling and then discussing whether a quote was from Aristotle or Socrates.”

The academic and intellectual problems with the state school system and curriculum are perfectly encapsulated in the quote. The hierarchy of values is lost, not only lost but banished. This is very important to understand in the process of trying to safeguard liberty: the progenitors of liberty are not allowed into the places that claim to incubate the supposed future guardians of that liberty.

In addition to any issues concerning academic curricula, there is the problem of investment. One of the primary problems I see today, especially as someone who is frequently asked to give advice on application components, such as résumés and cover letters/statements of purpose, is a sense of entitlement vis-à-vis institutional education and the individual; it is a sense of having a right to acceptance/admission to institutions and career fields of choice. In my view, the entitlement stems from either a lack of a sense of investment or perhaps a sense of failed investment.

On the one hand, as E.S. Savas effectively argued in his book Privatization and Public-Private Partnerships, if the state insists on being involved in education and funding institutions with tax dollars, then the taxpayers have a right to expect to profit from public institutions, i.e. to have a reasonable expectation that their children may be admitted to and attend them – it’s the parents’ money, after all. On the other hand, the state schools are a centralized system and as such are ill-adapted to adjustment, flexibility, or personal goals. And if all taxpayers have a right to attend a state-funded institution, such places can be neither fully competitive nor meritocratic. Additionally, Savas’ argument serves as a reminder that state schooling is a manifestation of welfarism via democratic socialism and monetary redistribution through taxation.

That wise investing yields dividends is a truth most people freely recognize when discussing money; when it is applied to humans, people start to seek caveats. Every year, the BBC runs a series on 100 “fill-in-the-blank” people – very similar to Forbes’ lists of 30 under 30, top 100 self-made millionaires, richest people, etc. Featured on the BBC list for 2017 was a young woman named Camille Eddy, who at age 23 was already a robotics specialist in Silicon Valley and was working to move to NASA. Miss Eddy’s article begins with a quote: “Home-schooling helped me break the glass ceiling.” Here is what Eddy had to say about the difference between home and institutional schooling, based on her own experience:

I was home-schooled from 1st grade to high school graduation by my mum. My sister was about to start kindergarten, and she wanted to invest time in us and be around. She’s a really smart lady and felt she could do it.

Regarding curriculum choices, progress, and goals:

My mum would look at how we did that year and if we didn’t completely understand a subject she would just repeat the year. She focused on mastery rather than achievement. I was able to make that journey on my own time.

And the focus on mastery rather than achievement meant that the latter came naturally; Eddy tested into Calculus I her first year at university. Concerning socialization and community – two things the public schools pretend to offer when confronted with the fact that their intellectual product is inferior, and their graduates do not achieve as much:

Another advantage was social learning. Because we were with mum wherever she went we met a lot of people. From young to old, I was able to converse well with anyone. We had many friends in church, our home-school community groups, and even had international pen pals.

When I got to college I felt I was more apt to jump into leadership and talk in front of people because I was socially savvy.

On why she was able to “find her passion” and be an interesting, high-achieving person:

And I had a lot of time to dream of all the things I could be. I would often finish school work and be out designing or engineering gadgets and inventions. I did a lot of discovery during those home-school years, through documentaries, books, or trying new things.

In the final twist to the plot, Camille Eddy, an African-American, was raised by a single mother in what she unironically describes as a “smaller town in the US” where the “cost of living was not so high.” What Eddy’s story can be distilled to is a parent who recognized that the public institutions were not enough and directly addressed the problem. All of her success, as she freely acknowledges, came from her mother’s decision and efforts. In the interest of full honesty, I should state that I and my siblings were home-schooled from 1st grade through high school by parents who wanted a full classical education that allowed for personal growth and investment in the individual, so I am a strong advocate for independent schooling.

There is a divide, illustrated by Eddy’s story, created by the concept of investment. When Camille Eddy described her mother as wanting “to invest time in us and be around,” she was simply reporting her mother’s attitude and motivation. However, for those who aspire to Eddy’s achievements and success, whether for themselves or for their children, her words are a reproach. What these people hear instead is, “my mother cared more about me than yours cared about you,” or “my mother did more for her children than you have done for yours.” With statements like Eddy’s, the onus of responsibility for a successful outcome shifts from state institutions to the individual. The responsibility always lay with the individual, especially vis-à-vis public education, since it was designed at the outset to accommodate only the lowest common denominator; but, as philosopher Allan Bloom, author of The Closing of the American Mind, witnessed, ignoring this truth became an overarching American trait.

There are other solutions that don’t involve cutting the public school out completely. For example, Dr. Ben Carson’s mother – a single, working mother who needed the public school, if only as a babysitter – threw out the TV and mandated that he and his brother go to the library and read. As a musician, I know many people who attended public school simply to obtain the requisite diploma for conservatory enrollment but maintain that their real educations occurred in their private preparation – music training, especially at the conservatory level, is inherently an individualistic, private pursuit. But all the solutions start with recognizing that the public schools are inadequate, and that most who have gone out and made a success of life in the bigger world normally had parents who broke them out of the state school mould. In the case of Dr. Carson’s mother, she did not confuse the babysitter (the public school) with the educator (herself, the parent).

The casual expectation that the babysitter can also educate is part of the entitlement mentality toward education that is pervasive in American society. The mentality is rather new. Allan Bloom described watching it take hold, and he fingered the Silent Generation – those born after 1920 who fought in World War II; their primary historical distinction was their comparative lack of education, due to growing up during the Great Depression, and their lack of political and cultural involvement, hence the moniker “silent”[1] – as having raised their children (the Baby Boomers) to believe that high school graduation conferred knowledge and rights. As a boy, Bloom had had to fight with his parents to be allowed to attend a preparatory school and then the University of Chicago, so he later understandably found the entitlement mentality of his Boomer and Generation X students infuriating and offensive. The mental “closing” alluded to in Bloom’s title was the resolute refusal of the post-War generations either to recognize or to address the fact that their state-provided educations had left them woefully unprepared and uninformed.

To close, I have chosen a paraphrase of social historian Neil Howe regarding the Silent Generation, stagnation, and mid-life crises:

Their [Gen X’s] parents – the “Silent Generation” – originated the stereotypical midlife breakdown, and they came of age, and fell apart, in a very different world. Generally stable and solvent, they headed confidently into adult lives about the time they were handed high school diplomas, and married not long after that. You see it in Updike’s Rabbit books – they gave up their freedom early, for what they expected to be decades of stability.

Implicit in the description of the Silent Generation is the idea, expressed by the word “handed,” that they did not earn the laurels on which they built their futures. They took an entitlement, one which failed them. There is little intrinsic difference between stability and security; the same holds for freedom and liberty. History demonstrates that humans tend to sacrifice liberty for security. Branching out from education, while continuing to use it as a marker, we will look next at the erosive social effect entitlements have upon liberty and its pursuit.


[1]Apparently to be part of the “Greatest Generation,” a person had to have been born before or during World War I because, according to Howe, the Greatest Generation were the heroes – hero is one of the mental archetypes Howe developed in his Strauss-Howe generational theory – who engineered the Allied victory; the Silent Generation were just cogs in the machine and lacked the knowledge, maturity, and experience to achieve victory.

Evidence-based policy needs theory

This imaginary scenario is based on an example from my paper with Baljinder Virk, Stella Mascarenhas-Keyes and Nancy Cartwright: ‘Randomized Controlled Trials: How Can We Know “What Works”?’ 

A research group of practically minded military engineers are trying to work out how to effectively destroy enemy fortifications with a cannon. They are going to be operating in the field in varied circumstances so they want an approach that has as much general validity as possible. They understand the basic premise of pointing and firing the cannon in the direction of the fortifications. But they find that the cannonball often fails to hit the target. They have some idea that varying the vertical angle of the cannon seems to make a difference. So they decide to test fire the cannon in many different cases.

As rigorous empiricists, the research group run many trial shots with the cannon raised, and also many control shots with the cannon in its ‘treatment as usual’ lower position. They find that raising the cannon often matters. In several of these trials, raising the cannon produces a statistically significant increase in the number of balls that destroy the fortifications. Occasionally, they find the opposite: the control balls perform better than the treatment balls. Sometimes they find that both groups work, or don’t work, about the same. The results are inconsistent, but on average they find that raised cannons hit fortifications a little more often.
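To see how such inconsistent trial results can arise, here is a toy Monte Carlo version of the engineers’ experiment in Python. The muzzle velocity, angles, noise, and hit tolerance are all invented for illustration:

```python
import math
import random

# Toy version of the engineers' trials: each site has a target at some
# distance, and a shot "destroys" it if the ball lands within 20 m.
# Treatment = cannon raised to 15 degrees; control = the usual 5 degrees.

V = 100.0   # assumed muzzle velocity, m/s
G = 9.81    # gravitational acceleration, m/s^2

def ideal_range(angle_deg: float) -> float:
    """Ideal flat-ground range from compound motion (no wind, no drag)."""
    return V ** 2 * math.sin(2 * math.radians(angle_deg)) / G

def hits(distance: float, angle_deg: float, shots: int = 100) -> int:
    """Count hits, with Gaussian noise standing in for field conditions."""
    count = 0
    for _ in range(shots):
        landed = ideal_range(angle_deg) + random.gauss(0, 30)
        if abs(landed - distance) < 20:
            count += 1
    return count

random.seed(1)
for distance in (150, 350, 550):
    print(f"target at {distance} m: "
          f"control {hits(distance, 5)}/100, raised {hits(distance, 15)}/100")
# Whether "raising the cannon works" flips with target distance --
# exactly the inconsistent pattern the trials keep producing.
```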

A physicist approaches the research group and explains that, rather than just varying the angle at which the cannon is pointed across various contexts, she can estimate much more precisely where the cannon should be aimed using the principle of compound motion, with some adjustment for wind and air resistance. All the research group need to do is specify the distance to the target and she can produce a trajectory that will hit it. The problem with the physicist’s explanation is that it includes reference to abstract concepts like parabolas, and trigonometric functions like sine and cosine. The research group want to know what works. Her theory does not say whether you should raise or lower the angle of the cannon as a matter of policy. The actual decision depends on the context. They want an answer about what to do, and they would prefer not to get caught up testing physics theories about ultimately unobservable entities while discovering the answer.
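What the physicist is offering, in miniature, looks like this: invert the ideal range formula R = v² sin(2θ) / g for the elevation angle. A sketch under the same invented parameters as above, still ignoring wind and drag:

```python
import math

V = 100.0   # assumed muzzle velocity, m/s
G = 9.81    # gravitational acceleration, m/s^2

def elevation_for(distance: float) -> float:
    """Low-trajectory elevation angle (degrees) that lands at `distance`."""
    s = distance * G / V ** 2
    if s > 1:
        raise ValueError("target beyond maximum range at this muzzle velocity")
    return math.degrees(0.5 * math.asin(s))

for d in (150, 350, 550, 900):
    print(f"target at {d} m -> elevate to {elevation_for(d):.1f} degrees")
# Theory answers "what angle, for this target, at this distance" -- a
# question no number of raise-vs-don't-raise trials can settle.
```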

Eventually the research group write up their findings, concluding that firing the cannon at a higher angle can be an effective ‘intervention’ but that whether it is or not depends a great deal on particular contexts. So they suggest that artillery officers will have to bear that in mind when trying to knock down fortifications in the field, but that they should definitely consider raising the cannon if they aren’t hitting the target. In the appendix, they mention the controversial theory of compound motion as a possible explanation for the wide variation in the treatment effect that should, perhaps, be explored in future studies.

This is an uncharitable caricature of contemporary evidence-based policy (for a more aggressive one see ‘Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials’). Ballistics has well-understood, repeatedly confirmed theories that command consensus among scientists and engineers. The military have no problem learning and applying this theory. Social policy, by contrast, has no theories that come close to that level of consistency. Given the lack of theoretical consensus, it might seem more reasonable to test out practical interventions instead and try to generalize from empirical discoveries. The point of this example is that, without theory, empirical researchers struggle to make any serious progress even with comparatively simple problems. The fact that theorizing is difficult or controversial in a particular domain does not make it any less essential a part of the research enterprise.

***

Also relevant: Dylan Wiliam’s quip from this video (around 9:25): ‘a statistician knows that someone with one foot in a bucket of freezing water and the other foot in a bucket of boiling water is not, on average, comfortable.’

Pete Boettke’s discussion of economic theory as an essential lens through which one looks to make the world clearer.

Should The Academic “We” Be Ditched?

“Only kings, presidents, editors, and people with tapeworms have the right to use the editorial ‘we’.” – Mark Twain

When writing academically I use the “we” pronoun. I do so for a variety of reasons, but I am starting to rethink this practice. This may seem like a silly topic, but a quick Google search shows that I’m not the only one who thinks about it: link 1, link 2.

My K-12 teachers, and even my undergraduate English professor, constantly told me that I was prone to writing in a stream of consciousness. My writing, they argued, contained too much of my personality. They pointed to my constant use of “I” as an example of this. I was, in general, an awful English student. In 12+ years of schooling, I rarely used the five-paragraph essay structure that American school children are indoctrinated with. I first adopted the academic “we” in an attempt to force myself to distinguish between personal forms of writing, such as when I write on blogs, where these eccentricities could be tolerated, and technical writing, where they could not.

While that was my initial motivation for using the “we”, I also found the pronoun a way to emphasize the collaborative nature of science. I have several single-authored papers, but I would be lying if I said that any of them were developed in a vacuum, divorced from others’ feedback. Getting feedback at a conference or brown bag workshop may not merit including someone as a co-author, but I find it strange to use “I” academically in this context. For anyone who disagrees with me, I ask that you compare a paper before and after submitting it to the review process. One may hate reviewer #2 for insisting on an obscure estimation technique, but it cannot be denied that they shaped the final version of the paper. Again, I’m not saying we should add reviewers as co-authors, but isn’t using ‘we’ a simple way of acknowledging their role in the scientific process?

I admit, I also enjoy using the academic “we” in part because of its regal connotations. King Michelangelo has a nice ring to it, no?

There are downsides to the use of the academic “we”. On several occasions I’ve had to clarify that I was the sole author of a given paper. What do NOL readers think? Do you use the academic “we”?

#microblog

The State in education – Part III: Institutionalization of learning

In The State in education – Part II: social warfare, we looked at the promise of state-sponsored education and its failure, both socially and as a purveyor of knowledge. The next step is to examine the university, especially since higher education is deeply linked to modern society and because the public school system purports to prepare its students for college.

First, though, a little history of higher education in the West is in order for context, since Nietzsche assumed that everyone knew it when he made his remarks in Anti-Education. The university as an abstract concept dates to Aristotle and his Peripatetic School. Following his stint as Alexander the Great’s tutor, Aristotle returned to Athens and opened a school at the Lyceum (Λύκειον) Temple. There, for a fee, he provided the young men of Athens with the same education he had given Alexander. On a side note, this is also a beautiful example of capitalist equality: a royal education was available to all in a mutually beneficial exchange; Aristotle made a living, and the Athenians received brains.

The Lyceum was not a degree-granting institution, and only by a man’s knowledge of philosophy, history, literature, language, and debating skills could one tell that he had studied there. A cultural premium on bragging rights soon followed, though, and it became de rigueur for famous philosophers to open immensely popular schools. By the rise of Roman imperium in the Mediterranean around 250 BC, Hellenic writers were including their intellectual pedigrees, i.e. all the famous teachers they had studied with, in their introductions as a credibility passport. The Romans were avid Hellenophiles and adopted everything Greek wholesale, including the concept of the lyceum-university.

Following the Dark Ages (and not getting into the debate over whether the time was truly dark or not), the modern university emerged in 1088, with the Università di Bologna. It was more of a club than an institution; as Robert S. Rait, the early-20th-century medieval historian, remarked in his book Life in the Medieval University, the original meaning of “university” was “association,” and the word was not used exclusively for education. The main attractions of the university as a concept were that it was secular and that it provided access to books, which were prohibitively expensive at the individual level before the printing press. An examination of the profiles of Swedish foreign students enrolled at Leipzig University between 1409 and 1520 shows that the average male student was destined either for the clergy on a prelate track or was of noble extraction. As the article points out, none of the students who later joined the knighthood formally graduated, but the student body is indicative of the associative nature of the university.

The example of Lady Elena Lucrezia Cornaro Piscopia, the first woman to receive a doctoral degree, awarded by the University of Padua in 1678, illuminates the difference between “university” in its original intent and the institutional concept. Cornaro wrote her thesis independently, taking the doctoral exams and defending her work when she and her advisor felt she was ready. Enrollment and attendance at classes were deemed so unnecessary that she skipped both the bachelor’s and master’s stages. What mattered was that a candidate knew the subject, not the method of acquisition. Even by the mid-19th century, this particular path remained open to remarkable scholars, such as Nietzsche, to whom Leipzig University awarded a doctorate on the basis of his published articles rather than a dissertation and defense.

Education’s institutionalization, i.e. the focus shifting from knowledge to “the experience,” accompanied a broader societal shift. Nietzsche noted in Beyond Good and Evil that humans have an inherent need for boundaries, and that systemic education played a very prominent role in contemporary man’s processing of that need:

There is an instinct for rank which, more than anything else, is a sign of a high rank; there is a delight in the nuances of reverence that allows us to infer noble origins and habits. The refinement, graciousness, and height of a soul is dangerously tested when something of the first rank passes by without being as yet protected by the shudders of authority against obtrusive efforts and ineptitudes – something that goes its way unmarked, undiscovered, tempting, perhaps capriciously concealed and disguised, like a living touchstone. […] Much is gained once the feeling has finally been cultivated in the masses (among the shallow and in the high-speed intestines of every kind) that they are not to touch everything; that there are holy experiences before which they have to take off their shoes and keep away their unclean hands – this is almost their greatest advance toward humanity. Conversely, perhaps there is nothing about so-called educated people and believers in “modern ideas” that is as nauseous as their lack of modesty and the comfortable insolence in their eyes and hands with which they touch, lick, and finger everything [….] (“What is Noble,” 263)

The idea the philosopher pursued was that university attendance conferred the future right to “touch, lick, and finger everything” – a very graphic and curmudgeonly way of saying that a certain demographic assumed unjustified airs.

Given that in Anti-Education Nietzsche lamented the fragmentation of learning into individual disciplines, causing students to lose a sense of the wholeness, the universality, of knowledge, what he hated in the nouveau educated, if we will, was the rise of the pseudo-expert – a person whose knowledge was confined to the bounds of a fixed field but who was revered as omniscient. The applicability to this situation of Socrates’ dialogue with Meno – the one where teacher and student discuss the human tendency to lose sight of the whole in pursuit of individual strands – was unmistakable, something which Nietzsche, a passionate classicist, noticed. The loss of the Renaissance learning model, the trivium and the quadrivium, both of which emphasize an integrated learning matrix, carried with it a belief that excessive specialization was positive; it was a very perverse version of “jack of all trades, master of none.” As Nietzsche bemoaned, the newly educated desired masters without realizing that all they obtained were jacks. In this, he foreshadowed the disaster of the Versailles Treaty in 1919 and the consequences of Woodrow Wilson’s unwholesome belief in “experts.”

The philosopher squarely blamed the model of the realschule, with its clear-cut subjects and predictable exams, for the breakdown between knowledge acquisition and learning. While he excoriated the Prussian government for basing all public education on the realschule, he admitted that the fragmentation of the university into departments and majors occurred at the will of the people. This was a “chicken or the egg” situation: Was the state or broader society responsible for university learning becoming more like high school? This was not a question Nietzsche was interested in answering since he cared more about consequences. However, he did believe that the root was admitting realschule people to university in the first place. Since such a hypothesis is very applicable today, we will examine it in the contemporary American context next.