Tech’s Ethical Dark Side

An article at the NY Times opens:

The medical profession has an ethic: First, do no harm.

Silicon Valley has an ethos: Build it first and ask for forgiveness later.

Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.

Far be it from me to tell people to avoid spending time considering ethics. But something seems a bit silly to me about all this. The “experts” are trying to teach students the consequences of the complex interactions between the services they haven’t yet created and the world as it doesn’t yet exist.

My inner cynic sees this “ethics of tech” movement as a push to have software engineers become nanny-state-like social engineers. “First, do no harm” is not the right standard for tech (which isn’t to say “do harm” is). Before 2016, Facebook and Twitter were praised for their positive contribution to the Arab Spring. After our dumb election, we educated western elites threw up our hands and said, “it’s an ethical breach to reduce our power!” Freedom is messy, and “do no harm” privileges the status quo.

The root problem is that computer services interact with the public in complex ways. Recognizing this is important, and an ethics class ought to grapple with that complexity and the resulting uncertainty in how our decisions (including design decisions) can affect the well-being of others. My worry is that a sensible call to think about these issues will be co-opted by power-hungry bureaucrats. (There really ought to be ethics classes on the “Dark Side of Ethical Judgments of Others and Education Policy”.)

I don’t doubt that the motivations of the people involved are basically good, but I’m deeply skeptical of their ability to do much more than offer retrospective analysis as particular events become less relevant. History is important, but let’s not trick ourselves into thinking the lessons of 2016 Facebook will apply neatly to whatever network we’re on in 2026.

It hardly seems reasonable to insist that Facebook be put in charge of what we get to see. Some argue that’s already the world we live in, and they aren’t completely wrong. But that authority is still determined by the voluntary, individual decisions of users with access to plenty of alternatives. People aren’t always as thoughtful and deliberate as I’d like, but that doesn’t mean I should step in and be a thoughtful and deliberate Orwellian figure on their behalf.


On the popularity of economic history

I recently engaged in a discussion (a twittercussion) with Leah Boustan of Princeton over the “popularity” of economic history within economics. As one can see from the purple section of the figure below, it is as popular as those hard candies that grandparents give out on Halloween (to be fair, I like those candies just as I like economic history). More importantly, the share seems to be smaller than at its 1980s peak. It also seems that the Nobel Prize going to Fogel and North had literally no effect on the subfield’s popularity. Yet I keep hearing that “economic history is back”. After all, the John Bates Clark Medal went to Donaldson of Stanford this year, which should confirm that economic history is a big deal. How can this be reconciled with the figure depicted below?


As I explained in my twittercussion with Leah, I think what has become popular is the use of historical data. Economists have realized that if some time is spent in the archives collecting historical data, great datasets can be assembled. However, they do not necessarily consider themselves “economic historians”, and as such they do not use the JEL code associated with history. This is an improvement for a field in which Arthur Burns (the former Fed chair) supposedly said during the 1970s that we needed to look at history to better shape monetary policy. And by history, he meant the 1950s. However, while there are advantages, there is an important danger that is often left aside.

The creation of a good dataset has several advantages. The main one is that it increases time coverage. By increasing the time coverage, you can “tackle” the big questions and go for the “big answers” through the generation of stylized facts. Another advantage (and this is the one that summarizes my whole approach) is that historical episodes can provide neat testing grounds that give us a window into important economic issues. My favorite example is the work of Petra Moser at NYU-Stern. Without going into too much detail (because her work was my big discovery of 2017), she used a few historical examples, which she painstakingly detailed, in order to analyze the effect of copyright laws. Her results have important ramifications for debates over “science as a public good” versus “science as a contribution good” (see the debates between Paul David and Terence Kealey in Research Policy on this point).

But these two advantages must be weighed against an important disadvantage, which Robert Margo has warned against in a recent piece in Cliometrica. When one studies economic history, one must keep in mind that two things must be accomplished simultaneously: to explain history through theory and to bring theory to life through history (this is not my phrase, but rather that of Douglass North). To do so, one must study a painstaking amount of detail to ascertain the quality and reliability of the sources used. In considering so many details, one can easily get lost or even fall prey to one’s own priors (i.e. I expect to see one thing and, upon seeing it, I ask no questions). To avoid this trap, there must be a “north star” to act as a guide. That star, as I explained in an earlier piece, is a strong and general understanding of theory (or a strong intuition for economics). To create that star while giving attention to details is an incredibly hard task, which is why I have argued in the past that “great” economic historians (Douglass North, Deirdre McCloskey, Robert Fogel, Nathan Rosenberg, Joel Mokyr, Ronald Coase (because of the lighthouse piece), Stephen Broadberry, Gregory Clark, etc.) take a longer time to mature. In other words, good economic historians are projects with a long “time to build” problem (sorry, bad economics joke). The downside is that when this maturation does not happen, there are risks of ending up with invalid results that are costly and hard to contest.

Just think about the debate between Daron Acemoglu and David Albouy on the colonial origins of development. It took Albouy more than five years to produce the results that cast doubt on Acemoglu, Johnson, and Robinson’s 2001 paper. Albouy clearly expended valuable resources to get at the “details” behind the variables: there was miscoding of Niger and Nigeria, and there were misunderstandings of what types of mortality rates were used. This was hard work, and it was probably only deemed a valuable undertaking because the Acemoglu-Johnson-Robinson paper was such a big deal (i.e. the net gains were pretty big if they paid off). Yet, to this day, many people are entirely unaware of the Albouy rebuttal. This can be seen very well in the image below, showing annual citations of the Acemoglu-Johnson-Robinson paper: there seems to be no effect from the massive rebuttal (disclaimer: Albouy convinced me that he was right).
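As an aside on how easily such miscoding can happen: here is a purely hypothetical sketch (not Albouy’s actual code, with made-up numbers) of how a sloppy merge key can silently conflate Niger and Nigeria in a cross-country dataset.

```python
# Hypothetical illustration of a Niger/Nigeria coding error; all values invented.
import pandas as pd

mortality = pd.DataFrame({"country": ["Niger", "Nigeria"],
                          "settler_mortality": [400.0, 2004.0]})   # made-up values
outcomes = pd.DataFrame({"country": ["Niger", "Nigeria"],
                         "log_gdp_pc": [6.2, 7.5]})                # made-up values

# A truncated key (say, the first five characters of the name) collapses both
# countries to "Niger", and the merge quietly produces a cross join.
for df in (mortality, outcomes):
    df["key"] = df["country"].str[:5]

bad = outcomes.merge(mortality[["key", "settler_mortality"]], on="key")
print(len(bad))   # 4 rows: each country is matched to BOTH mortality rates

# An exact, validated merge catches the problem instead of hiding it.
good = outcomes.merge(mortality[["country", "settler_mortality"]],
                      on="country", validate="one_to_one")
print(good)
```

The truncated key produces a silent cross join; asking pandas to validate the merge turns the coding error into a loud failure rather than a subtle bias.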


And it really does come down to small details like those underlined by Albouy. Let me give you another example, taken from my own work. Within Canada, the French minority is significantly poorer than the rest of the country. From my cliometric work, we now know that they were poorer than the rest of Canada and North America as far back as the colonial era. This is a stylized fact underlying a crucial question today (i.e. why are French-Canadians relatively poor?), and that stylized fact requires an explanation. Obviously, institutions are a great place to look. One of the most interesting institutions is seigneurial tenure, basically a “lite” version of feudalism in North America that was present only in the French-settled colonies. Some historians and economic historians have argued that the institution had no effect on variables like farm efficiency.

However, some historians noticed that, in censuses, the French reported in different units than the English settlers within the colony of Quebec. To correct for this metrological problem, historians made county-level corrections. With those corrections, the aforementioned institution has no statistically significant effect on yields or output per farm. However, as I note in this piece that got a revise-and-resubmit from Social Science Quarterly (revised version not yet online), county-level corrections mask the fact that the French were more willing to move to predominantly English areas than the English were to move to predominantly French areas. In short, the distribution was skewed. Once you correct the data on the basis of ethnic composition rather than at the county level (i.e. the same correction for the whole county), you end up with a statistically significant negative effect on both output per farm and yields per acre. In short, we were “measuring away” the effect of the institution. All from a very small detail about distributions. Yet that small detail had supported a stylized fact that the institution did not matter.
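To see how a county-wide correction can “measure away” a real effect, here is a toy simulation. The conversion factor, the built-in -2 effect, and all other numbers are invented for illustration; this is a sketch of the aggregation problem, not a replication of the paper.

```python
# Toy version of the unit-correction problem; every number here is invented.
import numpy as np

rng = np.random.default_rng(0)
TO_ENGLISH = 0.7   # hypothetical factor converting French-reported units to English units

rows = []
for _ in range(40):                       # counties
    share_french = rng.uniform(0.1, 0.9)  # ethnically mixed counties (skewed settlement)
    for _ in range(50):                   # farms per county
        french = rng.random() < share_french
        true_out = 10.0 - 2.0 * french + rng.normal(0, 1)  # built-in effect of tenure: -2
        reported = true_out / TO_ENGLISH if french else true_out
        rows.append((french, share_french, reported))

french, share, reported = map(np.array, zip(*rows))

def gap(y):
    """Mean output gap, French farms minus English farms."""
    return y[french].mean() - y[~french].mean()

# County-level correction: one blended factor applied to every farm in the county.
# Minority farms get the wrong conversion, distorting the estimated gap.
y_county = reported * (TO_ENGLISH * share + (1 - share))

# Ethnicity-level correction: convert each farm by who actually reported it.
y_ethnic = np.where(french, reported * TO_ENGLISH, reported)

print("true gap:                 -2.0")
print(f"county-level correction:  {gap(y_county):+.2f}")
print(f"ethnic-level correction:  {gap(y_ethnic):+.2f}")
```

Converting each farm according to who reported it recovers the built-in negative effect; applying one blended factor to the whole county does not, because the skewed distribution of minorities across counties means many farms get the wrong units.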

This is the risk that Margo speaks about, illustrated here in two examples. Economists who use history merely as a tool may end up making dramatic mistakes that lead to incorrect conclusions. I take this “juicy” quote from Margo (which Pseudoerasmus highlighted for me):

[EH] could become subsumed entirely into other fields… the demand for specialists in economic history might dry up, to the point where obscure but critical knowledge becomes difficult to access or is even lost. In this case, it becomes harder to ‘get the history right’

Indeed, unfortunately.

On the point of quantifying in general and quantifying for policy purposes

Recently, I stumbled on this piece in the Chronicle by Jerry Muller. It made my blood boil. In the piece, the author basically argues that, in the world of education, we are fixated on quantitative indicators of performance, and that this fixation has led us to miss (or forget) some important truths about education and the transmission of knowledge. I wholeheartedly disagree, because the author is conflating two things.

We need to measure things! Measurements are crucial to our understanding of causal relations and outcomes. Like Diane Coyle, I am a big fan of a “dashboard” of indicators to get an idea of what is broadly happening. However, I agree with the author that very often statistics lose their entire meaning. And that happens when we start targeting them!

Once a variable becomes the object of a target, we act in ways that increase that variable. As soon as it is selected, we modify our behavior to achieve the fixed target, and the variable loses some of its meaning. This is known as Goodhart’s law, whereby “when a measure becomes a target, it ceases to be a good measure” (note: it also looks a lot like the Lucas critique).
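A toy simulation of Goodhart’s law makes the mechanism concrete. All the numbers below (noise levels, the strength of gaming) are invented; the point is only that a metric which tracks what we care about stops doing so once it is rewarded.

```python
# Toy illustration of Goodhart's law; all magnitudes are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
quality = rng.normal(0, 1, n)        # what we actually care about
gaming_skill = rng.normal(0, 1, n)   # ability to inflate the metric, unrelated to quality

# Before targeting: the metric is a noisy but informative measure of quality.
metric_before = quality + rng.normal(0, 0.5, n)

# After targeting: rewards flow to the metric, so effort pours into inflating
# it, and the gaming term swamps the quality signal.
metric_after = quality + 3.0 * np.maximum(gaming_skill, 0) + rng.normal(0, 0.5, n)

print(f"corr(metric, quality) before targeting: {np.corrcoef(metric_before, quality)[0, 1]:.2f}")
print(f"corr(metric, quality) after targeting:  {np.corrcoef(metric_after, quality)[0, 1]:.2f}")
```

Nothing about how the metric is computed has changed; what changed is that effort now flows into inflating it.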

Although Goodhart made this point in the context of monetary policy, it applies to any sphere of policy, including education. When an education department decides that this is the metric it cares about (e.g. completion rates, minority admissions, average grade point, completion times, balanced curricula, professor-to-pupil ratios, etc.), it induces a change in behavior that alters the significance carried by the variable. This is not an original point. Just go to Google Scholar and type “Goodhart’s law and education” and you end up with papers such as these two (here and here) that make exactly the point I am making here.

In his Chronicle piece, Muller actually makes note of this without realizing how important it is. He notes that “what the advocates of greater accountability metrics overlook is how the increasing cost of college is due in part to the expanding cadres of administrators, many of whom are required to comply with government mandates” (emphasis mine).

The problem he is complaining about is not metrics per se, but rather the effects of having policy-makers decide which metric is relevant. This is a problem of selection bias, not measurement. If statistics are collected without the intent of serving as a benchmark for the attribution of funds or special privileges (i.e. if there are no incentives to change the behavior that affects the reporting of a particular statistic), then there is no problem.

I understand that complaining about a “tyranny of metrics” is fashionable, but in that case the fashion looks like crocs (and I really hate crocs) with white socks.

What should universities do?

The new semester is here so it’s time for me to figure out what the hell I’m supposed to be doing in the weird world of modern American university life. Roughly speaking, the answer is going to be “do the stuff that professors do to help universities do what universities do.” So what do universities do? What are they supposed to do?

Universities occupy a few different niches in society. I’m usually tempted to think of a university like a business. And in that framework, I justify my salary by providing something of value to my students. At my school, something like 90% of the operating budget comes out of students’ pockets.

But that’s an overly narrow view. Students pay to go to school because they expect they’ll get value from it, but they also go to school because them’s the rules: if you want to enter adult society, university is the front gate. In this framework, I justify my salary by serving as a gatekeeper. Even though it’s students paying, the (nebulous) principal I’m obliged to is the collection of people already inside the walls.

But wait! There’s more! Universities are (in no particular order):

  • A repository of knowledge,
  • A generator of new knowledge,
  • A place people go to learn,
  • A place people go to prove themselves,
  • A place people make friends and have fun (in a way that may be hard to replicate),
  • A business (engaging in mutually beneficial exchange),
  • A special interest group,
  • An institution that holds a particular (privileged) position in a wider cultural landscape.

Any one of these functions is a can of worms in its own right. When we start to consider tradeoffs between each function (and the many less visible functions I’ve surely missed), it gets downright intractable. I’m going to focus on the student-focused aspects of university life.

The mainstream view:

University is a place students get educated. This education helps them get jobs because employers value it. Students might also learn things that help them be better citizens.

The mainstream view doesn’t seem far off from what I’ve got in mind until you get your hands dirty and start disentangling what that view says. Here are three big problems inherent in that mainstream view:

  1. The job-training myth.
  2. The definition problem.
  3. The one-size-fits-all problem.

The job-training myth

We’re told that students go to school to learn valuable skills. I think that’s true, but not in the usual way. Any specific skills students learn in school are a) incredibly general, or b) out of date. My students might learn some interesting ways of looking at the world (general knowledge), but a lot of what I teach is completely useless in the workforce (“Johnson, draw me a demand curve, stat!”). But students do learn valuable skills incidentally. They learn to manage their time (ideally), how to be conscientious, and in general they’re socialized so that they can fit in with adult society.

Lately I’ve been thinking of college as a form of upfront consulting. Instead of going to school when you’ve got a specific problem to solve, you go when you’re young and have nothing better to do. Since you’re getting the consulting before you know what sort of problems you’ll face in the future, we can’t possibly give you exactly the right bundle of knowledge.

College exposes students to lots of different ideas that might combine in unexpected ways. Your class in underwater basket weaving might seem like a waste of time until, some day 30 years later, you are trying to solve a problem that turns out to make a lot more sense if you think of it like wet wicker (I’m looking at you, civil engineers!).

Some of what I (and my colleagues) do helps prepare students for their careers, but mostly I’m trying to help them be better–better thinkers, better able to understand and appreciate, better able to enjoy life.

The definition problem

The word “Education” means a lot of things to a lot of people. More often than not, people use the word without being clear about what they mean. Often it means “job training.” Sometimes it means “enlightening.” Other times it means “making you agree with me.” In practice, it means surviving enough classes that you get a piece of paper indicating as much.

It should be recognized as a vague and nebulous word instead of being pigeonholed. Education isn’t a binary state (I was ignorant, now I’m educated). It’s helpful to think of people as being more or less educated, but the state of your education isn’t something we can objectively compare with the state of mine.

There are lots of important but nebulous things in our lives: health, happiness, moral worth. Their vagueness makes them difficult, but it isn’t going away.

It isn’t hard to convince people that education is nebulous, but it is hard to get people to behave as though they really understand that.

Homogenization and commodification

Once people start thinking of education as some objective thing we can pull off a shelf and give to someone, we run into the real problems. This unexamined view leads to bureaucracies that attempt to standardize and commodify education.

Don’t get me wrong, I get why people would try to do this. We want everyone to get education (and moral training, and good health, and…). And as long as we’re worried about that, we’re going to worry about making sure everyone gets the best education possible. But “the best” gives the false impression that there’s one right answer.

A top-down approach isn’t the right way to achieve the goal of widespread education. Attempting to systematically scale up education provision kills the goose that lays the golden eggs. We should fight against attempts to commodify university (which currently happens via accreditation-as-gateway-to-subsidy and the general expansion of bureaucracy through administration).

So what should universities do?

There are different margins on which we can justify our existence, but it’s not obvious how to balance our tasks: teaching, researching, advocating, etc. Given the high degree of uncertainty, I’d argue for pluralism: different schools (and professors) should be trying different things. As universities adapt to the future, it’s important that they don’t all try to adapt in the same way at the same time.

I think a big part of the problem is that we’ve been too successful at rent seeking. All money/privileges/goodies come with strings attached, and more money comes with more strings. We’re always going to get a little tangled up in those strings, but in the last couple of generations we’ve hamstrung ourselves. Accreditation and assessment have become the most important things a modern university does, which distracts from our more fundamental goals.

A bottom-up approach doesn’t mean less education, just different education. A more modest education system would change the mix of costs and benefits faced by stakeholders. Employers might rely less on degree signaling, which means hiring managers and potential employees would exercise more judgment in sending and evaluating quality signals. I don’t know exactly what would happen, but flexibility is valuable given the nebulous goals universities are supposed to be pursuing.

But at the moment we seem to be in an equilibrium. Students are expected to go to school, schools are expected to deliver on promises they can’t really fulfill, and we go through the motions of keeping schools accountable in a way that basically misses the point.

So what will I do this semester? I’m going to keep talking about interesting stuff to students. I’m going to keep working towards getting tenure. But I’m also going to quietly subvert attempts to commodify university.


Paul Romer, the World Bank and Angus Deaton’s critique of effective altruism


Last week Paul Romer crashed out of his position as Chief Economist at the World Bank. He had already been isolated from the rest of the World Bank’s researchers for criticizing the reliability of their data. There were several bones of contention, including his suggestion that changes to the Bank’s Doing Business rankings may have unfairly penalized Chile under its current social democratic government. Romer’s allergic reaction to the World Bank’s internal research processes has wider implications for how we think about policy research in international NGOs.


A Radical Take on Science and Religion

An obscure yet controversial engineer named Bill Gaede put out a video last year, inspired by Martin Luther, spelling out 95 theses against the current scientific consensus in physics. I’m in no position to evaluate his views on physics, but I find his take on the difference between science and religion fascinating. In this post I’ll try to condense some of his views on that narrow topic. You can watch the whole video here. Fair warning: his presentation style is rather eccentric. I find it quirky and fun; you may feel differently.


A description is a list of characteristics and traits. It answers the questions what, where, when, and how many.

An explanation is a discussion of causes, reasons, and mechanisms. It answers the question why.

An opinion is a subjective belief. What counts as “evidence”, “proof”, or “truth” is an opinion.

Science is the systematic attempt at providing explanations. Why do planets orbit stars? Why are some people rich and others poor? Why is there something instead of nothing? All questions that can be answered (to varying degrees) with science.

Note well that the experiments and observations per se are not science. The scientist takes those results for granted: they form his hypothesis, the assumptions that underlie a thesis or explanation. Technicians and assistants may carry out the observations and experiments. But the actual science is the explanation.

Religion is the systematic attempt at shaping opinions. Religion is not mere faith—the belief in things without evidence. Religion works through persuasion—the use of science and faith to appeal to your subjective beliefs about evidence and proof and truth.

Science is not about persuading the audience. Good science is about providing consistent, logically sound explanations. An individual may have many religious reasons for their incredulity, but religious skepticism is not the concern of the scientist. The scientist is concerned only with logically valid explanations.

How to fix the journal model? 

Disclaimer: I am not (yet!) published in any peer-reviewed journals.

A colleague recently made an interesting proposal to improve the academic journal model: have referees publish their reviews after some fixed period of time. I am sympathetic to the idea, as I have always found secrecy to be a strange thing in decision making. What’s the point of ‘blind’ reviews anyway? From conversing with those with more experience in the field, it is rarely a ‘double-blind’ process but rather a de facto one-way ‘blind’ process, especially outside the major journals.

My counter is: why not just get rid of journals altogether? Why not just publish via SSRN and similar websites? Journals seem to have maintained their existence in the digital age as a means of quality assurance, but there is still plenty of junk in the top journals. Surely we can come up with better ways than relying on a few referees? Even citation counts, I think, would be a better measure of a paper’s value.