Eye Candy: travel advice for Dutch citizens

[Image: NOL Dutch travel advice map]

Interesting map, for a few reasons. The United States is in green, which means there are “no special safety risks” to worry about. I take this to mean that as long as you stay out of, say, North Sacramento or East Austin after the sun goes down, you’ll be safe.

The “pay attention, safety risks” label marks quite a big jump in my conceptual understanding of this map. What this warning means is that if you are particularly stupid, you won’t merely end up getting mugged and losing your wallet (as you would in green areas); you will instead end up losing your life or being kidnapped for ransom (or slavery).

This is quite a big jump, but it makes perfect sense, especially if you think about it in terms of inequality and, more abstractly, freedom.

The minimum-wage-induced spur of technological innovation ought not be praised

In a recent article at Reason.com, Christian Britschgi argues that “Government-mandated price hikes do a lot of things. Spurring technological innovation is not one of them.” This is in response to the self-serve kiosks in fast-food restaurants that seem to have appeared everywhere following increases in the minimum wage.

In essence, his argument is that minimum wages do not induce technological innovation. That is an empirical question. I am willing to consider that this is not the most significant of the adjustment margins to large changes in the minimum wage, but the work of Andrew Seltzer on the minimum wage during the Great Depression in the United States suggests that, at the very least, it ought not be discarded. Britschgi does not provide such evidence; he merely cites anecdotal pieces of support. Not that anecdotes are bad, but those he cites come from the kiosk industry – hardly a neutral source.

That being said, this is not what I find contentious about the article. It is the implicit presupposition contained within it: that technological innovation is good.

No, technological innovation is not necessarily good. Firms can use two inputs (capital and labor) and, given prices and return rates, there is an optimal allocation of both. If you change the relative prices of the inputs, you change the optimal allocation. However, absent the regulated price change, the production decisions were already optimal. With the regulated price change, the production decisions are merely the best available under the constraint of working within a suboptimal framework. Thus, you are inducing a rate of technological innovation that is too fast relative to the optimal rate.
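To see the logic concretely, here is a minimal sketch with a Cobb-Douglas technology (the functional form and every parameter value are my own assumptions, purely for illustration):

```python
# Toy Cobb-Douglas cost minimization: Q = K**alpha * L**(1 - alpha).
# The first-order conditions give K/L = (alpha / (1 - alpha)) * (w / r),
# so a mandated wage increase mechanically raises the optimal capital
# intensity. All parameter values are invented, for illustration only.

alpha, r, q = 0.3, 0.05, 100.0   # capital share, rental rate of capital, output target

def optimal_inputs(w):
    """Cost-minimizing (K, L) that produce q at wage w."""
    k_over_l = (alpha / (1 - alpha)) * (w / r)
    # Q = K**alpha * L**(1 - alpha) = (K/L)**alpha * L  =>  L = q / (K/L)**alpha
    labor = q / k_over_l ** alpha
    return k_over_l * labor, labor

for wage in (10.0, 15.0):        # before and after a hypothetical minimum-wage hike
    k, l = optimal_inputs(wage)
    print(f"w={wage:4.1f}: K={k:7.1f}  L={l:5.1f}  K/L={k/l:6.1f}  cost={r*k + wage*l:6.1f}")
```

The wage hike raises the capital-labor ratio (the “innovation”), but total cost rises too: the new allocation is optimal only given the constraint.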

You may think this is a little Luddite of me to say, but it is not. It is a complement to the idea of “skill-biased” technological change (see notably this article by Daron Acemoglu and this one by Bekman et al.). If the regulated wage change affects a particular segment of the labor force (say, the unskilled portion – e.g. those working in fast-food restaurants), it changes the optimal quantity of that labor to hire. Sure, it bumps up demand for certain types of workers (e.g. machine designers and repairmen), but the new mix is still suboptimal. One should not presuppose that technological change is, ipso facto, good. What matters is the “optimal” rate of change. In this case, one can argue that the minimum wage (if pushed up too high) induces a rate of technological change that is too fast and acts to the disfavor of unskilled workers.
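The same arithmetic extends to two labor types. In the sketch below (again, all parameters invented), a wage floor on unskilled labor shifts the cost-minimizing mix toward skilled labor and capital, yet total cost still rises:

```python
# Same logic with two labor types (all parameters invented):
# Q = U**a * S**b * K**c with a + b + c = 1. For Cobb-Douglas, the
# cost-minimizing spending share on each input equals its exponent,
# and unit cost is the product of (price_i / share_i)**share_i.

a, b, c = 0.4, 0.3, 0.3          # unskilled, skilled, capital exponents
w_s, r, q = 25.0, 0.05, 100.0    # skilled wage, rental rate, output target

def cost_and_inputs(w_u):
    unit_cost = (w_u / a) ** a * (w_s / b) ** b * (r / c) ** c
    cost = unit_cost * q
    return cost, a * cost / w_u, b * cost / w_s, c * cost / r

for w_u in (10.0, 15.0):         # unskilled wage before and after the floor
    cost, u, s, k = cost_and_inputs(w_u)
    print(f"w_u={w_u:4.1f}: cost={cost:7.1f}  U={u:5.1f}  S={s:5.1f}  K={k:7.0f}")
```

Demand for skilled labor and capital rises at the expense of unskilled labor – the skill bias – but the constrained optimum is costlier than the unconstrained one.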

As such, yes, the artificial spurring of technological change should not be deemed desirable!

On “strawmanning” some people and inequality

For some years now, I have been interested in the topic of inequality. One of the angles I have pursued is purely empirical: I attempt to improve measurements. This angle has yielded two papers (one still in progress, the other still in want of a home) that reconsider the shape of the U-curve of income inequality in the United States since circa 1900.

The other angle I have pursued is more theoretical and is a spawn of the work of Gordon Tullock on income redistribution. That line of research makes a simple point: some inequalities are, in normative terms, worrisome while others are not. The income inequality stemming from the career choices of a Benedictine monk and a hedge fund banker is not worrisome. The income inequality stemming from being a prisoner of one’s birth, or from rent-seekers shaping rules in their favor, is worrisome. Moreover, some interventions meant to remedy inequalities might actually make things worse in the long run (some articles even find that taxing income for the sake of redistribution may increase inequality if certain conditions are present – see here). I have two articles on this (one forthcoming, the other already published) and a paper still in progress (with Rosolino Candela), but they are merely an extension of the aforementioned work of Gordon Tullock and of other economists like Randall Holcombe, William Watson and Vito Tanzi. After all, the point that a “first, do no harm” policy toward inequality might be more productive is not novel (all it needs is a deep exploration and a robust exposition).

Notice that there is an implicit assumption in this line of research: inequality is a topic worth studying. This is why I am annoyed by statements like those that Gabriel Zucman made to ProMarket. When asked if he was getting pushback for his research on inequality (which is novel and very important), Zucman answers the following:

Of course, yes. I get pushback, let’s say not as much on the substance oftentimes as on the approach. Some people in economics feel that economics should be only about efficiency, and that talking about distributional issues and inequality is not what economists should be doing, that it’s something that politicians should be doing.

This is “strawmanning”. There is no economist who thinks inequality is not a worthwhile topic. Literally none. True, economists’ interest in the topic may have waned for some years, but it never became a secondary topic. Major articles were published in major journals throughout the 1990s (often identified as a low point in the literature) – most of them groundbreaking enough to propel the topic forward a mere decade later. This should not be surprising given the heavy ideological and normative ramifications of studying inequality. The topic is so important to all the social sciences that no one disregards it. As such, who are these “some people” that Zucman alludes to?

I assume that “some people” are strawman substitutes for those who, while agreeing that inequality is an important topic, disagree with the policy prescriptions and the normative implications that Zucman draws from his work. The group most “hostile” to the arguments of Zucman (and others such as Piketty, Saez, Atkinson and Stiglitz) is the one that stems from the public choice tradition. Yet economists in the public-choice tradition probably give distributional issues a more central role in their research than Zucman does. They care about institutional arrangements and the rules of the game in determining outcomes. The very concept of rent-seeking, so essential to public choice theory, relates to how distributional coalitions can emerge to shape the rules of the game in ways that redistribute wealth from X to Y and are socially counterproductive. As such, rent-seeking is essentially a concept that ties distributional issues intimately to efficiency.

The argument by Zucman to bolster his own claim is one of the reasons why I am cynical about the times we live in. It denotes a certain tribalism that demonizes the “other side” in order to avoid engaging with it. That tribalism, I believe (but I may be wrong), is more prevalent now than in the not-so-distant past. Strawmanning only makes the problem worse.

The great global trend toward equality of well-being since 1900

Some years ago, I read The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet by Indur Goklany. It was my first exposure to the claim that, globally, there has been a long trend toward equality of well-being. The observation by Goklany that had a dramatic effect on me was that many countries which were, at the time of his writing, as rich (in incomes per capita) as Britain in 1850 had life expectancy and infant mortality levels far better than those of 1850 Britain. Ever since, I have accumulated statistics in that regard, and I often share them with my students when the time comes to “dispel” myths regarding the improvement in living standards since circa 1800 (note: people are generally unable to properly grasp the actual improvement in living standards).

Some years later, I discovered the work of Leandro Prados de la Escosura, a cliometrician who (I think I told him so when I met him) deeply influenced my work on the measurement of living standards, and who wrote the paper I will discuss here. His paper, and his work in general, shows that global income inequality has fallen since the 1970s. That is largely the result of the economic rise of India and China (the world’s two largest antipoverty programs).

[Figure 1, Prados de la Escosura]

However, when he extends his measurements to include life expectancy and schooling in order to capture “human development” (the idea that development is not only about incomes but about the ability to exercise agency – i.e. the acquisition of positive liberty), the collapse in “human development” inequality (i.e. in well-being) precedes the reduction in global income inequality by many decades. Indeed, the collapse started around 1900, not 1970!

[Figure 2, Prados de la Escosura]
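A toy illustration of why the two series can diverge (this is my own sketch, with invented numbers and a UN-HDI-style index, not Leandro’s actual methodology): because dimensions like life expectancy and schooling are bounded, a composite well-being index compresses cross-country gaps that remain wide in incomes alone.

```python
# Toy illustration (NOT Prados de la Escosura's actual methodology, and
# all numbers are invented): inequality in a composite well-being index
# can be far lower than income inequality, because bounded dimensions
# such as life expectancy converge much faster than incomes.
import math

def gini(values):
    """Unweighted Gini coefficient of a list of positive values."""
    xs = sorted(values)
    n = len(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * sum(xs))

# (income per capita, life expectancy, years of schooling) for toy countries
countries = [(40_000, 80, 13), (12_000, 74, 10), (4_000, 70, 8), (1_500, 62, 5)]

def well_being(income, life, school):
    # UN-HDI-style normalization and geometric mean, used here as an assumption
    i = (math.log(income) - math.log(100)) / (math.log(75_000) - math.log(100))
    l = (life - 20) / (85 - 20)
    s = school / 15
    return (i * l * s) ** (1 / 3)

print(f"Gini, incomes alone:   {gini([c[0] for c in countries]):.3f}")   # ~0.54
print(f"Gini, composite index: {gini([well_being(*c) for c in countries]):.3f}")  # ~0.14
```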

In reading Leandro’s paper, I remembered the work of Goklany, which had sowed the seeds of this idea in my mind. Nearly a decade after reading Goklany’s work, and well after I fully accepted this fact as valid, I remain stunned by its implications. You should be too.

Nightcap

  1. The new age of great power politics John Bew, New Statesman
  2. Before We Cure Others of Their False Beliefs, We Must First Cure Our Own Christopher Preble, Cato Unbound
  3. Libertarians (FDP) ruin coalition talks in Germany Christian Hacke, Deutsche Welle
  4. The Rich You Will Always Have With You Brandon Turner, Law & Liberty

On the “tea saucer” of income inequality since 1917

I often disagree with the many details that underlie the arguments of Thomas Piketty and Emmanuel Saez. That being said, I am also a great fan of their work, and of them in general. In fact, both have made contributions to economics that I would be envious to equal. To be fair, their U-curve of inequality is pretty much a well-confirmed fact by now: everyone agrees that the period from 1890 to 1929 was a high point of inequality, that inequality leveled off until the 1970s, and that it has picked up again since.

Nevertheless, while I am convinced of the curvilinear shape of the evolution of income inequality in the United States as depicted by Piketty and Saez, I am not convinced by the amplitudes. In their 2003 article, the U-curve of inequality really looks like a “U” (see the image below). Since that article, many scholars have investigated the extent of the increase in inequality post-1980 (circa). Many have attenuated the increase, but they still find an increase (see here, here, here, here, here, here, here, here, here). The problem is that everyone has been considering the increase – i.e. the right side of the U-curve. Little attention has been devoted to the left side, even though that is where data problems should be considered most carefully for the generation of a stylized fact. This is the contribution I have been coordinating and working on for the last few months alongside John Moore, Phil Magness and Phil Schlosser.

[Figure: the Piketty-Saez (2003) U-curve]

To arrive at their proposed inequality series, Piketty and Saez used the IRS Statistics of Income (SOI) to derive top income fractiles. However, the IRS SOI have many problems. The first is that between 1917 and 1943, there are many years in which less than 10% of the potential tax population filed a return. This prohibits the use of a top 10% income share in many years unless an adjustment is made. The second is that the IRS reported net income prior to 1943 and adjusted gross income thereafter. As such, linking the post-1943 series with the pre-1943 series requires an additional adjustment. Piketty and Saez made some seemingly reasonable assumptions, but these have never been put to the test for sensitivity and robustness. This leaves aside issues of data quality (I am not convinced the IRS data are very good, as most incomes were self-reported pre-1943, a period with wildly varying tax rates). The question here is: how good are the assumptions?

What we did is verify each assumption to check its validity. The first one we tackle is the adjustment for the low number of returns. To make their adjustment, Piketty and Saez used the fact that single and married households filed in different proportions relative to their total populations. The idea is that the single-to-married filing ratios from a year with a large number of returns can be used to adjust the series. The year they used is 1942. This is problematic: 1942 is a war year with self-reporting, when large numbers of young American males were abroad fighting. Using 1941, the last US peace year, instead shows dramatically different ratios, and using those ratios knocks a few points off the top 10% income share. Why did they use 1942? Their argument was that there was simply not enough data to make the correction in 1941. They point to a special tabulation in the 1941 IRS-SOI of 112,472 1040A forms from six states, which was not deemed sufficient to make the corrections. However, later in the same document, there is a larger and sufficient sample of 516,000 returns from all 64 IRS collection districts (roughly 5% of all forms). By comparison, the 1942 sample Piketty and Saez used for the correction had only 455,000 returns. Given the war year and the sample size, we believe 1941 is the better year for making the adjustment.
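A stylized sketch of what is at stake in the choice of reference year (this is not the exact Piketty-Saez procedure, and every number below is invented): filing rates differ between single and married tax units, so scaling up filed returns with one year’s rates rather than another’s changes the estimated universe of tax units, and hence where the top 10% cutoff falls.

```python
# Stylized version of the kind of adjustment at stake (NOT the exact
# Piketty-Saez procedure; every number below is invented). The reference
# year used to scale up filed returns changes the estimated universe of
# tax units, and hence the top-10% cutoff.

returns = {"single": 1.2e6, "married": 4.0e6}   # filed returns in a low-filing year

# Hypothetical filing rates (filers / total units) in two candidate reference years:
ref_rates = {
    "1941 (peacetime)": {"single": 0.20, "married": 0.45},
    "1942 (war year)":  {"single": 0.28, "married": 0.50},
}

for year, rates in ref_rates.items():
    total_units = sum(returns[k] / rates[k] for k in returns)
    print(f"{year}: est. tax units = {total_units / 1e6:5.2f}M, "
          f"top-10% cutoff = {0.10 * total_units / 1e6:4.2f}M units")
```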

Second, we also questioned the smoothing method used to link the net-income-based series with the adjusted-gross-income-based series (i.e. the pre-1943 and post-1943 series). The reason is that the implied adjustment for deductions made by Piketty and Saez is actually larger than all the deductions claimable under the definition of adjusted gross income – a sign of overshooting on their part. Using the limited data available on deductions by income group, and making some (very conservative) assumptions to move further back in time, we found that adjusting for “actual deductions” yields a lower level of inequality than the fixed multipliers Piketty and Saez applied pre-1943.
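For the flavor of the overshoot, here is a toy comparison (invented magnitudes, not our actual estimates) of a fixed-multiplier conversion against one that adds only the deductions eligible under the AGI definition:

```python
# Toy comparison (invented magnitudes, not our estimates) of two ways to
# convert pre-1943 net income to an AGI-style concept:
# (a) a fixed blow-up multiplier, and
# (b) adding only the deductions actually eligible under the AGI definition.

net_income_top = 10.0e9        # reported net income of a top group, $
eligible_deductions = 0.6e9    # deductions claimable under the AGI definition, $
fixed_multiplier = 1.10        # hypothetical multiplier of the kind applied pre-1943
total_income = 60.0e9          # toy income denominator for the whole population, $

print(f"top share, fixed multiplier : {net_income_top * fixed_multiplier / total_income:.1%}")
print(f"top share, actual deductions: {(net_income_top + eligible_deductions) / total_income:.1%}")
# The multiplier imputes $1.0B of deductions where only $0.6B were eligible,
# mechanically inflating the measured top share (18.3% vs 17.7% here).
```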

Third, we question their justification for not using the Kuznets income denominator. They argued that Kuznets’ series yielded an implausible figure because, in 1948, its use implied a greater income for non-filers than for filers. However, this is not true of all years. In fact, it is only true after 1943. Before 1943, the income of non-filers is equal in proportion to the one they use post-1944 to impute the income of non-filers. This is largely the result of the change in accounting definitions: incomes were reported as net income before 1943 and as gross income thereafter. This matters because the stylized fact of a pronounced U-curve is heavily sensitive to the assumption made about the denominator.
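The denominator sensitivity is simple arithmetic; a toy example (all figures hypothetical):

```python
# The same top-group income yields a visibly different "top share" depending
# on the income denominator chosen. All figures are hypothetical.

top10_income = 15.0e9
denominators = {
    "Piketty-Saez style denominator": 38.0e9,
    "Kuznets style denominator":      44.0e9,
}
for name, total in denominators.items():
    print(f"{name}: top 10% share = {top10_income / total:.1%}")
# prints ~39.5% vs ~34.1%: several points of "inequality" hinge on the denominator
```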

These three adjustments are pretty important for the overall results (see the image below). The pale blue line is that of Piketty and Saez as depicted in their 2003 paper in the Quarterly Journal of Economics. The other blue line just below it shows the effect of deductions only (the adjustment for missing returns affects only the top 10% income share). All the other lines that mirror these two below (with the exception of the darkest blue line, which is the original Kuznets inequality estimate) compound our corrections with three potential corrections for the denominator. The U-curve still exists, but it is not as pronounced. When you take the adjustments made by Mechling et al. (2017) and Auten and Splinter (2017) for the post-1960 period (green and red lines) and link them with ours, you can still see the curvilinear shape, but it looks more like a “tea saucer” than a pronounced U-curve.

In a way, I see this as a simultaneous complement to the work of Richard Sutch and to the work of Piketty and Saez: the U-curve still exists, but the timing and pattern are slightly more representative of history. This was a long paper to write (and it is a dry read, given the amount of methodological discussion), but it was worth it to improve the state of our knowledge.

[Figure: adjusted top income share series]