The minimum wage-induced spur of technological innovation ought not to be praised

In a recent article at Reason.com, Christian Britschgi argues that “Government-mandated price hikes do a lot of things. Spurring technological innovation is not one of them.” This is in response to the self-serve kiosks in fast-food restaurants that seem to have appeared everywhere following increases in the minimum wage.

In essence, his argument is that minimum wages do not induce technological innovation. That is an empirical question. I am willing to consider that this is not the most significant of the adjustment margins to large changes in the minimum wage. However, the work of Andrew Seltzer on the minimum wage during the Great Depression in the United States suggests that, at the very least, it ought not to be discarded. Britschgi does not provide such evidence; he merely cites anecdotal support. Not that anecdotes are bad, but those that are cited come from the kiosk industry – hardly a neutral source.

That being said, this is not what I find contentious about the article. It is the implicit presupposition contained within: that technological innovation is good.

No, technological innovation is not necessarily good. Firms can use two inputs (capital and labor) and, given input prices (the wage and the rental rate of capital), there is an optimal allocation of both. If you change the relative price of the inputs, you change the optimal allocation. However, absent the regulated price change, the production decisions are optimal. With the regulated price change, the production decisions are merely the best available under the constraint of working within a suboptimal framework. Thus, you are inducing a rate of technological innovation that is too fast relative to the optimal rate.
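To make the relative-price logic concrete, here is a minimal sketch (my own illustration with made-up numbers, not anything from the article) of a Cobb-Douglas firm choosing its cost-minimizing input mix. Raising the mandated wage mechanically raises the optimal capital-labor ratio, even though the underlying technology has not changed:

```python
# A minimal sketch (illustrative parameters): cost-minimizing input mix for
# a Cobb-Douglas firm, Q = K^a * L^(1-a). The first-order conditions give
# K/L = (a / (1 - a)) * (w / r), so a mandated wage increase mechanically
# raises the optimal capital intensity.

def optimal_inputs(q, a, w, r):
    """Cost-minimizing (K, L) for output target q, given prices w and r."""
    k_over_l = (a / (1 - a)) * (w / r)  # from equating the MRTS to w/r
    labor = q / (k_over_l ** a)         # since q = (K/L)^a * L
    return k_over_l * labor, labor

q, a, r = 100.0, 0.3, 1.0
for w in (10.0, 15.0):                  # e.g. a $10 -> $15 mandated wage
    k, l = optimal_inputs(q, a, w, r)
    print(f"w = {w:4.1f}: K = {k:7.2f}, L = {l:6.2f}, K/L = {k / l:4.2f}")
```

Running this shows the firm substituting toward capital (a higher K/L and fewer workers) purely because the relative price changed, which is exactly the kiosk margin at issue.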

You may think that this is a little Luddite of me to say, but it is not. It is a complement to the idea of “skill-biased” technological change (see notably this article by Daron Acemoglu and this one by Bekman et al.). If the regulated wage change affects a particular segment of the labor force (say, the unskilled portion – e.g. those working in fast-food restaurants), it changes the optimal quantity of that labor to hire. Sure, it bumps up demand for certain types of workers (e.g. machine designers and repairmen), but it is still suboptimal. One should not presuppose that technological change is, ipso facto, good. What matters is the “optimal” rate of change. In this case, one can argue that the minimum wage (if pushed up too high) induces a rate of technological change that is too fast and that works to the disadvantage of unskilled workers.

As such, yes, the artificial spurring of technological change should not be deemed desirable!

On “strawmanning” some people and inequality

For some years now, I have been interested in the topic of inequality. One of the angles that I have pursued is a purely empirical one in which I attempt to improve measurements. This angle has yielded two papers (one of which is still in progress while the other is still in want of a home) that reconsider the shape of the U-curve of income inequality in the United States since circa 1900.

The other angle that I have pursued is more theoretical and is a spawn of the work of Gordon Tullock on income redistribution. That line of research makes a simple point: some inequalities are, in normative terms, worrisome while others are not. The income inequality stemming from the career choices of a Benedictine monk and a hedge fund banker is not worrisome. The income inequality stemming from being a prisoner of one’s birth, or from rent-seekers shaping rules in their favor, is worrisome. Moreover, some interventions meant to remedy inequalities might actually make things worse in the long run (some articles even find that taxing income for the sake of redistribution may increase inequality if certain conditions are present – see here). I have two articles on this (one forthcoming, the other already published) and a paper still in progress (with Rosolino Candela), but they are merely an extension of the aforementioned work of Gordon Tullock and of other economists like Randall Holcombe, William Watson and Vito Tanzi. After all, the point that a “first, do no harm” policy toward inequality might be more productive is not novel (all it needs is a deep exploration and a robust exposition).

Notice that there is an implicit assumption in this line of research: inequality is a topic worth studying. This is why I am annoyed by statements like those that Gabriel Zucman made to ProMarket. When asked if he was getting pushback for his research on inequality (which is novel and very important), Zucman answered the following:

Of course, yes. I get pushback, let’s say not as much on the substance oftentimes as on the approach. Some people in economics feel that economics should be only about efficiency, and that talking about distributional issues and inequality is not what economists should be doing, that it’s something that politicians should be doing.

This is “strawmanning”. There is no economist who thinks inequality is not a worthwhile topic. Literally none. True, economists’ interest in the topic may have waned for some years, but it never became a secondary topic. Major articles were published in major journals throughout the 1990s (often identified as a low point in the literature) – most of them groundbreaking enough to propel the topic forward a mere decade later. This should not be surprising given the heavy ideological and normative ramifications of studying inequality. The topic is so important to all social sciences that no one disregards it. As such, who are these “some people” that Zucman alludes to?

I assume that “some people” are strawman substitutes for those who, while agreeing that inequality is an important topic, disagree with the policy prescriptions and the normative implications that Zucman draws from his work. The group most “hostile” to the arguments of Zucman (and others such as Piketty, Saez, Atkinson and Stiglitz) is the one that stems from the public choice tradition. Yet economists in the public choice tradition probably give distributional issues a more central role in their research than Zucman does. They care about institutional arrangements and the rules of the game in determining outcomes. The very concept of rent-seeking, so essential to public choice theory, relates to how distributional coalitions can emerge to shape the rules of the game in ways that redistribute wealth from X to Y and that are socially counterproductive. As such, rent-seeking is essentially a concept that ties distributional issues intimately to efficiency.

The argument Zucman makes to bolster his own claim is one of the reasons why I am cynical about the times we live in. It denotes a certain tribalism that demonizes the “other side” in order to avoid engaging with it. That tribalism, I believe (but I may be wrong), is more prevalent now than in the not-so-distant past. Strawmanning only makes the problem worse.

The great global trend toward equality of well-being since 1900

Some years ago, I read The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet by Indur Goklany. It was my first exposure to the claim that, globally, there has been a long trend toward equality of well-being. The observation of Goklany’s that had a dramatic effect on me was that many countries that were, at the time of his writing, as rich (in incomes per capita) as Britain in 1850 had life expectancy and infant mortality levels far better than those of 1850 Britain. Ever since, I have accumulated statistics in that regard, and I often present them to my students when the time comes to “dispel” myths regarding the improvement in living standards since circa 1800 (note: people are generally unable to properly grasp the actual improvement in living standards).

Some years later, I discovered the work of Leandro Prados de la Escosura, a cliometrician who (I think I told him so when I met him) deeply influenced my work on the measurement of living standards, and who wrote the paper I will discuss here. His paper, and his work in general, shows that global income inequality has declined since the 1970s. That is largely the result of the economic rise of India and China (the world’s two largest antipoverty programs).

[Figure: the decline in global income inequality, from Prados de la Escosura]

However, when he extends his measurements to include life expectancy and schooling in order to capture “human development” (the idea that development is not only about incomes but also about the ability to exercise agency – i.e. the acquisition of positive liberty), the collapse in “human development” inequality (i.e. well-being) precedes the reduction in global income inequality by many decades. Indeed, the collapse started around 1900, not 1970!

[Figure: global inequality in “human development,” from Prados de la Escosura]
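As a toy illustration of why inequality in a composite well-being measure can sit well below income inequality, consider the sketch below. The country data and goalposts are entirely hypothetical, and this is not Prados de la Escosura’s method; it merely shows that bounded dimensions like life expectancy and schooling compress measured inequality relative to incomes alone:

```python
# Toy example (hypothetical data, not Prados de la Escosura's method):
# inequality in incomes alone versus inequality in a crude composite
# "human development" index, measured by the Gini coefficient.

def gini(values):
    """Gini coefficient of a list of non-negative values."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * sum(xs)) - (n + 1) / n

def hdi(income, life_exp, schooling):
    """Geometric mean of three dimensions normalized with toy goalposts."""
    i = (income - 100) / (60_000 - 100)
    l = (life_exp - 20) / (85 - 20)
    s = schooling / 15
    return (i * l * s) ** (1 / 3)

# (income per capita, life expectancy, years of schooling) for four
# hypothetical countries:
countries = [(40_000, 80, 13), (15_000, 75, 10), (5_000, 70, 8), (1_500, 62, 5)]

print("Gini of incomes:", round(gini([c[0] for c in countries]), 3))     # ~0.51
print("Gini of HDI:    ", round(gini([hdi(*c) for c in countries]), 3))  # ~0.29
```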

In reading Leandro’s paper, I remembered the work of Goklany, which had sowed the seeds of this idea in my mind. Nearly a decade after reading Goklany’s work, and well after I fully accepted this fact as valid, I remain stunned by its implications. You should be too.

Nightcap

  1. The new age of great power politics John Bew, New Statesman
  2. Before We Cure Others of Their False Beliefs, We Must First Cure Our Own Christopher Preble, Cato Unbound
  3. Libertarians (FDP) ruin coalition talks in Germany Christian Hacke, Deutsche Welle
  4. The Rich You Will Always Have With You Brandon Turner, Law & Liberty

On the “tea saucer” of income inequality since 1917

I often disagree with the many details that underlie the arguments of Thomas Piketty and Emmanuel Saez. That being said, I am also a great fan of their work and of them in general. In fact, I think that both have made contributions to economics that I would be envious to equal. To be fair, their U-curve of inequality is pretty much a well-confirmed fact by now: everyone agrees that the period from 1890 to 1929 was a high point of inequality, that inequality leveled off until the 1970s, and that it then picked up again.

Nevertheless, while I am convinced of the curvilinear evolution of income inequality in the United States as depicted by Piketty and Saez, I am not convinced by the amplitudes. In their 2003 article, the U-curve of inequality really looks like a “U” (see image below). Since that article, many scholars have investigated the extent of the increase in inequality post-1980 (circa). Many have attenuated the increase, but they still find an increase (see here, here, here, here, here, here, here, here, here). The problem is that everyone has been considering the increase – i.e. the right side of the U-curve. Little attention has been devoted to the left side of the U-curve, even though that is where data problems should be considered most carefully for the generation of a stylized fact. This is the contribution I have been coordinating and working on for the last few months alongside John Moore, Phil Magness and Phil Schlosser.

[Figure: the U-curve of top income shares from Piketty and Saez (2003)]

To arrive at their proposed series of inequality, Piketty and Saez used the IRS Statistics of Income (SOI) to derive top income fractiles. However, the IRS SOI have many problems. The first is that between 1917 and 1943, there are many years in which fewer than 10% of the potential tax population filed a tax return. This prohibits the use of a top 10% income share in many years unless an adjustment is made. The second is that the IRS reports net income prior to 1943 and adjusted gross income after 1943. As such, to link the post-1943 series with the pre-1943 series, an additional adjustment is needed. Piketty and Saez made some seemingly reasonable assumptions, but those assumptions have never been put to the test for sensitivity and robustness. This is leaving aside issues of data quality (I am not convinced the IRS data is very good, as most of it was self-reported pre-1943, a period with wildly varying tax rates). The question here is “how good” are the assumptions?

What we did is verify each assumption to assess its validity. The first one we tackle is the adjustment for the low number of returns. To make their adjustment, Piketty and Saez exploited the fact that single households and married households filed in different proportions relative to their total populations. Their idea was that, in a year with a large number of returns, the ratio of single to married filers could be used to adjust the series. The year they used is 1942. This is problematic, as 1942 is a war year with self-reporting, when large numbers of young American men were abroad fighting. Using 1941, the last US peace year, instead shows dramatically different ratios. Using these ratios knocks a few points off the top 10% income share. Why did they use 1942? Their argument was that there was simply not enough data to make the correction in 1941. They point to a special tabulation in the 1941 IRS SOI of 112,472 1040A forms from six states, which they did not deem sufficient to make the corrections. However, later in the same document, there is a larger and sufficient sample of 516,000 returns from all 64 IRS collection districts (roughly 5% of all forms). By comparison, the 1942 sample that Piketty and Saez used for their correction contained only 455,000 returns. Given the war year and the sample size, we believe that 1941 is a better year for making the adjustment.
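To see why the benchmark year matters, here is a stylized sketch (all numbers are hypothetical; this conveys the flavor of such an adjustment, not our actual procedure). The benchmark year supplies the filing propensities used to gross observed returns up to the full population of tax units, so the implied denominator moves with it:

```python
# Stylized sketch (hypothetical numbers) of a filing-ratio adjustment:
# gross up observed single and married returns to the full population of
# tax units using filing propensities taken from a benchmark year.

def implied_tax_units(single_returns, married_returns, propensities):
    """Estimate total tax units from observed returns and filing rates."""
    s_rate, m_rate = propensities
    return single_returns / s_rate + married_returns / m_rate

single, married = 2.0, 4.0  # observed returns in some pre-1943 year (millions)

# Hypothetical filing propensities (share of each group that filed) in the
# two candidate benchmark years:
units_1942 = implied_tax_units(single, married, propensities=(0.55, 0.70))
units_1941 = implied_tax_units(single, married, propensities=(0.45, 0.75))

print(f"tax units implied by 1942-style ratios: {units_1942:.2f}M")
print(f"tax units implied by 1941-style ratios: {units_1941:.2f}M")
# A different benchmark changes the estimated population of tax units, and
# with it the income cutoff and the measured share of the "top 10%".
```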

Second, we also questioned the smoothing method used to link the net-income-based series with the adjusted-gross-income-based series (i.e. the pre-1943 and post-1943 series). The reason for this is that the implied adjustment for deductions made by Piketty and Saez is actually larger than all the claimed deductions that were eligible under the definition of adjusted gross income – a sign that they overshot. Using the limited data available on deductions by income group, and making some (very conservative) assumptions to move further back in time, we found that adjusting for “actual deductions” yields a lower level of inequality. This contrasts with the fixed multipliers that Piketty and Saez used pre-1943.
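A back-of-the-envelope sketch (again with invented numbers, not our paper’s data) of why this matters: if the fixed multiplier implies more deductions than were actually claimed, pre-1943 top incomes, and hence the left side of the U-curve, are pushed up too much:

```python
# Back-of-the-envelope sketch (invented numbers): converting a net-income
# series to an adjusted-gross-income basis with a fixed multiplier versus
# with the deductions actually claimed by the top group.

net_income_top = 100.0     # net income of a top fractile (billions)
fixed_multiplier = 1.10    # illustrative fixed gross-up factor
claimed_deductions = 6.0   # deductions actually eligible under AGI

total_income = 500.0       # illustrative income denominator

agi_fixed = net_income_top * fixed_multiplier   # implies 10.0 in deductions
agi_actual = net_income_top + claimed_deductions

print(f"top share with fixed multiplier:   {agi_fixed / total_income:.1%}")
print(f"top share with claimed deductions: {agi_actual / total_income:.1%}")
```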

Third, we question their justification for not using the Kuznets income denominator. They argued that Kuznets’ series yielded an implausible figure because, in 1948, its use implied a greater income for non-filers than for filers. However, this is not true of all years. In fact, it is only true after 1943. Before 1943, the income of non-filers is proportionally equal to the figure they themselves use post-1944 to impute the income of non-filers. This is largely the result of the change in the accounting definition: incomes were reported as net income before 1943 and as gross income after that point. This is important because the stylized fact of a pronounced U-curve is heavily sensitive to the assumption made regarding the denominator.
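The sensitivity is easy to see with a toy calculation (hypothetical numbers): holding top incomes fixed, the income you impute to non-filers mechanically moves the measured top share:

```python
# Toy calculation (hypothetical numbers): the measured top 10% share under
# two assumptions about the income imputed to non-filers.

top_decile_income = 40.0   # income accruing to the top 10% (billions)
filer_income = 120.0       # total income reported by filers (billions)

for nonfiler_fraction in (0.3, 0.5):   # non-filer income as a fraction
    denominator = filer_income * (1 + nonfiler_fraction)  # of filer income
    share = top_decile_income / denominator
    print(f"non-filers at {nonfiler_fraction:.0%} of filer income: "
          f"top 10% share = {share:.1%}")
```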

These three adjustments are pretty important in terms of overall results (see image below). The pale blue line is that of Piketty and Saez as depicted in their 2003 paper in the Quarterly Journal of Economics. The other blue line just below it shows the effect of deductions only (the adjustment for missing returns affects only the top 10% income share). All the other lines that mirror these two just below (with the exception of the darkest blue line, which is the original Kuznets inequality estimate) compound our corrections with three potential corrections for the denominator. The U-curve still exists, but it is not as pronounced. When you link the adjustments made by Mechling et al. (2017) and Auten and Splinter (2017) for the post-1960 period (green and red lines) with ours, you can still see the curvilinear shape, but it looks more like a “tea saucer” than a pronounced U-curve.

In a way, I see this as a simultaneous complement to the work of Richard Sutch and to the work of Piketty and Saez: the U-curve still exists, but the timing and pattern are slightly more representative of history. This was a long paper to write (and it is a dry read, given the amount of methodological discussion), but it was worth it in order to improve upon the state of our knowledge.

[Figure: top income shares, 1917 onward, under alternative adjustments]

On Monopsony and Legal Surroundings

A few days ago, in reply to this December NBER study, David Henderson at EconLog questioned the idea that labor market monopsonies matter in explaining sluggish wage growth and rising wage inequality. Like David, I am skeptical of this argument. However, I am skeptical for different reasons.

First, let’s point out that the reasoning behind this story is well established (see notably the work of Alan Manning). Firms with market power over a more or less homogeneous labor force that must bear a disproportionate share of search costs have every incentive to depress wages. This can reduce growth as, notably, it discourages human capital formation (see these two papers here and here as examples). As such, I am not so skeptical of “monopsony” as an argument.

However, I am skeptical of “monopsony” as an argument in one sense. What I mean is that I am skeptical of considering monopsony without any qualifications regarding institutions. The key condition for an effective monopsony is the existence of barriers (natural and/or legal) to mobility. As soon as it is relatively easy to leave a small city for another city, even a city with a single employer will have little ability to exert its “market power” (note: I really hate that word). If you think about it simply through these lenses, then all that matters is the ability to move. All you need to care about are the barriers (legal and/or natural) to mobility (i.e. the chance to defect).
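The textbook way to see this (the sketch below is the standard Manning-style markdown formula, not anything from the NBER study) is that a monopsonist facing a firm-level labor supply elasticity ε pays w = MRP · ε/(1+ε). Easier defection means a more elastic labor supply, and the wage markdown shrinks toward zero:

```python
# Textbook monopsony sketch: the profit-maximizing wage is marked down
# below the marginal revenue product (MRP) by a factor that depends on the
# elasticity of labor supply to the firm. Lower mobility barriers mean a
# higher elasticity and a smaller markdown.

def monopsony_wage(mrp, eps):
    """Wage set by a monopsonist facing labor supply elasticity eps."""
    return mrp * eps / (1.0 + eps)

mrp = 20.0  # marginal revenue product of labor, $/hour
for eps in (1.0, 3.0, 10.0, 100.0):  # elasticity rising as barriers fall
    w = monopsony_wage(mrp, eps)
    print(f"elasticity {eps:6.1f}: wage = {w:5.2f}  (markdown {1 - w / mrp:.0%})")
```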

And here’s the thing: I don’t think that natural barriers are a big deal. For example, Price Fishback found that the “company towns” of the 19th century were hardly monopsonies (see here, here, here and here). If natural barriers were not a big deal then, they are certainly not a big deal today. As such, I think the action is largely legal. My favorite example is the set of laws adopted following the emancipation of slaves in the United States, which limited mobility (by limiting Northerners’ ability to hire agents who would act as headhunters in the South). That is a legal barrier (see here and here). I am also making that argument regarding the institution of seigneurial tenure in Canada in a working paper that I am reorganizing (see here).

What about today? The best example is housing restrictions. Housing construction and zoning regulations basically make the supply of housing quite inelastic. The areas where these regulations are most severe are also, incidentally, high-productivity areas. This has two effects on mobility. The first is that low-productivity workers in low-productivity areas cannot easily afford to move to the high-productivity areas. As such, you are reducing their options for defection and increasing the likelihood that they will not even look. You are also reducing the pool of places to apply to, which means that, in order to find a more remunerative job, they must search longer and harder (i.e. you are increasing their search costs). The second effect is that you are also tying workers to the areas they are already in. True, they gain because productivity becomes capitalized in the potential rent from selling any property they own. However, they are in essence tied to the place. As such, they can be more easily mistreated by employers.

These are only examples. I am sure I could extend the list to the size of the fiscal code (well, maybe not quite that much). The point is that “monopsony” (to the extent that it exists) is merely a symptom of other policies that either increase search costs for workers or reduce their options for defection. And I do not care much for analyzing symptoms.

In Search of an Optimal Level of Inequality

Recently, the blog ThinkMarkets published a post by Gunther Schnabl about how Friedrich Hayek’s works help us understand the link between quantitative easing and political unrest. The post summarized with praiseworthy precision three different stages of Hayek’s economic and political ideas and, among the many topics it addressed, mentioned the increase in income and wealth inequality that a policy of low interest rates might bring about.

It is well known that Friedrich Hayek owes as much to the Swedish School as to the Austrian School for his ideas about money and capital. In fact, he borrows the distinction between the natural and market rates of interest from Knut Wicksell. Hayek’s early writings state that disequilibrium and crisis are caused by a market interest rate that sits below the natural interest rate. No central bank is needed to arrive at such a situation: the credit creation of the banking system, or a sudden change in the expectations of the public, could set the market rate well below the natural rate and thus lead to what Hayek and Nicholas Kaldor called “the Concertina Effect.”

At this point we must issue a disclaimer: Friedrich Hayek’s theory of money and capital was so controversial, and subject to so many recantations by his early supporters – like the aforementioned Kaldor, Ronald Coase, or Lionel Robbins – that we can hardly carry on without first reaching a theoretical settlement over the contributions of his works. Until then, readings of Hayek’s economics will have mostly heuristic and inspirational value. They will be a starting point from which to spring new insights, but hardly a source of conclusive statements. Hayekian economics is a whole realm to be conquered, but most of that quest remains undone.

For example, if we assume – as the said post does – that ultra-loose monetary policy enlarges inequality and engenders political instability, then we are bound to look for a monetary policy that delivers, or at least does not preclude, an optimal level of inequality. As explained in the linked lecture, the definition of such a concept might differ depending on whether it is approached from an economic, a political, or a moral perspective.

Here is where I think the works of F.A. Hayek still have so much to give to our inquiries: the matter is not where to place an optimal level of inequality, but how to discover the conditions under which a certain level of inequality appears to us as legitimate, or at least tolerable. This is not a question of quantities, but of qualities. Our mission is to discover the mechanism by which the notions of fairness, justice, or even order are formed in our beliefs.

Perhaps that is the deep meaning of the order, or equilibrium, that is reached when, to use the terminology of Wicksell and of Hayek’s early writings, the natural and market interest rates are the same: a state of affairs in which most of the agents’ expectations could prove correct. The solution does not depend upon a particular public policy, but on providing an abstract institutional structure in which each individual decision can profit the most from the spontaneous order of human interaction.