On Cuba’s Fake Stats

On Monday, my piece on the use of violence for public health purposes in Cuba (reducing infectious diseases through coercion at the expense of economic growth, which in turn increases deaths from preventable diseases related to living standards) assumed that the statistics were correct.

They are not! How much so? A lot! 

As I mentioned on Monday, Cuban doctors face penalties for not meeting their “infant mortality” targets. As a result, they use extreme measures ranging from abortion against the mother’s will to sterilization and isolation. They also have an incentive to lie… (pretty obvious, right?)

How can they lie? By re-categorizing early neonatal deaths (from birth to the 7th day) or neonatal deaths (up to the 28th day) as late fetal deaths. Early neonatal deaths and late fetal deaths are basically grouped together as “perinatal” deaths since they share the same underlying factors. Normally, health statistics suggest that late fetal deaths and early neonatal deaths should be heavily correlated (the graph below makes everything clearer). However, late fetal deaths do not enter the infant mortality rate while early neonatal deaths do enter that often-cited rate (see graph below).

[Figure: classification of late fetal, early neonatal, neonatal and infant deaths]

Normally, the ratio of late fetal deaths to early neonatal deaths should be more or less constant across space. In the PERISTAT data (the dataset that best separates those types of deaths), most countries have a ratio of late fetal to early neonatal deaths ranging from 1.04 to 3.03. Cuba has a ratio of more than 6. This is pretty much a clear sign of data manipulation.

In a recent article published in Cuban Studies, Roberto Gonzales makes adjustments to create a range where the ratio would be in line with that of other countries. If it were, the infant mortality rate of Cuba would be between 7.45 and 11.16 per 1,000 births rather than the 5.79 per 1,000 reported by the regime – as much as 92% higher. As a result, Cuba moves from having an average infant mortality rate in the PERISTAT data to having among the worst rates in that dataset – above those of most European and North American countries.
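To make the mechanics of the re-classification concrete, here is a small sketch in Python. The numbers are invented for illustration (they are not Gonzales’s figures); the logic is the one described above – hold the total of perinatal deaths fixed and re-split it so that the late fetal/early neonatal ratio matches the range seen in the PERISTAT data.

```python
# Illustrative sketch only: invented numbers, not Gonzales's data.
# Logic: keep perinatal deaths (late fetal + early neonatal) fixed, but re-split them
# so that the late fetal / early neonatal ratio matches the PERISTAT range.

def adjusted_infant_mortality(reported_imr, reported_lfd, reported_end, plausible_ratio):
    """Return the infant mortality rate after moving 'excess' late fetal deaths
    back into early neonatal deaths (all figures per 1,000 births)."""
    perinatal = reported_lfd + reported_end
    adjusted_end = perinatal / (1 + plausible_ratio)  # END implied by the plausible ratio
    reclassified = adjusted_end - reported_end        # deaths moved back into the IMR
    return reported_imr + reclassified

# Hypothetical Cuban-style figures: IMR of 5.79 and a late fetal/early neonatal ratio of 6.
for ratio in (1.04, 3.03):  # the two ends of the PERISTAT range
    print(ratio, round(adjusted_infant_mortality(5.79, 12.0, 2.0, ratio), 2))
```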

So not only is my comment from Monday very much valid, the “upside” (for lack of a better term) I mentioned is largely overblown because doctors and politicians have an incentive to fake the numbers.

Is planned obsolescence a good thing?

There is this recurring argument that goods don’t last as long today as they did fifty years ago. My father berated me a few months ago about my economics by saying that back in his time, goods lasted longer. I questioned his data (there are signs that goods last as long as they did twenty or thirty years ago). This new ATTN video on Repair Cafes would have caused my father to rejoice greatly. However, I am going to ask a question here and go a step further: is planned obsolescence the symptom of a good thing?

Here is my argument (feel free to throw rocks after).

Improving the lifespan of a good requires a greater level of inputs, which increases the marginal cost of the good. Producing a higher-quality good basically shifts the supply curve leftwards: we pay a little more for something that lasts longer. However, if we are living in a world of rapid technological innovation, what is the point of expending more resources on a good with a twenty-year lifespan that will be obsolete in two years?

Take my previous iPhone – it lasted three years before it simply decided to stop working. In the three years between my old phone and my new phone, there was a rapid change in quality: more memory, better camera, faster processing, better sound. Imagine that Apple had invested billions to increase the life of my iPhone from three to nine years. Would I have bought that phone? In all honesty, it would depend on the price increase, but the answer would have been closer to “no” than to “yes”. So in a way, Apple reaches a marginal consumer like me by lowering the price and turns me into a technology adopter. However, let’s imagine this from Apple’s perspective. It’s in a race for its life against competitors who keep inventing new widgets and features that change the manner in which we consume. So, its choice is the following:

A: Increase lifespan, higher marginal costs and a leftward shift of supply that causes slightly higher prices and less demand. These resources cannot be expended on R&D and innovation.
B: Shorten lifespan, lower marginal costs, lower prices, more goods consumed; more resources on R&D and innovation.

If everyone is inventing rapidly, then B is the dominant strategy. If innovation is zero, A is the dominant strategy. Basically, in a Schumpeterian world like the one I am describing, the short lifespan of goods is a symptom of great innovations and better days to come.
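To make the tradeoff explicit, here is a toy payoff comparison in Python. The payoff numbers are entirely made up – they only illustrate the claim that B wins when the surrounding rate of innovation is high and A wins when it is zero.

```python
# Toy model: stylized profits for strategy A (durable good, little R&D) and
# strategy B (shorter lifespan, lower price, more R&D), as a function of how
# fast the surrounding environment is innovating. All numbers are invented.

def profit(strategy, innovation_rate):
    if strategy == "A":
        return 10 - 8 * innovation_rate  # obsolescence erodes the value of durability
    return 6 + 6 * innovation_rate       # strategy "B": R&D pays off when innovation is fast

for rate in (0.0, 0.5, 1.0):
    best = max(("A", "B"), key=lambda s: profit(s, rate))
    print(f"innovation rate {rate}: A={profit('A', rate):.0f}, B={profit('B', rate):.0f} -> {best}")
```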

What do you think?

 

Castro: Coercing Cubans into Health

On Black Friday, one of the few remaining tyrants in the world passed away (see the great spread of democracy in the world since 1988). Fidel Castro is a man I will not mourn, nor will I celebrate his passing. What I mourn are the lives he destroyed, the men and women he impoverished, the dreams he crushed and the suffering he inflicted on the innocents. When I state this feeling to others, I am told that he improved life expectancy in Cuba and reduced infant mortality.

To which I reply: why are you proving my point?

The reality that few people understand is that even poor countries can easily reduce mortality with the coercive measures available to a centralized dictatorship. There are many diseases (like smallpox) that spread because individuals have a hard time coordinating their actions and cannot prevent free riding (if 90% of people get vaccinated, the remaining 10% get the protection without having to endure the cost). This type of disease is very easy for a state to fight: force people to get vaccinated.

However, there is a tradeoff. The type of institutions that can use violence so cheaply and so efficiently is also the type of institutions that has a hard time creating economic growth and development. Countries with “unfree” institutions are generally poor and grow slowly. Thus, these countries can fight some diseases efficiently (like smallpox and yellow fever), but not other diseases that are related to individual well-being (i.e. poverty diseases). This implies that you get unfree institutions and low rates of epidemics, but high levels of poverty and high rates of mortality from tuberculosis, diarrhea, typhoid fever, heart disease and nephritis.

This argument is basically the argument of Werner Troesken in his great book, The Pox of Liberty. How does it apply to Cuba?

First of all, by 1959, Cuba was already near the top of development indexes for the Americas – a very rich and healthy place by Latin American standards. A large part of its high health indicators was actually the result of coercion. Cuba got its very low levels of mortality as a result of the Spanish-American War, when the island was occupied by American invaders. They fought yellow fever and other diseases with impressive levels of violence. As Troesken mentions, the rate of mortality fell dramatically in Cuba as a result of this coercion.

Upon taking power in 1959, Castro did exactly the same thing as the Americans. From a public choice perspective, he needed something to shore up support. His policies were not geared towards wealth creation, but they were geared towards the efficient use of violence. As Linda Whiteford and Laurence Branch point out, personal choices are heavily controlled in Cuba in order to achieve these outcomes. Heavy restrictions exist on what Cubans can eat, drink and do. When pregnancies are deemed risky, doctors have to coerce women into undergoing abortions in spite of their wishes. Some women are incarcerated in the Casas de Maternidad against their will. On top of this, forced sterilization is, in some cases, a documented policy tool. These restrictions do reduce mortality, but they impose a heavy price on the population. On the other hand, the Castrist regime got something to brag about and it gained international support.

However, when you look at the other side of the tradeoff, you see that death rates from “poverty diseases” don’t seem to have dropped (while they did elsewhere in Latin America). In fact, there are signs that the aggregate infant mortality rates of many other Latin American countries collapsed toward the low levels seen in Cuba when Castro took over in 1959 (here too). Moreover, the crude mortality rate is increasing while infant mortality is decreasing (which is a strong indictment of how much shorter adult lives are in Cuba).

So, yes, Cuba has been very good at reducing mortality from communicable diseases and choice-based outcomes (like how to give birth) that can be reduced by the extreme use of violence. The cost of that use of violence is a low level of development that allows preventable diseases and poverty diseases to remain rampant. Hardly something to celebrate!

Finally, it is also worth pointing out a few other facts. First of all, economic growth in Cuba has taken place since the 1990s (after decades of stagnation in absolute terms and decline in relative terms). This is the result of the very modest forms of liberalization that were adopted by the Cuban dictatorship as a result of the end of Soviet subsidies. Thus, what little improvement we can see can be largely attributed to those. Secondly, the level of living standards prior to 1990 was largely boosted by Soviet subsidies, but we can doubt how much of it actually went into the hands of the population given that Fidel Castro is worth $900 million according to Forbes. Thus, yes, Cubans did remain dirt poor during Castro’s reign up to 1990. Thirdly, doctors are penalized for “not meeting quotas” and thus they do lie about the statistics. One thing that is done by the regime is to categorize “infant deaths” as “late fetal deaths” – it’s basically stretching the definition in order to conceal a poorer performance.

Overall, there is nothing to celebrate about Castro’s dictatorship. What some do celebrate is something that was a deliberate political action on the part of Castro to gain support and it came at the cost of personal freedom and higher deaths from preventable diseases and poverty diseases.

H/T : The great (and French-speaking – which is a plus in my eyes because there is so much unexploited content in French) Pseudoerasmus gave me many ideas – see his great discussion here.

England circa 1700: low-wage or high-wage

A few months ago, I discussed the work of my friend (and fellow LSE graduate) Judy Stephenson on the “high-wage economy” of England during the 18th century. The high-wage argument basically states that high wages relative to capital incite management to find new techniques of production and that, as a result, the industrial revolution could be initiated. It’s a crude summary (I am not doing it justice here), but it’s roughly accurate.

In her work, Judy basically indicated that the “high-wage economy” observed in the data was a statistical artifact. The wage rates historians have been using are not wage rates; they are contract rates that include an overhead for the contractors who hired the workers. The actual wage rates were below the contract rates by a margin sufficient to damage the high-wage narrative.

A few days ago, Jane Humphries (who has been a great inspiration for Judy and whose work I have been discreetly following for years) and Jacob Weisdorf came out with a new working paper on the issue that has reinforced my skepticism regarding English wage data. A crude summary of Humphries and Weisdorf’s paper goes as such: preindustrial labor markets had search costs, so workers were willing to sacrifice on the daily wage rate (lower w) in order to obtain steady employment (greater L), and thus the proper variable of interest is the wage paid on annual contracts.
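A stylized example (with numbers I made up, not Humphries and Weisdorf’s estimates) shows why the choice of variable matters:

```python
# Hypothetical numbers only: a casual day labourer earns a higher day rate but finds
# fewer days of work; an annual contract pays a lower implied day rate but steady work.
casual_day_rate, casual_days = 20, 180      # pence per day, days of work found in a year
contract_day_rate, contract_days = 14, 300  # implied day rate and days under an annual contract

casual_income = casual_day_rate * casual_days        # 3,600 pence a year
contract_income = contract_day_rate * contract_days  # 4,200 pence a year

# The day rate ranks the casual worker higher; annual earnings rank the contract worker higher.
print(casual_income, contract_income)
```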

While their results do not affect England’s relative position (they only affect the trend of living standards in England), they show that there are flaws in the data. These flaws should give us pause before proposing a strong theory like the “high-wage economy” argument. Taken together, the work of Stephenson (whose paper, I am told, is officially forthcoming), Humphries and Weisdorf shows the importance of doing data work, as the new data may overturn some key pieces of research (maybe – I am not sure – there is some stuff worth testing).

Josh Barro and the Gold Standard

A few days ago, when it was announced that former Cato Institute president John Allison was under consideration for Treasury secretary, Josh Barro of Business Insider dismissed the man as a “nutcase”. Why? Because Allison believes that the Federal Deposit Insurance Corporation (FDIC) generates a moral hazard that contributes to financial crises (a statement I agree with).

This slur irked one of the economists at Cato, George Selgin, who took to Twitter to challenge Barro. In the exchange, at one point, Barro indicated that the desire of libertarians to return to the gold standard confirms the “nuttiness” of libertarians and the people at Cato.

And here, Barro allows me to make a comment on the gold standard. The sympathy towards the gold standard is not sympathy towards gold per se, but rather sympathy for reducing the capacity of governments to exercise discretion. Basically, each time you hear some academic economist mention the gold standard, what that economist means is rules-based monetary policy.

The gold standard era (1875-1914) was not an image of perfect monetary policy. It is not a lost paradise that we ought to strive to return to. However, the implicit rules imposed by the system did favor more stability than would have been the case with discretion during that era. In fact, the era of central banking with the Federal Reserve has not been that great relative to the gold standard era (and in the world of central banks, the Fed is pretty good). A lot of the scorn that the gold standard era has received has to do with regulatory policy towards banks (notably restrictions on branch banking, which forced more volatility) or with the role of changes in international demand for assets (see here). Thus, in spite of its many flaws, the gold standard was not that bad (but it was not gold per se that was helpful – it was the shunning of discretion by governments).

To be sure, I do not favor a return to a gold standard era. What I do like, and what I think John Allison likes as well, is a return to rules-based monetary policy. Josh Barro should have been intellectually generous and understood this key distinction. By not making that distinction, of which he must be aware given his background, he debased the debate over monetary policy.

Trump’s rejection of TPP: a political economy comment

Trump’s rejection of the Trans-Pacific Partnership (TPP) seems virtually certain. Without the United States and with a weak Canadian prime minister on the issue (who got elected without a clear position on it), the agreement will die a swift death. By dying that rapidly, it confirms a point I have been making for years: agreements like the TPP are managed trade that generate as much (if not more) opposition than genuine free trade agreements (those that could fit on a few pages, not 10,000).

Protectionist measures are basically income-redistributing schemes. Shifting from protectionism to free trade means altering these schemes. Hence the political opposition, and hence the agreements we have seen over the last decades in which special dispensations are placed inside the text. In some instances, like the Canada-US Free Trade Agreement of 1988 (CUFTA), this leads to genuinely freer (not free) trade. In other cases, like CAFTA in the early 2000s, the agreement is nothing more than rent-seeking by other means.

In the case of the TPP, it seems that popular discontent is large enough to kill this very complex (and flawed) agreement. I am not sure whether or not, in net terms, the TPP was an improvement over the current state of affairs. What I am sure of is that the opposition was similar to what the opposition would have been to unilateral trade liberalization.

At this point, small countries with no influence on world demand (like Canada) should simply go at it alone. What I mean is that unilateral trade liberalization is the way to go. There is a strong case for unilateral trade liberalization (see notably the work of Edwards and Lederman) for small economies. A large part of the cost of protectionism is not the level of tariffs and quotas, but the distortions generated in relative prices that lead to inefficient allocations of resources. The low-hanging fruit is to be found in leveling the playing field. Small countries could convert quotas into tariffs and set a uniform (across-the-board) low tariff (see Chile’s case). Although far from ideal free trade (no barriers), this would represent a considerable improvement over the existing distortions in relative barriers.

In such a situation, the political costs would be the same as those of agreements like the TPP, but the benefits would be infinitely larger, making it easier for governments to proceed. The narrative would also be easy to sell to electorates: no special treatment for anyone. In the long term, this may even “spill over” into multilateral trade agreements by reducing frictions during negotiations.

“Statogenic” Climate Change?

Is climate change government-made? For some years, I have been saying to my colleagues that climate change is real. Nonetheless, I am not an alarmist and I do not believe that stating that there is a problem is a blank cheque for any policy. Unlike many of my colleagues who believe that climate change is “anthropogenic”, I argue that it is “statogenic” in the sense that government policies over the last few decades basically amplified the problem.

Obviously, there is a social cost to pollution – an externality not embedded in the price system. On that basis, many have proposed the need for a carbon tax to “internalize the externality”. The logic is that anything that brings the “market price” closer to the “social cost” is an improvement.

Rarely do they consider the possibility that governments have “pushed” the market price away from the “social cost” (Note: I really hate that term as it has been subverted to mean more than what economists use it for). Consider the example of road pricing. In my part of Canada (Quebec), road pricing was eliminated in the 1970s. By eliminating road pricing, the government incentivized the greater use of vehicles and, basically, the greater burning of fossil fuels. Thus, by definition, the return of road pricing would bring the market price and the social cost closer together (and it might do so more efficiently than a carbon tax). Thus, there can be “statogenic” climate change because governments encourage indirectly the greater use of fossil fuels.

How big is that “statogenic” climate change? I think it is pretty “yuge.” For the last few months, I have been involved in a research project with Joanna Szurmak and Pierre Desrochers of the University of Toronto regarding environmental indicators in the debates between Paul Ehrlich and Julian Simon (see Joanna’s podcast with Garrett Petersen here at Economics Detective Radio). In that paper, we mention the fact that roughly a quarter of the world’s consumption of fossil fuels is subsidized directly or indirectly (through price controls setting local prices below world prices). That is a large share of total consumption and, according to an OECD paper, 14% of the effort needed to attain the most ambitious climate change mitigation plan could be achieved by eliminating those subsidies.

Now, that estimate was made in 2011; these policies have existed since the 1970s! One paper from the World Bank from the 1990s argued that eliminating them back in the 1980s would have reduced greenhouse gas emissions by 5% to 9%. Imagine a level lower by 9% (just for the sake of illustration) and imagine that the growth rate of greenhouse gases had been reduced by 9% as well. Using CAIT data, we can see what this oversimplified scenario (which is by no means a general equilibrium scenario – the only way to properly measure the overall lower levels) means in terms of lower levels of GHGs. Relative to the observed data, a 9% drop back in 1990 combined with a 9% reduction in the growth rate of GHGs means that the level of GHGs in 2012 in a world without subsidies would have been more than 12% lower than it was in a world with subsidies.
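For anyone who wants to redo the back-of-the-envelope calculation, here is a sketch. The observed growth rate of GHGs is an assumption on my part (around 2% a year gives a gap in the ballpark of the figure above); plug in the actual growth rate from the CAIT data to refine it.

```python
# Back-of-the-envelope counterfactual: 1990 level 9% lower and growth rate 9% lower,
# compounded over 1990-2012. The observed growth rate passed in below is an assumed value.

def counterfactual_gap(observed_growth, level_cut=0.09, growth_cut=0.09, years=22):
    """Share by which 2012 GHGs would be lower in the no-subsidy scenario."""
    observed = (1 + observed_growth) ** years
    counterfactual = (1 - level_cut) * (1 + observed_growth * (1 - growth_cut)) ** years
    return 1 - counterfactual / observed

print(f"{counterfactual_gap(0.02):.1%}")  # ~12.5% lower with an assumed 2%/year observed trend
```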

[Figure: observed vs. counterfactual (no-subsidy) GHG emissions, 1990-2012, CAIT data]

Again, this is an oversimplification. However, the oversimplification works against my claim: the use of more sophisticated methods is likely to yield much larger differences over time. Think about it for a second – the policy of fossil fuel subsidies alone explains a lot, even with the oversimplification. Now, add the fact that many countries do not practice road pricing; that some countries tax the resale of used goods, forcing the production of more goods; that they discourage construction in urban environments, forcing greater sprawl; that trade barriers in agriculture prevent us from concentrating production where it is most efficient; and the list goes on!

When people say “anthropogenic” climate change, I hear “incentives-driven” climate change or “statogenic.”

On granting the Nobel to Kirzner

In a little over a week, the Nobel Prize in economics will be unveiled. A part of me wishes that Robert Barro wins or maybe Dale Jorgenson or even William Baumol. However, I would be thrilled if Israel Kirzner won. Back in 2014, he was mentioned for the prize and it was the year that Jean Tirole, deservedly, won the prize. Since then, I have been dreaming of it as it would cement into the mainstream the best that the Austrian school of economics can offer.

Unlike many of my colleagues, I have always had sympathies for numerous points made by the Austrians. Throughout my training, the Austrian school of economics was largely derided as a school of cranks. Initially, I jumped on the bandwagon and I believed the Austrians to be crazy. However, I became exposed to many of their key points and, like Edmund Phelps, I felt that the “best of the Austrians” could not be rejected so easily. True, there is some wheat to sort from the chaff, but the same applies to every school of thought. I realized that most of their inputs in macroeconomics were largely incorporated in models like Lucas’ Islands Model or Prescott’s Time to Build. However, what most intrigued me was how they used general equilibrium as a teaching tool, but not as a research tool. General equilibrium is useful for understanding key axiomatic assertions, but when you have an “applied economics” question, it is a hard tool to use – especially for an economic historian like me.

Kirzner is the perfect representation of the best that the Austrian school has to offer. In a way, his entire work can be summarized as such: it is the process leading to (new) equilibrium(s) that is the most interesting aspect of economics.

It is when I understood that insight that I finally grasped the deeper meaning of Hayek’s claim that “competition is a discovery process”. Entrepreneurs are people who look for the $100 bill on the sidewalk by innovating, by exploiting arbitrage opportunities and by discovering what consumers really want. They are constantly heading towards an equilibrium point. But as they try to do so, they shift the ability to produce and consume to greater levels and, in doing so, they generate a new equilibrium. And the process continues as long as human beings are humans.

For someone who studies economic history like I do, this is the most fruitful way of looking at social interactions. After all, the industrial revolution is everything except an equilibrium, and the industrial revolution is the most momentous structural break in history. The search for equilibrium and the creation of new equilibriums are by far more useful tools for questions like the end of “Malthusian pressures” or the beginning of the Industrial Revolution.

Of course, I am veering into excessive simplification of Kirzner’s contribution. But consider his book, Competition and Entrepreneurship. Alone, it has 7,362 citations (according to Google Scholar). This is half the citations obtained by the most cited article in the American Economic Review (Armen Alchian and Harold Demsetz’s Production, Information Costs and Economic Organization). It’s close to 3,000 more citations than Deaton and Muellbauer’s “Almost Ideal Demand System” (4,775 citations). And Deaton won the Nobel last year!

Given his affiliation with the Austrians, mainstream economists could have relegated him to obscurity. However, for such a citation count to be achieved, he must have been able to showcase the best that the Austrian school has to offer. Just for that, maybe his contribution should be recognized.

Inherited wealth and size of government

In the debate over inequality, I have long argued that governments are much better at creating inequality than at reducing it (see my paper here with Steve Horwitz). Through rent-seeking and regulatory capture, interest groups manage to redistribute wealth in the “wrong” direction. There are a great many cases of large fortunes amassed thanks to government favors (think of the Bombardier family in Canada or Carlos Slim in Mexico).

Yet, as many scholars have pointed out, large fortunes can be rapidly consumed by squabbling heirs and poor investments (see this paper here notably). Thus, the fortunes gathered by parents who earned government favors can melt away over time. Nonetheless, governments do protect the fortunes of the heirs if they continue in business. For example, the heirs of the Bombardier family in Canada continued their ancestor’s work and remained relatively well-off. This is in part because governments continue to rescue that firm from its misfortunes. In a way, government may act to limit the erosion of fortunes. We can phrase this differently by asking whether or not the size of government is correlated with the share of wealth from the “ultra-rich” that is gained through inheritance.

Using a working paper from the Peterson Institute for International Economics titled The Origins of the Superrich: The Billionaire Characteristics Database, authored by Caroline Freund and Sarah Oliver, and the Economic Freedom of the World index produced by the Fraser Institute (released this week), we can check for the existence of this correlation. The paper by Freund and Oliver documents the share of the total wealth of billionaires that was earned through inheritance.

Unsurprisingly (for me), the lower the index score for the size of government (a lower score means a larger government), the greater the share of wealth earned through inheritance. Using all OECD and European countries, we can clearly see that the size-of-government score is negatively correlated with the share of wealth from inheritance.

[Figure: size-of-government score (EFW) vs. share of billionaire wealth earned through inheritance]
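For anyone who wants to replicate the exercise, the correlation itself is a one-liner once the two sources are merged by country (the file name and column names below are hypothetical placeholders, not the actual datasets):

```python
# Sketch of the correlation exercise. 'billionaires_efw.csv' is a hypothetical file merging
# the Freund-Oliver inheritance shares with the EFW size-of-government scores by country.
import pandas as pd

df = pd.read_csv("billionaires_efw.csv")  # one row per OECD/European country
# 'size_of_government': EFW Area 1 score; 'inherited_share': share of billionaire wealth inherited
print(df["size_of_government"].corr(df["inherited_share"]))  # expected to be negative, per the text
```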

Maybe, just maybe, those who believe that the solution to rising inequality is more government redistribution should be willing to reconsider their proposed cure. This is, I believe, an additional cause for skepticism regarding remedies.

NGDP 3% per year; NGO much less!

A few days ago, Scott Sumner blogged about the “new normal” of the NGDP trend (3% a year) (here and here). Overall, I tend to agree with him that aggregate nominal expenditures are now on a new and historically low trend growth rate. However, I think that he is way too optimistic!

As readers of this blog are aware, I am not convinced that NGDP is the proper proxy for nominal expenditures. I believe that Nominal Gross Output (NGO) is a better proxy as it captures more goods and services traded at the intermediate level (see blog posts here, here and here). The core of my argument is that NGO will capture many “time to build” problems that will not appear in GDP, as well as capture intangible investments which are now classified otherwise (see the literature on intangible investment as capital goods here). Hence my claim that NGO captures more expenditures (especially between businesses). (Note: I am in the process, with some colleagues, of finalizing a paper on the superior case for NGO – more on this later.)

Are we at a new normal point where growth in nominal expenditures is slower than in the past? Yes! But according to Sumner and others, this is 3%. If we use NGO, we are much lower! Again, using the FRED dataset, here is the evolution of NGO and NGDP since January 2005 (the start date of the NGO series in quarterly form). The graph shows something odd starting in mid-2014: NGO is growing much more slowly.

[Figure: NGDP and NGO since 2005 (FRED)]

To see this better, let’s plot the evolution of NGO as a percentage of NGDP in the graph below. If the ratio remains stable, then the trend is similar for both. As one can see, the recession saw a much more pronounced fall of NGO than of NGDP, with a failure to return to initial levels. And since 2014, the ratio has started to fall again (indicating slower growth of NGO than of NGDP).

[Figure: NGO as a percentage of NGDP]
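The ratio shown above is straightforward to rebuild from FRED. The sketch below assumes pandas_datareader is available; “GDP” is the standard FRED code for nominal GDP, while the identifier for the quarterly nominal gross output series is left as a placeholder to be filled in.

```python
# Sketch: rebuild the NGO/NGDP ratio from FRED. The gross output series ID is a placeholder.
import pandas_datareader.data as web

GROSS_OUTPUT_ID = "GROSS_OUTPUT_SERIES_ID"  # replace with the FRED code for nominal gross output

ngdp = web.DataReader("GDP", "fred", start="2005-01-01")         # nominal GDP, quarterly
ngo = web.DataReader(GROSS_OUTPUT_ID, "fred", start="2005-01-01")

ratio = (ngo.iloc[:, 0] / ngdp.iloc[:, 0]).dropna()
print(ratio.tail())  # a falling ratio means NGO is growing more slowly than NGDP
```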

So what is happening? Why is there such a difference? I am not one hundred percent sure about the causes of this difference. However, I am willing to contend that NGO fits better than NGDP with other indicators pointing to a tepid recovery. If we look at the labor force participation rate in the US, it continues to fall in a nearly mechanical manner. Fewer and fewer workers are at work (or looking for work) in comparison to the population that could be working, while investment is disappointing.

[Figure: US civilian labor force participation rate]

Maybe Scott Sumner is being overly optimistic. The new trend might simply be substantially lower than he believes.*

 

*Readers should note that I believe that monetary policy is, at present, too restrictive. However, I believe the culprit is not the Federal Reserve but financial regulations that restrict the circulation of “private money” (the other components of broad money found in divisia indices – see my blog post here). 

The minimum wage still bites

Politicians, pundits and activists have jumped on a new literature that asserts that there are no negative effects of substantial increases in the minimum wage on employment. Constantly, they cite this new literature as evidence that the “traditional” viewpoint is wrong. This is because they misunderstand (or misrepresent) the new literature.

What the new literature finds is that there may be no significant negative effects on employment. This is not the same as saying there are no negative effects overall. In fact, it is more proper to consider how businesses adjust to different-sized changes by using various means. Once the minimum wage is seen in this more nuanced light, the conclusion is that it still bites pretty hard.

The New Minimum Wage Literature

Broadly speaking, the new literature states that there are minimal employment losses following increases in the minimum wage. It was initiated twenty years ago by the work of Alan Krueger and David Card, who found that, in Pennsylvania and New Jersey, a change in the minimum wage had not led to losses in employment. This came as a considerable surprise to the academic community, and numerous papers have since found roughly similar conclusions.

These studies imply that the demand for labor was quite inelastic – inelastic enough to avoid large losses in employment. This is a contested conclusion. David Neumark and William Wascher are critical of the methods underlying these conclusions. Using different estimation methods, they found larger elasticities in line with the traditional viewpoint. They also pointed out that Card and Krueger’s initial study had several design flaws. With arguably better data, they reversed the initial Card and Krueger conclusion.

These criticisms notwithstanding, let us assume that the new minimum wage literature is broadly correct. Does that mean that the minimum wage is void of adverse consequences? The answer is a resounding no.

This is because of an important nuance that has been lost on many in the broader public. In a meta-analysis of 200 scholarly articles conducted by Belman and Wolfson, there are no statistically discernible effects of “moderate increases” on employment. The keyword here is “moderate” because the effects of increases in the minimum wage on employment may be non-linear. This means that while a 10% increase in the minimum wage would reduce teen employment by 1%, a 40% increase will reduce teen employment by more than 4%. A recent study by Jeremy Jackson and Aspen Gorry suggests as much: the larger the increase in the minimum wage, the larger the effects on employment.
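As a purely illustrative way of putting numbers on that non-linearity (the parameters below are invented, not estimates from Belman and Wolfson or Jackson and Gorry):

```python
# Invented convex response: the employment loss per percentage point of minimum wage
# increase rises with the size of the hike.

def teen_employment_loss(hike_pct):
    """Approximate teen employment loss (%) for a given minimum wage increase (%)."""
    elasticity = 0.1 + 0.001 * hike_pct  # made-up parameters
    return elasticity * hike_pct

print(teen_employment_loss(10))  # ~1% loss for a moderate 10% hike
print(teen_employment_loss(40))  # >4% loss (here 5.6%) for a 40% hike
```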

If labor costs increase moderately, reducing employment may be a relatively inefficient strategy. The increase in labor costs needs to reach a certain threshold before employers choose to fire workers. Below such a threshold, employers may use a wide array of mechanisms to adjust.

Adjustment channels

Employers in their respective markets face different constraints. This diversity of constraints means that there is no “unique” solution to greater labor costs. For example, if the demand for one’s products is quite inelastic, labor costs can be passed on to consumers through an increase in prices. While this may not necessarily hurt workers at the minimum wage, it impoverishes other workers who have fewer dollars left to spend elsewhere. This is still a negative outcome of the minimum wage – it’s just not a negative outcome on the variable of employment.

In other cases, employers might reduce employment indirectly by reducing hours of work. This is an easy solution for employers who cannot, for a small increase in labor costs, afford to fire a worker. Even Belman and Wolfson – who are sympathetic to the idea of increasing the minimum wage – concede that increases in the minimum wage do lead to moderate decreases in labor hours. More skeptical researchers, like Neumark and Wascher, find that the effect on hours worked is much larger. Again, the variable affected is not employment measured as the number of people holding a job. However, a reduction in the number of hours worked is clearly a perverse outcome.

Another effect is that employers might reduce the expenses associated with their workers. Even Card and Krueger, in their book on the minimum wage, recognize that employers may opt to cut things like discounted uniforms and free meals. An employer facing a 5% increase in the minimum wage will see his labor costs increase, but firing an employee means less production and lower revenues. Thus, firing may not be an option for such a small increase. However, cutting the expenses associated with that worker is an easy option. This means fewer fringe benefits and less on-the-job training. Employers adjust by altering the method of compensation. For example, economist Mindy Marks estimated that a $1 increase in the minimum wage reduced by 6.2% the probability that a worker would be offered health insurance. Again, employers adjust and the effects are not seen on employment. Nonetheless, these are undisputedly negative effects.

The effect may also be observed in the type of employment. Employers may decide to substitute some workers for other types of workers. Economist David Neumark pointed out that, subsumed in the statistical aggregate of the “labor force”, is a shift in its composition. In his article, written for the Employment Policies Institute, he stated that “less skilled teens are displaced from the job market, while more highly skilled teens are lured in by higher wages (even at the expense of cutbacks in their educational attainment)”. Another example could be that a higher minimum wage induces retired workers to return to the labor force. Employers, at the sight of a greater supply of experienced workers, prefer to hire these individuals and fire less-skilled workers. In such a case, “total employment” does not change, but the composition of employment is heavily changed. The negative effects are clear though: less-skilled workers are not allowed to acquire new skills through experience.

Conclusion

None of these adjustment mechanisms in response to “moderate increases in the minimum wage” are desirable. Yet, all of these channels would allow us to conclude that there are no effects on employment. To misconstrue the ability of employers to select multiple channels of adjustment other than reducing employment as proof that the minimum wage has no negative effects is perverse in the utmost. The statement that “moderate increases in the minimum wage have no statistically significant effects on employment” is merely a positive scientific statement with no normative implications whatsoever. If anything, the multiple adjustment mechanisms suggest that the minimum wage still hurts – and that is both a positive and a normative statement.

A Note on the Econometric Evaluation of Presidents

Sometimes, I feel that some authors simply evolve separately from all those who might be critical of their opinions. I feel that this hurts the discipline of economics since it is better to confront potentially discomforting opinions. And discomforting opinions are never found in intellectually homogeneous groups. However, a recent paper in the American Economic Review by Alan Blinder and Mark Watson suffers exactly from this issue.

Now, don’t get me wrong, the article is highly interesting and provides numerous factoids worth considering when debating economic policy and politics. Basically, the article considers the differences in economic performance under different presidents (and their party affiliation). Overall, it seems that Democrats have a slight edge – but in large part because of “luck” (roughly speaking).

However, nowhere in the list of references do we find a reference to the public choice literature. And it’s not as if that field had nothing to say. There are tons of papers on policy decisions and the form of government. In the AER paper, this can best be seen when Blinder and Watson ask whether it was Congress, instead of the president, that caused the differences in performance. That is a correct robustness check, but it is still a misspecification. There is a strong literature on “divided government” in the field of public choice.

In the case of the United States, this would be presidents and congresses (or even different chambers of Congress) of different party affiliations. Generally, government spending is found to grow much more slowly (even relative to GDP) when Congress and the White House are held by different parties. Why not extend that conclusion to economic growth? I would not be surprised if lagged values of divided government (mixed partisanship in t-1) had a positive effect on non-lagged growth rates (growth in t).
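As a sketch of what that extension could look like (the data file and variable names are hypothetical, and this is not Blinder and Watson’s specification):

```python
# Sketch: regress growth on a lagged divided-government dummy. Data file and columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("us_growth_partisanship.csv")          # annual observations
df["divided_lag1"] = df["divided_government"].shift(1)  # mixed partisanship in t-1

model = smf.ols("gdp_growth ~ divided_lag1 + president_democrat", data=df.dropna()).fit()
print(model.summary())  # the conjecture above is a positive coefficient on divided_lag1
```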

Now, this criticism is not sufficient to render the Blinder-Watson paper uninteresting. However, it shows that some points fall flat when two fields fail to connect. Public choice theory, in spite of the wide fame of James Buchanan (Nobel 1986), Gordon Tullock and affiliates (or intellectual offspring) like Elinor Ostrom (Nobel 2009), is still clearly unknown to some in the mainstream.

And that is a disappointment…

A depressing take on inequality

Recently, I reviewed Unequal Gains (Princeton University Press), which is basically the magnum opus of economic historians Peter Lindert and Jeffrey Williamson. In the pages of Essays in Economic and Business History, I survey the history of growth and inequality in the United States since 1700 that they portray in their book.

Coming out of their book, I could not help but feel depressed and, simultaneously, vindicated in my classical liberal outlook on the world. While they avoid the Pikettyesque tendency to create “general laws” of inequality, their results suggest that inequality has risen in spite of massive government intervention since the 1920s.

To be clear, Unequal Gains is probably the best book you can get on understanding the dynamics of inequality. Although I am biased in their favor since both authors have given me great help in my academic career, the book should overthrow Capital in the 21st century as the reference work on inequality. Throughout the book, they use normal economic theory to explain why inequality increased or decreased (discrimination, capital flows, immigration, changes in labor force participation, urbanization, relative factor scarcities, uneven supply shocks, changes in returns to human capital, regional income differences). They constantly eschew general laws. From the book, we should understand that inequality is context-specific. Like a recipe, different mixes of the ingredients of inequality will yield different dishes. This is the main strength of the book (plus the tons of data).

And this is also why it is depressing. The vast majority of inequality before 1910 in the United States would have been the result of market forces (immigration, urbanization, capital flows, relative factor scarcities, regional income differences) and not of governmental decisions. I believe that the pre-1910 level of inequality is appreciably overestimated and that, while not gigantic, government policies did have a non-negligible role in raising inequality. Nonetheless, most of these inequalities are hard to judge negatively. More immigrants from poor Italy may have depressed wages in the United States in 1900 and increased inequality (I do not agree with that claim, but people like G. Borjas of Harvard could make it), but the migration of Italians to America left no one worse off while improving the living standards of the migrants. Urbanization, as part of industrialization, is a hard process to fault and criticize. So, inequalities before 1910 are simply an issue of explaining their levels and trends.

After 1910, however, there is what Lindert and Williamson call the “great leveling”, an important decrease in inequality which ends in 1970. This is where I become depressed. In my paper, I highlighted that most of the fall in inequality between 1910 and 1970 occurs because of regional convergence, gender wage convergence and racial wage convergence. Between the 1910s and 1970s, differences in per capita state-level incomes narrowed dramatically (and they have since slightly widened). Between 1910 and 1970, thanks to the migration of blacks to the north, wages between whites and blacks grew closer together. Between 1910 and 1970, thanks to the arrival of household amenities like running water, appliances and electricity, women joined the labor force and the gender wage gap narrowed. None of these factors have anything to do with redistributive policy. Now, I am not claiming that redistributive policy had no impact on inequality measures (that would be empirically false). What I am claiming is that numerous forces were at play – some of which were related to non-governmental factors. Between 1910 and 1970, if one looks at ratios of government spending to GDP, there is a massive increase in the size of government. And yet, many factors of convergence had little to do with government.

[Figure: growth of government over the 20th century]

Since the 1970s, inequality has surged again – and this in spite of the fact that governments have grown larger in many respects. While spending at all levels seems to be either stable or growing, regulatory barriers like licensing regulations and rent-seeking arrangements in the form of corporate bailouts have multiplied. Thus, the rise of inequality occurs in spite of a very active state. Not only that, but I am working on papers with John Moore of Northwood University to study inequality from 1890 to 1940 because we believe that the level is overestimated and misunderstood and (by definition) that this affects the trendline of inequality in the 20th century. If inequality in the 1920s falls slightly, the U-shaped curve of inequality (very high before 1910, falling to 1970 and increasing thereafter) described by Piketty and others becomes a flatter upward-sloping curve (maybe more like a J-shaped curve). If John and I are correct (we are still crunching numbers and collecting data), inequality increased with state intervention.

And that is highly depressing. Now, I am a classical liberal who believes that state intervention should be limited. But it is not beyond me to recognize that when the state throws tons of money at something, it might get a few things the way it wants (a broken clock is still right twice a day). Thus, I expected some social programs to have an impact (and I still believe that, on a case-by-case basis, some social programs do reduce inequality), but I did not expect such a disappointing performance. One could even say a “depressing” performance.

Nonetheless, I would suggest to everyone to read Unequal Gains and throw out Capital in the 21st century. 

Note: To be clear, Lindert and Williamson are not making the claim I am making here. While their book is predominantly a “positive economics” work, they do propose some policy courses to reduce inequality and argue favorably for redistributive policy. This is merely my “positive take” on their book.

Why Rising Prices May Indicate Abundance

I am currently writing a piece with Pierre Desrochers (University of Toronto at Mississauga) regarding environmental trends and economic theory for the conference of the Association of Private Enterprise Education (see here). In the process of writing up the first draft of the article, I had to revisit another article I wrote (with Desrochers) and I found a passage which now offers me greater value than when I initially wrote it. In that piece, Desrochers and I basically argued that rising prices for certain environmental goods may not always indicate rising scarcity. In fact, we argued that prices could increase even if a resource grew in abundance. Here is the passage from our article, currently undergoing revise and resubmit:

Thirdly, technological innovations that increase productivity might drive up the price of a commodity without this truly reflecting the scarcity of the resource. Whale oil is a case in point. The decline of the whaling industry in the United States began around 1850 at which point real prices began to increase (Bardi 2007). However, economic historians agree that this was not because of resource depletion or overfishing (Davis, Gallman and Hutchins 1988). Brook Kaiser (2013) thus found that the increasing demand for illuminants created pressures on prices, which in turn motivated the development of substitutes like petroleum-derived kerosene. However, whale bone and oil prices did not fall as kerosene production expanded and, in spite of falling demand, prices stayed high and even increased. The answer to this conundrum is opportunity cost as the important surge in American labor productivity was greater than the observed increase in productivity in the whaling industry. This meant that the opportunity cost of using workers, capital and other resources in the whaling industry was great. These workers, capital goods and other resources were progressively reallocated to other industries. In the process, the whaling industry faced higher costs relative to productivity. While marginal players in the whaling industry exited, the supply of inputs to the whaling industry decreased and prices had to be increased [by the remaining firms in order for economy-wide equilibrium to be achieved]. Hence, prices in that situation are not reflective of depletion or expansion of resource stock.

 

On Scottish Free Banking (a Canadian Perspective)

Yesterday, George Selgin responded on Alt-M to a series of (relatively) recent papers that posit the impossibility of private money. While Selgin does criticize the theoretical reasoning of the papers, the majority of his case is based on the historical experience of private money – notably the Scottish experience with free banking.

I wanted to write something on this, but Selgin got there faster. Indeed, the historical evidence of free banking in Canada, Scotland, Sweden and the limited experiences observed in France and elsewhere provides strong backing for the soundness of private money. Selgin is right to emphasize this.

However, I can provide a small piece of evidence to support his case. It is not only scholars like Selgin who believe that the historical experience of Scotland was positive. As far back as 1835 and as far away as Canada, the robustness of the Scottish free banking experience was lauded. Consider the following quote from a report to the House of Assembly of Upper Canada (modern day Ontario):

“In Scotland, private banking has long existed and fewer failures have occurred there than in any other part of the world; their Joint Stock Banking Companies embrace some of the following principles by which the public are quite secured and the institutions useful as Banks of Deposit and circulation, while the stock is above par, and proved to be a good investment”

This report was actually presented in Canada arguing that Scottish free banking was a solution to a longstanding problem in the colony: the dearth of small denominations. The “big problem of small change” was a real issue in the colony and created important frictions. The problem was most likely created by the fixing of exchange rates between the different currencies at levels dissonant with their actual values, so that “bad money drove out good money” (see Angela Redish’s work). The report recommended legislative action to encourage the formation of banks that would issue private notes to solve this problem. Newspapers in the neighboring colony of Lower Canada also praised (in the early 1830s) the role that banks played in easing the problem of “poor money”.

I have made an initial foray into this topic with Mathieu Bédard of the Aix-Marseille School of Economics (and we plan to make a few more), and we showed that the role of free banking in improving economic growth was considerable precisely because of the issue of private money. While Canada is a small case, it provides some additional support to the claim that private money can indeed exist, survive and be superior to state money.

Source: House of Assembly of Upper Canada. 1835. Report of the Select Committee to which was referred the subject of The Currency. Toronto: M. Reynolds, Printer.

P.S. Below there is a picture of a half-penny issued by the Quebec Bank in 1837 showing that there was even private coinage in Canada.

[Image: half-penny token issued by the Quebec Bank in 1837]