Economists, Economic History, and Theory

We can all come up with cringeworthy clichés for why history matters to society at large – as well as to policy-makers and, perhaps more infuriatingly, to hubris-prone economists.

And we could add the opposite position, where historical analysis is altogether irrelevant to our current ills, where This Time Is Completely Different, and where we naively disregard all that came before us.

My pushback to these positions is taken right out of Cameron & Neal’s A Concise Economic History of The World and is one of my most cherished intellectual guidelines. The warning appears early (p. 4) and mercilessly:

those who are ignorant of the past are not qualified to generalize about it.

We can also point to some more substantive reasons for why history matters to the present:

  • Discontinuities: by studying longer time periods, in many different settings, we get more used to – and more comfortable with – the fact that institutions, routines, traditions and technologies that we take for granted may change. And do change. Sometimes slowly, sometimes rapidly.
  • Selection: in combination with emphasizing history to understand the path dependence of development, delving into economic history ought to strengthen our appreciation for chance and randomness. The history we observed was but one outcome of many that could have happened. The point is neatly captured in an obscure article by one of last year’s Nobel Prize laureates, Paul Romer: “the world as we know it is the result of a long string of chance outcomes.” Appropriately limiting this appreciation for randomness is Matt Ridley’s rejection of the Great Man Theory: a lot of historical innovations seem to have been inevitable (when Edison invented the light bulb, he had some two dozen rivals doing so independently).
  • Check On Hubris: history gives us ample examples of events similar to what we’re experiencing or contemplating in the present. As my Glasgow and Oxford professor Catherine Schenk once remarked at a conference I organized: “if this policy didn’t work in the past, what makes you think it’ll work this time?”

History isn’t only a check on policy-makers, but on ivory-tower economists as well. Browsing through Mattias Blum & Chris Colvin’s An Economist’s Guide to Economic History – published last year and making some waves since – I’m starting to see why this book is quickly becoming compulsory reading for economists. Describing the book, Colvin writes:

Economics is only as good as its ability to explain the economy. And the economy can only be understood by using economic theory to think about causal connections and underlying social processes. But theory that is untested is bunk. Economic history provides one way to test theory; it forms essential material to making good economic theory.

Fellow Notewriter Vincent Geloso, who has contributed a chapter to the book, described the task of the economic historian in similar terms:

Once the question is asked, the economic historian tries to answer which theory is relevant to the question asked; essentially, the economic historian is secular with respect to theory. The purpose of economic history is thus to find which theories matter the most to a question.

[and which theory] square[s] better with the observed facts.

Using history to debunk commonly held beliefs is a wonderful check on all kinds of hubris and one of my favorite pastimes. Its purpose is not merely to treat history as a laboratory for hypothesis testing, but to illustrate that multitudes of institutional settings may render moot certain relationships that we otherwise take for granted.

Delving down into the world of money and central banks, let me add two more observations supporting my Econ History case.

One chapter in Blum & Colvin’s book, ‘Money and Central Banking’, is written by Prof. John Turner at Queen’s in Belfast (whose writings – full disclosure – have had great influence on my own thinking). Turner argues that focusing on past monetary disasters, and on the relationship between the sovereign and the banking system, is crucial for economists:

We therefore have a responsibility to ensure that the next generation of economists has a “lest we forget” mentality towards the carnage that can be afflicted upon an economy as a result of monetary disorder. (p. 69)

This squares nicely with another brief article that I stumbled across today, by banking historian and LSE Emeritus Professor Charles Goodhart. Lamentably – or perhaps it ought to be celebrated – Goodhart notes that no monetary regime lasts forever, as central banks have for centuries, almost haphazardly, developed their various functions. The history of central banking, Goodhart notes,

can be divided into periods of consensus about the roles and functions of Central Banks, interspersed with periods of uncertainty, often following a crisis, during which Central Banks (CBs) are searching for a new consensus.

He sketches the pendulum between consensus and uncertainty…

…and suddenly the Great Monetary Experiment of today’s central banks seems much less novel!

Whatever happens to follow our current monetary regimes (and Inflation Targeting is due for an update), the student of economic history is superbly situated to make sense of it.

Evidence-based policy needs theory

This imaginary scenario is based on an example from my paper with Baljinder Virk, Stella Mascarenhas-Keyes and Nancy Cartwright: ‘Randomized Controlled Trials: How Can We Know “What Works”?’ 

A research group of practically-minded military engineers are trying to work out how to effectively destroy enemy fortifications with a cannon. They are going to be operating in the field in varied circumstances so they want an approach that has as much general validity as possible. They understand the basic premise of pointing and firing the cannon in the direction of the fortifications. But they find that the cannon ball often fails to hit their targets. They have some idea that varying the vertical angle of the cannon seems to make a difference. So they decide to test fire the cannon in many different cases.

As rigorous empiricists, the research group runs many trial shots with the cannon raised, and also many control shots with the cannon in its ‘treatment as usual’ lower position. They find that raising the cannon often matters. In several of these trials, they find that raising the cannon produces a statistically significant increase in the number of balls that destroy the fortifications. Occasionally, they find the opposite: the control balls perform better than the treatment balls. Sometimes they find that both groups work, or don’t work, about the same. The results are inconsistent, but on average they find that raised cannons hit fortifications a little more often.

A physicist approaches the research group and explains that rather than just trying to vary the height the cannon is pointed in various contexts, she can estimate much more precisely where the cannon should be aimed using the principle of compound motion with some adjustment for wind and air resistance. All the research group need to do is specify the distance to the target and she can produce a trajectory that will hit it. The problem with the physicist’s explanation is that it includes reference to abstract concepts like parabolas, and trigonometric functions like sine and cosine. The research group want to know what works. Her theory does not say whether you should raise or lower the angle of the cannon as a matter of policy. The actual decision depends on the context. They want an answer about what to do, and they would prefer not to get caught up testing physics theories about ultimately unobservable entities while discovering the answer.
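The physicist’s offer is not mysterious. As a minimal sketch (with a hypothetical muzzle velocity, and ignoring the wind and air-resistance adjustments she mentions): in a vacuum, the range of a projectile fired at speed v and elevation θ is R = v²·sin(2θ)/g, so the required elevation follows directly from the distance to the target.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def elevation_angle(distance_m, muzzle_velocity_ms):
    """Elevation angle (degrees) needed to hit a target at distance_m,
    from the vacuum range equation R = v^2 * sin(2*theta) / g.
    Ignores the wind and air-resistance corrections the physicist adds."""
    x = G * distance_m / muzzle_velocity_ms ** 2
    if x > 1:
        raise ValueError("target out of range at this muzzle velocity")
    return math.degrees(0.5 * math.asin(x))

# Hypothetical cannon: 100 m/s muzzle velocity, fortification 500 m away.
angle = elevation_angle(500, 100)  # ~14.7 degrees
```

No amount of trial-and-error firing produces this mapping from distance to angle; the theory supplies it in one line.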

Eventually the research group write up their findings, concluding that firing the cannon pointed at a higher angle can be an effective ‘intervention’ but that whether it is or not depends a great deal on particular contexts. So they suggest that artillery officers will have to bear that in mind when trying to knock down fortifications in the field; but that they should definitely consider raising the cannon if they aren’t hitting the target. In the appendix, they mention the controversial theory of compound motion as a possible explanation for the wide variation in the treatment effect that should, perhaps, be explored in future studies.

This is an uncharitable caricature of contemporary evidence-based policy (for a more aggressive one see ‘Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials’). Mechanics has well-understood, repeatedly confirmed theories that command consensus among scientists and engineers. The military have no problem learning and applying this theory. Social policy, by contrast, has no theories that come close to that level of consistency. Given the lack of theoretical consensus, it might seem more reasonable to test out practical interventions instead and try to generalize from empirical discoveries. The point of this example is that without theory, empirical researchers struggle to make serious progress even with comparatively simple problems. The fact that theorizing is difficult or controversial in a particular domain does not make it any less essential a part of the research enterprise.

***

Also relevant: Dylan Wiliam’s quip from this video (around 9:25): ‘a statistician knows that someone with one foot in a bucket of freezing water and the other foot in a bucket of boiling water is not, on average, comfortable.’

Pete Boettke’s discussion of economic theory as an essential lens through which one looks to make the world clearer.

The minimum-wage-induced spur of technological innovation ought not be praised

In a recent article at Reason.com, Christian Britschgi argues that “Government-mandated price hikes do a lot of things. Spurring technological innovation is not one of them”. This is in response to the self-serve kiosks in fast-food restaurants that seem to have appeared everywhere following increases in the minimum wage.

In essence, his argument is that minimum wages do not induce technological innovation. That is an empirical question. I am willing to consider that this is not the most significant of adjustment margins to large changes in the minimum wage. The work of Andrew Seltzer on the minimum wage during the Great Depression in the United States suggests that, at the very least, it ought not be discarded. Britschgi does not provide such evidence; he merely cites anecdotal pieces of support. Not that anecdotes are bad, but those cited come from the kiosk industry – hardly a neutral source.

That being said, this is not what makes me contest the article. It is the implicit presupposition contained within it: that technological innovation is good.

No, technological innovation is not necessarily good. Firms can use two inputs (capital and labor) and, given prices and return rates, there is an optimal allocation of both. If you change the relative prices of each, you change the optimal allocation. However, absent the regulated price change, the production decisions are optimal. With the regulated price change, the production decisions are the best available under the constraint of working within a suboptimal framework. Thus, you are inducing a rate of technological innovation which is too fast relative to the optimal rate.
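The re-optimization logic can be sketched with a standard textbook production function (Cobb-Douglas, with purely hypothetical numbers – nothing estimated from the kiosk case). Cost minimization implies an optimal capital-labor ratio K/L = ((1−α)/α)·(w/r); raise the wage w by regulation and the firm substitutes toward capital, but its total cost still rises.

```python
def cost_minimizing_mix(w, r, alpha=0.7, q=100.0):
    """Cost-minimizing labor L and capital K for a Cobb-Douglas
    technology q = L^alpha * K^(1-alpha), given wage w and rental rate r.
    From MPL/MPK = w/r, the optimal ratio is K/L = (1-alpha)/alpha * w/r."""
    k_over_l = (1 - alpha) / alpha * w / r
    L = q / k_over_l ** (1 - alpha)  # since q = L * (K/L)^(1-alpha)
    K = k_over_l * L
    return L, K, w * L + r * K

# Hypothetical numbers: a 15% regulated wage hike.
L0, K0, c0 = cost_minimizing_mix(w=10.0, r=5.0)
L1, K1, c1 = cost_minimizing_mix(w=11.5, r=5.0)
# The firm substitutes toward capital (K1 > K0, L1 < L0), yet total
# cost still rises (c1 > c0): the new mix is optimal only relative
# to the distorted prices.
```

That is precisely the sense in which the induced mechanization is “too fast”: it is the best response to a constraint that made the firm worse off, not a free improvement.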

You may think that this is a little luddite of me to say, but it is not. It is a complement to the idea that there is “skill-biased” technological change (see notably this article by Daron Acemoglu and this one by Bekman et al.). If the regulated wage change affects a particular segment of the labor force (say the unskilled portion – e.g. those working in fast food restaurants), it changes the optimal quantity of that labor to hire. Sure, it bumps up demand for certain types of workers (e.g. machine designers and repairmen), but it is still suboptimal. One should not presuppose that technological change is, ipso facto, good. What matters is the “optimal” rate of change. In this case, one can argue that the minimum wage (if pushed up too high) induces a rate of technological change that is too fast and works against unskilled workers.

As such, yes, the artificial spurring of technological change should not be deemed desirable!

On “strawmanning” some people and inequality

For some years now, I have been interested in the topic of inequality. One of the angles that I have pursued is a purely empirical one in which I attempt to improve measurements. This angle has yielded two papers (one of which is still in progress while the other is still in want of a home) that reconsider the shape of the U-curve of income inequality in the United States since circa 1900.

The other angle that I have pursued is more theoretical and is an offshoot of the work of Gordon Tullock on income redistribution. That line of research makes a simple point: there are some inequalities that are, in normative terms, worrisome while others are not. The income inequality stemming from the career choices of a Benedictine monk and a hedge fund banker is not worrisome. The income inequality stemming from being a prisoner of one’s birth or from rent-seekers shaping rules in their favor is worrisome. Moreover, some interventions meant to remedy inequalities might actually make things worse in the long run (some articles even find that taxing income for the sake of redistribution may increase inequality if certain conditions are present – see here). I have two articles on this (one forthcoming, the other already published) and a paper still in progress (with Rosolino Candela), but they are merely an extension of the aforementioned work of Gordon Tullock and of other economists like Randall Holcombe, William Watson and Vito Tanzi. After all, the point that a “first, do no harm” policy on inequality might be more productive is not novel (all that it needs is a deep exploration and a robust exposition).

Notice that there is an implicit assumption in this line of research: inequality is a topic worth studying. This is why I am annoyed by statements like those that Gabriel Zucman made to ProMarket. When asked if he was getting pushback for his research on inequality (which is novel and very important), Zucman answers the following:

Of course, yes. I get pushback, let’s say not as much on the substance oftentimes as on the approach. Some people in economics feel that economics should be only about efficiency, and that talking about distributional issues and inequality is not what economists should be doing, that it’s something that politicians should be doing.

This is “strawmanning”. There is no economist who thinks inequality is not a worthwhile topic. Literally none. True, economists’ interest in the topic may have waned for some years, but it never became a secondary topic. Major articles were published in major journals throughout the 1990s (which is often identified as a low point in the literature) – most of them groundbreaking enough to propel the topic forward a mere decade later. This should not be surprising given the heavy ideological and normative ramifications of studying inequality. The topic is so important to all social sciences that no one disregards it. As such, who are these “some people” that Zucman alludes to?

I assume that “some people” are strawman substitutes for those who, while agreeing that inequality is an important topic, disagree with the policy prescriptions and the normative implications that Zucman draws from his work. The group most “hostile” to the arguments of Zucman (and others such as Piketty, Saez, Atkinson and Stiglitz) is the one that stems from the public choice tradition. Yet, economists in the public choice tradition probably give distributional issues a more central role in their research than Zucman does. They care about institutional arrangements and the rules of the game in determining outcomes. The very concept of rent-seeking, so essential to public choice theory, relates to how distributional coalitions can emerge to shape the rules of the game in a way that redistributes wealth from X to Y in socially counterproductive ways. As such, rent-seeking is essentially a concept that relates to distributional issues in a way that is intimately related to efficiency.

The argument by Zucman to bolster his own claim is one of the reasons why I am cynical about the times we live in. It denotes a certain tribalism that demonizes the “other side” in order to avoid engaging with it. That tribalism, I believe (but I may be wrong), is more prevalent than in the not-so-distant past. Strawmanning only makes the problem worse.

Low-Quality Publications and Academic Competition

In the last few days, the economics blogosphere (and twitterverse) has been discussing this paper in the Journal of Economic Psychology. Simply put, the article argues that economists discount “bad journals”, so that a researcher with ten articles in low-ranked and mid-ranked journals will be valued less than a researcher with two or three articles in highly-ranked journals.

Some economists, see notably Jared Rubin here, made insightful comments about this article. However, there is one comment by Trevon Logan that gives me a chance to make a point that I have been mulling over for some time. As I do not want to paraphrase Trevon, here is the part of his comment that interests me:

many of us (note: I assume he refers to economists) simply do not read and therefore outsource our scholarly opinions of others to editors and referees who are an extraordinarily homogeneous and biased bunch

There are two interrelated components to this comment. The first is that economists tend to avoid reading about minute details. The second is that economists tend to delegate this task to gatekeepers of knowledge – in this case, the editors of top journals. Why do economists act as such? More precisely, what are the incentives to act as such? After all, as Adam Smith once remarked, the professors at Edinburgh and Oxford were of equal skill, but the former produced the best seminars in Europe because their incomes depended on registrations and tuition while the latter relied on long-established endowments. Same skills, different incentives, different outcomes.

My answer is this: the competition that existed in the field of economics in the 1960s-1980s has disappeared. In “those” days, the top universities such as Princeton, Harvard, MIT and Yale were a more or less homogeneous group in terms of their core economics. Let’s call those the “incumbents”. They faced strong contests from UCLA, Chicago, Virginia and Minnesota. These challengers attacked the core principles of what was seen as the orthodoxy in antitrust (see the works of Harold Demsetz, Armen Alchian, Henry Manne), macroeconomics (the Lucas critique, the islands model, New Classical Economics), political economy (see the works of James Buchanan, Gordon Tullock, Elinor Ostrom, Albert Breton, Charles Plott) and microeconomics (Ronald Coase). These challenges forced the discipline to incorporate many of the insights into the literature. The best example would be the New Keynesian synthesis formulated by Mankiw in response to the works of people like Ed Prescott and Robert Lucas. In those days, “top” economists had to respond to articles published in “lower-ranked” journals such as Economic Inquiry, the Journal of Law and Economics and Public Choice (all of which rose because they were bringing competition – consider that Ronald Coase published most of his great pieces in the JL&E).

In that game, economists were checking one another and imposing discipline upon each other. More importantly, to paraphrase Gordon Tullock in his Organization of Inquiry, their curiosity was subjected to social guidance generated from within the community:

He (the economist) is normally interested in the approval of his peers and hence will usually consciously shape his research into a project which will pique other scientists’ curiosity as well as his own.

Is there such a game today? If in 1980 one could easily answer “Chicago” to the question “which economics department challenges Harvard in terms of research questions and answers”, things are not so clear today. As research needs to happen within a network where the marginal benefits may increase with size (up to a point), where are the competing networks in economics?

And here is my point: absent this competition (well, not quite absent – it is more precise to speak of weaker competition), there is no incentive to read, to mine other fields for insights, or to accept challenges. It is far more reasonable, in such a case, to divest oneself of the burden and delegate the task to editors. This only reinforces the problem, as the gatekeepers get to limit the chances of a viable competing network emerging.

So, when Trevon (rightfully) bemoans the situation, I answer that maybe it is time we consider why we act this way: the incentives have numbed our critical minds.

Lunchtime Links

  1. Interview with a secessionist
  2. Ducking questions about capitalism
  3. The perverse seductiveness of Fernando Pessoa
  4. “Yet in this simple task, a doffer in the USA doffed 6 times as much per hour as an adult Indian doffer.”
  5. Conflicted thoughts on women in medicine
  6. The Devil You Know vs The Market For Lemons (car problems)

On Financial Repression and ♀ Labor Supply after 1945

I just came back from the Economic History Association meeting in San Jose. There are so many papers worth mentioning (and many have got my brain going, see notably the work of Nuno Palma on monetary neutrality after the “discovery” of the New World). However, the thing that really had me thinking was the panel featuring Barry Eichengreen and Carmen Reinhart (whose remarks were an early echo of the keynote speech by Michael Bordo).

Here’s why: Barry Eichengreen seemed uncomfortable with the current state of affairs in financial regulation and pointed out that the postwar period was marked by rapid growth and strong financial regulation. Then, Reinhart and Bordo emphasized the role of financial repression in depressing growth – notably in the very period praised by Eichengreen. I have priors that make me more favorable to the Reinhart-Bordo position, but I can’t really deny the point made by Eichengreen.

This had me thinking for some time during and after the talks. Both positions are hard to contest but they are mutually exclusive. True, it is possible that growth was strong in spite of financial repression, but some can argue that by creating some stability, regulations actually improved growth in a way that surpassed the negative effects caused by repression. But, could there be another explanation?

Elsewhere on this blog, I have pointed out that I am not convinced that the Thirty Glorious were that “Glorious”. In line with my Unified Growth Theory inclinations (don’t put me in that camp, but don’t exclude me either – I am still cautious on this), I believe that we need to account for demographic factors that foil long-term comparisons. For example, in a paper on Canadian economic growth, I pointed out that growth from 1870 to today is much more modest once we divide output by household-size population rather than overall population (see the blog post here that highlights my paper). Later, I laid out the ideas behind another paper (which I am still writing and for which I need more data, notably to replicate something like this paper) regarding the role of the unmeasured household economy. There, I argued that the shift of women from the household to the market over-measures the actual increase in output. After all, to arrive at the net value of increased labor force participation, one must deduct the value of foregone output in the household – something we know little about in spite of the work of people like Valerie Ramey.
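That deduction is simple accounting, sketched here with purely hypothetical numbers (nothing here is estimated from any actual national accounts):

```python
def net_participation_gain(market_output_gain, household_output_foregone):
    """Net value of shifting work from the household to the market:
    the measured market gain minus the unmeasured household output given up.
    All figures are hypothetical index points, for illustration only."""
    return market_output_gain - household_output_foregone

# Suppose measured market output rises by 100 index points as workers
# shift to the market, but the household production given up was worth 40.
measured_gain = 100.0
net_gain = net_participation_gain(measured_gain, 40.0)
# Measured growth then overstates the net gain by measured/net - 1,
# i.e. roughly two-thirds in this made-up example.
overstatement = measured_gain / net_gain - 1
```

The empirical difficulty, of course, is that the second argument – the value of foregone household output – is exactly the quantity we know little about.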

Both these factors suggest the need for corrections based on demographic changes to better reflect actual living standards. These demographic changes were most pronounced in the 1945-1975 era – the era of rapid growth highlighted by Eichengreen and of financial repression highlighted by Reinhart and Bordo. If these changes were most momentous in that period, it is fair to say that the measurement errors they induce are also largest in that era.

So, simply put, could it be that these were not years of rapid growth but of modest growth that were overestimated?  If so, that would put the clash of ideas between Bordo-Reinhart and Eichengreen in a different light – albeit one more favorable to the former than the latter.

But heh, this is me speculating about where research could be oriented to guide some deeply relevant policy questions.

Minimum Wages: Where to Look for Evidence (A Reply)

Yesterday, here at Notes on Liberty, Nicolas Cachanosky blogged about the minimum wage. His point was fairly simple: criticisms against certain research designs that use limited samples can be economically irrelevant.

To put you in context, he was blogging about one of the criticisms made of the Seattle minimum wage study produced by researchers at the University of Washington, namely that the sample was limited to “small” employers. This criticism, Nicolas argues, is irrelevant since the researchers were looking for those likely to be the most heavily affected by the minimum wage increase, as the effects will be heavily concentrated among the least efficient firms. In other words, what is the point of looking at Costco or Walmart, which are more likely to survive than Uncle Joe’s store? That is Nicolas’s point in defense of the study.

I disagree with Nicolas here, and this is because I agree with him (I know, it sounds confusing, but bear with me).

The reason is simple: firms react differently to the same shock. Costs are costs, productivity is productivity, but the constraints are never exactly the same. For example, if I am a small employer and the minimum wage is increased 15%, why would I fire one of my two employees to adjust? If that were my reaction to the minimum wage, I would sacrifice 33% of my output for a 15% increase in wages, which compose the majority, but not the totality, of my costs. Using that margin of adjustment would make no sense for me given the constraint of my firm’s size. I might be more tempted to cut hours, cut benefits, cut quality, substitute between workers, or raise prices (depending on the elasticity of the demand for my services). However, if I am a large firm of 10,000 employees, sacking one worker is an easy margin to adjust on, since I am not constrained as much as the small firm. In that situation, a large firm might be tempted to adjust on that margin rather than cut quality or raise prices. Basically, firms respond to higher labor costs (not accompanied by greater productivity) in different ways.
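The back-of-the-envelope comparison can be sketched crudely (assuming, for illustration, that each worker contributes equally to output; the 33% figure squares with counting the owner as a third worker in the small shop):

```python
def output_share_lost(workers_fired, total_workers):
    """Fraction of output lost from firing workers, crudely assuming
    every worker contributes equally to output."""
    return workers_fired / total_workers

# Small shop: owner plus two employees (three workers in total), so
# firing one employee sacrifices a third of output to save 15% on wages.
small_loss = output_share_lost(1, 3)       # ~0.33
# Large firm: shedding one of 10,000 workers barely dents output.
large_loss = output_share_lost(1, 10_000)  # 0.0001
```

The asymmetry is the whole point: the same wage shock pushes the small shop toward hours, benefits, and quality margins, and the large firm toward headcount.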

By concentrating on small firms, the authors of the Seattle study were concentrating on a group that had, probably, a more homogeneous set of constraints and responses. In their case, they were looking at hours worked. Had they blended in the larger firms, they would have muddied the picture: those firms are less likely to adjust by compressing hours and more likely to adjust by compressing the workforce.

This is why the UW study is so interesting in terms of research design: it focused like a laser on one adjustment channel in the group most likely to respond in that manner. If one reads that paper attentively, it is clear that this is the aim of the authors – to better document this element of the minimum wage literature. If one seeks to measure the costs of the policy exhaustively, one would need a much wider research design to reflect the wide array of adjustments available to employers (and workers).

In short, Nicolas is right that research designs matter, but he is wrong in that the criticism of the UW study is really an instance of pro-minimum-wage-hike pundits putting the hockey puck in their own net!

On doing economic history

I admit to being a happy man. While I am in general a smiling sort of fellow, I was delightfully giggling with joy upon hearing that another economic historian (and a fellow Canadian from the LSE to boot), Dave Donaldson, won the John Bates Clark medal. I dare say it was about time. Nonetheless, I think it is time to talk to economists about how to do economic history (and why more should do it). Basically, I argue that the necessities of the trade require a longer period of maturation and a considerable amount of hard work. Yet, once the economic historian arrives at maturity, he produces long-lasting research which (in the words of Douglass North) uses history to bring theory to life.

Economic History is the Application of all Fields of Economics

Economics is a deductive science through which axiomatic statements about human behavior are derived. For example, stating that the demand curve is downward-sloping is an axiomatic statement. No economist ever needed to measure quantities and prices to say that if the price increases, all else being equal, the quantity will drop. As such, economic theory needs to be internally consistent (i.e. not argue that higher prices mean both smaller and greater quantities of goods consumed all else being equal).

However, the application of these axiomatic statements depends largely on the question asked. For example, I am currently doing work on the 19th century Canadian institution of seigneurial tenure. In that work, I question the role that seigneurial tenure played in hindering economic development. In the existing literature, the general argument is that the seigneurs (i.e. the landlords) hindered development by taxing (as per their legal rights) a large share of net agricultural output. This prevented the accumulation of savings which – in times of imperfect capital markets – were needed to finance investments in capital-intensive agriculture. That literature invoked one corpus of axiomatic statements, those relating to capital theory. For my part, I argue that the system – because of a series of monopoly rights – was actually a monopsony system through which the landlords restrained their demand for labor on the non-farm labor market and depressed wages. My argument invokes the corpus of axioms related to industrial organization and monopsony theory. Both explanations are internally consistent (there are no self-contradictions). Yet, one must be more relevant to the question of whether or not the institution hindered growth, and one must square better with the observed facts.
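The monopsony mechanism I invoke follows the textbook mechanics, sketched here with hypothetical numbers (this is the standard linear-labor-supply model, not anything estimated from the seigneurial data): facing supply w = a + bL, a monopsonist equates the marginal revenue product with the marginal cost of labor, a + 2bL, hiring fewer workers at a lower wage than a competitive market would.

```python
def monopsony_vs_competitive(mrp=20.0, a=2.0, b=0.1):
    """Textbook monopsony sketch with linear labor supply w = a + b*L and a
    constant marginal revenue product `mrp` (all numbers hypothetical).
    Competitive market: mrp = w. Monopsonist: mrp = a + 2*b*L."""
    L_comp = (mrp - a) / b
    w_comp = a + b * L_comp            # equals mrp
    L_mono = (mrp - a) / (2 * b)       # half the competitive employment
    w_mono = a + b * L_mono            # below the competitive wage
    return (L_comp, w_comp), (L_mono, w_mono)

competitive, monopsony = monopsony_vs_competitive()
# The monopsonist hires fewer workers and pays them less.
```

Both the capital-theory story and this one are internally consistent; only the observed facts can decide which squares better with the question.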

And that is economic history properly done. It tries to answer which theory is relevant to the question asked. The purpose of economic history is thus to find which theories matter the most.

Take the case, again, of asymmetric information. The seminal work of Akerlof on the market for lemons made a consistent theory, but subsequent waves of research (notably my favorite here by Eric Bond) have shown that the stylized predictions of this theory rarely materialize. Why? Because the theory of signaling suggests that individuals will find ways to invest in a “signal” to solve the problem. These are two competing theories (signaling versus asymmetric information) and one seems to win over the other. An economic historian tries to sort out what mattered to a particular event.

Now, take these last few paragraphs and drop the words “economic historian” and replace them with “economist”. I believe that no economist would disagree with the definition of the tasks of the economist that I offered. So why would an economic historian be different? Everything that has happened is history, and every question with regard to it must be answered by sifting for the theories that are relevant to the event studied (under the constraint that the theory be internally consistent). Every economist is an economic historian.

As such, the economic historian/economist must use advanced econometric tools: synthetic controls, instrumental variables, proper identification strategies, vector auto-regressions, cointegration, variance analysis and everything else you can think of. He needs these tools to answer the questions he asks. The only difference is that the economic historian looks further back in the past.

The problem with this systematic approach is the effort required of practitioners. One needs to understand – intuitively – a wide body of literature on price theory, statistical theory and tools, accounting (for understanding national accounts) and political economy. This takes many years of training, and I can take my own case as an example. I force myself to read one scientific article outside my main fields of interest every week in order to build a mental repository of theoretical insights I can exploit. Since I entered university in 2006, I have been forcing myself to read theoretical books that were on the margin of my comfort zone. For example, University Economics by Alchian and Allen was one of my favorite discoveries, as it introduced me to the UCLA approach to price theory. It changed my way of understanding firms and the decisions they make. Then I read some works on Keynesian theory (I will confess that I have never been able to finish the General Theory), which made me more respectful of some core insights of that body of literature. In the process of reading these, I created lists of theoretical key points the way one would accumulate kitchen equipment.

This takes a lot of time, patience and modesty towards one’s accumulated stock of knowledge. But these theories never meant anything to me without application to deeper questions. After all, debating the theory of price stickiness without asking whether it actually mattered is akin to debating with theologians about the gender of angels (I vote that they are angels and, since they are fictitious, I don’t give a flying hoot’nanny). This is because I really buy into the claim made by Douglass North that theory is brought to life by history (and that history is explained by theory).

On the Practice of Economic History

So, how do we practice economic history? The first thing is to find questions that matter.  The second is to invest time in collecting inputs for production.

While accumulating theoretical insights, I also made lists of historical questions that were still debated. Basically, I have been keeping lists of research questions since I was an undergraduate student (not kidding here), and everything stays on the list until I am satisfied with my answer and/or the subject has been convincingly resolved.

One of my criteria for selecting a question is that it must relate to an issue that is relevant to understanding why certain societies are where they are now. For example, I have been delving into the issue of the agricultural crisis in Canada during the early decades of the 19th century. Why? Because most historians attribute (wrongly in my opinion) a key role to this crisis in the creation of the Canadian confederation, the migration of French-Canadians to the United States and the politics of Canada until today. Another debate I have been involved in relates to the Quiet Revolution in Québec (see my book here), which is argued to be a watershed moment in the history of the province. According to many, it marked a breaking point at which Quebec caught up dramatically with the rest of Canada (I disagreed and proposed that it actually slowed down a rapid convergence already underway in the decade and a half that preceded it). I picked the question because the moment is central to all political narratives presently existing in Quebec, and every politician utters the words “Quiet Revolution” when given the chance.

Both cases mattered to understanding what Canada was and what it has become. I used theory to sort out what mattered and what did not. In doing so, I used theory to explain history and, in the process, brought theory to life in a way that was relevant to readers (I hope). The key point is to use theory and history together to bring both to life! That is the craft of the economic historian.

The other difficulty for the economic historian (on top of selecting questions and understanding the theories that may be relevant) is the time-consuming nature of data collection. Economic historians are basically monks (and in my case, I have both the shape and the haircut of Friar Tuck) who patiently collect and assemble new data for research. This is a high fixed cost of entering the trade. In my case, I spent two years in a religious congregation (literally with religious officials) collecting prices, wages, piece rates and farm data to create a wide empirical portrait of the Canadian economy. It was a long and arduous process.

However, thanks to the lists of questions I had assembled by reading theory and history, I saw the many lines of research I could generate by assembling data. Armed with some knowledge of what I could do, the data I collected suggested still other questions I could ask. Once I had finished my data collection (18 months), I had assembled a roadmap of twenty-something papers answering a wide array of questions on Canadian economic history: was there an agricultural crisis; were French-Canadians the inefficient farmers they were portrayed to be; why did the British tolerate Catholic and French institutions when they conquered French Canada; did seigneurial tenure explain the poverty of French Canada; did the conquest of Canada matter to future growth; what was the role of free banking in stimulating growth in Canada; etc.

It is necessary for the economic historian to collect a ton of data and assemble a large base of theoretical knowledge to guide the data towards relevant questions. For those reasons, the economic historian takes longer to mature. It simply takes more time. Yet once the maturation is over (mine, I feel, is far from over, to be honest), you get scholars like Joel Mokyr, Deirdre McCloskey, Robert Fogel, Douglass North, Barry Weingast, Sheilagh Ogilvie and Ronald Coase (yes, I consider Coase to be an economic historian, but that is for another post) who are able to produce work on a wide-ranging set of topics with great depth and understanding.

Conclusion

The craft of the economic historian is one that requires a long period of apprenticeship (there is an inside joke here, sorry about that). It requires heavy investment in theoretical understanding beyond one’s main field of interest, which must be complemented with a diligent accumulation of potential research questions to guide the effort of data collection. Yet, in the end, it generates research that is likely to resonate with the wider public and to impact our understanding of theory. History brings theory to life indeed!

On Evonomics, Spelling and Basic Economic Concepts

I am a big fan of exploring economic ideas in greater depth rather than remaining on the surface of the knowledge I accumulated through my studies. As such, I am always happy when I see people trying to promote “alternatives” within the field of economics (e.g. neuroeconomics, behavioral economics, economic history, evolutionary economics, feminist economics, etc.). I do not always agree, but it is enjoyable to think through some of the core tenets of the field via the work of places like the Institute for New Economic Thinking. However, things like Evonomics do not qualify.

And this is in spite of the fact that the core motivation of the webzine is correct: there are problems with the way we do economics today (on average). However, discomfort with the existing state of affairs is no excuse for shoddy work, nor for holding up strawmen to be burned at the stake, followed by a vindictive celebratory dance. The most common feature of those who write for Evonomics is to hold up such a strawman with regards to rationality. They present a caricature in which humans calculate everything with precision, and argue that if, post facto, all turns out well, then it was a rational process. No one, I mean no one, believes that. The most succinct summary of what economists actually mean by rationality is presented by Vernon Smith in his Rationality in Economics: Constructivist and Ecological Forms.

Such practices have led me to discount much of what is said on Evonomics, and it is close to the threshold where the time cost of sorting the wheat from the chaff outweighs the intellectual benefit.

This recent article on “Dierdre” McCloskey may have pushed it over that threshold. I say “Dierdre” because the author of the article could not even be bothered to spell correctly the name of the person he is criticizing. Indeed, it is “Deirdre” McCloskey, not “Dierdre”. While, etymologically, Dierdre is a variant of Deirdre – from the Celtic legend that shares similarities with Tristan and Isolde – the latter form is far more frequent. More to the point, Dierdre is a name more familiar to players of Guild Wars.

A minor irritant which, unfortunately, compounds my poor view of the webzine. But then the author of the article in question goes into full strawman mode. He singles out a passage from McCloskey regarding the effects of redistributing income from the top to the bottom. In that passage, McCloskey merely points out that the effects of equalizing incomes would be minimal. The author’s reply? He shifts the focus to wealth and accuses McCloskey of shoddy mathematics.

Now, this is just a poor understanding of basic economic concepts, and it matters to the author’s whole point. Income is a flow variable and wealth is a stock variable. The two things are thus dramatically different. True, the flow can help build up the stock, but the people with the top incomes (flow) are not necessarily those with the top wealth (stock). For example, most students have a negative net worth (negative stock) when they graduate. However, thanks to their human capital (Bryan Caplan would say “signal” here), they have higher earnings. Thus, they sit near the top of the income distribution and near the very bottom of the wealth distribution. My grandfather was the exact reverse. Before he passed away, he was probably near the top of the wealth distribution, but since he spent most of his time doing no paid work whatsoever, he was at the bottom of the income distribution.
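The stock/flow distinction is easy to make concrete. A minimal sketch, with entirely hypothetical figures, of the graduate and the retiree described above:

```python
# Stock vs. flow in miniature (hypothetical figures): income is a flow per
# period; wealth is a stock that the flow builds up or draws down.

def simulate(wealth, income, consumption, years):
    """Roll the income flow into the wealth stock, year by year."""
    for _ in range(years):
        wealth += income - consumption
    return wealth

# A recent graduate: negative net worth (student debt) but high earnings.
grad_wealth = simulate(wealth=-50_000, income=90_000, consumption=60_000, years=5)

# A retiree: a large stock of wealth but little income, living off the stock.
retiree_wealth = simulate(wealth=1_000_000, income=5_000, consumption=45_000, years=5)

print(grad_wealth)     # -50,000 + 5 * 30,000 = 100,000: climbs the wealth ranks
print(retiree_wealth)  # 1,000,000 - 5 * 40,000 = 800,000: wealthy, tiny income
```

The graduate starts at the bottom of the wealth distribution and near the top of the income distribution; the retiree is the reverse, and conflating the two rankings is exactly the mistake at issue.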

Never mind that the author of the Evonomics article misses McCloskey’s basic point (which is that we should care more about the actual welfare of people than about the equality of the distribution); this basic failure to understand the difference between a stock and a flow leads him astray.

To be fair, I can see why some people disagree with McCloskey. However, if you can’t pass the basic ideological Turing test, you should not write a rebuttal.

The Myth of Common Property

An Observation by L.A. Repucci

It has been proposed that there exists a state in which property – whether defined in the physical sense, such as objects, products, buildings, roads, etc., or as financial instruments, such as monetary instruments, corporate title, or deed to land ownership – may be owned or possessed in common; that is to say, that property may be possessed by multiple rightful claimants simultaneously. This suggestion, when examined rationally and exhaustively, is untenable from the perspective of any logical school of economic, social, and indeed physical thought, and balks at simple scrutiny.

In law, property may be defined as the tangible product of enterprise and resources, or the gain of capital wealth it may create. To ‘hold’ property, a party – a private, sentient entity – must have rightful claim to it and be capable of using it freely as they see fit, in keeping with natural law.

Natural resources, including land, are said to be owned either jurisdictionally by the State, privately by a party, or in common with the natural world. If property may be legally defined only as a product, then natural resources may be excluded from all laws pertaining to legal property. If property may be further defined by the ability of its owner to use it as they see fit, in keeping with Ius Naturale, then any property claimed jurisdictionally by the State and said to be held in common amongst the citizenry must meet the test of usage to be legally owned. Consider Hardin’s tragedy of the commons as an argument for the conservation of private property over a state of nature, rather than as an appeal to the economic law of scarcity or to the second law of thermodynamics.

In physics, property may be defined as an observable state of physical being. The universe of Einstein, Kepler, and Newton rests soundly on the tenet that physical bodies cannot occupy multiple physical locations simultaneously. The laws that govern the macro-physical world do not operate in the same way on the quantum level. At that comparatively tiny scale, the rules of our known universe break down, and matter may exhibit the observed property of being in multiple locations simultaneously – bully, and chalk 1 point for common property on the theoretically-quantum scale.

Currency: the attempt to simultaneously possess and use currency as defined above would result in praxeologic market-hilarity in the best case, and imprisonment or physical injury in the worst. Observe: two friends in common possession of $1 walk into a corner shop to buy a pack of chewing gum, which costs $1. They each place a pack on the counter and present the cashier with their single dollar bill. “It’s both of ours! We earned it in business together!” they beam as the cashier calls the cops and racks a shotgun under the register…

The two friends above may not use the paper currency simultaneously. While the concept of a dollar representing two exclusively-owned fifty-percent equity shares may be widely and innately understood, the single bill, represented in specie among the parties, would still be two pairs of quarters. While they could pool their resources and ‘both’ purchase a single pack of gum, each would continue to own a 50% equity share in the pack – resulting in a division of title, yet again, among the dozen-or-so sticks of gum contained therein. This reduction and division of ownership can proceed ad quantum.

This simple reasoning is applicable within, and demonstrated by, current and universal economic realities, including all claims of joint title, common property law, jurisdictional issues, corporate law, and financial liability. A joint bank account is simply the sum of the parties’ individual interests in that account – claims to hold legal property in common are bunk.

The human condition is marked by the sovereignty, independence and isolation of one’s own thought.  Praxeological thought-experiments like John Searle’s Chinese Room Argument and Alan Turing’s Test would not be possible to pose in a human reality that was other than a state of individual mental separation.  As we are alone in our thoughts, our experience of reality can only be communicated to one another.  It is therefore not possible to ever ‘share’ an experience with any other sentient being, because it is not possible to perceive reality as another person…even if the technology should develop such that multiple individuals can network and share the information within their minds, that information must still filter through another individual consciousness in order to be experienced simultaneously.  The physical separation of two minds is reinforced by the rationally-necessary separation of distinct individuals.  There may exist a potential hive-mind collectivist state, but it would require such a radical change to that which constitutes the human condition, that it would violate the tenets of what it is to be human.

In conclusion, logically, the most plausible circumstance in which property could exist in common would be on the quantum level within a hive-minded non-human collective, and the laws that govern men are and should be an accurate extension of the laws that govern nature — not through Social Darwinism, but rather anthropology.  Humans, as an adaptation, work interdependently to thrive, which often includes the voluntary sharing and trading of resources and property…none of which are held in common.

Ad Quantum,

L.A. Repucci