Before the Fourth

“Young man, what we meant in going for those Redcoats was this: we always had governed ourselves, and we always meant to. They didn’t mean we should.”

  • Levi Preston, 1842, remembering the Battle of Lexington and Concord

First of all, thanks to anyone who read my post on the deleted clause of the Declaration of Independence. It draws more readers every year around this day, and I think it serves as a constant reminder that our Founders were not a uniform group of morally perfect heroes, but a cobbled-together meeting of disagreeable risk-takers who were all dedicated to ‘freedom’ but struggled to agree on what it meant.

However, as we all get fat and blind celebrating the bravery of the 56 men who put pen to paper, I also want to call out the oft-forgotten history of rebellion that preceded this auspicious day. July 4th, 1776 makes such a simple story of Independence that we often overlook the men and women who risked (and in many cases lost) their lives, fortunes, and sacred honor to make that signing possible, and who kept the flame of rebellion alive when it was most likely to be snuffed out.

This inspired me to collate some of the most important early moments and individuals on whom fate turned. I’m deliberately skipping some well known icons and events (the Boston Massacre and Tea Party, the first shot on Lexington Green, etc.), but if you want to see the whole tapestry and not just the forgotten images, I recommend the following Founding Trilogy:

I owe my knowledge and the stories below to these histories, which truly put you among the tiny communities that fought before July 4th. They have convinced me that there is no Declaration, no Revolution, and no United States of America without the independent actions and decisions of their central cast.

Part I: Before the Ride

Paul Revere’s ride is one of the most mythologized, and miscast, moments in history. As early as the day after the Battle of Lexington and Concord, local Whigs were intent on casting the battle as an unprovoked attack by the tyrannical British occupiers and their imperious leader in Boston, General (and Governor) Thomas Gage. In doing so, they brushed possibly the most important aspect of the rebellion’s early success under the rug and honored Revere only for his rapid responsiveness on that one night. Since then, unfortunately, the ride has been the center of most messaging honoring or criticizing this patriot.

Revere’s ride would have been useless if, for several years prior, he had not been New England’s most active community organizer. The battle was not the first time Thomas Gage secretly marched on the powder stores of small New England towns (he had successfully done so in Somerville in 1774), or even the first time Boston’s suburbs were called to arms in response (in the Powder Alarm, thousands turned out to defend against a false-alarm gunpowder raid). Revere’s real contribution was that he was one of many go-betweens for small towns intent on protecting their freedom and their guns, and was their link to the conspirators led by Sam Adams and John Hancock in Boston. He helped build a network of communication, intelligence, and arms purchasing on top of New England’s uniquely localized community leadership, without which it would have been impossible for over 4,000 revolutionaries to show up, armed and organized, on the road from Concord to Boston, in less than 8 hours.

New England towns had some of the most localized leadership in the history of political organization. In these small, prosperous towns, with churches as their centers of social organization, government was effectively limited to organizing defense, mostly through the purchase of cannon and gunpowder, both of which were rare in the colonies. The towns also mustered and drilled, and while most towns did not have centralized budgets, they honored wealthier citizens who would donate guns to those who could not afford their own. From town government to General Gage’s seizures to the early supply conflicts, then, guns and especially gunpowder (which was not manufactured anywhere in New England in 1775, and rapidly became the scarcest resource of the early Revolution) were at the center of town government and were the central reason underlying the spark and growth of the Revolution.

The Whigs (or Patriots) could, in fact, just as easily have marked December 14th, 1774, as the kick-off of the Revolution, since it was the first military conflict between Patriots and British garrisons, and it too centered on gunpowder and cannon. On that day, the ever-present Revere, having learned that Gage might attempt to seize the powder of Portsmouth, NH, rode to warn John Langdon, who organized several hundred men not only to defend the town’s powder but to seize Fort William and Mary. Langdon led these Patriots to the fort, on strategic New Castle Island at the mouth of the Piscataqua River, and after asking the six British soldiers garrisoning it to surrender, stormed it in the face of cannon fire. No deaths resulted, and more Patriots arrived the next day, stealing a march on Gage and driving him to see his Boston post as a tender toehold in a territory alight with spies, traitors, and rebels.

However, despite the fact that the British governor of New Hampshire fled and the British never regained the territory, December 14th is not our Independence Day. Even though it was a successful pre-Lexington battle, fought in defense of the same rights and likewise prompted by a ride of Paul Revere, it has been largely overshadowed. Is that because it was an attack by Patriots, undermining the ‘innocent townsfolk’ myth of Patriot propaganda? Because no one died for freedom? Because American historians like to remember symbolic signatures by fancy Congresses rather than the trigger fingers of rural nobodies? Whatever the reason, December 14th shows that the gunpowder-centric small-town networks, and their greatest community organizer, were the kindling, the flint, and the steel of the Revolution, and were all ready to spark long before Lexington Green.

Even the events that led to that fateful battle center on a few key organizers and intelligence operatives. Hancock and Adams were only the most publicly known of hundreds of hidden leaders, and lacking the space to recognize them all, I’ll single out two: Dr. Joseph Warren, who asked Revere to make his famous ride and who served as President of the Provincial Congress until he was killed in the desperate fighting on Bunker Hill, and Margaret Kemble Gage, who despite being married to General Gage favored the Whigs, likely leaked his secret plans to march on Concord, and was subsequently forced to sail to Britain by her husband, probably to end her role as the Revolution’s key spy.

These key figures put their lives on the line not for their country, nor for the political philosophies of classical liberalism, nor even because of the ‘oppressive’ Stamp Act or Intolerable Acts, but to defend their own and their community’s visceral freedoms from seizure and suppression by Gage’s men. They were joined by thousands of otherwise-nobodies who jumped out of bed to defend their neighbors, and whose motivations are best captured by the quote above from Levi Preston, the longest-lived veteran of Lexington and Concord. The full exchange, given to a historian asking about the context for the battle in 1842, is revealing:

Chamberlain (historian): “Captain Preston, why did you go to the Concord Fight, the 19th of April, 1775?”

Preston didn’t answer.

Chamberlain: “Was it because of the Intolerable Oppressions?”

Preston: “I never felt them.”

Chamberlain: “What of the Stamp Act?”

Preston: “I never saw one of those stamps.”

Chamberlain: “The Tea Tax?”

Preston: “I never drank a drop of the stuff…The boys threw it all overboard.”

Chamberlain: “Maybe it was the words of Harrington, Sydney or Locke?”

Preston: “Never heard of ’em.”

Chamberlain: “Well, then, what was the matter? And what did you mean in going to the fight?”

Preston: “Young man, what we meant in going for those Redcoats was this: we always had governed ourselves, and we always meant to. They didn’t mean we should.”

“And that, gentlemen,” Chamberlain concluded in his history, “is the ultimate philosophy of the American Revolution.”

  • Thus ends Part I! Come back to see Part II: The Navy, where we go into the small band of brothers from Marblehead, Massachusetts who served as Washington’s personal navy in the race for gunpowder, the escape from New York, and the crucial Crossing of the Delaware.

Caging the leaders of the future

My journey back to school has made me realize that the skill school forbade me from learning is the single most important one I use in my job: delegation.

For five years now I have been running the research company I founded, and no skill I have learned matters more to my leadership than delegation. The only reason our company thrives is that other people do things I could never do myself, and it would be self-destructive and short-sighted to even try to hog the work on any task.

However, when I returned to law school to finish my degree, I felt the limitations of my student life fall again squarely on my shoulders. Every assignment, every class felt uncomfortably heavy almost immediately, not because they were meaningless or useless, but because I could not treat them like a problem seeking a solution–like an obstacle to overcome with my greatest asset: my team.

This simple rule, that I must turn in only my own work, makes sense only in the sterile world of bean-counting metric junkies, who worry not about whether I build great things but about whether I built them alone. No client has ever peered suspiciously over my work, suggesting that perhaps I got outside, illicit assistance. Or worse, Googled and found someone else’s solution.

I’m not saying that all schools must immediately revise their grading systems to teach leadership or fit my needs. Far from it. I am telling anyone else struggling under the burdens of leadership: your school simply cannot help you. Recognize that there is no way to prepare for real challenges by getting high grades on fake ones. And learn to value the skills of others, lest you drown in your own inbox and incompetence.

Transaction Costs are Injustice

Every Law Professor: ‘What is justice?’

In law school, I found that the central goal of legal academics and practitioners was to construct systems of thought, regulation, and courts providing justice. In that endeavor, my peers and professors constantly asked, “what is justice?”

I think well-intentioned lawyers would agree that the law should provide access to justice via a system that is generally agreeable to those subjected to it, and that matches in rules what the general public aligns on in spirit. Beyond these generalities, however, I find the conversation of ‘what is justice’ too abstract to be useful. That does not mean we should give up on it; we just need to change approaches, and instead ask ‘what is injustice?’

The Via Negativa

The basis for this is that it is easier to agree on what is unjust than on what is just: injustice, in the form of concrete, tangible wrongdoing, can be pointed to and protested, and people from diverse viewpoints can find agreement in what they mutually despise. Through the via negativa, then, we can fill in the negative space around justice, and by recognizing what it is NOT, we can start to give it form.

I know exactly where I would start. I spend way too much time around lawyers, and I have noticed that they are open to any discussion of how lawyers can bring justice, but get very prickly if you suggest that the cost in time, money, and lost control involved in delegating justice to lawyers is in any way problematic. Let’s just say, lawyers don’t like being reminded that they are rent-seekers in the process of achieving justice. So, my bold assertion is:

Transaction Costs are Injustice

Let me unpack this. What I mean is that, whatever a just outcome may be, it is unjust to delay that outcome when speed is possible, unjust to impose complexity and opacity when simplicity is possible, and unjust to demand control when voluntarism and mutuality are possible. In effect, it is unjust to make the process of finding justice costly.

The Appeal Labyrinth: The Town of Castle Rock v. Gonzales

This issue first came up for me in a conversation about the heartbreaking case of The Town of Castle Rock v. Gonzales. In June 1999, Jessica Lenahan-Gonzales, a resident of Castle Rock, called the police when her estranged husband kidnapped her children from her house, and asked them to enforce an active restraining order against him (he had been stalking her and her children). They did not react quickly, and 12 hours later her children were found murdered in her estranged husband’s car after he engaged in a deadly shootout with the police.

Now, there is no good outcome from such a situation, especially for Jessica. However, one route open to her was to sue the police department under, of all things, a law originally passed to fight the KKK. In her lawsuit, she claimed the federal government had an interest in enforcement of the restraining order and alleged that the police department had “an official policy or custom of failing to respond properly to complaints of restraining order violations.”

Jessica’s case was initially dismissed by the District Court, but she appealed and, in 2002, the dismissal was reversed by the Tenth Circuit, which said she could recover under procedural due process but denied that she had a right to recover via substantive due process (for Scalia’s take on substantive due process in general, see this amazing video). The Circuit Court also noted that while the town was liable, the officers were covered by qualified immunity.

The town appealed and was actually granted cert by the Supreme Court. SCOTUS reversed the Circuit Court in a 7-2 decision; Scalia wrote for the majority that officers were not required by law to immediately enforce restraining orders; that even if they were, this would not give individuals a right to sue (instead, the right would lie with the state); and lastly, that even if individually enforceable, the order would have no monetary value and could not lead to an individual payout via Due Process.

So, in the end, SCOTUS gave Jessica nothing. Now, we can all weigh in on whether Scalia ‘did justice’ to her; I have incredible sympathy for Jessica but happen to think his argument is correct, that under the law and Constitution, a restraining order does not give her the right to get money from the town. But I will say that the court did her a great injustice in sending her down a 6-year rabbit hole of being denied, then allowed, then denied recovery again. How, then, can we all agree that the court was unjust? The injustice was the delay. The injustice was the tremendous cost in time, money, and emotional damage. The injustice was that the process for answering the question of how a mother should react to the murder of her children, and how a town should support her, gave no closure, and instead racked up transaction costs in landing her, in 2005, exactly where she stood in 1999.

The Lazy Counter: justice takes time!

Now, angry lawyers out there, don’t mistake me: I am not saying appeals never bring justice. I too am in awe of the work of the Equal Justice Initiative, which uses the appeals process to fight wrongful convictions. I am not arguing that appeals are unjust. I am arguing that a legal system that takes 6 years and millions of dollars to answer any question is doing an injustice to EJI’s clients as well. Was Walter “Johnny D.” McMillian served well by a justice system that kept him in jail for years while his appeal stagnated?

What is obvious here is that lawyers, in their blinkered pursuit of justice, are doing their best to get to the right outcome, and while cost may be a consideration for process improvement, it is not treated as a consideration for justice. Maybe a simpler, more transparent, faster court process would do a worse job. But I think that every complexity, opacity, and delay is an injustice done by our system to the people seeking justice through it, and I would be amazed if Johnny D was thankful for all the technicalities that could be used to reach the right outcome after what the Alabama prison system put him through.

Is “justice” trying to do too much?

Unlike the case of Johnny D, Jessica’s case may show how we stretch the bounds of the system to get to an outcome that feels right rather than one that follows the rules. Johnny D was caught up in a racist abuse of criminal justice, which is intended to keep citizens safe; there was no ‘community solution’ available for the murder of which he was falsely accused.

Jessica, however, was simply not treated right by her town. Anyone, regardless of their politics or views, would hope that a town would show some level of care for its aggrieved, and that the community could pull together around her. Obviously, this did not happen–least of all from the town’s police department, which passed up the opportunity to admit it was asleep at the wheel, secure in the knowledge that its officers had qualified immunity. Since community solutions were lacking, she brought a civil case, which had a desirable end–helping an aggrieved mother and recognizing that her case was mishandled–but inadequate and undesirable means: lawyers lawyering.

I would be amazed if Jessica herself thought up the chain of: restraining order -> Ku Klux Klan Act -> federal oversight of law enforcement -> property recovery under the Due Process Clause -> monetary damages for police inaction. From my legal education, this sounds like the highly technical argument of a creative activist lawyer, who wants to change the law as much as he wants to help his clients. So, were Jessica’s lawyers trying to do too much through the justice system? Was the better solution, then, to turn back to the community and use public truth-telling or even honest requests for help?

The elites-for-the-people against the people

This brings me to a phenomenon I have seen across law schools, firms, and courts. At elite law schools, the administration touts the number of Access to Justice projects and amicus briefs written by faculty in cases like Gonzales. Elite law firms attract top performers with huge salaries, sure, but they mostly talk about how many interesting pro bono cases their associates can take on. And on the top Circuit Courts, most famously the Ninth, my classmates go on to help judges think creatively about how to reach just outcomes via legal wrangling. All of these activities are done with a mix of noblesse oblige and self-importance, but they are honestly intended to help find justice for the downtrodden. I simply think these do-gooders don’t notice that all these activities are costly.

If you are not a lawyer, you may not realize how systematic this cost has become. Non-lawyers view courts as places where people with causes of action come and get answers based on the law. Lawyers know better: this certainly happens, but in parallel, dozens of groups (plaintiffs’ lawyers and activist groups on all sides of every issue) target certain laws and certain constitutional questions and search madly for standing. That is, they comb the news and low-level lawsuits to find a case they can fund through as many appeals as possible, to get the law changed or even just to get a ruling on a fact pattern friendly to them. Here, let me pick on my own team: in Carpenter v. US, in which the government used the cell phone location records of Carpenter and his friends without a warrant to arrest and convict them of robberies, there were no fewer than 16 amicus briefs by privacy activists (the CEI, EPIC, EFF, the Fourth Amendment Scholars, and the list goes on). Carpenter v. US occasioned many deep legal deliberations on the importance of privacy, but I have to say, long before it reached SCOTUS, it was no longer about justice for Carpenter, who had been in jail for two years and who wasn’t getting out even if he won. While it was a victory for my ‘team’ in saying that the government needs warrants if it wants cell phone location records, maybe justice isn’t just about getting victories for my team, if that victory comes at the cost of multiple appeals, dozens of lawyers and clerks, national media coverage, uncertainty for cell phone users and companies, and those 16 institutions writing briefs.

I therefore ask proponents of justice, who are trying to use their elite positions to improve the system’s outcomes for the downtrodden, to be a little more humble and self-critical. Instead of sitting in seminars or court sessions deliberating on ‘what is justice,’ ask whether the justice system is the right way to seek the right outcome. Ask whether, maybe, it would be better to go out and act positively toward your fellow man rather than demand money, time, and attention for the causes, cases, and opinions of the (all elite and elitist) members of legal groups.

Invasiveness is Injustice

Across all legal disputes, I think the thing that rankles me–and all non-lawyers–is how prominent law is in our lives. If I need to use the justice system, I know it will become a major part of my life’s spending, but even if I am never called into court, I know that court cases will continue to be high-profile, lawyers will continue to increase their share of the economy, and professors will keep publishing books, seminars, articles, and blogs about ‘how can people like me bring just outcomes?’

So, maybe, we can find some justice for all if the legal system simply recognizes that ‘what is justice’ is not a question of all-encompassing, existential values, but a question of how to run an institution. Maybe what is important here is not the rights we seek to gain for the oppressed by any means necessary, but the building and maintaining of a structure (a Constitution, if you will) where anyone can engage, or not, with a system that uses just methods. High cost, delay, opacity, and central control are not just methods, and their presence shows that the system is not working effectively.

We can all agree, left and right, that regardless of the answer, the system, the method of justice, is itself broken if it cannot help but be a burden. Justice should not be so costly in our lives, and it is a failing of lawyers and judges to make their own jobs so important, so pervasive, and so controlling. I hope, with all the fantastically intelligent amicus-brief-writers out there, we can find a way to at least cut back that injustice.

CTRL + C: How can ideas find freedom in a digital world?

I propose a debate! The place: The NOL podcast. The people: anyone with fresh takes on copyright and patent in software (and who contacts me). The question: what are actions that businesses can take to carry out a vision of open collaboration via IP strategy?

As a former law student and current software company CEO, I have become frustrated with how abstract and academic IP discussions are. I know enough to be dangerous, and actually want to zero in on: how can people like me use IP strategy to make our projects more open to collaboration, without leaving them exposed?

I’d love to get strategic advice in a debate environment. I’d also like to lay out below the IP landscape as I understand it, and recall some of the great IP visionaries of the early internet days, especially the Grateful Dead lyricist-turned-IP-scholar John Perry Barlow. Enjoy, and I will update this post once Brandon lets me set a date!

Copyrighting Code: Function masquerading as form

When I was taught about intellectual property, I learned about Google v. Oracle, a case in which the US Supreme Court considered the question, “Are APIs functional?” This may seem a strange question (when I ask computer scientists, they always laugh helplessly), but here is the background. According to US copyright law, “In no case does copyright protection . . . extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work.” This means that code may be copyrighted if descriptive but not if it is a functional ‘useful article’–and so the esteemed Court needed to decide, effectively, whether the Application Programming Interface (API) code that allows programs to request or send data is purely decorative.
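
To make concrete what was being fought over: an API is the declared names and signatures that callers depend on, while the implementation is the body behind them. The actual case concerned Java declarations (famously, things like java.lang.Math.max); the following is just my minimal Python sketch of the same split, with invented names.

```python
# Illustrative sketch only: the real case concerned Java declarations
# (e.g., java.lang.Math.max). The "API" is the declared name and signature
# that callers depend on; the body behind it is the implementation.

class LibraryA:
    @staticmethod
    def maximum(a: int, b: int) -> int:
        """Return the larger of a and b."""
        return a if a >= b else b

class LibraryB:
    # Reimplements the identical declaration with different internals.
    @staticmethod
    def maximum(a: int, b: int) -> int:
        """Return the larger of a and b."""
        return -min(-a, -b)

# A caller written against the shared declaration works with either library,
# which is why a compatible reimplementation must copy the declarations.
for lib in (LibraryA, LibraryB):
    assert lib.maximum(3, 7) == 7
```

The declarations are what Google copied; the bodies were written from scratch. Asking whether those declarations are “purely decorative” is exactly why the computer scientists laugh.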

Until the Supreme Court, thank god, ruled that copying API code was in fact a “fair use,” the lower court’s ruling had held that: (1) APIs are creative, nonfunctional, and copyrightable, and (2) Google owed Oracle money for its impudent CTRL + C of API code. I’m relieved Google won, but I was totally shocked that the Supreme Court reversed only part two of the lower court decision, leaving part one unaddressed. I was actually speechless, because if the Court recognized that copying the APIs was fair use (on the premise that they were useful), how could it still allow Oracle to claim copyright over them in the first place?

This is just one of the ways in which law school showed me that IP law has had a reckoning, from the 1990s to today, over how it should live on in a field that has undermined its very reasons for existing. By that I mean: if intellectual property exists to stop copying because copying reduces inventors’ benefit (hence patent-granted artificial monopolies) or raises their costs (the cost of printing being one of the key justifications of copyright), how will it live on in a world where printing is free and inventions benefit more from CTRL + C than they suffer from it?

Patenting Code: Calling Dibs on How Everything Works

While my copyright classes mostly shocked me by showing how much we lie and pretend useful things are ‘creative,’ patent classes astounded me with the ways companies would assert that they had invented general practices. Patents are only supposed to be eligible if they are novel, useful, and non-obvious, and they cannot cover nature, abstractions, or mathematical formulas. Or, rather, that is what the rules say; in actuality, patents are constantly used to monopolize basic processes like “one click” buying or “rounding the edges of a square.” However, rather than pick on low-hanging fruit, I’ll note that the current leading case in software process patents is Alice v. CLS, which, like Google v. Oracle, struck down IP for a very limited reason that betrays what nonsense patents are in a digital world.

Alice Corp. had patented a software method for financial trading systems to reduce ‘settlement risk,’ the risk that one party does what they are supposed to do and the other does not. This sounds fancy, but if you read the early opinions, even the district court judge noticed that the patent basically covered the idea “of employing an intermediary to facilitate simultaneous exchange of obligations in order to minimize risk.”

This made it all the way to the Supreme Court, and thank god, they decided that Alice failed the following test for patentability of methods related to abstract ideas: (1) does the software method contain an abstract idea? (2) If yes, did the patentee add an “inventive concept” that gives the idea “something extra”?

In case you were wondering, yes, they literally said “something extra.”

Thus ended a multi-year lawsuit over whether Alice could stop other companies from minimizing risk. As if we needed any more proof that judges and lawyers simply do not understand how coding works, or how invention works, or how natural law works, one appellate judge recommended extremely broad patentability of general principles, abusing Einstein’s quip that “even gravity is not a natural law” to imply that, maybe, Einstein could have patented general relativity.

These sorts of vague precedents leave the door open to patenting basic processes. Outside of software, there are a Myriad of cases (pun intended, after a case where the Supreme Court let Myriad patent excised DNA because it figured out how to slice it) where judges let companies patent things that stretch credulity. It makes me question what we restrict in the name of rewarding innovators, especially given that research on the history of patents in the physical world shows that patents often hamper and harm the innovators themselves. In DNA, patents have overreached in an attempt to control a growing, organic, copying engine. In software, they often do the same, leaving developers in fear of the power of CTRL + C.

The shared vision: Wine without Bottles

In setting up this debate, I am stealing the creative work of IP pioneer and Grateful Dead lyricist, John Perry Barlow, who posed the following riddle:

If our property can be infinitely reproduced and instantaneously distributed all over the planet without cost, without our knowledge, without its even leaving our possession, how can we protect it? How are we going to get paid for the work we do with our minds? And, if we can’t get paid, what will assure the continued creation and distribution of such work?

Barlow’s central question cuts to the very core of IP. If the goal of restricting CTRL + C was to reward innovators for generating copies of their work, what is the point of these restrictions when generating copies is free? If we no longer must pay to produce bottles to hold our wine, and it flows forth as a bounty from the springs of invention, should we force this flood to be contained at all?

The riddle has but one answer, and I cannot say it better than Barlow; anyone who is interested should read his whole treatise on wine without bottles here. I will add only that, as an inventor, I know his vision of bottlers minding their own business has not fully come to pass, but the growth of open-source projects shows that bottling code does not, in fact, age it like fine wine. If you follow the money: “Smart developers like to hang out with smart code. When you open-source useful code, you attract talent.” This gives me hope, and I want to build on that hope with ways to make his vision a reality.

Let’s debate the best way to enact a vision, rather than the vision

As an inventor building a software company, I literally face the question of how to engage with the IP system, and it is one in which I am deeply interested. I’d like to hear fresh takes on how entrepreneurs can realistically act when deciding: should we bottle our wine? Should we allow other people to bottle and sell it? If my goal is to bring wine to those who are thirsty, how should I think about bottles?

I’m looking forward to what I hear, and as a bonus, I’ll give you my most inspiring Barlow quote, from his Declaration of the Independence of Cyberspace:

Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.

. . . .

You [world governments] are terrified of your own children, since they are natives in a world where you will always be immigrants. Because you fear them, you entrust your bureaucracies with the parental responsibilities you are too cowardly to confront yourselves. In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat.

Second to None in the Creation of Extraordinary Wealth

The most important historical question for understanding our rise from the muck to modern civilization is: how did we go from linear to exponential productivity growth? Call that question “who started modernity?” People often look to the Industrial Revolution, which certainly accelerated growth…but it is hard to say it caused the growth, because it came centuries after the initial uptick. Historians also bring up the Renaissance, but this is a misdirection born of the ‘written bias’ of focusing on books, not actions; the Renaissance was more like the window dressing of the Venetian commercial revolution of the 11th and 12th centuries, which is in my opinion the answer to “who started modernity.” However, despite being the progenitors of modern capitalism (worth a blog in and of itself), Venice’s growth was localized and did not spread immediately across Europe; instead, Venice was the regional powerhouse that served as the example to copy. The Venetian model was also still proto-banking and proto-capitalism, with no centralized balance sheets, no widespread retail deposits, and a focus on Silk Road trade. Perhaps the next question is, “who spread modernity across Europe?” The answer to that question is far easier, and in fact it centers to a huge degree on a single man, who was possibly the richest man of all time: Jakob Fugger.

Jakob Fugger was born to a family of textile traders in Augsburg in the 15th century and, after training in Venice, revolutionized banking and trading–the foundations on which investment, comparative advantage, and growth were built–as well as relationships between commoners and aristocrats and the church’s view of usury; he even funded the exploration of the New World. He was the only banker alive who could call in a debt on the powerful Holy Roman Emperor, Charles V, mostly because Charles owed his power entirely to Fugger. Strangely, he is perhaps best known for his philanthropic innovations (founding the Fuggerei, some of the earliest recorded philanthropic housing projects, which are still in operation today); this should be easily overshadowed by:

  1. His introduction of double entry bookkeeping to the continent
  2. His invention of the consolidated balance sheet (bringing together the accounts of all branches of a family business)
  3. His invention of the newspaper as an investment-information tool
  4. His key role in the pope allowing usury (mostly because he was the pope’s banker)
  5. His transformation of Maximilian from a paper emperor with no funding, little land, and no power to a competitor for European domination
  6. His funding of early expeditions to bring spices back from Indonesia around the Cape of Good Hope
  7. His position as the only banker from whom the Electors of the Holy Roman Empire would accept funds for the election of Charles V
  8. His complicated, mostly adversarial relationship with Martin Luther that shaped the Reformation and culminated in the German Peasants’ War, when Luther dropped his anti-capitalist rhetoric and Fugger-hating to join Fugger’s side in crushing a modern-era messianic figure
  9. His involvement in one of the earliest recorded anti-trust lawsuits (where the central argument was around the etymology of the word “monopoly”)
  10. His dissemination, for the first time, of trustworthy bank deposit services to the upper middle class
  11. His funding of the military revolution that rendered knights unnecessary and bankers and engineers essential
  12. His invention of the international joint venture in his Hungarian copper-mining dual-family investment, where marriages served in the place of stockholder agreements
  13. His 12% annualized return on investment over his entire life (beating index funds for almost 5 decades without the benefit of a public stock market), dying the richest man in history.

The story of Fugger’s family–the story, perhaps, of the rise of modernity–begins with a tax record of his family moving to Augsburg, with an interesting spelling of his name: “Fucker advenit” (Fugger has arrived). His family established a local textile-trading family business, and even managed to get a coat of arms (despite their peasant origins) by making clothes for a nobleman and forgiving his debt.

As the 7th of 7 sons, Jakob Fugger was given the least important trading post in the area by his older brothers: Salzburg, a tiny mountain town that was about to have a change in fortune when miners hit the most productive vein of silver Europeans had ever found, until the Spanish found Potosí (the Silver Mountain) in Peru. He then began his commercial empire by taking a risk that no one else would.

Sigismund, the lord of Salzburg, was sitting on top of a silver mine but still could not run a profit, because he was trying to compete with the decadence of his neighbors. He took out loans to fund huge parties, and then, to expand his power, made the strategic error of attacking Venice–the most powerful trading power of the era. This was an era when sovereigns could void debts, or any contracts, within their realm without major consequences, so lending to nobles was a risky endeavor, especially without the backing of a powerful noble to force repayment or address contract breach.

Because of this, no other merchant or banker would lend to Sigismund for the venture; but where others saw only risk, Fugger saw opportunity. He saw that Sigismund was short-sighted and would constantly need funds; he also saw that Sigismund would sign any contract to get the funds to attack Venice. Fugger fronted the money, collateralized by near-total control of Sigismund’s mines–if only he could enforce the contract.

Thus, the Fugger empire’s first major investment was in securing (1) a long-term, iterated credit arrangement with a sovereign who (2) had access to a rapidly-growing industry and was willing to trade its profits for access to credit (to fund cannons and parties, in his case).

What is notable about Fugger’s supposedly crazy risk is that, while it depended on enforcing a contract against a sovereign who could nullify it with a word, he still set himself up for a consistent, long-term benefit that could be squeezed from Sigismund so long as he continued to offer credit. This way, Sigismund could not nullify earlier contracts, but instead recognized them in return for ongoing loan services; Fugger thus solved the urge toward betrayal by iterating the prisoner’s dilemma of defaulting. He did not demand immediate repayment, but rather set up a consistent revenue stream, establishing himself as Sigismund’s crucial creditor. Sigismund kept wanting finer things–and kept borrowing from Fugger to get them, meaning he could not default on the original loan that gave Fugger control of the mines’ income. Fugger countered asymmetrical social relationships with asymmetric contract terms, and countered the desire for default by becoming essential.
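
The game-theory logic here is easy to make concrete. Below is a toy model with my own invented numbers (nothing from the book): each year the relationship continues, fresh credit is worth something to the sovereign; defaulting pockets the outstanding debt once but ends the stream.

```python
# Toy model of iterated lending (illustrative numbers, not historical ones):
# a sovereign can repay and keep borrowing, or default once and lose credit.

CREDIT_VALUE = 100   # yearly value to the sovereign of access to fresh credit
OUTSTANDING = 300    # one-time gain from repudiating the current debt
HORIZON = 20         # years the relationship could keep going

def default_in(year: int) -> int:
    """Total payoff if the sovereign borrows until `year`, then repudiates."""
    return CREDIT_VALUE * year + OUTSTANDING

stay_honest = CREDIT_VALUE * HORIZON
for year in (1, 5, 10):
    print(f"default in year {year}: {default_in(year)} vs honest: {stay_honest}")
# Defaulting only wins if the one-time grab beats the remaining credit stream;
# by staying the sole essential creditor, Fugger kept that from ever being true.
```

Iterating the game is what turns the dominant one-shot move (default) into a losing one.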

Eventually, Fugger met Maximilian, a disheveled, religion-and-crown-obsessed nobleman who had been elected Holy Roman Emperor specifically because of his lack of power. The Electors wanted a paper emperor to preserve the freedom of their principalities; Maximilian was so weak that a small town once arrested and beat him for trying to impose a modest tax. Fugger, unlike others, saw opportunity, because he recognized that aligning paper trails (contracts or election outcomes) with power relationships could align interests and set him up as the banker to emperors. When Maximilian came into conflict with Sigismund, Fugger refused any further loans to Sigismund, and Maximilian forced Sigismund to step down. Part of Sigismund’s surrender and Maximilian’s new treaty included recognizing Fugger’s ongoing rights over the Salzburg mines, a sure sign that Fugger had found a better patron and had solidified his rights over the mines through political maneuvering–by denying a loan to Sigismund and offering money to Maximilian instead. Once he had secured this cash cow, Fugger was certainly put in risky scenarios, but he didn’t seek out risk, and he saw consistent yearly returns of 8% for several decades, followed by 16% in the last 15 years of his life.

From this point forward, Fugger was effectively creditor to the Emperor throughout Maximilian’s life, and built a similar relationship: Maximilian paid for parties and military campaigns, and bought off Electors, with Fugger funds. As more of Maximilian’s assets were collateralized, Fugger’s commercial empire grew; he gained not only access to silver but also property. He was granted a range of fiefs, including Arnoldstein, a critical trade juncture where Austria, Italy, and Slovenia border each other today; his manufacturing and trade led the town to be renamed, for generations, Fuggerau, the Place of Fugger.

These activities, all dependent on lending to sovereigns, bring up a major question: how did Fugger get the money he lent to the Emperor? Early in his career, he noticed that deposit services with branches in multiple cities were a huge boon to the rising upper middle class; property owners and merchants did not have access to reliable deposit services, so Fugger created a network of small branches, all offering deposits at low interest rates, growing his services on the dependability of moving and holding money for those near, but not among, society’s elites. This gave him a deep well of dispersed depositors, providing stable and dependable capital for his lending to sovereigns and for funding his expanding mining empire.

Unlike modern financial engineers, who seem to focus on creative ways to go deeper into debt, Fugger directed his creativity mostly toward ways he could offer credit; he was most powerful when he was the only reliable source of credit to a political actor. So long as the relationship was ongoing, default risk was mitigated, and through this Fugger could control the purse strings of a wide range of endeavors. For instance, early in their relationship (after Maximilian deposed Sigismund and, as part of the arrangement, made Fugger’s interest in the Salzburg mines more permanent), Maximilian wanted to march on Rome as Charlemagne reborn and demand that the pope personally crown him; he was rebuffed dozens of times not by his advisors, but by Fugger’s denial of the credit needed to hire the requisite soldiers.

Fugger also innovated in information exchange. Because he had a broad trading and banking business, he stood to lose a great deal if a region had a sudden shock (like a run on his banks) and to gain if new opportunities arose (like a shift in silver prices). He took advantage of the printing press–less than 40 years after Gutenberg, and in a period when most writing was religious–to create the first proto-newspaper, which he used to gather and disseminate investment-relevant news. Thus, while he operated a network of small branches, he vastly improved the information flow among these nodes and also standardized and centralized their accounting (including making the first consolidated balance sheet).

With this broad base of depositors and a network of informants, Fugger proceeded to change how war was fought and to redraw the maps of Europe. Military historians have debated for decades when the “military revolution” that shifted the weapons, organization, and scale of war began, often centering on Swedish armies in the 1550s. I would counter-argue that the Swedes simply continued a trend the continent had begun in the late 1400s, in which:

  1. Knights’ training became irrelevant as gunpowder took over
  2. Logistics and resource planning were professionalized
  3. Early mechanization of ship building and arms manufacturing, as well as mining, shifted war from labor-centric to a mix of labor and capital
  4. Multi-year campaigns were possible due to better information flow, funding, professional organization
  5. Armies, especially mercenary groups, ballooned in size
  6. Continental diplomacy became more centralized and legalistic
  7. Wars were fought by access to creditors more than access to trained men, because credit could multiply the recruitment/production for war far beyond tax receipts

Money mattered in war long before Fugger: Roman usurpers always took over the mints first, and Alexander showed how logistics and supply were more important than pure numbers. However, the 15th century saw a change: armies became about guns, mercenaries, technological development, and investment, and above all credit, and Fugger was the single most influential creditor of European wars. After a trade dispute with the aging Hanseatic League over its monopoly of key trading ports, Fugger manipulated the League’s cities into betraying each other, culminating in a war in which those funded by Fugger broke the monopolistic power of the League. Later, because he had a joint venture with a Hungarian copper miner, he pushed Charles V into an invasion of Hungary that resulted in the creation of the Austro-Hungarian Empire. These are but two examples of Fugger destroying political entities; every Habsburg war fought from the rise of Maximilian through Fugger’s death in 1527 was funded in part by Fugger, giving him the power of the purse over such seminal conflicts as the Italian Wars, in which Charles V fought on the side of the Pope and Henry VIII against Francis I of France and Venice, culminating in a Habsburg victory.

Like the Rothschilds after him, Fugger gained hugely from a reputation for being ‘good for the money’; while other bankers did their best to take advantage of clients, he provided consistency and dependability. Like the Iron Bank of Braavos in Game of Thrones, Fugger was the dependable source for ambitious rulers–but with the constant threat of denying credit to, or even making war on, any defaulter. His central role in manipulating political affairs via his banking is well attested by the election of Charles V in 1519. The powerful rulers of Europe–Francis I of France, Henry VIII of England, and Frederick III of Saxony–all offered huge bribes to the Electors. Because these sums crossed half a million florins, the competition rapidly became one not for the interest of the Electors but for access to capital. The Electors actually stipulated that they would not take payment based on a loan from anyone except Fugger; since Fugger chose Charles, so did they.

Fugger also inspired great hatred among populists and religious activists; Martin Luther was a contemporary who called Fugger out by name as part of the problem with the papacy. The reason? Fugger was the personal banker to the Pope, who was pressured into rescinding the church’s previously negative view of usury. He also helped arrange the scheme to fund the construction of the new St. Peter’s Basilica; in fact, half of the indulgence money that was putatively for the basilica actually went to pay off the Pope’s huge existing debts to Fugger. Thus, to Luther, Fugger was greed incarnate, and Fugger’s name became best known to the common man not for his innovations but for his connection to papal extravagance and greed. This culminated in the 1525 German Peasants’ War, which saw an even more radical Reformer and modern-era messianic figure lead hordes of hundreds of thousands against Fuggerau and many other fortified towns. Luther himself inveighed against these mobs for their radical demands, and Fugger’s funding brought swift military action that put an end to the war–but not to the Reformation or the hatred of bankers, which would explode violently throughout the next 100 years in Germany.

This brings me to my comparison: Fugger against all the great wealth creators in history. What makes him stand head and shoulders above the rest, to me, is that his contributions cross so many major facets of society. Like Rockefeller, he used accounting and technological innovations to expand the distribution of a commodity (silver rather than oil), and he was also one of the OG philanthropists. Like the Rothschilds with their development of the government bond market and reputation-driven trust, Fugger’s balance-sheet inventions and trusted name brought infrastructural improvement to the flow of capital, trust in banks, and the literal tracking of transactions. However, no other capitalist had as central a role in religious change–both as the driving force behind allowing usury and as an anti-Reformation leader. Similarly, few other people had as great a role in the Age of Discovery: Fugger funded Portuguese spice traders in Indonesia, possibly bankrolled Magellan, and funded the expedition that founded Venezuela (named in honor of Venice, where he trained). Lastly, no other banker had as influential a role in political affairs; from dismantling the Hanseatic League to deciding the election of 1519 to building the Habsburgs from paper emperors into the most powerful monarchs in Europe in two generations, Fugger was the puppeteer of Europe–and such an effective one that you have barely heard of him. Hence, Fugger was not only the greatest wealth creator in history but among the most influential people in the rise of modernity.

Fugger’s legacy can be seen in his balance sheet of 1527: he basically developed the method of using it for central management; its only liabilities were widespread deposits from the upper middle class (his asset-to-debt ratio was in the range of 7-to-1, leaving an astonishingly large amount of equity for his family); and every important leader on the continent was literally in his debt. It also showed him to have over 1 million florins in personal wealth, making him one of the world’s first recorded millionaires. The title of this post is adapted from a self-description Jakob wrote as his own epitaph. As my title shows, I think it is fairer to credit his wealth creation than his wealth accumulation, since he revolutionized multiple industries and changed the history of capitalism, trade, European politics, and Christianity, mostly through his contribution to the credit revolution. However, the man himself worked until the day he died and took great pride in being the richest man in history.

All information from The Richest Man Who Ever Lived. I strongly recommend reading it yourself–this is just a taster!

Pandemics and Hyperinflations

I wrote an article a few years ago about hyperinflation in ancient Rome (and blogged about it here), arguing that social trust in issuing bodies was a foundation of monetary value long before modern institutions existed.

I got a random notification that someone had actually read and cited my work in a recent article, “The US Money Explosion of 2020, Monetarism and Inflation: Plagued by History?” I really liked the author’s concept: inflation during pandemic periods is staved off for years because of high saving rates, and it is the post-crisis period when most of the inflation actually occurs.

This passed my ‘gut check’: during a crisis, who blows their entire budget? It also passed my historical-precedent check, and not only because he researched the Spanish flu and medieval precedents; in the Roman hyperinflation, the inflation lagged decades behind the expansion in monetary volume, and in fact arrived right as the civil wars that nearly brought the Empire to its knees came to an end.
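
For intuition, here is a stylized sketch of the mechanism using the textbook quantity-theory identity MV = PQ. The numbers are mine, invented purely to show the lag, and are not taken from the cited article.

```python
# Stylized quantity-theory illustration: MV = PQ, so P = M * V / Q.
# All numbers are invented to show the lag, not taken from the cited article.

years    = [2019, 2020, 2021, 2022, 2023]
money    = [100, 140, 140, 140, 140]     # supply expands sharply in the crisis
velocity = [1.0, 0.70, 0.75, 0.90, 1.0]  # crisis saving crushes velocity; it recovers later
output   = [100, 95, 98, 100, 100]       # real output dips and rebounds

for y, m, v, q in zip(years, money, velocity, output):
    print(y, "price level:", round(m * v / q, 2))
# Despite a 40% money expansion in 2020, the price level barely moves until
# velocity normalizes -- the inflation shows up years after the printing.
```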

So, in short, inflation hawks, you are probably right to fear the dramatic expansion of the money supply; you just won’t feel vindicated for years to come. In an age where people expect today’s causes to become tomorrow’s results (EVERY DAY, the WSJ tells me “stocks moved up/down because MAJOR EVENT TODAY”), we need to lengthen our time horizons of analysis and recognize that, just maybe, the ramifications of today’s policies will not really be felt for years. Or, put in a more dire light: by the time we know who is right, it will be too late to reassert social trust in monetary value, and the dollar will follow the denarius into the histories of hyperinflations.

Disruption arises from Antifragility

One of my favorite classics about why big businesses can’t always innovate is Clayton Christensen’s The Innovator’s Dilemma. It is one of the most misunderstood business books, since its central concept–disruption–has been misquoted and then popularized. Take the recent post on Investopedia that says, in its second sentence, that “Disruptive technology sweeps away the systems or habits it replaces because it has attributes that are recognizably superior.” This is the ‘hype’ definition used by non-innovators.

I think part of the misconception comes from thinking of disruptions as major, public, technological marvels, recognizable for their complexity or for creating entire new industries. Disruptive innovations tend instead to be marginal, demonstrably simpler, worse on conventional metrics, and to start out by slowly taking over small, adjacent markets.

It recently hit me that you can identify disruption via Nassim Nicholas Taleb’s simple heuristics for recognizing when industry players are fragile. Taleb is my favorite modern philosopher, because he actually brought a new, universally applicable concept to the table, one that puts into words what people have long practiced implicitly but had no term for. Antifragility is the inverse of fragility, and it actually helps you understand fragility better. Anti-fragile does not mean ‘resists breaking’–that is closer to ‘robust’; it means gains from chaos. Ford Pintos are fragile, Nokia phones are robust, but mechanical things are almost never anti-fragile. Bacterial species are anti-fragile to antibiotics, as trying to kill them makes them stronger. Anti-fragile things are usually organic, and usually made up of fragile things–the death of one bacterium makes the species more resistant.

Taleb has a simple heuristic for finding anti-fragility. I recommend you read his book to get the full picture, but the secret is a simple thought experiment. Take any concept (or thing), and identify how it works (or fails to work). Now ask: if you subject it to chaos–by which I mean, if you try to break it–and slowly escalate how hard you try, what happens? (A minimal code sketch of this classification follows the list below.)

  • If it gets disproportionately harmed, it is fragile. E.g., traffic: as you add cars, time-to-destination gets worse slowly at first, then all of a sudden increases rapidly, and if you add enough, cars literally stop.
  • If it gets proportionately harmed or there is no effect, it is robust. Examples are easy, since most functional mechanical and electric systems are either fragile (such as Ford Pintos) or robust (Honda engines, Nokia phones, the Great Pyramids).
  • If it gets better, it is anti-fragile. Examples are harder here, since it is easier to destroy than to build (and anti-fragility usually rests on fragile elements, which gets confusing); bacterial resistance to antibiotics (or really, the workings of evolution itself) is a great one.
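
Here is that minimal sketch in code. To be clear, this is my own formalization of the thought experiment, not Taleb’s: probe a system with escalating stress and classify it by the shape of its response.

```python
# Minimal sketch of the stress-test heuristic (my formalization, not Taleb's):
# escalate the stress, observe the response, and classify by its curvature.

def classify(response, stresses=(1.0, 2.0, 3.0)) -> str:
    """response(stress) -> harm (positive) or gain (negative)."""
    low, mid, high = (response(s) for s in stresses)
    if high < 0:                  # it gains from being pushed harder
        return "antifragile"
    if high - mid > mid - low:    # harm accelerates: convex damage
        return "fragile"
    return "robust"               # harm grows proportionally, or not at all

print(classify(lambda s: s ** 3))  # traffic-like sudden collapse -> fragile
print(classify(lambda s: s))       # proportional wear -> robust
print(classify(lambda s: -s))      # evolution-like gain -> antifragile
```

The fragile case is just convexity of harm: each extra unit of stress hurts more than the last, which is why the traffic example collapses all at once.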

The only real way to get anti-fragility outside of evolution is through optionality. Debt (obligation without choice) is fragile to any extraneous shock, so a ‘free option’ (choice without obligation) is its opposite: pure anti-fragility. And not just literal ‘options’ in the market; anti-fragility takes a different form in every case, and though the face is different, the structure is the same. OK, get it? Maybe you do. I recommend coming up with your own example–if you are just free-riding on mine, you don’t get it.

Anyway, back to Christensen. Taleb likes theorizing and leaves example-finding to you, while Christensen scrupulously documented what happened to hundreds of companies, and his concepts arose from his data; think of it like this: Christensen is Darwin, carefully measuring beaks and recognizing natural selection, while Taleb is Wallace, theorizing from his experience and the underlying math of reality. Except in this case, Taleb is not just talking about natural selection; he is also showing how mutation works, giving a theory of evolution that is not restricted to biology.

I realized that you can actually figure out whether an innovation is disruptive using this heuristic. It takes some care, because people often look at the technology itself and ask if it is anti-fragile–which is a mistake. Technologies are inorganic, so they are usually robust or fragile. Industries are organic, strategies are organic, companies are organic. Many new strategies build on companies’ competencies or existing customer bases, and though they may meet the ‘hype’ definition above, they give upside to incumbents and are thus not fragilizing. Disruption happens when a company has exposure to a strategy that it has little to gain from, but that could cannibalize its market if it grows, as anti-fragile things are wont to do.

The question is: is a given incumbent company fragile with respect to a given strategy? Let’s start with some examples–first Christensen’s, then my own:

  • Were 3.5″ drive makers fragile with respect to smaller drives being used in cars?
    • In my favorite Christensen anecdote, the CEO of a 3.5″ drive maker, whose company designed a smaller 1.8″ drive but couldn’t sell it to its PC or mainframe customers, complained that he did exactly what Christensen said–he built smaller drives–and there was no market. Meanwhile, startups were selling 1.8″ drives like crazy–to car companies, for onboard computers.
    • Christensen notes that this was a tiny market, one that would be a 0.01% change on a big-company income statement, and a low-profit one at that. So, because these companies were big, they were fragile to low-margin, low-volume, fast-growing submarkets. Meanwhile, startups were unbelievably excited to sell small drives at a loss, just so that Honda would buy from them.
    • So, 3.5″ drive makers had everything to lose (the general drive market) and a blip to gain, where startups had everything to gain and nothing to lose. Note that disruptive technologies are not those that are hard to invent or that immediately revolutionize the industry. Big companies (as Christensen showed) are actually better at big changes and at invention. They are worse at recognizing the value of small changes and of jumps between industries.
  • Were book retailers fragile with respect to online book sales?
    • Yes–Amazon is my post-Christensen example. Jeff Bezos, as documented in The Everything Store, gets disruption: he invented the ‘two-pizza meeting’, so he ‘gets’ smallness; he intentionally isolates his innovation teams, so he ‘gets’ the excitement of tiny gains and allows cannibalism; and he started in a narrow, feasible, proof-of-concept discipline (books) with the knowledge that it would grow into the Everything Store if successful, so he ‘gets’ going from simple beginnings to large-scale, well, disruption.
    • The Everything Store reads like a manual on how to be disrupted. Barnes & Noble first said “We can do that whenever we want.” Then, when Bezos got some traction, B&N said “We can try this out, but we need to figure out how to do it using our existing infrastructure.” Then, when Bezos started eating their lunch, B&N said “We need to get into online book sales”–but sold the way they did in stores, by telling customers what they want, not by using Bezos’ anti-fragile review system. Then B&N said “We need to start doing whatever Bezos does, and beat him by out-spending”–by which time he was past that, selling CDs and then (eventually) everything.
    • Booksellers were fragile because they had existing assets with running costs; they catered to customers with not just a book but an experience; they were in the business of selecting books for customers, not of using customers for recommendations; and they treasured partnerships with publishers rather than thinking of how to eliminate them.
  • Now, some rapid-fire examples. Think carefully: it is easy to fall into the trap of thinking industry titans were stupid rather than fragile, and it is easy to generate false positives unless you use Taleb’s heuristic.
    • Car companies were fragile to electric sports cars, and Elon Musk was anti-fragile. Sure, he went up-market, which does not follow Christensen’s down-market paradigm, but he found the small market that the Nissan Leaf missed.
    • NASA was fragile to modern, cheap, off-the-shelf space solutions, and…yet again…Elon Musk was anti-fragile.
    • Taxis were fragile to app-based rides.
    • Hotels were fragile to app-based rentals.
    • Cable was fragile to sticks you put in your TV.
    • Hedge funds were fragile to index funds, currently are fragile to copy trading, and I hope to god they break.
  • Lastly, some counter-examples, since it is always better to use the via negativa, and assuming your knowledge is additive is dangerous. If you disagree, prove me wrong: found a startup and make a bajillion dollars by disrupting the big guys who won’t be able to find the market:
    • There is nothing disruptive about 5G.
    • Solar and wind are fragile and fragilizing.
    • What was wrong with WeWork’s business model? Double fragility–fixed contracts with building owners, flexible contracts with customers.
    • On a more optimistic note, cool tech can still be sustaining (as opposed to disruptive), like RoboAdvisors or induction stoves or 3D printed shoes.
    • Artificial intelligence and blockchain, in any use you have heard of (but not in the uses you haven’t heard of yet).

So, to summarize: if a company is fragile to a new strategy, the best it can do is try to robustify itself, since it has little upside. Many innovations give upside to incumbents at the marginal cost of R&D, and thus sustain them. Disruption happens when incumbents have little to gain from adopting a strategy, while startups have high exposure to its potential upside–the growth latent in small-market, incremental, or simplifying opportunities–which is, definitionally, anti-fragility to the strategy.

Now, I hope you have a tool for judging whether industrial incumbents are fragile. Rather than trying to predict the success or failure of any one company, just use Taleb’s heuristic–it will help you sort things into ‘hyped as disruptive’ vs. ‘actually probably disruptive.’ A last thought: if you found this wildly confusing, just remember that disruptive innovations tend to steal the jobs of incumbents. So, if an incumbent (say, a Goldman Sachs/Morgan Stanley veteran writing the definition of “disruptive” for Investopedia) is talking about a banking or trading technology, it is almost certainly not disruptive, since he would hardly tell you how to render him superfluous. You will find out what is disruptive when he makes an apology video while wearing a nice watch and French cuffs.

Your vote is your voice–but actions speak louder than words

On voting day, with everyone tweeting and yelling and spam-calling you to vote, I want to offer some perspective. Sure, ‘your vote is your voice,’ and those who skip the election will remain unheard by political leaders. Sure, these leaders determine much more of your life than we would probably like them to. And if you don’t vote, or ‘waste’ your vote on a third party or write in Kim Jong Un, you are excluded from the discussion of how these leaders control you.

But damn, that is such a limited perspective. It’s as if the voting booth has blinders that conceal what is truly meaningful. I’m not going to throw the traditional counter-arguments to ‘vote or die’ at you, though my favorites are Arrow’s Impossibility Theorem and South Park’s Douche and Turd episode. Instead, I just want to say: compared to how you conduct your life, shouting into the political winds is simply not that important.

The wisdom of the stoics resonates greatly with me on this. Seneca, a Roman philosopher, tutor, and businessman, had the following to say on actions, on knowledge, on trust, on fear, and on self-improvement:

  • Lay hold of today’s task, and you will not need to depend so much upon tomorrow’s. While we are postponing, life speeds by. Nothing is ours, except time. On Time
  • Each day acquire something that will fortify you against poverty, against death, indeed against other misfortunes as well; and after you have run over many thoughts, select one to be thoroughly digested that day. This is my own custom; from the many things which I have read, I claim some one part for myself. On Reading
  • If you consider any man a friend whom you do not trust as you trust yourself, you are mightily mistaken and you do not sufficiently understand what true friendship means. On Friendship
  • Reflect that any criminal or stranger may cut your throat; and, though he is not your master, every lowlife wields the power of life and death over you… What matter, therefore, how powerful he be whom you fear, when every one possesses the power which inspires your fear? On Death
  • I commend you and rejoice in the fact that you are persistent in your studies, and that, putting all else aside, you make it each day your endeavour to become a better man. I do not merely exhort you to keep at it; I actually beg you to do so. On the Philosopher’s Lifestyle

Seneca goes on, in this fifth letter, to repeat the stoic refrain of ‘change what you can, accept what you cannot.’ But he expands, reflecting that your mind is “disturbed by looking forward to the future. But the chief cause of [this disease] is that we do not adapt ourselves to the present, but send our thoughts a long way ahead. And so foresight, the noblest blessing of the human race, becomes perverted.”

Good leadership requires good foresight, but panic over futures outside our control perverts this foresight into madness. So, whether you think Biden’s green promises will destroy the economy or Trump’s tweets will incite racial violence, your actions should be defined by what you can do to improve the world–and this is the only scale against which you should be judged.

So, set aside voting as a concern. Your voice will be drowned out, and then forgotten. But your actions could push humanity forward, in your own way, and if you fail in that endeavor, then no vote will save you from the self-knowledge of a wasted life. If you succeed, then you did the only thing that matters.

Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcast (in 2019 and 2020).

Hotz’s ideas call into question the efficacy of any ethical strategy for addressing ‘scary’ innovations. For instance, based on his experience playing “Capture the Flag” in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities and loses if he fails once, while an attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile and defense is fragile.

Hotz’s work centers on reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers to intervene to protect themselves. Hotz basically dismisses any claimed approach to “Level 5 automation” that is not learning-based and iterative, because driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have natural vulnerabilities to errors and are usually closer to impossible, where finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:

The Seldon fallacy: In short, 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable. See my other post for more details!

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for a direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but it is not hard to notice one, so the software alerts the driver whenever one may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs but inherent in driving–and if you really are concerned, you can choose yourself which people to run over.
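A toy sketch of what ‘give the driver the choice’ might look like in code. To be clear, the names and structure here are my invention for illustration, not Comma.ai’s actual software:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    paths: list   # candidate trajectories
    harms: list   # estimated harm along each path

def unavoidable_harm(s: Scenario) -> bool:
    # 'Trolley problem' detector: every available path causes some harm
    return all(h > 0 for h in s.harms)

def plan(s: Scenario) -> str:
    if unavoidable_harm(s):
        # Treated like any other disengagement: the human chooses
        return "DISENGAGE: alert driver and return control"
    best = min(range(len(s.paths)), key=lambda i: s.harms[i])
    return f"follow path {s.paths[best]}"

print(plan(Scenario(paths=["A", "B"], harms=[0, 3])))  # normal driving: take the harmless path
print(plan(Scenario(paths=["A", "B"], harms=[1, 5])))  # trolley-like: hand back control
```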

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, it magnifies the potential damage thousands-fold. That is not to say cars should not connect to the internet (having Google Maps inform on static obstacles is useful, for example), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation–trial and error. Now, there are less ethically worrisome ways to err, such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like Uber’s first pedestrian killed), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation‘ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that the very idea of ethics-before-invention contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since what we get instead is ethicists hypothesizing about what the invention will be and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus kills it. Two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are very focused on optimal strategies, and rather than just focusing on cost-benefit analysis, gamers have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic that was not related to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies. First, they prepared for the war of the past, solving a problem that was no longer extant. Second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once–and that attackers exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, an ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but it should include shifting the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options and double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the seminal engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” But Caesar also made another, underappreciated innovation: the ‘floating’ reserve. He held back several cohorts of his best men to deploy wherever vulnerabilities cropped up–enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain, higher-class nobles, so he told his troops not to inflict maximum damage indiscriminately but to focus on stabbing at their faces and disfiguring them. Indeed, Pompey’s cavalry did not flee from death, but they did flee from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force him to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in a storm of falsehood. This is one way privacy may need to evolve in the realm of security: we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information-retrieval abilities.
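A quick simulation of the parable, under the simplifying assumption that the black hat digs under flags in random order until he hits the real one:

```python
import random

random.seed(7)

def expected_digs(n_decoys, trials=10_000):
    """One real flag among n_decoys fakes: how many digs until the gold?"""
    total = 0
    for _ in range(trials):
        flags = ["real"] + ["fake"] * n_decoys
        random.shuffle(flags)
        total += flags.index("real") + 1
    return total / trials

for n in (0, 10, 1000):
    print(f"{n:>4} decoys -> ~{expected_digs(n):.0f} digs on average")

# The attacker's cost grows linearly with the number of false flags:
# the leprechaun's information was never hidden, it was drowned.
```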

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking a purpose that changes the world in a way you are willing to dedicate yourself to. The important part is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he effectively says to do the opposite: learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that ethics are irrelevant until the innovation exists, and once the innovation exists, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical ones. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

The Seldon Fallacy

Like some of my role models, I am inspired by Isaac Asimov’s vision. However, for years, the central ability at the heart of the Foundation series–‘psychohistory,’ which enables Hari Seldon, the protagonist, to predict broad social trends across an entire galaxy over thousands of years–has bothered me. Not so much because of its impact in the fictional universe of Foundation, but because of how closely it matches the real-life ideas of predictive modeling. I truly fear that the Seldon Fallacy is spreading, building up society’s exposure to negative, unpredictable shocks.

The Seldon Fallacy: 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable.

The first part of the Seldon Fallacy is the mistake of assuming reducibility–or, more poetically, of NNT’s Procrustean Bed. Asimov’s central character, Hari Seldon, fictionally ‘proves’ the fallacy that chaotic systems can be reduced to ‘psychohistorical’ mathematics: while unable to predict individuals’ actions precisely, Seldon can supposedly map out social forces with such clarity that he correctly predicts the fall of a 10,000-year empire. As F.A. Hayek asserted, no predictive model can be less complex than the system it predicts, because of second-order effects and the accumulation of errors of approximation. Two famous thought experiments illustrate this: the three-body problem and the damped, driven oscillator. If we cannot even model a system with three ‘movers’ because of second-order effects, how can we model interactions between millions of people? I hope you, reader, don’t believe we can…so you don’t blow up the economy by betting a fortune on an economic prediction. Basically, with no way to know which reductions in complexity are meaningful, Seldon cannot know whether, in laying his living system on a Procrustean bed, he has accidentally decapitated it. Now, to turn to the ‘we can predict social, though not individual, futures’ portion of the fallacy: the claim that big things are predictable even if their constituent elements are not.
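You can watch errors of approximation accumulate yourself with the logistic map, a standard toy chaotic system (the sketch is mine, not Hayek’s or Asimov’s):

```python
# A 'model' that starts off by one part in a billion from 'reality'
def logistic(x, r=4.0):
    return r * x * (1 - x)   # chaotic for r = 4

reality, model = 0.2, 0.2 + 1e-9
for step in range(1, 51):
    reality, model = logistic(reality), logistic(model)
    if step % 10 == 0:
        print(f"step {step:>2}: reality={reality:.6f}  model={model:.6f}  "
              f"error={abs(reality - model):.6f}")

# The error roughly doubles each step; by step ~40 the 'prediction'
# bears no relationship to the system it claims to model.
```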

The second part of the Seldon Fallacy is the mistake of ‘the marble jar.’ Not all randomness is equal: drawing white and black marbles from a jar (with replacement) is fundamentally predictable, and the more marbles drawn, the more predictable the mix of marbles in the jar. Many models depend on this assumption or ones like it–that random events distribute normally (in the Gaussian sense), in a way that increases the certainty of the model as the number of samples increases. But what if we are not observing independent events? What if they are not Gaussian? What if someone tricked you and tied some marbles together, so you can’t take out only one? What if one of them is attached to the jar, and by picking it up you inadvertently break the jar, spilling the marbles? Effectively, what if you are working not with a finite, reducible, Gaussian random system, but with an infinite, Mandelbrotian, real-world random system? What if the jar contains not marbles, but living things?

I apologize if I lean too heavily on fiction to make my points, but another amazing author answers this question much more poetically than I could. Just in the ‘quotes’ from wise leaders in the introductions to his historical-fantasy series, Jim Butcher tells stories of the rise and fall of civilizations. First, on cumulative meaning:

“If the beginning of wisdom is in realizing that one knows nothing, then the beginning of understanding is in realizing that all things exist in accord with a single truth: Large things are made of smaller things.

Drops of ink are shaped into letters, letters form words, words form sentences, and sentences combine to express thought. So it is with the growth of plants that spring from seeds, as well as with walls built from many stones. So it is with mankind, as the customs and traditions of our progenitors blend together to form the foundation for our own cities, history, and way of life.

Be they dead stone, living flesh, or rolling sea; be they idle times or events of world-shattering proportion, market days or desperate battles, to this law, all things hold: Large things are made from small things. Significance is cumulative–but not always obvious.”

–Gaius Secundus, Academ’s Fury

Second, on the importance of individuals as causes:

“The course of history is determined not by battles, by sieges, or usurpations, but by the actions of the individual. The strongest city, the largest army is, at its most basic level, a collection of individuals. Their decisions, their passions, their foolishness, and their dreams shape the years to come. If there is any lesson to be learned from history, it is that all too often the fate of armies, of cities, of entire realms rests upon the actions of one person. In that dire moment of uncertainty, that person’s decision, good or bad, right or wrong, big or small, can unwittingly change the world.

But history can be quite the slattern. One never knows who that person is, where he might be, or what decision he might make.

It is almost enough to make me believe in Destiny.”

–Gaius Primus, Furies of Calderon

If you are not convinced by the wisdom of fiction, put down your marble jar, and do a real-world experiment. Take 100 people from your community, and measure their heights. Then, predict the mean and distribution of height. While doing so, ask each of the 100 people for their net worth. Predict a mean and distribution from that as well. Then, take a gun, and shoot the tallest person and the richest person. Run your model again. Before you look at the results, tell me: which one do you expect shifted more?

I seriously hope you bet on the wealth model. Height, like marble-jar samples, is normally distributed. Wealth follows a power law, meaning that individual datapoints at the extremes have outsized impact. If you happen to live in Seattle and shot a tech CEO, you may have lowered the mean wealth of the group by more than the average wealth of the other 99 people!
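If you would rather not shoot anyone, here is a bloodless simulation of the same experiment, with heights drawn from a Gaussian and wealth from a Pareto distribution (the parameters are illustrative, not calibrated to any real census):

```python
import random

random.seed(0)

# 100 simulated neighbors: heights ~ Gaussian (cm), wealth ~ power law (USD)
heights = [random.gauss(170, 10) for _ in range(100)]
wealths = [10_000 * random.paretovariate(1.16) for _ in range(100)]  # 80/20-ish tail

def mean(xs):
    return sum(xs) / len(xs)

for name, xs in (("height", heights), ("wealth", wealths)):
    before = mean(xs)
    after = mean(sorted(xs)[:-1])   # 'shoot' the largest datapoint
    print(f"{name}: mean {before:,.0f} -> {after:,.0f} "
          f"({100 * (before - after) / before:.1f}% shift)")

# Height barely moves; in the wealth model, a single extreme datapoint
# can dominate the mean of all the others.
```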

So, unlike the Procrustean Bed (part 1 of the Seldon Fallacy), the Marble Jar (part 2) is not always a fallacy. Some systems really do follow a Gaussian distribution, and for them the Marble Jar holds. However, many consequential systems–including earnings, wars, governmental spending, economic crashes, bacterial resistance, inventions’ impacts, species survival, and climate shocks–are non-Gaussian, and there the impact of a single individual action can blow up the model.

The crazy thing is, Asimov himself contradicts his own protagonist in his magnum opus (in my opinion). While the Foundation series keeps alive the myth of the predictive simulation, my favorite of his books–The End of Eternity (spoilers)–is a magnificent destruction of the concept of a ‘controlled’ world. For large systems, this book is also a death knell for predictability itself. The Seldon Fallacy–that a simplified, non-chaotic model can predict a complex, chaotic reality, and that size enhances predictability–is shown, through the adventures of Andrew Harlan, to be riddled with hubris and catastrophic risk. I cannot reduce Asimov’s complex ideas to a simple summary, for I may decapitate his central model; please read the book yourself. I will say only this: I hope you take to heart Asimov’s larger lesson on predictability–that it is not only impossible, but undesirable. And please, let’s avoid staking any of our futures on today’s false prophets of predictable randomness.

Necessity constrains even the gods

I was recently talking to my cofounder about the concept of “fuck-you” money: the point at which you no longer need to care what other people think, and can fund what you want without worrying about ending up broke–so long as you recognize the power of necessity.

It reminded me of three things I have read before. The first is from the brilliant economist and historian Thomas Sowell, who wrote in A Conflict of Visions that ideological divides often arise from the disagreement between “constrained” and “unconstrained” visions of the world and humanity. Effectively, the world contains some who recognize that humans have flaws that culture has helped us work through, and that we should be grateful for the virtues handed down to us and understand that utopianism is dangerous self-deception. But it contains many others who see all human failings as stemming from social injustices, since in nature, humans have no social problems. Those who line up behind Hobbes fight those who still believe in the noble savage and Rousseau’s perfect state of nature. To me, this divide encapsulates the question: did necessity emerge before human society? And if so, does it still rule us?

I know what the wisdom of antiquity says. The earliest cosmogonies–origin stories of the gods–identify Ananke (Necessity) as springing forth from the Earth herself, before the gods, and restricting even them. This story was passed on to Greek thinkers like Plato (the Republic) and playwrights like Euripides (Alcestis), who found human government and the fate of heroes alike to lie within the tragic world of necessity first, all else second.

Lastly, this reminds me of Nassim Nicholas Taleb’s Antifragile. He points out that the first virtue is survival, and that optionality is pure gain. Until you address necessity, your optionality–your choices and your chances–is fundamentally limited. As an entrepreneur who literally lives the risk of not surviving, I do not need to be convinced. Necessity rules even the gods, and it certainly rules those with “fuck-you” money. But it rules me even more. I am ruled by the fear that I may fail my family, myself, and my company at the Maslow level of survival. Those with “fuck-you” money have at least moved to the level where they have chances to fail society. And the lesson from history, from mythology, and from surviving in the modern economy is not that one should be resigned to reaching one’s limits. It is to strive to reach the level where you are pushing them, and the whole time to recognize the power of Necessity.

Triple-blinded trials in political economy

In medicine, randomized controlled trials are the most highly regarded type of primary study, as they separately track treatment and control groups to determine whether an observed effect is actually caused by the intervention.

Bias, the constant bane of statisticians, can be minimized further by blinding the trial. In a single-blinded trial, the patient population is not told which group they are in, to prevent knowledge of the therapy from affecting results. Placebos are powerful, and blinding has helped identify dozens of therapies that are no better than sugar pills!

However, knowledge can contaminate studies in another way–through the physicians administering the therapies. Bias can be further reduced by double blinding, in which the physicians are also kept in the dark about which therapy was administered, so that their knowledge does not contaminate their reporting of results. In a double-blind trial, only the study administrators know which therapy is applied to each patient, and sometimes an independent lab is tasked with analysis to further limit bias.
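For the mechanically minded, here is a toy sketch of the bookkeeping that makes double blinding work–kit codes, a sealed key, and unblinding only at analysis time. The names are hypothetical, not any registry’s actual protocol:

```python
import random

random.seed(2024)

patients = [f"patient_{i:02d}" for i in range(8)]
arms = ["treatment"] * 4 + ["placebo"] * 4
random.shuffle(arms)
key = dict(zip(patients, arms))  # the sealed key: study administrators only

# Patients and physicians see only anonymous kit codes, never arm labels
kit_codes = {p: f"kit_{1000 + i}" for i, p in enumerate(patients)}

# Physicians record outcomes against kit codes (simulated results here)
outcomes = {kit_codes[p]: random.choice(["improved", "no change"]) for p in patients}

# Unblinding happens exactly once, at analysis time
by_arm = {"treatment": [], "placebo": []}
for p in patients:
    by_arm[key[p]].append(outcomes[kit_codes[p]])
print(by_arm)
```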

Overall, these blinding mechanisms are meant to make us more certain that the results of a study are reflective of an intervention’s actual efficacy. However, medicine is not the only field where the efficacy of many interventions is impactful, highly debated, and worthy of study. Why, then, do we not have blinded studies in political economy?

We all know that randomized controlled trials are pretty much impossible in political economy. North/South Korea and West/East Germany were amazing accidental trials, but we can still hope that politicians and economists make policies that can at least be tracked for their ‘change from baseline,’ even with no control group. Because of how easy it is to harm socioeconomic systems and sweep the ruinous results under the rug, I personally consider it unethical to intervene in a complex system without careful prior consideration, and straight-up evil to do so without plans to track the impact of that intervention. So, how can politicians take an ‘evidence-based approach’ to their interventions?

I think that, in recent years, politicians–especially in the US, and especially liberals and COVID-reactionaries–have come up with an amazing new experimental method: the triple-blinded study. Examples include the ACA, the ARRA, and the recent $3 trillion stimulus package. In a triple-blinded study, politicians carefully draft bills so that they (1) are too long for anyone, especially the politicians themselves, to read; (2) are filled with a mish-mash of dozens of strategies, implemented simultaneously or delegated vaguely to administrative agencies; and (3) have no pre-specified metrics by which the policy will be judged–thus blinding everyone to any useful study of signal and response.

I am reminded of one of the most painful West Wing episodes ever made, in which President Bartlet is addressing an economic crisis and fielding dozens of suggestions from experts–without being able to choose among the candidate interventions. Donna, assistant to his Deputy Chief of Staff, tells a parable about how her grandmother would use ‘a little bit of this, a little bit of that’ to cure minor illnesses. Inspired, Bartlet adopts a policy of ALL suggested economic interventions, thus ensuring that we try everything–and learn nothing. I shudder to think that this strategy was ever broached publicly…and copied from fiction into reality.

In this way, politicians have cleverly enabled us to reduce the bias caused by any knowledge of the intervention or its impact. The patients (citizens), physicians (politicians), and study administrators (economists?) are all kept carefully in the dark so that none of them can know how a policy impacted the economy. Thus, anyone debating any of these topics is given the full freedom to invent whatever argument they want, cherry-pick any data they want, and continue peddling their politics without ever being called to task by the data.

Even more insanely, doctors are held not only to the standard of evidence-based medicine, but also to that of the precautionary principle–where passivity is preferred to action and novel methods are treated with special scrutiny. “Evidence-based policy,” on the other hand, is a buzzword rather than an actual practice aligned with RCTs, and any politician who actually followed the precautionary principle would be considered a ‘do-nothing.’ Thus, we carefully keep both evidence and the principle of ‘do no harm’ far from the realm of political action, and continue a general practice across politics of the blind making sure that they lead the blind.

In sum, political leaders, please ignore Donna. Stop intentionally blinding us to policy impacts. Stop doing triple-blinded studies with the future of our country. Sincerely, all data-hounds, ever.

Why snipers have spotters

Imagine two highly skilled snipers choosing and eliminating targets in tandem. Now imagine I take away one of their rifles, but leave him his scope. How much do you expect their abilities to be decreased?

Surprisingly, there is a strong case that this will actually increase their combined sniping competence. As an economist would point out, this stems from specialization: the sniper sacrifices total situational awareness to improve accurate intervention, and the spotter sacrifices the ability to intervene to improve awareness and planning. Together, they push out beyond the production possibilities curve.

It is also a result of communication. Two independent snipers pick their own shots, and may waste two shots on one target or miss a pressing threat. By explicitly designating roles, the sniper can depend on the spotter for guidance, and the two-person system means both parties have more usable information than the sum of their separate knowledge without spotting.

There are also long-term positive impacts–ones that likely escape an economist’s models–from switching between roles, or from an apprenticeship model. Eye fatigue that limits accuracy, and mental fatigue that results from constant awareness, can be eliminated by taking turns. Also, if a skilled sniper has a novice spotter, the spotter observes the sniper’s tactics and assimilates best practices–and the sniper, having previously worked as a spotter, can be more productively empathetic. The system naturally encourages learning and improvement.

I love the sniper-spotter archetype, because it clarifies the advantages of:

  • Going from zero to one: Between two independent snipers, there are zero effective lines of communication. Between a sniper and a spotter, there is one. This interaction unlocks potential held in both.
  • More from less: Many innovate by adding new things; however, anti-fragile innovations are more likely to come from removing unnecessary things than from adding new ones.
  • Not the number of people, the number of interactions: Interactions have advantages (specialization, coordination) and disadvantages (communication friction, lack of individual decision-making responsibilities). Scrutinize what interactions you want on your teams and which to avoid.
  • Isolation: Being connected to everyone promotes noise over signal. It also promotes focusing on competitors over opportunities and barriers over permissionless innovation.
  • Separate competencies, shared goals and results: To make working together worth it, define explicit roles that match each individual’s competencies. Then, so long as you have vision alignment, all team members know what they are seeking and how they will be depended upon to succeed.
  • Iterative learning and feedback: Systems that promote self-improvement of their parts outperform systems that do not. At the end of the day, education comes from experimentation and the observation of new phenomena–balancing on the edge between known and unknown practices.
  • Establishing ‘common knowledge’: Communication failures and frictions often occur because independent people assume others share the same ‘common knowledge.’ If you make communication the root of success, then so long as the group is small enough to actually have–and know it has–the same set of common knowledge, it can act confidently on those shared assumptions.
  • Delegation as productivity: Recognize that doing more does not mean more gets done. Without encouraging slacking off, explicitly rewarding individuals for choosing the right things to delegate and executing effectively will get more from less.
  • Cheating Goodhart: Goodhart’s Law states that the metric of success becomes the goal. If you make the metric of success joint, rather than individual, and shape its incentives to match your vision, your metrics will create an atmosphere bent on achieving your actual goals.
  • Leadership is empowerment: Good leaders don’t tell people what to do, they inform, support, listen, and match people’s abilities and passions to larger purpose.
  • Smallness: Small is reactive, flexible, cohesive, connected, fast-moving, accurate, stealthy, experimental, permissionless, and, counterintuitively, scalable.

My most recent encounter with “sniper and spotter” is in my sister’s Montessori classroom (ages 3-6). She is an innovative educator who noticed that her public school position was rife with top-down management, politics, and perverse incentives, and offered no systems to promote curiosity or engagement. She has applied “the sniper and the spotter” after noticing that children thrive best either with one-on-one, responsive guidance, where the instructor is totally dedicated to the student, or when left to their own devices in a materials-rich environment, engaging in discovery (or working with other children, or even teaching what they have already learned to newcomers). However, believe it or not, three-year-olds left totally without supervision can often cause disruptions, or even physical threats.

She therefore promotes a teaching model where there are two teachers, one who watches for children’s safety and minimizes disruptiveness. This frees the other teacher to rove student-to-student and give either individual or very-small-group attention. The two teachers communicate to plan next steps, and to ‘spot’ children who most need intervention. This renders ‘class size’ a stupid metric: what matters is how much one-on-one guidance plus permissionless discovery a child engages in. It is also a “barbell” strategy: instead of wallowing in the mediocrity of “group learning”, children get the most of the two extremes–total attention and just-enough-attention-to-remain-safe.

PS: On Smallness, Jeff Bezos has promised $1 billion to support education innovation. Despite starting before my sister, he has so far opened as many classrooms as she has: one. As the inventor of the ‘two-pizza meeting,’ I wish Bezos would start with many small experiments in education rather than big public dedications, so he could nurture innovation and select strategies for success.

I would love to see more examples of “sniper and spotter” approaches in the comments…but no sniping please 🙂

Game theory in the wild

Game theory is an amazing way to simulate reality, and I strongly recommend that any business leader educate herself on its underlying concepts. However, I have found that the way it is constructed in economics and political science papers has limited connection to the real world–apart from nuclear weapons strategies, of course.

If you are not a mathematician or economist, you don’t really have time to assign exact payoffs to outcomes or calculate an optimal strategy. Instead, you can either guess, or you can use the framework of game theory–but none of the math–to make rapid decisions that cohere to its principles, and thus avoid being a sucker (at least some of the time).

As Yogi Berra didn’t say, “In theory, there is no difference between practice and theory. In practice, there is.” As a daily practitioner of game theory, here are some of its assumptions that I literally had to throw out to make it actually work:

  • Established/certain boundaries on utility: Lots of games bound utility (often from 0 to 1, or -1 to 1, etc., for each individual). Throw away those games; they privilege easier math over representing a random, infinite reality, where outcomes are always more uncertain and tend to be unbounded.
  • Equating participants: Similar to the above, most games give every participant the same utility boundaries, when in reality they always vary. I honestly think some game theorists would model the benefits of technology on the assumption that a Sumerian peasant in 3000 BC and an American service-economy worker in 2020 can have equivalent utility. That is dumb.
  • Unchanging calculations: In part because of the uncertainty and asymmetries mentioned above, no exact representation of a game sticks around–the equation constantly shifts as participants change and utility boundaries move (up with new tech, down with new regs, etc.). That is why the math is subordinate to the structure: if you are right about the participants and the pathways, and have an OK gut estimate of the payoff magnitudes, you can decide rapidly and then shift your equation as the world changes.
  • Minimal feedback/second-order effects: Some games have signal-response, but it is hard to capture the fact that all decisions enter a complex milieu of interacting causes and effects, where the arrows of causation are hard to map. Since you can’t model them, just try to guess: what will the response to the game’s outcome be? Focus on feedback loops–they hold the secrets to unbounded long-term utilities.
  • The game ends: Obviously, since games are abstractions, it makes sense to tie them up nicely in one set of inputs and then a final set of outputs. In reality, there is really only one game, and each little representation is a snapshot of life. That means many games forget that the real goal is to stay in the game (see the sketch after this list).
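Here is a minimal simulation of that last point: a bet with positive expected value every round that nonetheless ruins nearly everyone who keeps playing it (the payoff numbers are arbitrary, chosen only so that expected value is positive while median growth is negative):

```python
import random

random.seed(3)

def play(wealth=100.0, rounds=1000):
    """Each round, bet half your wealth on a fair coin flip: a win multiplies
    wealth by 1.3, a loss by 0.725. The expected multiplier is 1.0125 > 1
    (positive EV), but median log-growth is negative, and the game only
    continues while you are solvent."""
    for _ in range(rounds):
        stake = wealth * 0.5
        wealth += stake * (0.6 if random.random() < 0.5 else -0.55)
        if wealth < 1:
            return 0.0   # ruined: out of the one real game
    return wealth

results = [play() for _ in range(1000)]
ruined = sum(r == 0 for r in results)
print(f"{ruined / 10:.0f}% of players ruined despite a positive-EV bet each round")
```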

These examples–good rules of thumb for practitioners, certain to be quibbled with by any academic reader–remind me of how wrong even the history of game theory is. As with many oversights by historians of science, the attribution of game theory’s invention credits the first theoretician (John von Neumann, who was smart enough to both practice and theorize), not the first practitioner (probably lost to history–but certainly practicing by the 1600s, as Pascal’s Wager actually lines up better with “game theory in the wild”: Pascal used infinite payoffs and actually did become religious). Practitioners, I would ignore the conventional history, the theory, the actual math, and the long papers. Focus on easily used principles and heuristics that capture uncertainty, unboundedness, and asymmetries. Some examples:

  • Principle: Prediction is hard. Don’t do it if you can help it.
  • Heuristic: Bounded vs. Unbounded. Magnitude is easier to measure (or at least cap) than likelihood is.

  • Principle: Every variable introduces more complexity and uncertainty.
  • Heuristic: Make decisions for one really good reason. If your best reason is not enough, don’t depend on accumulation.

  • Principle: One-time experiments don’t optimize.
  • Heuristic: If you actually want to find useful methods, iterate.

  • Principle: Anything that matters (power, utility, etc.) tends to be unequally distributed.
  • Heuristic: Ignore the middle. Either make one very rich person very happy (preferred) or make most people at least a little happier. Or pull a barbell strategy if you can.

  • The Academic Certainty Principle: Mere observation of reality by academics inevitably means they don’t get it. (Actually a riff on observer effects, not Heisenberg, but the name is catchier this way.)
  • Heuristic: In game theory as in all academic ideas, if you think an academic stumbled upon a good practice, try it–but assume you will need trial and error to get it right.

  • Principle: Since any action has costs, ‘infinite’ payoffs, in reality, come from dividing by zero.
  • The via negativa: Your base assumption should be inaction, followed by action to eliminate cost. Be very skeptical of “why not” arguments.

So, in summary, most specific game theories are broken because they privilege math (finite, tidy, linear) over practice (interconnected, guess-based, asymmetric). That does not mean you can’t use game theory in the wild; it just means you should focus on structure over math, unbounded/infinite payoffs over solvable games, feedback loops over causal arrows, inaction over action, extremes over moderates, and rules of thumb over quibbles.

Good luck!

Why the US is behind in FinTech, in two charts

The US is frankly terrible at innovation in banking. When Kenya and its neighbors have seen faster adoption of mobile banking than we have–as they have since at least 2012–it is time to reconsider our approach.

Here is the problem: we have made new ideas in banking de facto illegal. Especially since the 2008 financial crisis, regulatory bodies (especially the CFPB) have piled on a huge amount of potential liability that scares away any new entrant. Don’t believe me? Let’s look at the data:

[Chart: new bank creation in the US, by year]

Notice anything about new bank creation in the US after 2008?

A possible explanation, in a “helpful resource” provided to banking regulators and lawyers for banks:

[Chart: the US banking regulatory structure]

The chart shows eight federal agencies reporting to the FSOC, plus further independent regulators relevant to fintech (OFAC/FinCEN). The “helpful” chart notes state regulations only as an addendum in a single circle…probably because they would take 50 more, possibly complex and contradictory, charts.

So, my fellow citizens, don’t innovate in banking. No one else is–and they are probably right not to.