An Alternative to Chasing New Variants of SARS-CoV-2

In discussing the push to vaccinate against COVID-19 in developed economies like the US and UK, what gets lost in the political rhetoric is the importance of effective vaccination among the immunosuppressed, especially people living with HIV.

In groups with underlying immunosuppression [e.g., people with hematological malignancies, people receiving immunosuppressive therapies for solid organ transplants, or people with other chronic medical conditions], there have been reports of prolonged COVID-19 infection. The latest is from South Africa, where an HIV+ patient experienced a persistent COVID-19 infection of 216 days with moderate severity. The constant shedding of the SARS-CoV-2 virus accelerated its evolution inside this patient. This is possible because suboptimal adaptive immunity delays clearance of the SARS-CoV-2 virus but provides enough selective pressure to drive the evolution of new viral variants. In this case, the mutational changes of the virus within the patient resembled the Beta variant.

The largest population of immunosuppressed [HIV+] people is in South Africa. So an alternative to chasing variants like Delta and Beta after their large-scale emergence, or to trying to convince people who reject vaccination in the Global North, is to tackle super-spreading micro-epidemics of novel variants among the immunosuppressed in the Global South. Since Novavax and J&J are demonstrably ineffective among the immunosuppressed, the Moderna vaccine is the best bet to slow down the emergence of future variants.

Who has millions of unused mRNA COVID-19 vaccine doses that are set to go to waste? The answer is the United States. As demand dwindles across the United States and doses are likely to expire this summer, why not use them in the Global South, especially South Africa, through a concerted international effort?

US and China: Knowledge Deficit or Trade Deficit?

The problems with a headline such as "US Trade Balance With China Improves, but Sources of Tension Linger" are twofold.

https://www.wsj.com/articles/u-s-trade-deficit-narrowed-in-december-as-exports-outpaced-imports-11612532757

A: It lends support to the notion that trade surpluses are FOREVER safe and trade deficits are INVARIABLY grave. That is not accurate, because foreign countries will always wish to invest capital in countries like the US, which employ it relatively well. One clear case of a nation that borrowed massively from abroad, invested wisely, and did exceedingly well is the United States itself. Although the US ran a trade deficit from 1831 to 1875, the borrowed financial capital went into projects like railroads that brought a substantial economic payoff. Likewise, South Korea ran large trade deficits during the 1970s. However, it invested heavily in industry, and its economy grew rapidly. As a result, from 1985 to 1995, South Korea ran trade surpluses large enough to repay its past borrowing by exporting capital abroad. Furthermore, Norway’s current account deficit reached 14 percent of GDP in the late 1970s, but the capital it imported enabled it to create one of the world’s largest oil industries.

B: The headline makes a normative claim while equating the bilateral trade deficit with the overarching narrative of bilateral tensions. Such normative claims follow from the author’s value-based reasoning, not from econometric investigation. China and the US may have ideological friction on many levels, but the surplus or deficit has much to do with demographics and population changes within a country at a given time. Nonetheless, a legacy of political rhetoric relishes inflating and conflating matters. We hear a lot about the prediction that China will become the largest economy by 2035, provoking many in the US to bat for protectionist policies. But we ignore the second part of this prediction: based on population growth, migration (aided by liberal immigration policies), and total fertility rate, the US is forecast to become the largest economy once again in 2098.

Therefore, it is strange that so many of the “trade deficit imbalance” headlines neglect to ask whether the borrower is investing the capital productively. A trade deficit is not always harmful, and there is no guarantee that running a trade surplus will bring substantial economic health. In fact, enormous trade asymmetries can be essential for economic development.

Lastly, isn’t it equally odd that this legacy of political rhetoric between the US and China makes it natural to frame trade deficits with China under the ‘China’s abuses and coercive power’ banner, yet intimidates the US establishment from honestly and openly confronting the knowledge deficit about SARS-CoV-2’s origin? How and when does a country decide to bring sociology to epistemology? Shouldn’t we all be more concerned about significant knowledge deficits?

Mother: A Verb and a Noun

I’m sharing what I gleaned from a very insightful discussion with the author, Sarah Knott, who visited Cincinnati early last year for an open house discussion about her book, Mother is a Verb: An Unconventional History.

Sarah points out that in the modern world, the mother is the only caregiver left. However, in traditional European and Asian societies (for example, the Indian society), mothering was—in the case of India, to a large extent still is—a communal effort. Aunts were central and were called Big Momma. Now they are just aunts. This clarifies for me why we in India, while growing up, refer to every stranger on the street of a certain age as an uncle or an aunt irrespective of who they are; it is a vestige of our traditional society heavily focused on ‘other mothering.’ 

From the open house discussion, I discovered that after the First World War there was despair about the world, leading to childlessness and child-free couples. This mirrors the way our generation’s cultural concerns and climate-change anxieties are leading to more child-free couples. In this context, the American baby boom of the 1940s and 50s was not an everyday occurrence but a striking anomaly. After the baby boom, a period of childlessness (child-free couples) came back. Although many things are recorded about mothers and women in child-free marriages, what we know about fathers and childless men is nada. This is a gap in our history we need to correct. Back to the topic of mothers: the US and France were among the first to reduce childbirths after the world wars. Women stood up for individual liberty—about time—in these countries. This trend eventually led to other places in the world adopting the same values. So, a trade-off post-Second World War societies made was that they no longer cared for big families. Instead, they looked to invest in different versions of big and caring governments, with mixed results.

As we moved from big, interconnected families, which helped protect (in terms of social capital) the most vulnerable people in society from the shocks of life, to smaller, detached nuclear families, we made room to maximize talents and expand individual freedom. This shift has ultimately led to a familial system that liberates people of a certain capability and ravages others.

Altogether, Sarah Knott reminds us that our contemporary society often forgets that a ‘mother’ is just as much a verb as it is a noun.

Health policy is a less mature field in India

The raging second wave of COVID-19 hasn’t just collapsed Indian healthcare; it has devastatingly uncovered preexisting public health policy deficits and healthcare frailties.

[As of May 3, 2021]

In India, there is a need to revive a serious conversation around public health policy, along with upgrading healthcare. But wait, isn’t the term ‘public health’ interchangeable with healthcare? Actually, no. ‘Public health’ is a population-scale program concerned with prevention, not cure. In contrast, healthcare essentially involves a private good, not a public good. Most public health experts point out that the weaker the healthcare system (such as in India), the greater the gains from implementing public health prevention strategies.

India focused its energies on preventing malaria by fighting mosquitoes in the 1970s and then regressed to ineffectively treating patients who have malaria, dengue, or Zika. A developing public health policy got sidelined for a more visible, vote-grabbing, yet inadequate healthcare program. Why? Indian elites tend to transfer concepts and priorities from the health policy debates of advanced economies into Indian thinking about health policy without much thought. As a result, there is considerable confusion around terminology. There is a need for a sharp delineation between ‘public good,’ ‘public health,’ and ‘healthcare.’ The phrase ‘public health’ is frequently misinterpreted to imply ‘healthcare.’ Conversely, ‘healthcare’ is repeatedly assumed to be a ‘public good.’ In official Indian directives, the phrase ‘public health expenditure’ is often applied to government expenditures on healthcare. It is confusing because it contains the term ‘public health,’ which is, in this framing, the opposite of ‘healthcare.’

Many of the advanced economies of today have been engaged in public health for a very long time. For example, the UK began work on clean water in 1858 and on clean air in 1952. For over forty-five years, the Clean Air Act in the U.S. has cut pollution even as the economy has grown. Therefore, the elites in the UK and US can worry about the nuances of healthcare policy. The focus of health policy in India, on the other hand, must be on prevention, as it is not a solved problem. Problems such as air quality have become worse in India. Can the Ministry of Health do something about it? Not much, because plenty of public health-related issues lie outside the administrative boundaries of the Ministry of Health. Air quality—which afflicts North India—lies with the Ministry of Environment, and internal bureaucracy—“officials love the rule of officials”—deters the two departments from interacting and working out such problems productively. Economist Ajay Shah points out that Indian politicians who concern themselves with health policy take the path of least resistance—using public money to buy insurance (frequently from private health insurance companies) for individuals who would obtain healthcare services from private healthcare providers. This is an inefficient path because a fragile public health policy bestows a high disease burden, which induces a market failure in the private healthcare industry, followed by a market failure in the health insurance industry.

In other words, Ajay Shah implies that the Indian public sector is not effective at translating expenditures into healthcare services. Privately produced healthcare is plagued with market failure. Health insurance companies suffer from poor financial regulation and from dealing with a malfunctioning healthcare system. No matter the amount of money one channels into government healthcare facilities or health insurance companies, both these routes work poorly. As a consequence, increased welfare spending by the government on healthcare, under the present paradigm of Indian healthcare, is likely to be under par.

The long-term lesson from the second wave of COVID-19 is that inter-departmental inefficiencies cannot be tolerated anymore. Public health considerations and economic progress need to shape future elections and the working of many Indian ministries in equal measure. India deserves improved intellectual capacity in shaping public health policy and upgrading healthcare to obtain a complete translation of higher GDP into improved health outcomes. This implies that health policy capabilities—data sets, researchers, think tanks, local government—will need to match the heterogeneity of Indian states. What applies to the state of Karnataka will not necessarily apply to the state of Bihar. The devastating second wave is not an argument for imposing more centralized uniformity on healthcare and public health policy proposals across India, as that would inevitably reduce the quality of execution in its diverse states, each with its own hurdles. Instead, Indian elites need to place ‘funds, functions and functionaries’ at the local level for better health outcomes. After all, large cities in India match the populations of many countries. They deserve localized public health and healthcare policy proposals.

The need to address the foundations of public health and healthcare in India around the problems of market failure and low state capacity has never been greater.

St. Patrick’s Day Cerebration

It is fascinating how foreign societies that loathe someone else’s nationalist movements over time amplify the revitalized traditions from such movements in their mass culture and celebrate a parodied version of them. A case in point is the St. Patrick’s Day celebration.

Ireland’s vibrant folklore tradition was revitalized in the late 19th and early 20th centuries by a vigorous nationalist movement. Following this movement, a tiny rural Irish folklore tradition carried over to the Irish American context—leprechauns decorating St. Patrick’s Day cards, for example. The communal identity of the pastoral Irish homeland, revitalized by their nationalist movement, was vital for Irish group consciousness in America over many generations, even for the new generations who never visited the old sod.

Today the Irish group has to remind itself of its kinship and traditions as it continues to fade into the American mass culture. As an outsider, what I get from the American mass culture—which supposedly celebrates multiple cultures—is not the rich Irish folklore and tradition that arrived on the scene but mere stereotypes of it. The 19th-century WASP anti-Irish caricature can be seen even today. Implicit in every Irish joke is either the image of a drunken Irishman devoid of any cultural sophistication or a fighting Irishman who is endlessly combative. Had the Irish been a brown or black community, would such a depiction—in a less mean-spirited form or not—carry forward in today’s hypersensitive, race-obsessed American society?

Did you know that Ireland contributes significantly to global science and technology?

St. Patrick’s Day, for the most part a quiet religious holiday in Ireland, started as an occasion for those living in the United States to demonstrate Irish heritage. Instead, one primarily receives an American mass culture-induced Irish self-parody: a holiday associated with alcoholic excess. How much of the self-parody was consciously nurtured by early Irish Americans is debatable. But I recognize that the Irish have resisted being entirely consumed by the American mass culture in several ways, for example, by employing traditional Irish names such as Bridget, Eileen, Maureen, Cathleen, Sean, Patrick, and Dennis.

If you think you know the Irish, then think again.

As a Hindu immigrant myself, I realize it is essential for immigrant groups to assimilate in several ways, like speaking English, which the Irish did effortlessly. Isn’t it the American mass culture’s duty to comprehend a certain authentic Irishness or Hinduness in popular culture without caricatures? As a Hindu, I have faced disdainful “holy cow” jokes from Muslims and Christians in the United States, but of course, Hinduphobia isn’t a politically salient thing, you see. In this light, when just about anything goes in the name of a “melting pot,” I don’t see cultural salad bowls as regressive but protective. Interestingly, folks who sermonize that blending cultures and diluting conserved cultural traits like names are the only form of progressive new beginnings in the social setting also shore up conserved group identities for certain communities in politics.

Not everything is healthy among immigrants who wish to conserve their authentic identity, either. There is a bias among such immigrants to regard the United States as lacking a unique, respectable culture. Such immigrant-held prejudices get magnified when one half of the country goes about canceling all the faces on Mount Rushmore and actively devalues every founding document and personality of the United States. The impact of immigration and assimilation is complex. It requires appreciation of the traditions maintained by immigrants, and it requires immigrants to appreciate the culture of the land they have entered.

Over time, however, colorful cultural parades that aim to link immigrant folklore and traditions to public policy and popular culture via song, dance, and merriment also dilute the immigrant’s bond with their authentic cultural heritage. In a generation or two, immigrants embody the projected, simplistic self-image parading in the mass culture. Soon, the marketed superficial traits become deeply authentic pop-cultural heritage worthy of conservation in a melting pot. A similar phenomenon is taking over diaspora Hindus and some communities back home in India, where a certain “Bollywood-ization” of Indic rituals and culture is apparent. I call this careless collective self-objectification.

When a critical mass of people recognize a weakening of valuable cultural capital, reviving it is natural. If not for the revivalist Irish cultural nationalism, would there be a sense of pride, a feeling of collaboration among the Irish Americans in the early years? Would there be a grand sweep of Irish heritage for the melting pot to—no matter how superficially—celebrate?

A Tribute to Mary Lasker

In the early twentieth century, cancer assumed a more prominent place in the popular imagination as the threat of contagious diseases reduced and Americans lived longer. In this regard, the American Society for the Control of Cancer (ASCC), founded in 1913, had identified three goals: education, service, and research. However, until midcentury, largely due to the limited budget, the society contributed little to cancer research.

Enter Mary Woodard Lasker

One of the most powerful women in mid-twentieth-century New York City, and perhaps North America, Lasker demonstrated that women could command transformations in medical institutions. Born in Wisconsin in 1900, Mary Lasker, at the age of four, found out that the family’s laundress, Mrs. Belter, had been treated for cancer. Lasker’s mother explained, “Mrs. Belter has had cancer and her breasts have been removed.” Lasker responded, “What do you mean? Cut off?” Decades later, Mary Lasker would cite this early memory as having inspired her to engage in cancer work.

Having established herself as an influential New York philanthropist, businesswoman, and political lobbyist, Mary Lasker inquired about the role of the ASCC in 1943 and learned that the organization had no money to support research. Somewhat astounded by this discovery, she immediately contemplated ways to reorganize the society. She wanted to recreate the society as a powerful organization that prioritized cancer research.

Well-versed in public relations and connected to the financial and political circles of the country, Lasker played a central role in the mid-1940s. Despite notable opposition, Lasker convinced the ASCC to change the composition of its board of directors to include more lay members and more experts in financial management. She urged the society to adopt a new name, the American Cancer Society. She also convinced it to earmark one-quarter of its budget for research. This financial reorganization allowed the ACS to sponsor early cancer screening, including the early clinical trials. The newly formed American Cancer Society articulated a mission that explicitly identified research funding as its primary goal.

By late 1944, the American Cancer Society had become the principal nongovernmental funding agency for cancer research in the country. In its first year, it directed $1 million of its $4 million in revenue to research. Lasker’s ardent advocacy for greater funding of all the medical sciences contributed to increased funding for the National Institutes of Health and the creation of several NIH institutes, including the Heart, Lung, and Blood Institute.

Mary Lasker continued to agitate for research funds but resisted any formal association. As she explained it, “I’m always best on the outside.” Undoubtedly, Mary Lasker’s influence and emphasis on funding cancer research contributed to promoting the Pap smear in U.S. culture. As a permanent monument to her efforts, in 1984, Congress named the Mary Woodard Lasker Center for Health Research and Education at the National Institutes of Health.

A portion of a letter ACS administrative director Edwin MacEwan wrote to Lasker encapsulates her contribution to our society. He wrote, “I learned that you were the person to whom present and potential cancer patients owe everything and that you alone really initiated the rebirth of the American Cancer Society late in 1944.”

“If you think research is expensive, try disease.”
Mary Woodard Lasker (November 30, 1900 – February 21, 1994) 

Sources:

  1. The Lasker Foundation (http://www.laskerfoundation.org/about/)
  2. Mary Lasker Papers, Columbia University Rare Book and Manuscript Library, New York, N.Y.
  3. Early Detection: Women, Cancer, & Awareness Campaigns in the Twentieth-century United States by Kirsten Elizabeth Gardner

Affirmative Guilt-Gradient and the Overton Window in Identity-Based Pedagogy

Yesterday, I came across this scoop on Twitter; the New York Post and several other blogs have since reported it.

Regardless of this scoop’s veracity, the chart of Eight White identities has been around for some time now, and it has influenced young minds. So, here is my brief reflection on such identity-based pedagogy:

As a non-white resident alien, I understand the history behind the United States’ racial sensitivity in all domains today. I also realize how zealous exponents of diversity have consecrated schools and university campuses in the US to rid society of prevalent racial power structures. Further, I appreciate the importance of people being self-critical; self-criticism leads to counter-cultures that balance mainstream views and enable reform and creativity in society. But I also find it essential that critics of mainstream culture not feel morally superior enough to enforce just about any theoretical concept on impressionable minds. Without getting too much into the right vs. left debate, there is something terribly sad about being indoctrinated at a young age—regardless of the goal of social engineering—to accept an automatic moral one-‘downmanship’ for the sake of the density gradient of cutaneous melanin pigment. Even though I’m a brown man from a colonized society, this kind of extreme ‘white guilt’ pedagogy leaves me with a bitter taste. And in this bitter taste, I have come to describe such indoctrination as the “Affirmative Guilt-Gradient.”

You should know there is something called the Overton Window, under which concepts grow larger as their actual instances and contexts grow smaller. In other words, well-meaning social interventionistas view each new instance, in the decreasingly problematic context of the problem they focus on, with the same lens they use for the much larger original problem. This leads to an unrealistic enlargement of academic concepts that are then shoved down the throats of innocent, impressionable school kids, who will take them as objective realities instead of subjective conceptual definitions overlaid on one legitimate objective problem.

I find the scheme of Eight White identities a symptom of the shifting Overton Window.

According to Thomas Sowell, there is a whole class of academics and intellectuals of social engineering who believe that when the world doesn’t reconcile to their pet theories, that shows something is wrong with the world, not with their theories. If we project Thomas Sowell’s observation onto this episode of the “Guilt-Gradient,” it is perfectly reasonable to expect many white kids and their parents to refuse to adopt these theoretically manufactured guilt-gradient identities. We can then, applying Sowell’s observation, predict that academics will declare the opposition to the “Guilt-Gradient” evidence of many covert white supremacists in society who will not change. Such stories may then get blown up in influential op-eds, leading to the magnification of a simple problem, soon to be misplaced in the clutter of naïve supporters of such theories, the progressive vote-bank, and hard-right polemics.

We should all acknowledge that attachment to any identity—be it majority or minority—is by definition NOT hatred for an outgroup. Ashley Jardina, Assistant Professor of Political Science at Duke University, in her noted research on the demise of white dominance and threats to white identity, concludes, “White identity is not a proxy for outgroup animus. Most white identifiers do not condone white supremacism or see a connection between their racial identity and these hate-groups. Furthermore, whites who identify with their racial group become much more liberal in their policy positions than when white identity is associated with white supremacism.” Everybody has a right to associate with their identity, and associating with an ethnic-majority identity is not automatically toxic. I feel it is destructive to view such identity associations as inherently toxic because it is precisely this sort of warped social engineering that results in unnecessary political polarization; the vicious cycle of identity-based tinkering is a self-fulfilling prophecy. Hence, recognizing the Overton Window at play in such identity-based pedagogy is a must if we are to make progress. We shouldn’t be tricked into assuming that non-acceptance of the Affirmative Guilt-Gradient is a sign of our society’s lack of progress.

Finally, I find it odd that ideologues who profess “universalism” and international identities choose schools and universities to keep structurally confined, relative identities going by adding excessive nomenclature so they can apply interventions that are inherently reactionary. However, isn’t ‘reactionary’ a pejorative these ideologues use on others?

The Prose and Poetry of Creation

Every great civilization has simultaneously made breakthroughs in the natural sciences, in mathematics, and in the investigation of that which penetrates beyond the mundane, beyond external stimuli, beyond the world of solid, separate objects, names, and forms to peer into something changeless. When written down, these esoteric precepts have a natural tendency to decay over time because people tend to accept them too passively and literally. Consequently, people then value the conclusions of others over clarity and self-knowledge.

Talking about esoteric precepts decaying over time, I recently read about the 1981 Act the state of Arkansas passed, which required that public school teachers give “equal treatment” to “creation science” and “evolution science” in the biology classroom. Why? The Act held that teaching evolution alone could violate the separation between church and state, to the extent that this would be hostile to “theistic religions.” Therefore, the curriculum had to concentrate on the “scientific evidence” for creation science.

As far as I can see, industrialism, rather than Darwinism, has led to the decay of virtues historically protected by religions in the urban working class. Besides, every great tradition has its own equally fascinating religious cosmogony—for instance, the Indic tradition has an allegorical account of evolution apart from a creation story—but creationism is not defending all theistic religions, just one theistic cosmogony. This means there isn’t any “theological liberalism” in this assertion; it is a matter of one hegemon confronting what it regards as another hegemon—Darwinism.

So, why does creationism oppose Darwinism? Contrary to my earlier understanding from the scientific standpoint, I now think creationism looks at Darwin’s theory of evolution by natural selection not as a ‘scientific theory’ that infringes the domain of a religion but as an unusual ‘religion’ that oversteps an established religion’s doctrinal province. Creationism, therefore, looks to invade and challenge the doctrinal province of this “other religion.” In doing so, creation science, strangely, is a crude, proselytized version of what it seeks to oppose.

In its attempt to approximate a purely metaphysical proposition in practical terms, or to exoterically prove every esoteric precept, this kind of religious literalism takes away from the purity of esotericism and the virtues of scientific falsification. Literalism forgets that esoteric writing enables us to cross the mind’s tempestuous sea; it does not have to sink in this sea to prove anything.

In contrast to the virtues of science and popular belief, esotericism forces us to be self-reliant. We don’t necessarily have to stand on the shoulders of others and thus within a history of progress, but on our own two feet, we seek with the light of our inner experience. In this way, both science and the esoteric flourish in separate ecosystems but within one giant sphere of human experience like prose and poetry.

In a delightful confluence of prose and poetry, Erasmus Darwin, the grandfather of Charles Darwin, wrote about the evolution of life in poetry in The Temple of Nature well before his grandson contemplated the same subject in elegant prose:

Organic life beneath the shoreless waves
Was born and nurs’d in Ocean’s pearly caves;
First forms minute, unseen by spheric glass,
Move on the mud, or pierce the watery mass;
These, as successive generations bloom,
New powers acquire, and larger limbs assume;
Whence countless groups of vegetation spring
And breathing realms of fin, and feet, and wing.

The prose and poetry of creation—science and the esoteric, the empirical and the allegorical—make the familiar strange and the strange familiar.

In Pursuit Of The Human Aim Of Leisure

This fascinating isochrone map—showing how many days it took to get anywhere in the world from London in 1914—at first blush evokes the cliché that the world has now shrunk. Obviously it hasn’t shrunk. While the distance between London and New Delhi is still 4,168 miles, what has shrunk is time, and this has had profound consequences for our lives.

One of the great ironies of the remarkable proliferation of time-saving inventions is that they haven’t made life simple enough to give us more time and leisure. By leisure, I don’t mean a virtue of some kind of inertia, but a deliberate organization based on a definite view of the meaning and purpose of life. In the urban world, the workweek hasn’t shortened. We still don’t have large swaths of time to really enjoy the good life with our families and friends.

In 1956, Sir Charles Darwin, grandson of the great Charles Darwin, wrote an interesting essay on the forthcoming Age of Leisure in the magazine New Scientist in which he argued: “The technologists, working for fifty hours a week, will be making inventions so the rest of the world need only work twenty-five hours a week. […] Is the majority of mankind really able to face the choice of leisure enjoyments, or will it not be necessary to provide adults with something like the compulsory games of the schoolboy?”

He is wrong about the first part. The world may have shrunk, but cities have magnified. Travel technologies have incentivized us to live farther away and simply travel longer distances to work, attracting anxiety attacks during peak-hour traffic [Google Marchetti’s constant]. So, rather than being bored to death, our actual challenge is to avoid the psychotic breakdowns, heart attacks, and strokes that result from being accelerated to death.

Nonetheless, Sir Charles Darwin is accurate about “compulsory games” for adults. What else are social media platforms, compulsive eating, selfies, texts, Netflix bingeing, and 24/7 news media that dominate our lives? They are the real opiates of the masses. We have been so conditioned to search for happiness in these anodyne pastimes that defying these urges appears to be a denial of life itself. It is not surprising that we can no longer confidently tell the difference between passing pleasure and abiding joy. Lockdown or no lockdown, we are all unwittingly participating in these compulsory games with unwritten rules, believing that we are now that much closer to the good life of leisure. But are we?

‘Time is the wealth of change, but the clock in its parody makes it mere change and no wealth.’— Rabindranath Tagore

The unyielding middlemen: A timeline of 2020-2021 Indian farmers’ protest

What’s the first question in the field of public policy? According to the Indian economist Ajay Shah, “What should the state do?” is the first question. He says, “A great deal of good policy reform can be obtained by putting an end to certain government activities and by initiating new areas of work that better fit into the tasks of government.”

This question is especially essential for a weak state like India. But what if people prefer government subsidies, assertive intermediaries and a weak state? I don’t know the answer to this question. The story of the Indian farm protest is an illustrative example; it is a rebellion to stay bound to the old status quo, fearful of free choice.

Protest Timeline

04 June 2020: The Union Cabinet clears three ordinances meant for reforms in the Indian agricultural sector. These reforms upgrade farmers from being just producers to free-market traders. Agriculture is a state subject in India, but state governments have had no political will to usher in these reforms. China reformed its agriculture sector first, followed by other industries. India is doing it the other way round, and thirty years late. So, the union government followed constitutional means to usher in the reforms.

04-05 June 2020: Leader of Bharatiya Kisan Union (BKU), Rakesh Tikait, welcomes the ordinances.

09 August 2020: Two months after the cabinet’s ordinances, voices of dissent emerge in Punjab, Haryana, and U.P. because a minority of well-off farmers in these states are associated with the APMC system—the post-green revolution status quo—that makes them comfortable middlemen.

14-20 September 2020: All three bills are cleared in the two houses of parliament, but a party member from Punjab pulls out in symbolic protest.

25 September 2020: Protest gets a ‘Bharat Bandh’ (India Shutdown) tag even though farmer unions in only three states oppose the reforms. The Union Govt opens a communication channel and holds several talks with these farmer associations over their concerns.

04 December 2020: The Union Govt offers a workaround for concerns about the dilution of MSP. By the way, MSP sets an unnaturally high price and cuts out the competition, so the middlemen club in the farmers’ associations of Punjab, Haryana, and U.P. want nothing less than the scrapping of these reforms.

21 December 2020: Farmer associations boycott Jio and Reliance products unrelated to the farmer bills.

08 January 2021: Greta Thunberg’s online toolkit for a planned Twitter campaign against the Indian government is launched, invoking human rights violations; it settles on a hashtag.

10 January 2021: Online narrative set and future social media posts finalized.

12 January 2021: The Supreme Court of India sets up a committee to examine the laws.

21 January 2021: The Union Govt offers to stay the laws for 18 months for consultation, but the offer is rejected.

26 January 2021: The farmers, during their Tractor Rally protests, breach the Red Fort, leading to a scuffle with the police. They hoist a religious flag at the Red Fort, thereby giving this arcane legal issue an unwanted sectarian color.

Bottom line: A) The ordinances aim to liberalize agricultural trade and increase the number of buyers for farmers. B) Deregulation alone may not be sufficient to attract more buyers.

Almost every economist worth his salt acknowledges the merit in point A) and welcomes these essential reforms that are thirty years late but better late than never. Ajay Shah says, “We [Indians] suffer from the cycle of boom and bust in Indian agriculture because the state has disrupted all these four forces of stabilization—warehousing, futures trading, domestic trade and international trade. The state makes things worse by having tools like MSP and applying these tools in the wrong way. Better intuition into the working of the price system would go a long way in shifting the stance of policy.”

The middlemen, however, argue point B), which acts as a broad cover for their real fear of squandering their upper hand in the current APMC/MSP system. Although nobody denies that a sudden opening of the field to competition will threaten the income of these middlemen, such uncertainties should not justify violent protests and slander campaigns that look to derail the entire process of upgrading the lives of the great majority of poor farmers in the country.

Even worse, these events get branded in broad strokes as state violence and human rights abuses by pre-planned Twitter and street campaigns and unnecessary road blockades. Everybody questions internet outages during these protests, but no one questions the ethics of protesters blocking essential roads in the city. A section of Indian society and the diaspora hates Prime Minister Modi, for sure. I have no qualms with this, but that reckless hate shouldn’t negate all nuance in analyzing perfectly sane reforms. Social justice warriors legitimize the vicious cycle of dissent without nuance because they don’t take the trouble of even reading the farm bills but make it a virtue to reason from their “bleeding hearts.”

Talking about social justice warriors: Sadanand Dhume, a Resident Fellow at the American Enterprise Institute and one of the few left-leaning voices from India I respect, writes, “What do Rihanna, Greta Thunberg, and Vice President Kamala Harris’s niece, Meena Harris, have in common? They’re all rallying support for India’s farmer protests, which are morphing from an arcane domestic dispute into an emotive international cause. And they’re all mostly wrong in their thinking.”

Ordinarily, the Indian state works inadequately and experiences confusion when faced with a crisis. It comes out with a policy package that attempts to address the problem in a short-term way and then retreats into indifference. So, there are two aspects to its incompetence: one, there is a lack of political will because special interest groups persuade the government towards the wrong objectives; and two, state capacity is so weak that it fails to achieve the goal. The farm protest is a hideous third kind of difficulty: a special interest group of assertive, influential middlemen wants a strong-willed, long-term-thinking Indian government policy—a rare entity—to sway towards short-termism under the pretext of human rights abuse. The hard left is actually supporting the Indian state in remaining weak. They will also be the first to blame the state when it comes off as weak in the next debacle.

The story never ends.

‘South Asian’ identity signals alignment without being aligned to anything specific

Of late, a growing number of Indian-Americans look to assert a South Asian identity for most of their sociopolitical and cultural expressions, even though actual residents of ‘South Asia’ don’t claim this identity in any way, at home or abroad. I realize that second-generation Indian-Americans embrace ‘South Asian’ forums in reaction to various domestic conditions. However, they ignore the polysemy of the term ‘South Asia’ when they project it internationally, for example, to express ‘South Asian’ pride over Kamala Devi Harris’s historic election to the Vice Presidency, instead of just Indian-American pride. Of course, I’m not talking about African-American pride here; it is beyond the purview of my discussion.

According to my understanding, the increasing application of the term ‘South Asia’—just like ‘the Middle East’—precludes a nuanced perception of the particular countries that make up the region. It permits Americans to perceive the region as if it were a monolith. Although the impression of the United States is striking in the Indian imagination, the image of India, as it turns out, is not very vivid for the average citizen in the United States, not even among second-generation Indian Americans, as I see it. To gauge American curiosity about a particular region, language enrollment in US universities is a decent metric. It turns out that around seven times more American students study Russian than all the Indian languages combined. The study of India compares unfavorably with China in nearly every higher education metric, and surprisingly, it also fares poorly compared to Costa Rica! As an aside, to understand India and her neighborhood, an alternative perspective to CNN or BBC on ‘South Asian’ geopolitics is WION (“World is One” News, a take on the Indic vasudhaiva kutumbakam). I highly recommend the Gravitas section of WION for an international audience.

Back to the central question: why do Indians not prefer the ‘South Asia’ tag?

For decades, the United States hyphenated its India policy by balancing every action with New Delhi with a counterbalancing activity with Islamabad. So much so that the American focus on Iranian and North Korean nuclear proliferation stood out in total contrast to the whitewashing of Pakistan’s private A.Q. Khan network for nuclear proliferation. Furthermore, in a survey conducted by the Chicago Council on Global Affairs that gauges how Americans perceive other countries, India has hovered between forty-six and forty-nine on a scale from zero to one hundred since 1978, reflecting its reputation as neither an ally nor an adversary. With the civil-nuclear deal, the Bush administration discarded the hyphenation construct and eagerly pursued an independent program between India and the United States. Still, in 2010, only 18 percent of Americans saw India as “very important” to the United States—fewer than those who felt similarly about Pakistan (19%) and Afghanistan (21%), and well below China (54%) and Japan (40%). Even though the Indo-US bilateral relationship has transformed for the better since the Bush era, the increasing use of ‘South Asia’ on various platforms by academics and non-academics alike while discussing India represents a new kind of hyphenated, or bracketed, view of India. Many Indian citizens in the US, like me, find this bracket unnecessary, especially in the present geopolitical context.

What geopolitical context? There are several reasons why South Asian identity pales in comparison to our national identities:

  1. The term ‘South Asia’ emerged exogenously, as a category created in the United States to study the Asian continent by dividing it. So, it is a matter of post-Second World War scholarship of Asia from the Western perspective.
  2. Despite this scholarship, ‘South Asia’ has low intelligibility because there is no real consensus over which countries comprise South Asia. SAARC includes Afghanistan among its members; the World Bank leaves it out. Some research centers include Myanmar—a province of British India till 1937—and Tibet, but leave out Afghanistan and the Maldives. The UK largely uses the term ‘Asian’ rather than ‘South Asian’ for academic centers. The rest of Europe uses ‘Southeast Asia.’
  3. Besides, geopolitically, India wants to grow out of the South Asian box; it cares a lot more about the ASEAN and BRICS grouping than SAARC. 
  4. Under Modi, India has a more significant relationship with Japan than with any South Asian neighbor. With Japan and South Korea, India plans to make the Indo-Pacific a geopolitical reality.
  5. South Asia symbolizes India’s unique hegemonic fiefdom, which is viewed unfavorably by neighboring Nepal, Sri Lanka, Bangladesh, and Pakistan.
  6. According to the World Bank, South Asia remains one of the least economically integrated regions globally.
  7. South Asia is also among the least physically integrated (by road infrastructure) regions of the world and this disconnect directly affects our politics and culture.

Therefore, the abstract term ‘South Asia’ is far from a neutral label that embraces multiple cultures. It is, at best, a placeholder for structured geopolitical cooperation in the subcontinent. In socio-cultural terms, however, ‘South Asia’ used interchangeably with India signals India’s dominance over her neighborhood; in India’s eyes, it is a dilution of her rising aspirations on the world stage. These facts widen the gap between American intentions (the general public and, particularly, second-generation Indian-Americans) and a prouder India’s growing ambitions.

Besides, it is worth mentioning that women leaders have already held the highest public office in Pakistan, India, Sri Lanka, Bangladesh, etc. So, as you see in this video, the Indian international actress Priyanka Chopra tries her best to be diplomatic about this nebulous ‘South Asian’ pride thingy, but she falls back on the more solid identity, her Indian identity. The next time, say, a Nepalese-American does something incredible in the US and you want to find out how another Nepali feels about this achievement, as a matter of experiment, refer to the accomplishment as Nepali pride instead of South Asian pride and see the delight on the person’s face. Repeat this with another Nepali, but this time use the ‘South Asian’ identity tag, and note the contrast in the reaction.

Why is the Republic of India a Civilization-State?

On 26 January 1950, India’s Constitution came into effect amidst severe apprehensions about India’s balkanization. So, seventy-one years later, the Indian democratic republic may still appear to be a historical accident, but it is not. Here is why:

India has always been fertile territory for experiments in governance, but surprisingly, there is no more than a casual reference to the ideas underlying non-Western civilizations in Political Science courses or in the History of Political Thought. The neglect of Indian polity is particularly striking, for apart from Western political thought, Indic political ideas comprise the most extensive and most crucial body of political philosophy. Moreover, these political ideas are integral to Indic civilization—one of the only surviving non-Western civilizations. Today, we know that Western ideas have clearly impacted Indian political thought. Still, what is generally not realized is that India has, in all probability, also contributed to Western political thinking.

The problem of scant attention given to Indic political thought, compared to Indic religion and philosophy, was partly remedied with the rediscovery of Kautilya’s Arthashastra—the Indic equivalent of Machiavelli’s The Prince. However, other great works also deal with an Indian way of thinking about the state-society relationship: Kamandaki’s Nitisara (Elements of Polity), the Raj Dharma (administrative ethics) section of the epic Mahabharata, the epic Ramayana, the Digha Nikaya (Collection of Long Discourses), and to some extent the antiquated Hitopadesha (Beneficial Advice).

In these essential texts and among Indic political thinkers, the king’s role is viewed mainly as that of an administrator—the ruler is not an agent of social change. This view is radically different from its counterparts in the West. In Western political theory—Rousseau, Locke, and Hegel—political order means the subjugation of society to the state. In the Indian tradition, society and culture are always supreme, and the ruler is accountable to dharma (Indic ethics, a common internal bond) and to society. Therefore, the conception of the “state of nature” in Hobbes and Rousseau is irrelevant to the Indic tradition because ethics and civilization preceded the state’s development in India. In the grand narratives of the Ramayana and Mahabharata, an esoteric reading accounts for personal ethics and the path to profound spiritual freedom, while an exoteric reading informs us about political power, administrative ethics, and the limits of provisional freedom. According to these epics, the state is created to protect against the disintegration of the social order, and the state is given only those powers required to do so. Thus, a ruler’s powers are not like those of the Leviathan conceptualized by Hobbes.

Despite these radical Indic political concepts, the popular view of ancient and early medieval India is that it was merely a region invested in despotism with no knowledge of Freedom or Liberty. Hegel assumed that only one tribe of men were free in Asia, and the others were their slaves. It is worth noting that for almost one thousand eight hundred years after the Greek republics collapsed, the Western world also lived through monarchical despotism and tyranny. Likewise, apart from ancient Greece and Rome, republics and proto-democracies existed in India too. A fair study of Indic history informs you that ancient Indian republics were not only in existence from the 8th century B.C. to the 4th century A.D., but were also conducting some fascinating experiments in state-society relations. With time, at least four different forms of constitutions emerged.

  1. Arajya: A political community without a king. These communities self-governed using Dharma texts (Indic ethics).
  2. Ganarajya: A state or a political community ruled by a ‘gana’ or an assembly of people.
  3. Youvarajya: A political community ruled by a crown prince.
  4. Dvairajya: A political community ruled by two kings.

For various reasons, Ganarajya and Youvarajya systems thrived much more than the other two. 

The ‘Gana‘ seems to be the earliest Indic political forum of the entire community (Jana). The Jana’s formulation of political policies rested with the Samiti (Sanskrit for committee) and the Sabha (an assembly of elders). Over time, these Ganarajya states developed into Janapada—self-sufficient political and cultural units. Every Janapada had its peculiar dialect and customs, developed from regional interpretations of Indic Dharma (ethics). Several of these Janapada states even joined hands to form federations of Mahajanapada (mega-Janapada). Over time, however, powerful Indic monarchies, which performed the state’s integrative functions better than the assemblies of the Gana, overwhelmed them. Fortunately, imperial states incorporated these republics into their fold; republics were not entirely stamped out, even after repeated invasions by the Turks, Mongols, Portuguese, French, and the British.

The Gana-Sabha system emerged from the shadows as soon as these imperial powers became weak. The Sabha system was active in the village setting as the Panchayat (village association), which included both notable big men and peasants, in contestation with each other and in opposition to the state. Here, different qualities of people and opinions were tested, rather than the scene being one of pronunciamentos by elders. Even the British acknowledged this system. Henry Maine, who was influenced by J. S. Mill, was sent to India in the 1860s to advise the British government on legal matters. He came across several accounts of thriving indigenous systems of autonomous village government, whose structure and practice shared many characteristics of participatory democracy. Later, Maine articulated a theory of the village community as an alternative to the centralized state. In the Panchayat system, De Tocqueville saw an ideal model of a society with a limited state. He planned a study of it comparable to Democracy in America, but, overwhelmed by his political duties, he never managed a trip. So, while Indian electoral democracy was only instituted in the first half of the twentieth century, the practice of public reasoning, deliberation, and toleration of a plurality of ideas is a much older phenomenon, dating back to ancient Indic traditions.

During the 1947 Constituent Assembly Debates of post-colonial India, there was an Alexander Hamilton vs. Thomas Jefferson sort of debate between Gandhi’s idea of Indic village-style, decentralized administration and B. R. Ambedkar’s (the principal architect of the Indian constitution) preference for a healthy centralized state. Although Ambedkar’s view prevailed, village democracy did not entirely disappear from the Indian constitution. India officially called itself Bhārat Gaṇarājya, and the first two words of the Indian national anthem honor Jana and Gana. Hence, the constitutional democracy of the Indian republic was not an accident; it is a sui generis phenomenon reflecting the plural character and the age-old but essential values of Indic civilization. Therefore, modern-day India is a Civilization-State. The West can only describe it from the outside, but it is for India to interpret herself from within—an ongoing process.

Finally, it merits mentioning that Professor of international history Arnold J. Toynbee reminded the world, “India is a whole world in herself; she is a society of the same magnitude as our Western society.”

To know more about India’s constitutional debates, check this excellent ten-episode series. Subtitles are available in English.

Internet villages and algorithmic-speech

We find ourselves in an overlap of classical free-speech abstractions, editorialized media discourse, and algorithmic social media diatribe. Each of these is a product that cannot reproduce the stability of the system that produced it. And yet, these platforms—print, electronic, and social media—represent disruptions that fill a vacuum felt in the other systems.

Besides, we tend to think that the IT revolution’s transformations—our iPhones, Facebook, and Twitter—are without parallel, but think of what urbanization brought to rural life, what the railway brought in the nineteenth century, or the telephone in the early twentieth. Disruptive innovations that increased transportation speed over the past couple of hundred years have not lowered commuting time but instead increased commuting distances. The size of an average individual’s ‘extended family’ cluster is an approximate invariant—it doesn’t change with city size. In a village, we are limited to a community by proximity, whereas in a city, we are free to choose our own “village” by our likes and dislikes.

Similarly, social media tools have not brought us closer the way we intended they would. Instead, they have allowed us to construct our own “internet villages.” These internet villages are scaled-up, combustible derivatives that cannot reproduce the stability of the offline, real-world social interactions that produced them. Instead of free speech, they cater to our preconceived notions by exposing us to algorithmic speech that makes each of us a volatile, motivated political actor outside the legal institutions born of civil society. Their extreme negative externalities include conspiracies, real-world riots, and unrest. Nonetheless, in a primal way, the internet populism coming out of these internet villages is gesturing at real-world rifts created by liberal legalism’s parchment antidotes on the one end and a lack of upward mobility on the other.

As Tyler Cowen points out in his book, The Complacent Class, in our digital realm, the word “disruption” is no longer violent but the peaceful label for an ingenious upheaval of an established business order. Taking a cue from this digital paradox, it is not unreasonable to assume that a radical improvement in our physical realm may occur when we volunteer to act with moderation on social media platforms. If we don’t act with moderation, someone else will moderate it for us. Responsible self-regulation can preclude complicated centralized government regulation.

A criticism of Indian Americans by an Indian national in the US

This Atlantic article got me thinking. As an Indian national in the U.S., I would like to make a limited point about some (definitely not all) Indian Americans. In my interactions with some Indian Americans, the topic of India induces, if you will, a conflicting worldview. India—the developing political state—is often belittled in some very crude ways, using out-of-context recent Western parallels, by mostly uninformed but emboldened Indian Americans.

Just mention Indian current affairs, and some of these well-assimilated Indian Americans quickly toss out their culturally informed, empathetic, anti-racist, historically contingent-privilege rhetoric to conveniently take on a sophisticated “self-made” persona, implying a person who ticked all the right boxes in life by making it in the U.S. This reflexive attitude reversal comes in handy for patronizing Indians living in India. They often stereotype us as somehow lower in status, or at least less competent, owing to the lack of an advanced political state or an “American” experience, and therefore deficient in better ways of living and a higher form of “humanistic” thinking.

This possibly unintentional but ultimately patronizing competence-downshift by a section of Indian Americans results in pejorative language sketching generalizations about Indian society, even as they recognize the same language as racist when applied to minorities of color in America.

In the last decade, I have learned that one must always take those who openly profess to be do-gooders, culturally conscious, anti-racist, and aware of their privileged Indian American status as a contingency of history with a bucket load of salt. Never take these self-congratulatory labels at face value. Discuss the topic of India with them to check whether Indian contexts are easily overlooked. If they are, then obviously these spectacular self-congratulatory labels are just that: skin-deep tags to fit into the dominant cultural narrative in the U.S.

Words of the economist Pranab Bardhan are worth highlighting: “Whenever you find yourself thinking that some behavior you observe in a developing country is stupid, think again. People behave the way they do because they are rational. And if you think they are stupid, it’s because you have failed to recognize a fundamental feature of their current economic environment.”