I spotted two Bald Eagles today in Cincinnati, fitting for the Fourth of July. The two birds—one indolent, one vigilant—capture not only the attitudes of two influential figures in American politics toward this national symbol, hotly debated until 1789, but also the fractured character of these United States.
You probably already know that Benjamin Franklin was an outspoken detractor of the bald eagle. He stated his disapproval of the national symbol in a letter to his daughter: “I wish the bald eagle had not been chosen as the representative of our country; he is a bird of bad moral character; like those among men who live by sharping and robbing, he is generally poor, and often very lousy. The turkey is a much more respectable bird and withal a true, original native of America.”
In contrast, President John F. Kennedy wrote to the Audubon Society: “The Founding Fathers made an appropriate choice when they selected the bald eagle as the emblem of the nation. The fierce beauty and proud independence of this great bird aptly symbolizes the strength and freedom of America. But as latter-day citizens we shall fail our trust if we permit the eagle to disappear.”
The two Bald Eagles, which to my mind symbolize two divergent historical viewpoints, show us that American history has splintered into sharp, competing conceptions of the past as it is politically revised to forge a more perfect union. There is little question that the tendency to seek out varied intellectual interpretations of US history is unabating, and perhaps essential to the growth of a mature republic. On holidays like the Fourth of July, however, a modicum of romanticism about the past is also required; otherwise, revisionist histories make it ever harder for the average person to hold a classicist vision of the Republic as a good—if not perfect—union without it seeming like a simple-minded theory.
The two Bald Eagles aren’t just symbolic of the past; they also stand for partisanship and apathy in the present toward issues like inflation, NATO strategy, Roe v. Wade, and a variety of other divisive concerns. I also learned there is an unfortunate debate over whether a Fourth of July concert should include the 1812 Overture. For those who are not familiar, it is a composition by the Russian composer Tchaikovsky that has been a staple of July 4th events since 1976.
In view of this divisiveness, here is my unsophisticated theory of American unity for the present moment: the Fourth of July fireworks, with their accompanying music, festivities, and promise of harmony, do not silence the rhetoric of entrenched divisiveness or the rage of political factions—over internal conflicts and international relations alike—but they do present a forceful antidote to both. So why be of two minds about a national ritual that serves as a unifying force and one of the few restraints on partisanship? Resident alien though I am, I propose preserving Tchaikovsky’s 1812 Overture, rousing the dozing inner Bald Eagle, and making an effort to mend our divided attitude toward every challenge. The aim, in my opinion, should be to manifest what Publius in Federalist 63 calls the “cool and deliberate sense of the community.”
Today’s food-for-thought menu includes Eco-Feminism, Indics of Afghanistan, the Fetus Problem, a Mennonite Wedding, the Post-Roe Era, and the Native New World. I’m confident the dishes served today will stimulate your moral taste buds, and your gut instincts will motivate you to examine these themes in greater depth.
Note: I understand that most of us are unwilling to seek out the opposing viewpoint on any topic. Our personal opinions are a fundamental principle that will not be altered. Underlying this principle, however, is our natural proclivity to prefer some moral taste buds over others. This series represents my approach to exploring those tendencies and uncovering different viewpoints on the same themes without doubting the validity of one’s own fundamental convictions. As a result, I invite you to reorder the articles I’ve shared today using the moral taste buds that better reflect your convictions about these issues. For instance, an article that appeals to my Care/Harm taste bud may appeal to your Liberty/Oppression taste bud. This moral divergence reveals different ways of looking at the same thing.
The Nature-Culture Conflict Paradigm today reigns supreme and seeks to eradicate cultures, societies, and institutions that advocate for and spread the Nature-Culture Continuum Paradigm. Do you see this conflict happening? If so, can you better care for the environment by adopting a Nature-Culture Continuum paradigm? Is there anything one may learn from Hindu philosophy in this regard?
Because attention in Afghanistan focuses on other issues—terrorism, food and water shortages, and poverty—the persecution of religious minorities there is not widely known, despite being a human rights crisis for decades. Ignorance of the topic poses a serious risk to persecuted groups seeking protection overseas. Western governments have yet to fully appreciate the risks that Afghan Sikhs and Hindus endure. I also recommend this quick overview of the topic: 5 things to know about Hindus and Sikhs in Afghanistan.
As I’ve discovered, abortion became one of the earliest medical specialties in American history when it was fully commercialized in the 1840s. The United States has thus been wrestling with moral questions about abortion for 182 years! The debate has moved through rights-based assertions, advanced to claims about the policy costs and benefits of abortion, and in the last 50 years appears to have returned to rights-based arguments. Regardless of where you stand, this much is clear: in the U.S., the circle of moral quandary surrounding abortion never closes. What, then, is the source of this moral ambiguity? Can the philosophy of biology help us better comprehend the quandary?
Some philosophers would argue that the issue of biological individuality is central to this moral dispute. But why is biological individuality even a point of contention? Counting biologically individual organisms like humans and dogs appears straightforward at first glance, but the biological world is packed with challenges. For instance, Aspen trees appear to be different biological units from above the ground; nonetheless, they all share the same genome and are linked beneath the ground by a sophisticated root system. So, should we regard each tree as a distinct thing in its own right or as organs or portions of a larger organism?
Similarly, humans are hosts to a great variety of gut bacteria that play essential roles in many biological activities, including the immune system, metabolism, and digestion. Should we regard these microorganisms as part of us, despite their genetic differences, and treat a human being and its germs as a single biological unit?
Answers to the ‘Problem of Biological Individuality’ have generally taken two main approaches: the Physiological Approach, which appeals to physiological processes such as immunological interactions, and the Evolutionary Approach, which appeals to the theory of evolution by natural selection. The Physiological Approach is concerned with biological individuals who are physiological wholes [Human + gut bacteria = one physiological whole], whereas the Evolutionary Approach is concerned with biological individuals who are selection units [Human and gut bacteria = two distinct natural selection units].
Is a fetus an Evolutionary individual or a Physiological individual? If we are Evolutionary individuals, we come into being before birth; if we are Physiological individuals, we come into being after birth. While the Physiological Approach makes it evident that a fetus is a part of its mother, the Evolutionary Approach makes it far less clear. But is there an overarching metaphysical approach to solving the problem of biological individuality? Can metaphysics (rather than organized monotheistic religion) lead us to a pluralistic zone where we can accept both perspectives with some measure of doubt?
Do you consider the United States to be a high-power-distance or low-power-distance culture? Coming from India, I used to see the U.S. as the latter, but in the last 12 years of living here, it is increasingly becoming the former.
Does your proximity to an authority strengthen or lessen your loyalty?
Is there a flaw in the mainstream discussion of the U.S. Constitution that the abortion debate has brought to light? In my opinion, each American state’s constitution is widely ignored, even though state constitutions predate the federal one and are significantly more involved in federal politics and constitutional evolution. Keep in mind that state constitutions in the United States are far more open to public pressure. They frequently serve as a pressure-release valve and a ‘pressuring lever’ for fractious U.S. national politics, catalyzing policy change. Regrettably, in an era of contentious national politics, mainstream U.S. discourse largely ignores changes to state constitutions and spends far too much time debating the praise or ridicule the federal Constitution receives for specific clauses—by which time the individual states have already shaped how the nation’s legal framework should perceive them. Altogether, a federal system in which individual state constitutions are ignored and conflicts are centralized is the American political equivalent of Yudhishthira’s response to the world’s greatest wonder in the thirty-three Yaksha Prashna [33 questions posed by an Indic tutelary spirit to the perfect king in the Hindu epic Mahabharata].
The emergence of a distinctly Native New World is a founding story that has largely gone unrecorded in accounts of early America. Here’s an excerpt from the article:
To round off this edition, a Western movie question: Are there any American Westerns built on the opposing premise—one that values the First Nations Peoples’ agency, a view that has gained historical support? Why not have a heroic Old World First Nation protagonist who safeguards indigenous practices and familial networks in a culturally diverse middle ground somewhere in the frontier country, shaping and influencing the emerging New World? Could this alternate perspective revitalize the jaded American Western genre?
Based on anthropologist Richard Shweder’s ideas, Jonathan Haidt and Craig Joseph developed the theory that humans have six basic moral modules that are elaborated to varying degrees across cultures and time. The six modules, characterized by Haidt as a “tongue with six taste receptors,” are Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, Sanctity/degradation, and Liberty/oppression. I thought it would be interesting to organize articles I read into these six moral taste buds and post them here as a blog of varied reading suggestions, to stimulate conversation not just on various themes but also on how they may affect our moral taste buds in different ways. An article that appeals to my Fairness taste bud may appeal to your Authority taste bud.
I had planned to post this blog yesterday, but it got delayed. Today, I can’t write a blog without mentioning guns. Given that gun violence is a preventable public health tragedy, which moral taste bud do you favor when considering gun violence? Care and Fairness taste buds are important to me.
I’ve only ever been a parent in the United States, where gun violence is a feature rather than a bug, and my childhood in India provided no context for this feature. India gave me no reference points for many other American cultural features either, yet those I can embrace; this country’s gun culture is the exception. It is one aspect of American culture that most foreign nationals, including resident aliens like myself, find difficult to grasp, no matter how long we have lived here. I’d like to see a cultural shift that views gun ownership as unsettling and undesirable. I know it is wishful thinking, but aren’t irrational ideas salvation by imagination?
Though I’m not an expert on guns and conflict, I can think broadly using two general arguments on deterrence, namely:
A) The general argument in favor of expanding civilian gun ownership is that it deters violence at the local level.
B) The general case for countries acquiring nuclear weapons is that it deters the escalation of international conflict.
I sense an instinctive contradiction when A) and B) are linked to the United States. The US favors a martial culture of deterrence through expanded civilian gun ownership within its borders while actively preventing the same concept of deterrence from taking hold globally with nuclear weapons. Why? The US understands that rogue states lacking credible checks and balances can harm the international community by abusing nuclear power. Surprisingly, this logic of controlling nuclear arms does not translate to domestic firearms control. I get that maintaining a global near-monopoly on nuclear weapons appeals to the Authority taste bud, but does expanding firearms domestically in the face of an endless spiral of tragedies appeal only to the Liberty taste bud? Where are your Care and Fairness taste buds languishing?
[I’m sharing these two articles because my recent trip to Portland, Oregon, revealed some truly disturbing civic tragedies hidden within a sphere of natural wonders. I hadn’t expected such a high rate of homelessness. It’s a shame. “Rent control does not control the rent,” Thomas Sowell accurately asserts.]
[I’d like to highlight one example of how “rules-based order” affected India: In the 1960s, India faced a severe food shortage and became heavily reliant on US food aid. Nehru had just died, and his successor, Prime Minister Lal Bahadur Shastri, called upon the nation to skip at least one meal per week! Soon after, Shastri died, and Prime Minister Indira Gandhi took over, only to be humiliated by US President Lyndon B. Johnson for becoming dependent on food aid from his country. The progressive US President was irked by India’s lack of support for his Vietnam policy. So he vowed to keep India on a “ship-to-mouth” policy, whereby he would release ships carrying food grain only after food shortages reached a point of desperation. Determined never to face this kind of humiliation again, India shifted from its previous institutional approach to agricultural policy to one based on technology and remunerative prices for farmers. The Green Revolution began, and India achieved self-sufficiency. The harsh lesson, however, remains: in international relations, India is better off being skeptical of self-congratulatory labels like “leader of the free world,” “do-gooders,” “progressives,” and so on.]
[I would like to add that, in the name of advocating liberalism for all, personal liberty is often emphasized over collectivist rights in the majority, while collectivist rights are allowed to take precedence over personal liberty in minority groups, and all religious communities suffer as a result.]
[In my opinion, the indefinite future that awaits us compels us to contextualize our current activities and lives. What do you think will happen if anti-aging technology advances beyond the limits of our evolutionary environment? Furthermore, according to demographer James Vaupel, medical science has already unintentionally delayed the average person’s aging process by ten years [Vaupel, James W. “Biodemography of human ageing.” Nature 464.7288 (2010): 536-542]. We have 10 extra years of mobility compared to people living in the nineteenth century; 10 extra years without heart disease, stroke, or dementia; and 10 years of subjectively feeling healthy.]
[Here is my gaze-reversal on caste as a moderate Hindu looking at a complacent American society: If caste is a social division or sorting based on wealth, inherited rank or privilege, or profession, then it exists in almost every nation or culture. Regardless of religious affiliation, there is an undeniable sorting of American society based on the intense matching of people based on wealth, political ideology, and education. These “American castes,” not without racial or ethnic animus, organize people according to education, income, and social class, resulting in more intense sorting along political lines. As a result, Democrats and Republicans are more likely to live in different neighborhoods and marry among themselves, which is reflected in increased polarization in Congress and perpetual governmental gridlock. The intensification of “American castes,” in my opinion, is to blame for much of the political polarization. What is the United States doing about these castes? Don’t tell me that developing more identity-centered political movements will solve it.]
I intend to blog regularly under this heading. To be clear, I mean “regularly” in the Liberty sense rather than the Fairness sense.
I’ve discovered and admired a wide variety of original thinkers during my eleven-year stay in the United States, from philosopher Eric Hoffer to economist and social theorist Thomas Sowell. From American history professor Barbara J. Fields to American political philosopher Harvey Mansfield. From Tyler Cowen, an American economist, and David Boaz, a libertarian thinker, to Paul Graham, an English-born American venture capitalist and essayist. One among them is Natasha Trethewey, a two-time US Poet Laureate.
Natasha Trethewey, my favorite contemporary American poet, writes poems with an ekphrastic quality: she graphically and implicitly explores her individuality and deprivations of liberty through deeply evocative accounts of her past and of personal photographs rooted in her experience of race and culture.
As it is World Poetry Day, I thought I’d share three of her poems that appeal to me. But first, a little background on her: She was born on the centennial of Confederate Memorial Day in the Deep South to an African American mother and a white father, when interracial marriage was still illegal in Mississippi. Though her father, poet Eric Trethewey, had an early impact on her, it was the tragic death of her mother, Gwendolyn Ann Turnbough, that, according to Trethewey, prompted her first attempt at writing poetry.
I hope you enjoy these poems and explore more of her work.
I am four in this photograph, standing
on a wide strip of Mississippi beach,
my hands on the flowered hips
of a bright bikini. My toes dig in,
curl around wet sand. The sun cuts
the rippling Gulf in flashes with each
tidal rush. Minnows dart at my feet
glinting like switchblades. I am alone
except for my grandmother, other side
of the camera, telling me how to pose.
It is 1970, two years after they opened
the rest of this beach to us,
forty years since the photograph
where she stood on a narrow plot
of sand marked colored, smiling,
her hands on the flowered hips
of a cotton meal-sack dress.
[Natasha Trethewey, “History Lesson” from Domestic Work.]
Before the war, they were happy, he said, quoting our textbook. (This was senior-year
history class.) The slaves were clothed, fed, and better off under a master’s care.
I watched the words blur on the page. No one raised a hand, disagreed. Not even me.
It was late; we still had Reconstruction to cover before the test, and — luckily —
three hours of watching Gone with the Wind. History, the teacher said, of the old South —
a true account of how things were back then. On screen a slave stood big as life: big mouth,
bucked eyes, our textbook’s grinning proof — a lie my teacher guarded. Silent, so did I.
[Natasha Trethewey, “Southern History” from Native Guard.]
Here, she said, put this on your head.
She handed me a hat.
You ’bout as white as your dad,
and you gone stay like that.
Aunt Sugar rolled her nylons down
around each bony ankle,
and I rolled down my white knee socks
letting my thin legs dangle,
circling them just above water
and silver backs of minnows
flitting here then there between
the sun spots and the shadows.
This is how you hold the pole
to cast the line out straight.
Now put that worm on your hook,
throw it out and wait.
She sat spitting tobacco juice
into a coffee cup.
Hunkered down when she felt the bite,
jerked the pole straight up
reeling and tugging hard at the fish
that wriggled and tried to fight back.
A flounder, she said, and you can tell
’cause one of its sides is black.
The other side is white, she said.
It landed with a thump.
I stood there watching that fish flip-flop,
switch sides with every jump.
[Natasha Trethewey, “Flounder” from Domestic Work.]
Talking about Trethewey’s poetry, Jericho Brown, a Pulitzer Prize-winning poet, says, “Her contribution is that of someone who sees us in individual and human ways and not only representations of resistance. Her black Union soldiers fall in love, her overworked grandmother plays a mischievous trick on a foreman, her black stepfather is a murderer, and her white father, who loves her, can’t resist microaggressions against her. I mean she allows her characters — her own history — to be as complex as history really is. This makes space for readers like me who are interested in life and not a caricature of life, readers who understand that poems must face us to our good and our evil and our personhood no matter what color we are.”

Apart from Brown’s insight that Trethewey’s poetry invokes a strand of individuality that goes beyond painting groups of people as symbols of resistance, I have often wondered what it is about her poems that appeals to me. Though I have no firsthand experience of a troubled relationship with racial identity, I suppose it may have something to do with my uneasy alliance with the English language itself—the medium of Trethewey’s craft. In multilingual India, English is the language of both enslavement and revolt. Though it is a colonial language, India has adopted English as its father tongue, yet we are not an Anglosphere society. For most Indians, including me, English is characterized by ambiguity and conflict with our mother tongues, often mirroring the flounder’s situation—flip-flopping, switching sides with every jump, privileging one side or the other, yet interpreting each other in the search for liberty.
In 1969, Colonel Luke Quinn, a U.S. Army Air Forces officer in World War II, was diagnosed with inoperable gallbladder cancer. Surprisingly, he was referred to Dr. DeVita, the lymphoma specialist at the National Cancer Institute, by the great Harvard pathologist Sidney Farber—famous for developing one of the most successful chemotherapies ever discovered. Nobody imagined back then that Quinn, a wiry man with grey hair, a fierce frown, and an unusual and likely incurable cancer, would significantly impact how we look at cancer as a disease.
Having been coerced into taking up Colonel Quinn’s case, despite gallbladder cancers not being his specialty, Dr. DeVita began taking a routine history, much to the annoyance of Quinn, who was used to being in command. Though Quinn glared at Dr. DeVita for initiating yet another agonizing round of (im)patient history, he recounted that he had gone to his primary care physician in D.C. when his skin and the whites of his eyes turned a deep shade of yellow—jaundice. Suspecting obstructive jaundice—a blockage somewhere in the gallbladder—the physician referred Quinn to Claude Welch, a famous abdominal surgeon at Mass General who would later treat Pope John Paul II after the 1981 shooting. Instead of gallstones, the renowned surgeon found a tangled mass of tissue squeezing Quinn’s gallbladder; gallbladder cancer was pretty much a death sentence. On the pathologist’s confirmation, Quinn, declared inoperable, was sent to Dr. DeVita at the NCI, as he wanted to be treated near his home.
Dr. DeVita, however, noticed something quite odd when he felt Quinn’s armpits during a routine examination. Quinn’s axillary lymph nodes—the cluster of glands that act as sentinels for what’s going on in the body—were enlarged and rubbery. These glands tend to become tender when the body has an infection and hard when it harbors solid tumors like gallbladder cancer; they become rubbery if there is lymphoma. Being a lymphoma specialist, the startled Dr. DeVita questioned the possibility of a misdiagnosis: what if Quinn had lymphoma, not a solid tumor wrapping around his gallbladder and causing jaundice?
When asked to hand over his biopsy slides for reevaluation, the always-in-command Colonel Quinn angrily gave them to the pathologist at the NCI and sat impatiently in the waiting room. Costan Berard, the pathologist reviewing the slides, detected an artifact that had made it difficult to differentiate one kind of cancer cell from another. Gallbladder cancer cells are elliptical, whereas lymphoma cells are round; roundish lymphoma cells can look like elliptical gallbladder cancer cells when squeezed during the biopsy. Berard’s finding explained why Quinn’s lymph nodes were rubbery rather than hard. The new biopsy showed without a doubt that Quinn had non-Hodgkin’s lymphoma—the clumsy non-name we still use to classify all lymphomas that are not Hodgkin’s disease.
The NCI was working on C-MOPP, a new cocktail of drugs for non-Hodgkin’s lymphoma that had produced two-year remissions in forty percent of aggressive cases of the disease. The always-in-command WWII veteran had somehow landed in the right place by accident! It was a long three months for the nurses, though: they hated him for leaning on the call button all day, for complaining bitterly about the food, for chastising anyone who forgot to address him as Colonel Quinn, and for never thanking anyone. But incredibly, he was discharged without any sign of his tumor; he had gone from certain death to a fighting chance.
The fierce and unpleasant Colonel Quinn matters because his initial misdiagnosis and remarkable escape from certain death unknowingly galvanized a close network of influential people. That network existed because Quinn was a friend and employee of the socialite and philanthropist Mary Lasker—the most consequential person in the politics of medical research. Read my earlier piece on her.
Mary Lasker, the timid, beehived socialite, circumvented all conventions of medical research management and got the U.S. Congress to do things her way. Mary’s mantra was: Congress never funds a concept like “cancer research,” but propose funding an institute named after a feared disease, and Congress leaps on it. Her incessant lobbying, with the backing of her husband, Albert Lasker, and her confidante, Florence Mahoney, wife of the publisher of The Miami News, helped create the National Cancer Institute, the National Heart Institute, the National Eye Institute, the National Institute of Mental Health, the National Institute of Dental and Craniofacial Research, the National Institute of Arthritis and Metabolic Diseases, the National Institute of Aging, and the National Institute of Child Health and Human Development.
Though Mary Lasker knew the value of independent investigators pursuing their unique research interests, she supported projects only when a clinical goal was perceptible, like curing tuberculosis. In 1946, Mary, having noticed microbiologist Selman Waksman’s work on streptomycin—a new class of antibiotic effective against microbes resistant to penicillin—persuaded him and the Merck pharmaceutical company to test the new drug against TB. By 1952, Mary’s instinct had won out over Waksman’s initial skepticism, as the widespread use of streptomycin halved mortality from TB! Mary Lasker’s catalytic influence on basic research leading to a Nobel Prize-winning discovery is a case in point.
Her clout over Congress peaked through the 1950s and ’60s, when the National Cancer Institute (NCI) was developing the first cancer cures. It was also the period when Colonel Luke Quinn became her influential lieutenant. Congress believed Quinn represented the American Cancer Society, but in reality he was Mary’s lobbyist. When Quinn got sick, Mary used her contacts to bring in Welch and Sidney Farber, and his recovery from a seemingly incurable torment earned the case her special attention. Ongoing public concern about cancer, together with Albert Lasker’s death from pancreatic cancer, made it the ideal disease for Mary to draw battle lines over. Quinn’s recovery convinced her that the necessary advances in basic research had occurred to justify taking the disease head-on. In April 1970, she began building bipartisan support by having the Senate create the National Panel of Consultants on the Conquest of Cancer. She prevailed upon the Texas Democratic senator Ralph Yarborough to appoint her friend, the wealthy Republican businessman Benno C. Schmidt—chairman of the Memorial Sloan Kettering board of managers—as chairman of the panel, and she backed him up by arranging for Sidney Farber to serve as co-chairman. The panel also included Colonel Luke Quinn and Mary herself.
In just six months, the panel issued “The Yarborough Report.” The report, written mainly by Colonel Luke Quinn and Mary Lasker, made far-reaching recommendations, including an independent national cancer authority. It recommended a substantial increase in funding for cancer research, from $180 million in 1971 to $400 million in 1972 and $1 billion by 1976. Finally, it recommended that the approval of anticancer drugs be moved from the FDA to the new cancer authority. Senator Edward Kennedy presented the recommendations as new legislation for the Ninety-Second Congress. Though not a Senate staff member, Colonel Quinn—trained by Mary in the art of testifying before Congress—orchestrated the hearings, set the agenda, and selected the people who would testify.
The Nixon administration did not immediately embrace the bill, as Nixon wasn’t thrilled by Edward Kennedy’s involvement. Being Ted Kennedy’s close friend, Mary asked him to withdraw as a sponsor. Renamed the National Cancer Act under Senator Pete Domenici, the bill then had to pass the House. Paul Rogers, who headed the House health subcommittee—and over whom Colonel Quinn and Mary Lasker had no influence—objected to removing the NCI from the NIH umbrella, cautioning that the NIH would face similar threats of separation in other disease areas. A revised bill conceded this demand and kept the NCI under the NIH but gave it a separate budget and a director appointed by the President.
On December 23, 1971—fifty years ago to this day—President Richard Nixon signed the National Cancer Act as a Christmas gift to the nation, two years after Colonel Luke Quinn walked into the NCI with the wrong diagnosis. Though Quinn ultimately died of relapsed cancer a few months after the signing, the war on cancer had commenced, with cancer research on the fast track. It was a victory for Mary Lasker, perhaps the most effective advocate for biomedical research Washington had ever seen.
In hindsight, Mary Lasker’s triumph came with two significant disappointments. First, her crusade failed to transfer the authority for approving anticancer drugs from the FDA to the NCI—a failure that would plague the National Cancer Program well into the future. Second, the Act’s premise that the “basic science was already there” and that a quantitative boost in resources was all that was needed to bring victory was flawed. In combination, the two disappointments—the subject of a future blog post—have led the tax-paying public to perceive a progress gap in cancer research rather than to appreciate the tremendous conceptual progress made because of the War on Cancer.
Ultimately, this blog post is here to help you appreciate, on its 50th anniversary, the lucky accidents and the incredible effort behind the creation of the National Cancer Act. At the same time, cancer researchers like me—the boots on the ground who experience how non-trivial progress against cancer really is—will dwell on the insistence on simplistic, linear views of progress in cancer research for public consumption.
Many moons ago, around this time in October, my research collaborators and I were keeping a night’s watch in the lab; I mean pulling an all-nighter. We were peeking into some cells growing in a petri dish using a confocal microscope, and little did we know that the impermissible realm was waiting to stare back at us. The spirit of Halloween spooked us through these sacs of life!
Here’s how they looked back at us.
To the annoyance of my wife, my Facebook memory of this otherworldly microscopic image prompted an outbreak of random reading. In this reading session, I hit upon the pagan festival of Fontinalia, a Roman celebration of fountains held in October. On digging further, I learned that a Pagan view of October has a deep connection with the House of Stark in Game of Thrones, which links back to Halloween. To the best of my knowledge, these connections are not made explicit by George R. R. Martin, and if you have already connected the dots yourself, consider me a dim tube light.
Any number of GoT fan pages will tell you that Westeros is based on medieval Anglo-Saxon Britain, and the motto for House of Stark—one of the Great Houses of Westeros—is “Winter Is Coming.” The House of Stark is the only noble House whose family motto is a focal warning for the whole Ice and Fire narrative. Apart from the motto being a sign of vigilance for the Starks against a hard winter, it is also a long-forgotten reminder that the White Walkers will return in the winter and overturn the realm. So the Starks are looking to protect the realm’s order and keep the Night King away.
Let’s cut to October—a month sacred to the Roman goddess Astrea. She is a winged star-goddess with a shining halo and a flaming torch who lived among humans during the Golden Age. When the human realm began to degenerate, she withdrew to the upper world. Astrea’s departure in October signaled the end of the golden age of light, as the chills of autumn warned that winter was drawing near. Interestingly, October’s Anglo-Saxon name is Winterfelleth, which means winter is coming, and Westeros is a version of medieval Anglo-Saxon Britain.
Like Fire and Ice, the Pagan October is tinted with celebrations of light and darkness. The month begins brightly with Fides, the goddess of faithfulness, followed by a festival of the Grecian Dionysus, the pagan god of wine and revelry. Barring a brief interlude to honor departed ancestors on Mundus (the 5th of October), we have the celebrations of Victoria, the Roman goddess of triumph; Felicitas, the Roman goddess of luck; Fortuna Redux, the goddess of successful journeys; and Fontinalia, the festival of the goddess of holy wells, springs, and fountains, just before October turns towards the freezing gloom that is to follow. From the 14th of October, named Vinternatsblot, to the festival of Fyribod on the 28th, a marker of foul weather, we have ceremonies that echo the motto that winter is coming. As winter draws near, we have the feast of Samhain Eve (pronounced: Sow-ain Eve) on the 31st of October.
The close of October (aka Winterfelleth) signals a “calendrical rite of passage”—a seasonal turning point marking the day’s liminal status as an annual day of transition and a temporary reversal of powers; April Fools’ Day is another such rite. During this disorienting time of Winterfelleth, the Pagans cross the sensory walls of their mundane realm to peek into the impermissible realm. For all of us, the feast of Samhain Eve on the 31st of October is the modern-day festival of Halloween—a time when we playfully welcome the otherworldly. For members of the House of Stark, who embody the month of Pagan October, this time marks the breakdown of the Night’s Watch, the collapse of the physical Wall, and the Night King’s arrival.
Remember, tonight, all of us belong to the House of Stark with the duty to keep a Night’s Watch because winter is coming. The point of difference is we welcome the temporary reversal of powers to mildly disorient our sensory walls to have a peek into the impermissible realm.
I have argued elsewhere that the Republic of India is a Civilization-State, where Indic civilizational features will find increasing expression as the Indian state evolves away from its legacy of the British Raj. However, India’s history is an area of immense concern and a rate-limiting step in the evolution of the Civilization-State. The past seventy-five years of Indian independence have shown that breaking away from intellectual and cultural imperialism is far more complicated than breaking away from political servitude, because the foundation of colonization is cultural and spiritual illiteracy.
Colonization influences Indians in unusual ways, as it alienates us from our past. For instance, we refer to our Indic formals as ‘ethnic wear,’ whereas a suit and a conservative tie, worn in the heat of tropical India, passes as our ‘natural’ formal wear. The Constitution of India is written in English, when only 0.1% of newly independent India spoke the language. Likewise, the knowledge about India among the English-educated elite was generated through the alienating European lens of the colonial era. The main interest of the British was to write a history of India that justified their presence. So they had to acknowledge the legitimacy of preceding violators like the Turks, Persians, and Mongols while accentuating only those Indic kings who either reformed or renounced the Hindu way of life. Several generations of Indians, including me, have grown up studying Indian history textbooks that scarcely evaluate our impact on the outside world even as we painstakingly document the aftermaths of the outside world’s ideas and actions on us.
Elites from colonized societies, educated in the wake of colonial rule, often standardize colonial scholarship and legitimize it to the extent of rejecting most native insights about their own land, society, and culture. This dynamic largely explains the bloated investment in defining highly racialized tribes and ethnic types in colonized countries. Consider how the state of African studies—with a nauseating inflection of Marxist narrative—is shaped by similar categories: static tribes, decadent villages, and clashing ethnic groups. These frameworks were essential narratives that justified foreign rule—devices of hierarchical control by colonizing powers. Professor of African and world history Trevor Getz warns us, “the story we often tell of African tribes, chiefs, and villages tells us more about how Europeans thought of themselves in the period of colonization than of the realities in Africa before they came.”
The same holds for the history of Indic civilization. On the threshold of intellectual and cultural decolonization in 1947, when rehabilitating Indic wisdom traditions and the dignity of Hindu civilization was the need of the hour, newly independent India saw the emergence of a new movement in the writing of its history. Though more self-reliant than the Europeans in broadening the scope of Indian social and economic history, this movement deeply dyed Indian history in an expression of Marxism. India’s ancient past became a theater for asserting European society’s preconceived class and material conceptions while obliterating the importance of Indic ideas in defining its own historical events. India’s first prime minister, Jawaharlal Nehru, the Cambridge graduate whose fascination with Marxism and Fabian Socialism had made him an outlandish mix of East and West, victimized the Indian civilization with his neither-here-nor-there disposition.
Commenting on the sly attempt at thought-control and brainwashing of future generations of newly independent India through a European/Marxist view of Indian history, celebrated Indic thought leader Ram Swarup writes: “Karl Marx was exclusively European in his orientation. He treated all Asia and Africa as an appendage of the West and, indeed, of Anglo-Saxon Great Britain. He borrowed all his theses on India from British rulers and fully subscribed to them. With them, he believes that Indian society has no history at all, at least no known history —implying our history is just a bunch of ancient myths—and that what we call its history is the history of successive intruders. With them, he also believes that India has neither known nor cared for self-rule. He says that the question is not whether the English had a right to conquer India, but whether we are to prefer India conquered by the Turks, by the Persians, to India conquered by the Britons. His own choice was clear.”
As a result of such Marxist scribes, the brutal history of repeated invasions and the forced proselytization of Indic people is described in charitable words, often as the harbinger of an idealized syncretic culture that “refined” Hindu society. Though I understand the emergence of syncretic cultures, it is absurd for anglicized elites to use them as an all-weather secular rationalization of the blatant cultural and intellectual subjugation the Hindu psyche still confronts. The iconoclasm practiced by invading hordes and the resulting ruins of some of the most sacred Hindu sites find no place in our history books because they would ‘communalize’ history. Any inquisitive young Hindu has to go out of the way to read the works of the trail-blazing Hindu publisher and historian Sita Ram Goel to understand the sections omitted from our history books. His two-volume work, Hindu Temples: What Happened to Them, analyzes nearly two thousand Hindu and Jain temples destroyed by Turkic/Mongol occupiers and the repercussions on the Indic ecosystem.
In a bizarre continuation of the subjugation of the Hindu temple ecosystem, the “secular” Indian state in 1951, under the Indian National Congress and its ideological Marxist and Communist allies, selectively usurped Hindu temples and their lands to bring them under state control, and the state continues to stifle the temple ecosystem of India. Every non-Hindu has a say in how Hindu temple rituals should be modified, but a Hindu talking about Abrahamic practices amounts to a transgression of secularism and religious bigotry. I urge you to watch a quick primer on the crisis of Hindu temples in India — https://www.youtube.com/watch?v=js936p_cvTE.
Mainstream Indian history, as we learned it in school (India):
-Indian history begins with the Indus Valley Civilization (which is not Hindu)
-Hinduism begins with the Aryan Invasion bringing Vedic Religion of Sacrifice
-Hinduism is full of violent animal sacrifice until Buddhism reforms it
-There is a brief Golden Age for Hinduism in which art and culture ‘flourish,’ and the epic poems, the Ramayana and the Mahabharata, are composed.
-There are Islamic invasions, but the Mughals bring harmony under Akbar
-The British colonize India; Hinduism is once again reformed
-Gandhi and Nehru free us.
Mainstream history, as it is taught in American schools, from a California textbook:
-Indian history begins with the Indus Valley Civilization (which is not Hindu)
-Hinduism begins with the Aryan Invasion bringing Vedic Religion of Sacrifice
-Hinduism is full of violent animal sacrifice until Buddhism reforms it
-The Ramayana and Mahabharata are composed. They have talking monkeys and bears and Hindus primitively animal-worship them. They also have the holiest text of Hindus, the Gita, which tells Hindus to do their caste duty and war.
The outline of the ‘alternative’ history proposed in Wendy Doniger’s book:
-The Indus Valley Civilization (is not Hindu)
-Hinduism begins with the Nazi-like Aryans bringing Vedic Religion of Sacrifice.
-Hinduism is full of violent animal sacrifice until Buddhism reforms it.
-There is no real Golden Age for Hinduism, but the Greek Invasion leads to great ideas and works of fiction like The Ramayana and Mahabharata. Greek women presumably inspired the fierce and independent Draupadi.
-After chapters called ‘Sacrifice in the Vedas’ and ‘Violence in The Mahabharata,’ at long last, we have ‘Dialogue and Tolerance under the Mughals.’
-The British colonize India; Hinduism is reformed.
-Once India gained independence, without those civilizing external forces, the Hindus became Hindutva extremists.
How can Doniger’s work be an ‘alternative,’ Vamsee Juluri asks, when its claims are the same as the dominant narrative’s? He notes that this ‘alternative’ history of Hinduism is at the top of the heap in most bookstores in the West, and its thesis dominates press coverage of Hindu India. In a normal world, if there were ten titles by ‘Hindu apologists’ on the shelf, then an ‘alternative’ would indeed have been worthy of the respect that term brings. Hence, the mainstream and the so-called alternative narratives of Indian history are both contemptuous of India’s Hindu religion, culture, and philosophy, a contempt fortified in the republic of letters as ‘progressive’ criticism. This narrative still holds sway over academic appointments, research grants, the crafting of syllabi, and the promotion of textbooks. In other words, postcolonial Indian historians have primarily pursued the same blinkered colonizing vision as their masters, who saw history only through a narrow prism of class, material wealth, and outsiders descending upon Hindus to ‘reform’ our heathen culture. Meanwhile, the history of the Hindu past is framed strictly within regional or sectarian confines. Therefore, instead of seeing the invaders for who they were, the history of Hindu India provincializes itself and legitimizes self-hatred.
With a deep concern for such self-hatred in historical narratives, the Nobel Laureate V. S. Naipaul said: “This inveterate hatred of one’s ancestors and the culture into which they were born is accompanied by a hatred deep enough to want to destroy one’s own land and join the ranks of the violators of the ancestral land and culture, at least in spirit. Pakistan is an example. Its heroes are not the Vedic kings and Indic sages who walked the land but invading vandals like Ghaznavi and Ghori, who ravaged them.”
Accounting for the history of any country is a delicate task, and it is far more sensitive for a civilization with a long colonial past. So, while reading mainstream Indian history, one must be mindful of what the foreigners wanted to know first, because what they wanted to perceive foremost was whatever helped them rule the place as handily as possible. This effectively explains the dominance of hollow postulates surrounding racialized ethnic Aryan and Dravidian divides, and the confounding of the European concept of class with Hindu Varna, in colonial and Marxist literature. These factors make it practically impossible for modern observers not to visualize previously colonized societies as changeless and unevolved for centuries. They never question whether Indic social features are also aftermaths of successive invading powers, the resulting conflict, and instinctive adaptations. Entities like factions and castes are often not static social constructs but the results of endlessly contested histories. In Indic civilization and other colonized lands, such identities were often situational and flexible, not wholly rigid and all-encompassing; wholesale rigidity is often a bug that needs exploration and explanation, not a feature to be endorsed unquestioned. A. M. Hocart, regarded by the philosopher-traditionalist Ananda Coomaraswamy as perhaps the only unprejudiced European sociologist to have written on caste, states: “It is at present fashionable to rationalize all customs, and to write up the “economic man” to the exclusion of that far older and more widespread type, the religious man, who, though he tilled and built and reared a family, believed that he could do these things successfully only so long as he played his allotted part in the ritual activities of his community.”
Indeed, the rigidity of the caste system was not the cause but the effect of the breakdown of political order. Moreover, the fixation with caste in Indian historical writings has made us oblivious to the profound ontological assumptions underlying the complexities of Indian society. Hindu society conceived multidirectional relationships at different levels of existence. This ideal of the individual-community relationship assumed that although human beings differ in temperament, each of them must try to develop and realize the full potential of being human by seeking greater intercourse with other members as part of cosmic reality. The anthropologist Louis Dumont misunderstands this notion and argues that the individual in Indian thought did not exist beyond the idea of caste. This is simply not true, because the Indic concept of autonomy is different from Individualism. Rights in Western liberal traditions are ‘claims’ against others in the society, but Indic society gave prominence to duties first, as it presupposed the consideration of others without obliterating autonomy. According to treatises like the Sukraniti and the grand wisdom of the Atharvaveda, autonomy is the state in which no one can dominate us against our will or fundamental nature. Indian thought refers to this state as Swaraj; the Indian struggle for independence was predicated on Swaraj.
Several other Indic scholars have highlighted the adverse effect of excessive emphasis on caste coupled with a de-emphasis of other, equally important Indic social concepts. Factors such as ashrama, shreni, kula, and jati, and above all Dharma (a global ethic), are overlooked, leading to a grave misunderstanding of the character of Hindu society through its historical writings. Indeed, in the Ramayana (a venerated Indian epic poem composed by an author not belonging to the “upper class”) and the Mahabharata (a revered Indian epic poem composed by an author born to a fisherwoman), and in the broad scope of Puranic literature, there is a notion of an individual who gives life and substance to existence in this world. Indeed, so great has been the weight attached to the individual that Indic art and literature seek expression through the lives of such individuals, be it the limited but exuberant personality of Sri Rama, or the unlimited personality of Sri Krishna (a cowherd who is the ultimate avatar; he also happens to be dark-complexioned), or the non-dimensional, half-man half-woman personality of Shiva. Needless to add, the importance of the themes discussed here springs from the fact that the Hindu society of 2021 is still largely organized around them.
Indic pantheism may look absurd to atheists and those of Abrahamic persuasion alike. Nevertheless, the Indic tradition has always had a transnational character and a considerable geopolitical sphere of influence. Therefore, future Hindu scholarship will understandably reject the notion of the colonial or Marxist intelligentsia that the ‘local’ or ‘regional’ is more ‘authentic’ than the larger society, territory, or neighborhood of the Hindus. Hindu scholars of today already investigate the interplay of local, regional, national, and global settings that have historically responded to Hindu thought and action. As native scholarship course-corrects Indian history, the updated Indian history textbooks of the future will undoubtedly draw bad press in the global north, with alarmist headlines such as ‘Hindu revision of Indian history.’ Still, anyone with a reasonable amount of grey matter needs to recognize the mobility of Hindu predecessors and think through the different spatial frameworks they occupied as they traversed Bhārata (the original name of the Indic Civilization, mentioned in the Indian constitution).
When external forces become overpowering, or the body of society is unable to assimilate them, inevitable tensions are generated, leading to its decay. The interaction of the outside influences with the total personality of the Hindu society and various traditions within it can be a fascinating study. I’m not suggesting that the mainstream discourse has no valuable insights in this regard. I’m only pointing at the enormous biases and gaps and how native Hindu scholarship is best equipped to address them. As I see it, an inevitable manifestation of cultural and intellectual decolonization of the Indian Civilization-State is the progressive recognition of the depth and scope of its Hindu character. A character that remains currently truncated for ‘communal’ comfort in the dominant narrative of Indian history.
In discussing the push to vaccinate against COVID-19 in developed economies like the US and UK, what gets lost in political rhetoric is the importance of effective vaccination among the immunosuppressed, especially the HIV+ population.
In groups with underlying immunosuppression [i.e., people with hematological malignancies, people receiving immunosuppressive therapies for solid organ transplants, or people with other chronic medical conditions], there have been reports of prolonged COVID-19 infection. The latest is from South Africa, where an HIV+ patient experienced a persistent COVID-19 infection—of 216 days—with moderate severity. The constant shedding of the SARS-CoV-2 virus accelerated its evolution inside this patient. This is possible because suboptimal adaptive immunity delays clearance of the SARS-CoV-2 virus yet provides enough selective pressure to drive the evolution of new viral variants. In this case, the mutational changes of the virus within the patient resembled the Beta variant.
The largest population of immunosuppressed [HIV+] people is in South Africa. So an alternative to chasing variants like Delta and Beta after their large-scale emergence, or to trying to convince people who reject vaccination in the Global North, is to tackle super-spreading micro-epidemics of novel variants among the immunosuppressed in the Global South. Since the Novavax and J&J vaccines are demonstrably ineffective among the immunosuppressed, the Moderna vaccine is the best bet to slow down the emergence of future variants.
Who has millions of unused mRNA Covid-19 vaccine doses that are set to go to waste? The answer is the United States. As demand dwindles across the United States and doses are likely to expire this summer, why not deploy them in the Global South, especially South Africa, through a concerted international effort?
The problems with headlines such as “US Trade Balance With China Improves, but Sources of Tension Linger” are twofold.
A: It furnishes support to the notion that trade surpluses are FOREVER safe and trade deficits are INVARIABLY grave. That is not accurate, because foreign countries will always wish to invest capital in countries like the US that employ it relatively well. One clear case of a nation that borrowed massively from abroad, invested wisely, and did excellently well is the United States itself. Although the US ran a trade deficit from 1831 to 1875, the borrowed financial capital went into projects like railroads that brought a substantial economic payoff. Likewise, South Korea ran large trade deficits during the 1970s. However, it invested heavily in industry, and its economy multiplied. As a result, from 1985 to 1995, South Korea had enough trade surpluses to repay its past borrowing by exporting capital abroad. Furthermore, Norway’s current account deficit reached 14 percent of GDP in the late 1970s, but the capital it imported enabled it to create one of the world’s largest oil industries.
B: The headline makes a normative claim by equating the bilateral trade deficit with the overarching narrative of bilateral tensions. Such normative claims follow from the author’s value-based reasoning, not from econometric investigation. China and the US may have ideological friction on many levels, but the surplus or deficit has much to do with demographics and population changes within a country at a given time. Nonetheless, a legacy of political rhetoric relishes inflating and conflating matters. We hear a lot about the forecast that China will become the largest economy by 2035, provoking many in the US to bat for protectionist policies. But we ignore the second part of this prediction: based on population growth, migration (aided by liberal immigration policies), and total fertility rate, the US is forecast to become the largest economy once again in 2098.
Therefore, it is strange that so many “trade deficit imbalance” headlines neglect to ask whether the borrower is investing the capital productively. A trade deficit is not always harmful, and there is no guarantee that running a trade surplus will bring substantial economic health. In fact, enormous trade asymmetries can be essential to economic development.
Lastly, isn’t it equally odd that this legacy of political rhetoric between the US and China makes it natural to frame trade deficits with China under the ‘China’s abuses and coercive power’ banner, yet deters the US establishment from honestly and openly confronting the knowledge deficit about SARS-CoV-2’s origin? How and when does a country decide to bring sociology to epistemology? Shouldn’t we all be more concerned about significant knowledge deficits?
Sarah points out that in the modern world, the mother is the only caregiver left. However, in traditional European and Asian societies (the Indian society, for example), mothering was—and in India, to a large extent, still is—a communal effort. Aunts were central and were called Big Momma. Now they are just aunts. This clarifies for me why we in India, while growing up, referred to every stranger of a certain age on the street as an uncle or an aunt, irrespective of who they were; it is a vestige of our traditional society heavily focused on ‘other mothering.’
From the open house discussion, I discovered that after the First World War there was despair about the world, leading to childless or child-free couples. This mirrors how our generation’s cultural concerns and climate-change anxieties are leading to more child-free couples. In this context, the American baby boom of the 1940s and 50s was not an everyday occurrence but a striking anomaly. After the baby boom, a period of childlessness (child-free couples) returned. Although many things are recorded about mothers and women in child-free marriages, what we know about fathers and childless men is nada. This is a gap in our history we need to correct. Back to the topic of mothers: the US and France were the first to reduce childbirths after the world wars. Women stood up for individual liberty—about time—in these countries, a trend that eventually led other parts of the world to adopt the same values. So, a trade-off the post-Second World War societies made was that they no longer cared for big families. Instead, they looked to invest in different versions of big and caring governments, with mixed results.
As we moved from big, interconnected families—which helped protect (in terms of social capital) the most vulnerable people in society from the shocks of life—to smaller, detached nuclear families, we made room to maximize talents and expand individual freedom. This shift has ultimately led to a familial system that liberates people of a certain capability and ravages others.
Altogether, Sarah Knott reminds us that our contemporary society often forgets that a ‘mother’ is just as much a verb as it is a noun.
The raging second wave of Covid-19 hasn’t just collapsed Indian healthcare; it has devastatingly uncovered preexisting public health policy deficits and healthcare frailties.
In India, there is a need to revive a serious conversation around public health policy, along with upgrading healthcare. But wait, isn’t the term ‘public health’ interchangeable with healthcare? Actually, no. ‘Public health’ is the population-scale program concerned with prevention, not cure. In contrast, healthcare essentially involves a private good, not a public good. As most public health experts point out, the weaker the healthcare system (such as India’s), the greater the gains from implementing public health prevention strategies.
India focused its energies on preventing malaria by fighting mosquitoes in the 1970s and then regressed to ineffectively treating patients who have malaria, dengue, or Zika. A developing public health policy got sidelined for a more visible, vote-grabbing, yet inadequate healthcare program. Why? Indian elites tend to transfer concepts and priorities from the health policy debates of advanced economies into Indian thinking about health policy without much thought. As a result, there is considerable confusion around terminology, and a need for a sharp delineation between ‘public good,’ ‘public health,’ and ‘healthcare.’ The phrase ‘public health’ is frequently misinterpreted to imply ‘healthcare.’ Conversely, ‘healthcare’ is repeatedly assumed to be a ‘public good.’ In official Indian directives, the phrase ‘public health expenditure’ is often applied to government expenditures on healthcare. This is confusing because it contains the term ‘public health,’ which is distinct from ‘healthcare.’
Many of the advanced economies of today have been engaged in public health for a very long time. For example, the UK began work on clean water in 1858 and on clean air in 1952. For over forty-five years, the Clean Air Act in the U.S. has cut pollution even as the economy has grown. Therefore, the elites in the UK and US can afford to worry about the nuances of healthcare policy. In India, by contrast, the focus of health policy must be on prevention, as it is not a solved problem. Problems such as air quality have become worse in India. Can the Ministry of Health do something about it? Not much, because plenty of public health-related issues lie outside the administrative boundaries of the Ministry of Health. Air quality—which especially afflicts North India—lies with the Ministry of Environment, and internal bureaucracy—“officials love the rule of officials”—deters the two departments from interacting and working out such problems productively. Economist Ajay Shah points out that Indian politicians who concern themselves with health policy take the path of least resistance: using public money to buy insurance (frequently from private health insurance companies) for individuals who would obtain healthcare services from private healthcare providers. This is an inefficient path because a fragile public health policy bestows a high disease burden, which induces a market failure in the private healthcare industry, followed by a market failure in the health insurance industry.
In other words, Ajay Shah implies that the Indian public sector is not effective at translating expenditures into healthcare services. Privately produced healthcare is plagued with market failure. Health insurance companies suffer from poor financial regulation and from dealing with a malfunctioning healthcare system. No matter how much money one pours into government healthcare facilities or health insurance companies, both routes work poorly. As a consequence, increased welfare spending by the government on healthcare, under the present paradigm of Indian healthcare, is likely to be under par.
The long-term lesson from the second wave of COVID-19 is that inter-departmental inefficiencies cannot be tolerated anymore. Public health considerations and economic progress need to shape future elections and the working of many Indian ministries in equal measure. India deserves improved intellectual capacity for shaping public health policy and upgrading healthcare, to obtain a complete translation of higher GDP into improved health outcomes. This implies that health policy capabilities—data sets, researchers, think tanks, local government—will need to match the heterogeneity of Indian states. What applies to the state of Karnataka will not necessarily apply to the state of Bihar. The devastating second wave does not argue for imposing more centralized uniformity on healthcare and public health policy proposals across India, as that would inevitably reduce the quality of their execution in diverse states with varied hurdles. Instead, Indian elites need to place ‘funds, functions and functionaries’ at the local level for better health outcomes. After all, large cities in India match the population of many countries. They deserve localized public health and healthcare policy proposals.
The need to address the foundations of public health and healthcare in India around the problems of market failure and low state capacity has never been greater.
It is fascinating how societies that loathe another people’s nationalist movement can, over time, amplify the revitalized traditions of that movement in their mass culture and celebrate a parodied version of them. A case in point is the St. Patrick’s Day celebration.
Ireland’s vibrant folklore tradition was revitalized in the late 19th and early 20th centuries by a vigorous nationalist movement. Following this movement, a slice of the rural Irish folklore tradition carried over to the Irish American context—leprechauns decorating St. Patrick’s Day cards, for example. The communal identity of the pastoral Irish homeland, revitalized by the nationalist movement, was vital to Irish group consciousness in America over many generations, even for new generations who never visited the old sod.
Today the Irish group has to remind itself of its kinship and traditions as it continues to fade into American mass culture. As an outsider, what I get from American mass culture—which supposedly celebrates multiple cultures—is not the rich Irish folklore and tradition that arrived on these shores but mere stereotypes of it. The 19th-century WASP anti-Irish caricature can be seen even today. Implicit in every Irish joke is either the image of a drunken Irishman devoid of any cultural sophistication or a fighting Irishman who is endlessly combative. Had the Irish been a brown or black community, would such a depiction—in a less mean-spirited form or not—carry forward in today’s hypersensitive, race-obsessed American society?
St. Patrick’s Day, for the most part a quiet religious holiday in Ireland, started in the United States as an occasion to demonstrate Irish heritage. Instead, what one primarily receives is a mass-culture-induced Irish self-parody: a holiday associated with alcoholic excess. How much of the self-parody was consciously nurtured by early Irish Americans is debatable. But I recognize that the Irish have resisted being entirely consumed by American mass culture in several ways—for example, by retaining traditional Irish names such as Bridget, Eileen, Maureen, Cathleen, Sean, Patrick, and Dennis.
As a Hindu immigrant myself, I realize it is essential for immigrant groups to assimilate in several ways, like speaking English, which the Irish did effortlessly. But isn’t it American mass culture’s duty to comprehend a certain authentic Irishness or Hinduness in popular culture without caricatures? As a Hindu, I have faced disdainful “holy cow” jokes from Muslims and Christians in the United States—but of course, Hinduphobia isn’t a politically salient concern, you see. In this light, when just about anything goes in the name of a “melting pot,” I see cultural salad bowls not as regressive but as protective. Interestingly, folks who sermonize that blending cultures and diluting conserved cultural traits like names are the only progressive form of new beginnings in the social setting also shore up conserved group identities for certain communities in politics.
Not everything is healthy among immigrants who wish to conserve their authentic identity, either. There is a bias among such immigrants to regard the United States as lacking a unique, respectable culture of its own. Such immigrant-held prejudices get magnified when one half of the country goes about canceling all the faces on Mount Rushmore and actively devalues every founding document and personality of the United States. The impact of immigration and assimilation is complex: it requires an appreciation of the traditions immigrants maintain, and the immigrants’ appreciation of the culture of the land they have entered.
Over time, however, colorful cultural parades that aim to link immigrant folklore and traditions to public policy and popular culture via song, dance, and merriment also dilute the immigrant’s bond with an authentic cultural heritage. In a generation or two, immigrants embody the projected, simplistic self-image parading in the mass culture. Soon, the marketed superficial traits become a deeply “authentic” pop-cultural heritage worthy of conservation in the melting pot. A similar phenomenon is taking over diaspora Hindus and some communities back home in India, where a certain “Bollywood-ization” of Indic rituals and culture is apparent. I call this a careless collective self-objectification.
When a critical mass of people recognizes a weakening of valuable cultural capital, reviving it is natural. If not for revivalist Irish cultural nationalism, would there have been a sense of pride, a feeling of collaboration, among Irish Americans in the early years? Would there be a grand sweep of Irish heritage for the melting pot to—no matter how superficially—celebrate?
In the early twentieth century, cancer assumed a more prominent place in the popular imagination as the threat of contagious diseases receded and Americans lived longer. The American Society for the Control of Cancer (ASCC), founded in 1913, had identified three goals: education, service, and research. Until midcentury, however, largely due to its limited budget, the society contributed little to cancer research.
Enter Mary Woodard Lasker
One of the most powerful women in mid-twentieth-century New York City, and perhaps North America, Lasker demonstrated that women could command transformations in medical institutions. Born in Wisconsin in 1900, Mary Lasker, at the age of four, found out that the family’s laundress, Mrs. Belter, had undergone cancer treatment. Lasker’s mother explained, “Mrs. Belter has had cancer and her breasts have been removed.” Lasker responded, “What do you mean? Cut off?” Decades later, Mary Lasker would cite this early memory as what inspired her to engage in cancer work.
By the time she inquired about the role of the ASCC in 1943, Mary Lasker had established herself as an influential New York philanthropist, businesswoman, and political lobbyist. She learned that the organization had no money to support research. Somewhat astounded by this discovery, she immediately contemplated ways to reorganize the society. She wanted to recreate it as a powerful organization that prioritized cancer research.
Well versed in public relations and connected to the financial and political circles of the country, Lasker played a central role in the society’s transformation in the mid-1940s. Despite notable opposition, she convinced the ASCC to change the composition of its board of directors to include more lay members and more experts in financial management. She urged the society to adopt a new name, the American Cancer Society, and convinced it to earmark one-quarter of its budget for research. This reorganization allowed the ACS to sponsor early cancer screening, including early clinical trials. The newly formed American Cancer Society articulated a mission that explicitly identified research funding as its primary goal.
By late 1944, the American Cancer Society had become the principal nongovernmental funding agency for cancer research in the country. In its first year, it directed $1 million of its $4 million in revenue to research. Lasker’s ardent advocacy for greater funding of all the medical sciences contributed to increased funding for the National Institutes of Health and the creation of several NIH institutes, including the Heart, Lung, and Blood Institute.
Mary Lasker continued to agitate for research funds but resisted any formal association. As she explained it, “I’m always best on the outside.” Undoubtedly, her influence and her emphasis on funding cancer research contributed to the promotion of the Pap smear in U.S. culture. As a permanent monument to her efforts, in 1984 Congress named the Mary Woodard Lasker Center for Health Research and Education at the National Institutes of Health.
A portion of a letter ACS administrative director Edwin MacEwan wrote to Lasker encapsulates her contribution to our society. He wrote, “I learned that you were the person to whom present and potential cancer patients owe everything and that you alone really initiated the rebirth of the American Cancer Society late in 1944.”
“If you think research is expensive, try disease.” —Mary Woodard Lasker (November 30, 1900 – February 21, 1994)
Yesterday, I came across this scoop on Twitter; the New York Post and several other blogs have since reported it.
Regardless of this scoop’s veracity, the chart of Eight White identities has been around for some time now, and it has influenced young minds. So, here is my brief reflection on such identity-based pedagogy:
As a non-white resident alien, I understand the history behind the United States’ racial sensitivity in all domains today. I also realize how zealous exponents of diversity have consecrated schools and university campuses in the US to the goal of ridding society of prevalent racial power structures. Further, I appreciate the importance of people being self-critical; self-criticism leads to counter-cultures that balance mainstream views and enable reform and creativity in society. But I also find it essential that critics of mainstream culture not feel morally entitled to enforce just about any theoretical concept on impressionable minds. Without getting too far into the right-versus-left debate, there is something terribly sad about being indoctrinated at a young age—regardless of the goal of the social engineering—to accept an automatic moral one-‘downmanship’ for the sake of the density gradient of cutaneous melanin pigment. Even though I’m a brown man from a colonized society, this kind of extreme ‘white guilt’ pedagogy leaves a bitter taste. And with that bitter taste, I have come to describe such indoctrination as the “Affirmative Guilt Gradient.”
You should know there is something called the Overton Window—the range of ideas considered acceptable in mainstream discourse—which shifts over time; as it shifts, concepts grow larger even as their actual instances and contexts grow smaller. In other words, well-meaning social interventionistas view each new instance of a shrinking problem through the same lens they applied to the larger original problem. This leads to an unrealistic enlargement of academic concepts, which are then shoved down the throats of innocent, impressionable schoolkids who will take them as objective realities instead of subjective conceptual definitions overlaid on one legitimate objective problem.
I find the scheme of Eight White identities a symptom of the shifting Overton Window.
According to Thomas Sowell, there is a whole class of academics and intellectuals devoted to social engineering who believe that when the world doesn’t reconcile with their pet theories, something is wrong with the world, not their theories. If we project Sowell’s observation onto this episode of the “Guilt Gradient,” it is perfectly reasonable to expect many white kids and their parents to refuse to adopt these theoretically manufactured guilt-gradient identities. We can then—applying Sowell’s observation—predict that academics will declare such opposition to be evidence of many covert white supremacists in society who will never change. Such stories may then get blown up in influential op-eds, magnifying a simple problem until it is lost in the clutter of naïve supporters of these theories, the progressive vote bank, and hard-right polemics.
We should all acknowledge that attachment to any identity—majority or minority—is by definition NOT hatred of an outgroup. Ashley Jardina, Assistant Professor of Political Science at Duke University, in her noted research on the demise of white dominance and threats to white identity, concludes, “White identity is not a proxy for outgroup animus. Most white identifiers do not condone white supremacism or see a connection between their racial identity and these hate-groups. Furthermore, whites who identify with their racial group become much more liberal in their policy positions than when white identity is associated with white supremacism.” Everybody has a right to associate with their identity, and association with an ethnic-majority identity is not automatically toxic. Viewing such identity associations as inherently toxic is destructive, because precisely this sort of warped social engineering produces unnecessary political polarization; the vicious cycle of identity-based tinkering is a self-fulfilling prophecy. Hence, recognizing the Overton Window at play in such identity-based pedagogy is a must if we are to make progress. We shouldn’t be tricked into assuming that non-acceptance of the Affirmative Guilt Gradient is a sign of our society’s lack of progress.
Finally, I find it odd that ideologues who profess “universalism” and international identities choose schools and universities to keep structurally confined, relative identities alive by adding excessive nomenclature, so that they can apply interventions that are inherently reactionary. But isn’t ‘reactionary’ a pejorative these very ideologues use on others?
Every great civilization has simultaneously made breakthroughs in the natural sciences and mathematics, and in the investigation of that which penetrates beyond the mundane—beyond external stimuli, beyond the world of solid, separate objects, names, and forms—to peer into something changeless. When written down, these esoteric precepts have a natural tendency to decay over time, because people tend to accept them too passively and literally. Consequently, people come to value the conclusions of others over clarity and self-knowledge.
Speaking of esoteric precepts decaying over time, I recently read about the Act the state of Arkansas passed in 1981, which required public school teachers to give “equal treatment” to “creation science” and “evolution science” in the biology classroom. Why? The Act held that teaching evolution alone could violate the separation of church and state, to the extent that doing so would be hostile to “theistic religions.” Therefore, the curriculum had to cover the “scientific evidence” for creation science.
As far as I can see, industrialism, rather than Darwinism, has led to the decay among the urban working class of the virtues historically protected by religions. Besides, every great tradition has its own equally fascinating religious cosmogony—the Indic tradition, for instance, has an allegorical account of evolution apart from its creation story—yet creationism does not defend all theistic religions, just one theistic cosmogony. So there isn’t any “theological liberalism” in the Act’s assertion; it is a matter of one hegemon confronting what it regards as another hegemon—Darwinism.
So why does creationism oppose Darwinism? Contrary to my earlier, purely scientific understanding, I now think creationism views Darwin’s theory of evolution by natural selection not as a ‘scientific theory’ that infringes on the domain of a religion but as an unusual ‘religion’ that oversteps an established religion’s doctrinal province. Creationism therefore seeks to invade and challenge the doctrinal province of this “other religion.” In doing so, creation science, strangely, becomes a crude, proselytizing version of what it seeks to oppose.
In its attempt to approximate a purely metaphysical proposition in practical terms, or to exoterically prove every esoteric precept, this kind of religious literalism detracts from both the purity of esotericism and the virtue of scientific falsification. Literalism forgets that esoteric writings enable us to cross the mind’s tempestuous sea; one does not have to sink in that sea to prove anything.
In contrast to the virtues of science and popular belief, esotericism forces us to be self-reliant. We don’t necessarily stand on the shoulders of others—and thus within a history of progress—but on our own two feet, seeking with the light of our inner experience. In this way, science and the esoteric flourish in separate ecosystems but within one giant sphere of human experience—like prose and poetry.
In a delightful confluence of prose and poetry, Erasmus Darwin, the grandfather of Charles Darwin, wrote about the evolution of life in poetry in The Temple of Nature well before his grandson contemplated the same subject in elegant prose:
Organic life beneath the shoreless waves
Was born and nurs’d in Ocean’s pearly caves;
First forms minute, unseen by spheric glass,
Move on the mud, or pierce the watery mass;
These, as successive generations bloom,
New powers acquire, and larger limbs assume;
Whence countless groups of vegetation spring
And breathing realms of fin, and feet, and wing.
The prose and poetry of creation — science and the esoteric; empirical and the allegorical—make the familiar strange and the strange familiar.