I’ve discovered and admired a wide variety of original thinkers during my eleven-year stay in the United States, from philosopher Eric Hoffer to economist and social theorist Thomas Sowell. From American history professor Barbara J. Fields to American political philosopher Harvey Mansfield. From Tyler Cowen, an American economist, and David Boaz, a libertarian thinker, to Paul Graham, an English-born American venture capitalist and essayist. One among them is Natasha Trethewey, a two-time US Poet Laureate.
My favorite contemporary American poet, Natasha Trethewey writes poems with an ekphrastic quality: she graphically and implicitly explores her individuality and her deprivations of liberty through deeply evocative accounts of her past and personal photographs rooted in her experience of race and culture.
As it is World Poetry Day, I thought I’d share three of her poems that appeal to me. But first, a little background on her: she was born on the centennial of Confederate Memorial Day in the Deep South, to an African American mother and a white father, when interracial marriage was still illegal in Mississippi. Though her father, the poet Eric Trethewey, had an early impact on her, it was the tragic death of her mother, Gwendolyn Ann Turnbough, that, according to Trethewey, prompted her first attempt at writing poetry.
I hope you enjoy these poems and explore more of her work.
I am four in this photograph, standing
on a wide strip of Mississippi beach,
my hands on the flowered hips
of a bright bikini. My toes dig in,
curl around wet sand. The sun cuts
the rippling Gulf in flashes with each
tidal rush. Minnows dart at my feet
glinting like switchblades. I am alone
except for my grandmother, other side
of the camera, telling me how to pose.
It is 1970, two years after they opened
the rest of this beach to us,
forty years since the photograph
where she stood on a narrow plot
of sand marked colored, smiling,
her hands on the flowered hips
of a cotton meal-sack dress.
[Natasha Trethewey, “History Lesson” from Domestic Work.]
Before the war, they were happy, he said, quoting our textbook. (This was senior-year
history class.) The slaves were clothed, fed, and better off under a master’s care.
I watched the words blur on the page. No one raised a hand, disagreed. Not even me.
It was late; we still had Reconstruction to cover before the test, and — luckily —
three hours of watching Gone with the Wind. History, the teacher said, of the old South —
a true account of how things were back then. On screen a slave stood big as life: big mouth,
bucked eyes, our textbook’s grinning proof — a lie my teacher guarded. Silent, so did I.
[Natasha Trethewey, “Southern History” from Native Guard.]
Here, she said, put this on your head.
She handed me a hat.
You ’bout as white as your dad,
and you gone stay like that.
Aunt Sugar rolled her nylons down
around each bony ankle,
and I rolled down my white knee socks
letting my thin legs dangle,
circling them just above water
and silver backs of minnows
flitting here then there between
the sun spots and the shadows.
This is how you hold the pole
to cast the line out straight.
Now put that worm on your hook,
throw it out and wait.
She sat spitting tobacco juice
into a coffee cup.
Hunkered down when she felt the bite,
jerked the pole straight up
reeling and tugging hard at the fish
that wriggled and tried to fight back.
A flounder, she said, and you can tell
’cause one of its sides is black.
The other side is white, she said.
It landed with a thump.
I stood there watching that fish flip-flop,
switch sides with every jump.
[Natasha Trethewey, “Flounder” from Domestic Work.]
Talking about Trethewey’s poetry, Jericho Brown, a Pulitzer Prize-winning poet, says, “Her contribution is that of someone who sees us in individual and human ways and not only representations of resistance. Her black Union soldiers fall in love, her overworked grandmother plays a mischievous trick on a foreman, her black stepfather is a murderer, and her white father, who loves her, can’t resist microaggressions against her. I mean she allows her characters — her own history — to be as complex as history really is. This makes space for readers like me who are interested in life and not a caricature of life, readers who understand that poems must face us to our good and our evil and our personhood no matter what color we are.”
Apart from Brown’s insight that Trethewey’s poetry invokes a strand of individuality that goes beyond painting groups of people as symbols of resistance, I have often wondered what it is about her poems that appeals to me. Though I have no firsthand experience of what a troubled relationship with racial identity feels like, I suppose it may have something to do with my uneasy alliance with the English language itself, the medium of Trethewey’s craft. In a multilingual India, English is both the language of enslavement and the language of revolt. Though English is a colonial language, India has adopted it as a father tongue of sorts, yet we are not an Anglosphere society. For most Indians, including me, English is characterized by ambiguity and conflict with our mother tongues, often mirroring a flounder-like situation: flip-flopping, switching sides with every jump, privileging one or the other, yet interpreting each other in the search for liberty.
In 1969, Colonel Luke Quinn, a U.S. Army Air Force officer in World War II, was diagnosed with inoperable gallbladder cancer. Surprisingly, he was referred to Dr. DeVita, the lymphoma specialist at the National Cancer Institute, by the great Harvard pathologist Sidney Farber, famous for developing one of the most successful chemotherapies ever discovered. Nobody imagined back then that Colonel Luke Quinn — a wiry man with grey hair and a fierce frown — and his unusual, likely incurable cancer would significantly impact how we look at cancer as a disease.
Having been coerced into taking up Colonel Luke Quinn’s case, despite gallbladder cancers not being his specialty, Dr. DeVita began to take a routine history, much to the annoyance of Luke Quinn, who was used to being in command. Though Quinn glared at Dr. DeVita for initiating yet another agonizing round of (im)patient history, he recounted that he had gone to his primary care physician in D.C. when his skin and the whites of his eyes had turned a deep shade of yellow — jaundice. Suspecting obstructive jaundice — a blockage somewhere in the biliary tract — the physician referred Quinn to Claude Welch, a famous abdominal surgeon at Mass General who would later treat Pope John Paul II after he was shot in 1981. Instead of gallstones, the renowned surgeon found a tangled mass of tissue squeezing Quinn’s gallbladder; gallbladder cancer was pretty much a death sentence. Once the pathologist confirmed the diagnosis and Quinn was declared inoperable, he was sent to Dr. DeVita at the NCI because he wanted to be treated near his home.
Dr. DeVita, however, noticed something quite odd when he felt Quinn’s armpits during a routine examination. Quinn’s axillary lymph nodes — the cluster of glands under the arms that work as a sentinel for what’s going on in the body — were enlarged and rubbery. These glands tend to become tender when the body has an infection and hard if it harbors solid tumors, like gallbladder cancer; they become rubbery if there is lymphoma. Being a lymphoma specialist, the startled Dr. DeVita questioned the possibility of a misdiagnosis: what if Quinn had lymphoma, not a solid tumor wrapping around his gallbladder and causing jaundice?
When asked to have his biopsy slides reevaluated, the always-in-command Colonel Luke Quinn angrily handed them over to the pathologist at the NCI and sat impatiently in the waiting room. Costan Berard, the pathologist reviewing Quinn’s biopsy slides, detected an artifact that had made it difficult to differentiate one kind of cancer cell from another. Gallbladder cancer cells are elliptical, whereas lymphoma cells are round, but roundish lymphoma cells can look like elliptical gallbladder cancer cells when squeezed during the biopsy. This unusual finding by Berard explained why Quinn’s lymph nodes were not hard but rubbery. The new biopsy showed without a doubt that Quinn had non-Hodgkin’s lymphoma — the clumsy non-name we still go by to classify all lymphomas that are not Hodgkin’s disease.
The NCI was working on C-MOPP, a new cocktail of drugs to treat non-Hodgkin’s lymphoma that had produced two-year remissions in forty percent of patients with aggressive versions of this disease. The always-in-command World War II veteran had somehow landed in the right place by accident! It was a long three months for the nurses, though, as they hated him for leaning on the call button all day, for complaining bitterly about the food, for chastising anyone who forgot to address him as Colonel Quinn, and for never thanking anyone. But incredibly, he was discharged without any sign of his tumor; he had gone from certain death to a fighting chance.
The fierce and unpleasant Colonel Quinn is crucial to this story because his remarkable escape from certain death, which began with a misdiagnosis, unknowingly galvanized a close network of influential people. It could do so because Quinn was a friend and employee of the socialite and philanthropist Mary Lasker — the most consequential person in the politics of medical research. Read my earlier piece on her.
Mary Lasker, the timid, beehived socialite, circumvented all conventions of medical research management and got the U.S. Congress to do things her way. Mary’s mantra was: Congress never funds a concept like “cancer research,” but propose funding an institute named after a feared disease, and Congress leaps on it. Her incessant lobbying, with the backing of her husband, Albert Lasker, and her confidante, Florence Mahoney, wife of the publisher of The Miami News, helped create the National Cancer Institute, the National Heart Institute, the National Eye Institute, the National Institute of Mental Health, the National Institute of Dental and Craniofacial Research, the National Institute of Arthritis and Metabolic Diseases, the National Institute of Aging, and the National Institute of Child Health and Human Development.
Though Mary Lasker knew the value of independent investigators pursuing their unique research interests, she supported projects only when a clinical goal was perceptible, like curing tuberculosis. In 1946, Mary, having noticed microbiologist Selman Waksman’s work on streptomycin—a new class of antibiotics effective against microbes resistant to penicillin—persuaded him and Merck pharmaceutical company to test the new drug against TB. By 1952 Mary’s instinct had won over Waksman’s initial skepticism as the widespread use of streptomycin halved the mortality from TB! Mary Lasker’s catalytic influence on basic research leading to a Nobel Prize-winning discovery is a case in point.
Her clout over Congress was at its prime through the 1950s and ’60s, when the National Cancer Institute (NCI) was developing the first cancer cures. It was also the period when Colonel Luke Quinn became her influential lieutenant. Congress believed Luke Quinn represented the American Cancer Society, but in reality he was Mary’s lobbyist. When Quinn got sick, Mary used her contacts to bring in Welch and Sidney Farber, and it caught her special attention when Quinn’s supposedly incurable torment was overcome. The ongoing public concern for cancer and Albert Lasker’s death from pancreatic cancer made it an ideal disease for Mary to draw the battle lines around. Quinn’s recovery convinced her that the necessary advance in basic research had occurred to justify taking the disease head-on. In April 1970, she began building bipartisan support by having the Senate create the National Panel of Consultants on the Conquest of Cancer. She prevailed upon the Texas Democrat Senator Ralph Yarborough to appoint her friend, the wealthy Republican businessman Benno C. Schmidt — chairman of the Memorial Sloan Kettering board of managers — as chairman of the panel. She backed him up by arranging for Sidney Farber to be the co-chairman. The panel also included Colonel Luke Quinn and Mary herself.
In just six months, the panel issued “The Yarborough Report.” The report, mainly written by Colonel Luke Quinn and Mary Lasker, made far-reaching recommendations, including an independent national cancer authority. It recommended a substantial increase in funding for cancer research, from $180 million in 1971 to $400 million in 1972 and $1 billion by 1976. Finally, it recommended that the approval of anticancer drugs be moved from the FDA to the new cancer authority. Senator Edward Kennedy presented the recommendations as new legislation for the Ninety-Second Congress. Though not a Senate staff member, Colonel Quinn, trained by Mary in the art of testifying before Congress, orchestrated the hearings, set the agenda, and selected the people who would testify.
The Nixon administration did not immediately embrace the bill, as Nixon wasn’t thrilled by Edward Kennedy’s involvement. Being Ted Kennedy’s close friend, Mary asked him to withdraw as a sponsor. Reintroduced under Senator Pete Domenici and renamed the National Cancer Act, the bill then had to pass the House. Paul Rogers, who headed the House health subcommittee — Colonel Quinn and Mary Lasker had no influence over him — objected to removing the NCI from the NIH umbrella. He cautioned that the NIH would face similar threats of separation in other disease areas. A revised bill conceded this demand and kept the NCI under the NIH, but gave it a separate budget and a director appointed by the President.
On December 23, 1971 — fifty years to this day — the National Cancer Act was signed as a Christmas gift to the nation by President Richard Nixon, two years after Colonel Luke Quinn walked into the NCI with a wrong diagnosis. Though Quinn ultimately died of relapsed cancer a few months after the signing of the Cancer Act, the war on cancer had commenced, with cancer research on the fast track. It was a victory for Mary Lasker, perhaps the most effective advocate for biomedical research that Washington had ever seen.
In hindsight, Mary Lasker’s triumph came with two significant disappointments. First, her crusade had failed to transfer the authority for approval of anticancer drugs from the FDA to the NCI — a failure that would plague the National Cancer Program well into the future. Second, the premise of the National Cancer Act — that the “basic science was already there” and a quantitative boost in resources was all that was needed to bring victory — was flawed. In combination, the two disappointments — the subjects of a future blog post — have led the tax-paying public to perceive a progress gap in cancer research rather than the tremendous conceptual progress made because of the War on Cancer.
Ultimately, this blog post is an invitation to appreciate, on its 50th anniversary, the lucky accidents and the incredible effort that created the National Cancer Act. At the same time, cancer researchers like me — the boots on the ground, who experience how non-trivial progress against cancer really is — will dwell on the insistence on simplistic, linear views of progress in cancer research for public consumption.
Many moons ago, around this time in October, my research collaborators and I were keeping a night’s watch in the lab; I mean pulling an all-nighter. We were peeking into some cells growing in a petri dish using a confocal microscope, and little did we know that the impermissible realm was waiting to stare back at us. The spirit of Halloween spooked us through these sacs of life!
Here’s how they looked back at us.
To the annoyance of my wife, my Facebook memory of this otherworldly microscopic image prompted an outbreak of random reading. In this reading session, I hit upon Fontanalia, a pagan Roman festival of fountains held in October. On digging further, I learned that the pagan view of October has a deep connection with the House of Stark in Game of Thrones, which links back to Halloween. To the best of my knowledge, these connections are not made explicit by George R. R. Martin, and if you have already connected the dots yourself, consider me a dim tube light.
Any number of GoT fan pages will tell you that Westeros is based on medieval Anglo-Saxon Britain, and the motto for House of Stark—one of the Great Houses of Westeros—is “Winter Is Coming.” The House of Stark is the only noble House whose family motto is a focal warning for the whole Ice and Fire narrative. Apart from the motto being a sign of vigilance for the Starks against a hard winter, it is also a long-forgotten reminder that the White Walkers will return in the winter and overturn the realm. So the Starks are looking to protect the realm’s order and keep the Night King away.
Let’s cut to October — a month sacred to the Roman goddess Astrea. She is a star-goddess with wings, a shining halo, and a flaming torch who lived among humans during the Golden Age. When the human realm began to degenerate, she withdrew to the upper world. Astrea’s departure in October signaled the end of the golden age of light, as the chills of autumn warned that winter was drawing near. Interestingly, October’s Anglo-Saxon name is Winterfelleth, which means winter is coming, and Westeros is a version of medieval Anglo-Saxon Britain.
Like ice and fire, the pagan October is tinted in celebrations of light and darkness. The month begins brightly with Fides, the goddess of faithfulness, followed by a festival of the Grecian Dionysus, the pagan god of wine and revelry. Barring a brief interlude to honor departed ancestors on Mundus (the 5th of October), we have the celebrations of Victoria — the Roman goddess of triumph; Felicitas — the Roman goddess of luck; Fortuna Redux — the goddess of successful journeys; and Fontinalia — the festival of holy wells, springs, and fountains — just before October turns towards the freezing gloom that is to follow. From the 14th of October, named Vinternatsblot, to the festival of Fyribod on the 28th, a forewarning of foul weather, we have ceremonies that contemplate the motto that winter is coming. As winter draws near, we have the feast of Samhain Eve (pronounced: Sow-ain Eve) on the 31st of October.
The close of October (aka Winterfelleth) signals a “calendrical rite of passage” for a temporary reversal of powers. It is a seasonal turning point marking the day’s liminal status as an annual and seasonal day of transition; April Fools’ Day is another example of such a “calendrical rite of passage.” During this disorienting time of Winterfelleth, the Pagans cross the sensory walls of their mundane realm to peek into the impermissible realm. For all of us, the feast of Samhain Eve on the 31st of October is the modern-day festival of Halloween — a time when we playfully welcome the otherworldly. For members of the House of Stark, who embody the month of Pagan October, this time marks the breakdown of the Night’s Watch, the collapse of the physical wall, and the Night King’s arrival.
Remember, tonight all of us belong to the House of Stark, with a duty to keep a Night’s Watch because winter is coming. The difference is that we welcome the temporary reversal of powers, mildly disorienting our sensory walls to peek into the impermissible realm.
I have argued elsewhere that the Republic of India is a Civilization-State, where Indic civilizational features will find increasing expression as the Indian state evolves away from its legacy of the British Raj. However, India’s history is an area of immense concern and a rate-limiting step in the evolution of the Civilization-State. The past seventy-five years of Indian independence have shown that breaking away from the influences of intellectual and cultural imperialism is far more complicated than drawing away from political servitude, because the foundation of colonization is cultural and spiritual illiteracy.
Colonization influences Indians in unusual ways, as it alienates us from our past. For instance, we refer to our Indic formals as ‘ethnic wear,’ whereas a suit and a conservative tie in the heat of tropical India are our ‘natural’ formals. The Constitution of India is written in English, though only 0.1% of independent India spoke the language. Likewise, the knowledge about India among the English-educated elite is generated through the alienating European view formed while India was colonized. The main interest of the British was to write a history of India that justified their presence. So they had to acknowledge the legitimacy of preceding violators like the Turks, Persians, and Mongols while accentuating only those Indic kings who either reformed or renounced the Hindu way of life. Several generations of Indians, including me, have grown up studying Indian history textbooks that scarcely evaluate our impact on the outside world even as they painstakingly document the aftermaths of the outside world’s ideas and actions on us.
Elites from colonized societies, educated in the wake of colonial rule, often standardize colonial scholarship and legitimize it to the extent of rejecting most native insights about their own land, society, and culture. This dynamic largely explains the bloated investment in defining highly racialized tribes and ethnic types in colonized countries. Consider how the state of African studies — with its nauseating inflection of Marxist narrative — is afflicted by similar categories: static tribes, decadent villages, and clashing ethnic groups. These frameworks were essential narratives that justified foreign rule — devices of hierarchical control by colonizing powers. Professor of African and world history Trevor Getz warns us, “the story we often tell of African tribes, chiefs, and villages tells us more about how Europeans thought of themselves in the period of colonization than of the realities in Africa before they came.”
The same holds for the history of Indic civilization. On the threshold of intellectual and cultural decolonization in 1947, when rehabilitating Indic wisdom traditions and the dignity of Hindu civilization was the need of the hour, newly independent India saw the emergence of a new movement in writing its history. Though more self-reliant than the Europeans in broadening the scope of Indian social and economic history, this movement deeply dyed Indian history in an expression of Marxism. India’s ancient past became a theater to assert European society’s preconceived class and material conceptions while obliterating the importance of Indic ideas in defining its own historical events. India’s first prime minister, Jawaharlal Nehru — the Cambridge graduate whose fascination with Marxism and Fabian Socialism had made him an outlandish mix of East and West — victimized the Indian civilization with his neither-here-nor-there disposition.
Commenting on the sly attempt at thought-control and brainwashing of future generations of newly independent India through a European/Marxist view of Indian history, celebrated Indic thought leader Ram Swarup writes: “Karl Marx was exclusively European in his orientation. He treated all Asia and Africa as an appendage of the West and, indeed, of Anglo-Saxon Great Britain. He borrowed all his theses on India from British rulers and fully subscribed to them. With them, he believes that Indian society has no history at all, at least no known history —implying our history is just a bunch of ancient myths—and that what we call its history is the history of successive intruders. With them, he also believes that India has neither known nor cared for self-rule. He says that the question is not whether the English had a right to conquer India, but whether we are to prefer India conquered by the Turks, by the Persians, to India conquered by the Britons. His own choice was clear.”
As a result of such Marxist scribes, the brutal history of repeated invasions and the forced proselytization of Indic people is described in charitable words, often as the harbinger of an idealized syncretic culture that “refined” the Hindu society. Though I understand the emergence of syncretic cultures, it is absurd for anglicized elites to use them as an all-weather secular rationalization of blatant cultural and intellectual subjugation the Hindu psyche still confronts. The iconoclasm practiced by invading hordes and the resulting ruins of some of the most sacred Hindu sites find no place in our history books because it would ‘communalize’ history. Any inquisitive young Hindu will have to go out of the way to read the works of the trail-blazing Hindu publisher and historian Sita Ram Goel to understand the omitted sections from our history books. His two-volume work titled Hindu Temples, What Happened to Them analyzes nearly two thousand Hindu and Jain temples destroyed by Turkic/Mongol occupiers and their repercussions on the Indic ecosystem.
In a bizarre continuation of the subjugation of the Hindu temple ecosystem, the “secular” Indian state in 1951, under the Indian National Congress and its ideological Marxist and Communist allies, selectively usurped Hindu temples and their lands to bring them under state control, and it continues to stifle the temple ecosystem of India. Every non-Hindu has a say on how Hindu temple rituals should be modified, but a Hindu talking about Abrahamic practices amounts to a transgression of secularism and religious bigotry. I urge you to watch a quick primer on the crisis of Hindu temples in India — https://www.youtube.com/watch?v=js936p_cvTE.
Mainstream Indian history, as we learned it in school in India:
-Indian history begins with the Indus Valley Civilization (which is not Hindu)
-Hinduism begins with the Aryan Invasion bringing Vedic Religion of Sacrifice
-Hinduism is full of violent animal sacrifice until Buddhism reforms it
-There is a brief Golden Age for Hinduism in which art and culture ‘flourish,’ and the epic poems, The Ramayana and Mahabharata, are composed.
-There are Islamic invasions, but the Mughals bring harmony under Akbar
-The British colonize India; Hinduism is once again reformed
-Gandhi and Nehru free us.
Mainstream history, as it is taught in American schools, from a California textbook:
-Indian history begins with the Indus Valley Civilization (which is not Hindu)
-Hinduism begins with the Aryan Invasion bringing Vedic Religion of Sacrifice
-Hinduism is full of violent animal sacrifice until Buddhism reforms it
-The Ramayana and Mahabharata are composed. They have talking monkeys and bears and Hindus primitively animal-worship them. They also have the holiest text of Hindus, the Gita, which tells Hindus to do their caste duty and war.
The outline of the ‘alternative’ history proposed in Wendy Doniger’s book:
-The Indus Valley Civilization (is not Hindu)
-Hinduism begins with the Nazi-like Aryans bringing Vedic Religion of Sacrifice.
-Hinduism is full of violent animal sacrifice until Buddhism reforms it.
-There is no real Golden Age for Hinduism, but the Greek Invasion leads to great ideas and works of fiction like The Ramayana and Mahabharata. Greek women presumably inspired the fierce and independent Draupadi.
-After chapters called ‘Sacrifice in the Vedas’ and ‘Violence in The Mahabharata,’ at long last, we have ‘Dialogue and Tolerance under the Mughals.’
-The British colonize India; Hinduism is reformed.
-Once India gained independence, without those civilizing external forces, the Hindus became Hindutva extremists.
How can Doniger’s work be an ‘alternative,’ Vamsee Juluri asks, when its claims are the same as the dominant narrative? He notes that this ‘alternative’ history of Hinduism sits at the top of the heap in most bookstores in the West, and its thesis dominates the press coverage of Hindu India. In a normal world, if there were ten titles written by ‘Hindu apologists’ on the shelf, then an ‘alternative’ would indeed have been worthy of the respect that term brings. Hence, the mainstream and the so-called alternative narratives of Indian history are both contemptuous of India’s Hindu religion, culture, and philosophy. This contempt is fortified in the republic of letters as ‘progressive’ criticism. The narrative still holds sway over academic appointments, research grants, the crafting of syllabi, and the promotion of textbooks. In other words, postcolonial Indian historians have primarily pursued the same blinkered colonizing vision as their masters, who saw history only through a narrow prism of class, material wealth, and outsiders descending upon Hindus to ‘reform’ our heathen culture. Meanwhile, the history of the Hindu past is framed strictly within regional or sectarian confines. Therefore, instead of seeing the invaders for who they were, the history of Hindu India provincializes itself and legitimizes self-hatred.
With a deep concern for such self-hatred in historical narratives, the Nobel Laureate V. S. Naipaul said: “This inveterate hatred of one’s ancestors and the culture into which they were born is accompanied by a hatred deep enough to want to destroy one’s own land and join the ranks of the violators of the ancestral land and culture, at least in spirit. Pakistan is an example. Its heroes are not the Vedic kings and Indic sages who walked the land but invading vandals like Ghaznavi and Ghori, who ravaged them.”
Accounting for the history of any country is a delicate task, and it is far more sensitive for a civilization with a long colonial past. So, while reading mainstream Indian history, one must be mindful of what the foreigners wanted to know first, because what they wanted to perceive foremost was whatever helped them rule the place as handily as possible. This effectively explains the dominance of hollow postulates surrounding racialized ethnic Aryan and Dravidian divides, and the confounding of the European concept of class with the Hindu varna, in colonial and Marxist literature. These factors make it practically impossible for modern observers not to visualize previously colonized societies as changeless and unevolved for centuries. They never question whether Indic social features are also the aftermath of several invading powers, the resulting conflicts, and instinctive adaptations. Entities like factions and castes are often not static social constructs but the result of endlessly contested histories. In Indic civilization and other colonized lands, such identities were often situational and flexible, not wholly rigid and all-encompassing; wholesale rigidity is often a bug that needs exploration and explanation, not a feature to be endorsed unquestioned. A. M. Hocart, regarded by the philosopher-traditionalist Ananda Coomaraswamy as perhaps the only unprejudiced European sociologist to have written on caste, states: “It is at present fashionable to rationalize all customs, and to write up the “economic man” to the exclusion of that far older and more widespread type, the religious man, who, though he tilled and built and reared a family, believed that he could do these things successfully only so long as he played his allotted part in the ritual activities of his community.”
Indeed, the rigidity of the caste system was not the cause but the effect of the breakdown of political order. Moreover, the fixation with caste in Indian historical writings has made us oblivious to the profound ontological assumptions underlying the complexities of Indian society. Hindu society conceived multidirectional relationships at different levels of existence. This ideal of the individual-community relationship assumed that although human beings differ in temperament, each of them must try to develop and realize the full potential of being human by seeking greater intercourse with other members as part of cosmic reality. The French anthropologist Louis Dumont misunderstands this notion and argues that the individual in Indian thought did not exist beyond the idea of caste. This is simply not true, because the Indic concept of autonomy is different from individualism. Rights in Western liberal traditions are ‘claims’ against others in society, but Indic society gave prominence to duties first, as it presupposed the consideration of others without obliterating autonomy. According to treatises like the Sukraniti and the grand wisdom of the Atharvaveda, autonomy is the state in which no one can dominate us against our will or fundamental nature. Indian thought refers to this state as Swaraj; the Indian struggle for independence was predicated on Swaraj.
Several other Indic scholars have highlighted the adverse effect of excessive emphasis on caste coupled with a de-emphasis of other, equally important Indic social concepts. Factors such as ashrama, shreni, kula, and jati, and above all Dharma (Global Ethic), are overlooked, leading to a grave misunderstanding of the character of Hindu society in its historical writings. Indeed, in the Ramayana (a venerated Indian epic poem composed by an author not belonging to the “upper class”) and the Mahabharata (a revered Indian epic poem composed by an author born to a fisherwoman), and in the broad scope of Puranic literature, there is a notion of an individual who gives life and substance to existence in this world. So great has been the weight attached to the individual that Indic art and literature seek expression through the lives of such individuals, be it the limited but exuberant personality of Sri Rama, or the unlimited personality of Sri Krishna (a cowherd who is the ultimate avatar; he also happens to be dark-complexioned), or the non-dimensional, half-man half-woman personality of Shiva. Needless to add, the importance of the themes discussed here springs from the fact that Hindu society of 2021 is still largely organized around them.
Indic pantheism may look absurd to atheists and those of Abrahamic persuasion alike. Nevertheless, the Indic tradition has always had a transnational character and a considerable geopolitical sphere of influence. Therefore, future Hindu scholarship will understandably reject the notion of the colonial and Marxist intelligentsia that overemphasized the ‘local’ or ‘regional’ as more ‘authentic’ than the larger society, territory, or neighborhood of the Hindus. Hindu scholars today already investigate the interplay of local, regional, national, and global settings that have historically responded to Hindu thought and action. As native scholarship course-corrects Indian history, the updated Indian history textbooks of the future will undoubtedly draw bad press in the global north, with alarmist headlines such as ‘Hindu revision of Indian history.’ Still, anyone with a reasonable amount of grey matter needs to recognize the mobility of Hindu predecessors and think through the different spatial frameworks they occupied as they traversed Bhārata (the original name of the Indic civilization, mentioned in the Indian constitution).
When external forces become overpowering, or the body of society is unable to assimilate them, inevitable tensions are generated, leading to its decay. The interaction of outside influences with the total personality of Hindu society and the various traditions within it can make for a fascinating study. I’m not suggesting that the mainstream discourse has no valuable insights in this regard. I’m only pointing out the enormous biases and gaps, and how native Hindu scholarship is best equipped to address them. As I see it, an inevitable manifestation of the cultural and intellectual decolonization of the Indian Civilization-State is the progressive recognition of the depth and scope of its Hindu character. A character that currently remains truncated for ‘communal’ comfort in the dominant narrative of Indian history.
In discussing the push to vaccinate against COVID-19 in developed economies like the US and UK, what gets lost in political rhetoric is the importance of effective vaccination among the immunosuppressed, especially the HIV+ population.
In groups with underlying immunosuppression [i.e., people with hematological malignancies, people receiving immunosuppressive therapies for solid organ transplants, or people with other chronic medical conditions], there have been reports of prolonged COVID-19 infection. The latest is from South Africa, where an HIV+ patient experienced a persistent COVID-19 infection of 216 days with moderate severity. The constant shedding of the SARS-CoV-2 virus accelerated its evolution inside this patient. This is possible because suboptimal adaptive immunity delays clearance of the virus yet provides enough selective pressure to drive the evolution of new viral variants. In this case, the mutational changes of the virus within the patient resembled the Beta variant.
The largest population of immunosuppressed [HIV+] people is in South Africa. So an alternative to chasing variants like Delta and Beta after their large-scale emergence, or to trying to convince people who reject vaccination in the Global North, is to tackle super-spreading micro-epidemics of novel variants among the immunosuppressed in the Global South. Since Novavax and J&J are demonstrably ineffective among the immunosuppressed, the Moderna vaccine is the best bet to slow down the emergence of future variants.
Who has millions of unused mRNA COVID-19 vaccines that are set to go to waste? The answer is the United States. As demand dwindles across the country and doses will likely expire this summer, why not direct them to the Global South, especially South Africa, through a concerted international effort?
The problems with a headline such as “US Trade Balance With China Improves, but Sources of Tension Linger” are twofold.
A: It lends support to the notion that trade surpluses are FOREVER safe and trade deficits INVARIABLY grave. That is not accurate, because foreign countries will always wish to invest capital in countries like the US that employ it relatively well. One clear case of a nation that borrowed massively from abroad, invested wisely, and did excellently well is the United States itself. Although the US ran a trade deficit from 1831 to 1875, the borrowed financial capital went into projects like railroads that brought a substantial economic payoff. Likewise, South Korea ran large trade deficits during the 1970s. However, it invested heavily in industry, and its economy multiplied. As a result, from 1985 to 1995, South Korea ran trade surpluses large enough to repay its past borrowing by exporting capital abroad. Furthermore, Norway’s current account deficit reached 14 percent of GDP in the late 1970s, but the capital it imported enabled it to build one of the world’s largest oil industries.
B: The headline makes a normative claim by folding the bilateral trade deficit into the overarching narrative of bilateral tensions. Such normative claims follow from the author’s value-based reasoning, not from econometric investigation. China and the US may have ideological friction on many levels, but a surplus or deficit has much to do with demographics and population changes within a country at a given time. Nonetheless, a legacy of political rhetoric relishes inflating and conflating matters. We hear a lot about the forecast that China will become the largest economy by 2035, provoking many in the US to bat for protectionist policies. But we ignore the second part of this prediction: based on population growth, migration (aided by liberal immigration policies), and total fertility rate, the US is forecast to become the largest economy once again in 2098.
Therefore, it is strange that so many “trade deficit imbalance” headlines neglect to ask whether the borrower is investing the capital productively. A trade deficit is not always harmful, and there is no guarantee that running a trade surplus will bring substantial economic health. In fact, enormous trade asymmetries can be essential for economic development.
Lastly, isn’t it equally odd that this legacy of political rhetoric between the US and China makes it natural to frame trade deficits with China under the ‘China’s abuses and coercive power’ banner, yet intimidates the US establishment from honestly and openly confronting the knowledge deficit in SARS-CoV-2’s origin? How and when does a country decide to bring sociology to epistemology? Shouldn’t we all be more concerned about significant knowledge deficits?
Sarah points out that in the modern world, the mother is the only caregiver left. However, in traditional European and Asian societies (for example, Indian society), mothering was—and in the case of India, to a large extent still is—a communal effort. Aunts were central and were called Big Momma. Now they are just aunts. This clarifies for me why we in India, while growing up, referred to every stranger on the street of a certain age as an uncle or an aunt, irrespective of who they were; it is a vestige of our traditional society’s heavy focus on ‘other mothering.’
From the open house discussion, I learned that after the First World War there was despair about the world, leading to childless or child-free couples. This mirrors the way our generation’s cultural concerns and climate-change anxieties are leading to more child-free couples. In this context, the American baby boom of the 1940s and 50s was not an everyday occurrence but a striking anomaly. After the baby boom, a period of childlessness (child-free couples) came back. Although many things are recorded about mothers and women in child-free marriages, what we know about fathers and childless men is nada. This is a gap in our history we need to correct. Back to the topic of mothers: the US and France were the first to reduce childbirths after the world wars. Women stood up for individual liberty—about time—in these countries, and the trend eventually led other places in the world to adopt the same values. So, a trade-off post-Second World War societies made was that they no longer cared for big families. Instead, they looked to invest in different versions of big and caring governments, with mixed results.
As we moved from big, interconnected families—which helped protect (in terms of social capital) the most vulnerable people in society from the shocks of life—to smaller, detached nuclear families, we made room to maximize talents and expand individual freedom. This shift has ultimately led to a familial system that liberates people of a certain capability and ravages others.
Altogether, Sarah Knott reminds us that our contemporary society often forgets that a ‘mother’ is just as much a verb as it is a noun.
The raging second wave of COVID-19 hasn’t just collapsed Indian healthcare; it has devastatingly uncovered preexisting public health policy deficits and healthcare frailties.
In India, there is a need to revive a serious conversation around public health policy, along with upgrading healthcare. But wait, isn’t the term ‘public health’ interchangeable with ‘healthcare’? Actually, no. ‘Public health’ refers to population-scale programs concerned with prevention, not cure. Healthcare, in contrast, is essentially a private good, not a public good. As most public health experts point out, the weaker the healthcare system (as in India), the greater the gains from implementing public health prevention strategies.
India focused its energies on preventing malaria by fighting mosquitoes in the 1970s and then regressed to ineffectively treating patients who have malaria, dengue, or Zika. A developing public health policy got sidelined for a more visible, vote-grabbing, yet inadequate healthcare program. Why? Indian elites tend to transfer concepts and priorities from the health policy debates of advanced economies into Indian thinking about health policy without much thought. As a result, there is considerable confusion around terminology. There is a need for a sharp delineation between ‘public good,’ ‘public health,’ and ‘healthcare.’ The phrase ‘public health’ is frequently misinterpreted to mean ‘healthcare.’ Conversely, ‘healthcare’ is repeatedly assumed to be a ‘public good.’ In official Indian directives, the phrase ‘public health expenditure’ is often applied to government expenditures on healthcare. This is confusing because it contains the term ‘public health,’ which is a concept distinct from ‘healthcare.’
Many of today’s advanced economies have been engaged in public health for a very long time. The UK, for example, began work on clean water in 1858 and on clean air in 1952. For over forty-five years, the Clean Air Act in the US has cut pollution even as the economy has grown. Therefore, the elites in the UK and US can afford to worry about the nuances of healthcare policy. In India, on the other hand, the focus of health policy must be on prevention, as it is not a solved problem. Problems such as air quality have become worse in India. Can the Ministry of Health do something about it? Not much, because plenty of public health-related issues lie outside its administrative boundaries. Air quality—which afflicts North India—lies with the Ministry of Environment, and internal bureaucracy (“officials love the rule of officials”) deters the two departments from interacting and working out such problems productively. Economist Ajay Shah points out that Indian politicians who concern themselves with health policy take the path of least resistance: using public money to buy insurance (frequently from private health insurance companies) for individuals who then obtain healthcare services from private providers. This is an inefficient path because a fragile public health policy bestows a high disease burden, which induces a market failure in the private healthcare industry, followed by a market failure in the health insurance industry.
In other words, Ajay Shah implies that the Indian public sector is not effective at translating expenditures into healthcare services. Privately produced healthcare is plagued with market failure. Health insurance companies suffer from poor financial regulation and from dealing with a malfunctioning healthcare system. No matter how much money one assigns to government healthcare facilities or health insurance companies, both routes work poorly. As a consequence, increased government welfare spending on healthcare, under the present paradigm of Indian healthcare, is likely to be under par.
The long-term lesson of the second wave of COVID-19 is that inter-departmental inefficiencies cannot be tolerated anymore. Public health considerations and economic progress need to shape future elections and the working of many Indian ministries in equal measure. India deserves improved intellectual capacity in shaping public health policy and upgrading healthcare, so that higher GDP translates fully into improved health outcomes. This implies that health policy capabilities—data sets, researchers, think tanks, local government—will need to match the heterogeneity of Indian states. What applies to the state of Karnataka will not necessarily apply to the state of Bihar. The devastating second wave is not an argument for imposing more centralized uniformity on healthcare and public health policy across India, as that would inevitably reduce the quality of execution in its diverse states with their various hurdles. Instead, Indian elites need to place ‘funds, functions, and functionaries’ at the local level for better health outcomes. After all, large cities in India match the populations of many countries. They deserve localized public health and healthcare policy proposals.
The need to address the foundations of public health and healthcare in India around the problems of market failure and low state capacity has never been greater.
It is fascinating how foreign societies that loathe someone else’s nationalist movements over time amplify the revitalized traditions of such movements in their mass culture and celebrate a parodied version of them. A case in point is the St. Patrick’s Day celebration.
Ireland’s vibrant folklore tradition was revitalized in the late 19th and early 20th centuries by a vigorous nationalist movement. Following this movement, a tiny rural Irish folklore tradition carried over into the Irish American context—leprechauns decorating St. Patrick’s Day cards, for example. The communal identity of the pastoral Irish homeland, revitalized by the nationalist movement, was vital to Irish group consciousness in America over many generations, even for new generations who never visited the old sod.
Today the Irish group has to remind itself of its kinship and traditions as it continues to fade into American mass culture. As an outsider, what I get from American mass culture—which supposedly celebrates multiple cultures—is not the rich Irish folklore and tradition that arrived on the scene but mere stereotypes of it. The 19th-century WASP anti-Irish caricature can be seen even today. Implicit in every Irish joke is either the image of a drunken Irishman devoid of any cultural sophistication or a fighting Irishman who is endlessly combative. Had the Irish been a brown or black community, would such a depiction—in a less mean-spirited form or not—carry forward in today’s hypersensitive, race-obsessed American society?
St. Patrick’s Day, for the most part a quiet religious holiday in Ireland, started as an occasion to demonstrate Irish heritage for those living in the United States. Instead, one primarily receives an American mass-culture-induced Irish self-parody: a holiday associated with alcoholic excess. How much of the self-parody was consciously nurtured by early Irish Americans is debatable. But I recognize that the Irish have resisted being entirely consumed by American mass culture in several ways, for example, by employing traditional Irish names such as Bridget, Eileen, Maureen, Cathleen, Sean, Patrick, Dennis, etc.
As a Hindu immigrant myself, I realize it is essential for immigrant groups to assimilate in several ways, like speaking English, which the Irish did effortlessly. But isn’t it American mass culture’s duty to comprehend a certain authentic Irishness or Hinduness in popular culture without caricatures? As a Hindu, I have faced disdainful “holy cow” jokes from Muslims and Christians in the United States, but of course, Hinduphobia isn’t a politically dominant thing, you see. In this light, when just about anything goes in the name of a “melting pot,” I don’t see cultural salad bowls as regressive but as protective. Interestingly, folks who sermonize that blending cultures and diluting conserved cultural traits like names are the only forms of progressive new beginnings in the social setting also shore up conserved group identities for certain communities in politics.
Not everything is healthy with immigrants who wish to conserve their authentic identity, either. There is a bias among such immigrants to regard the United States as lacking a unique, respectable culture. Such immigrant-held prejudices get magnified when one half of the country goes about canceling all the faces on Mount Rushmore and actively devalues every founding document and personality of the United States. The impact of immigration and assimilation is complex. It requires appreciation of the traditions maintained by immigrants, and immigrants appreciating the culture of the land they have entered.
Over time, however, colorful cultural parades that aim to link immigrant folklore and traditions to public policy and popular culture via song, dance, and merriment also dilute the immigrants’ bond with their authentic cultural heritage. In a generation or two, immigrants come to embody the projected, simplistic self-image parading in the mass culture. Soon, the marketed superficial traits become deeply “authentic” pop-cultural heritage worthy of conservation in the melting pot. A similar phenomenon is taking over diaspora Hindus and some communities back home in India, where a certain “Bollywood-ization” of Indic rituals and culture is apparent. I call this careless collective self-objectification.
When a critical mass of people recognize a weakening of valuable cultural capital, reviving it is natural. If not for the revivalist Irish cultural nationalism, would there be a sense of pride, a feeling of collaboration among the Irish Americans in the early years? Would there be a grand sweep of Irish heritage for the melting pot to—no matter how superficially—celebrate?
In the early twentieth century, cancer assumed a more prominent place in the popular imagination as the threat of contagious diseases reduced and Americans lived longer. In this regard, the American Society for the Control of Cancer (ASCC), founded in 1913, had identified three goals: education, service, and research. However, until midcentury, largely due to the limited budget, the society contributed little to cancer research.
Enter Mary Woodard Lasker
One of the most powerful women in mid-twentieth-century New York City, and perhaps North America, Lasker demonstrated that women could command transformations in medical institutions. Born in Wisconsin in 1900, Mary Lasker, at the age of four, found out that the family’s laundress, Mrs. Belter, had undergone cancer treatment. Lasker’s mother explained, “Mrs. Belter has had cancer and her breasts have been removed.” Lasker responded, “What do you mean? Cut off?” Decades later, Mary Lasker would cite this early memory as what inspired her to engage in cancer work.
After having established herself as an influential New York philanthropist, businesswoman, and political lobbyist, when Mary Lasker inquired about the role of the ASCC in 1943, she learned that the organization had no money to support research. Somewhat astounded by this discovery, she immediately contemplated ways to reorganize the society. She wanted to recreate the society into a powerful organization that prioritized cancer research.
Well-versed in public relations, connected to the financial and political circles of the country, Lasker played a central role in the mid-1940s. Despite notable opposition, Lasker convinced the ASCC to change the composition of the board of directors to include more lay members and more experts in financial management. She urged the council to adopt a new name, the American Cancer Society. She also convinced them to earmark one-quarter of its budget for research. This financial reorganization allowed the ACS to sponsor early cancer screening, including the early clinical trials. The newly formed American Cancer Society articulated a mission that explicitly identified research funding as its primary goal.
By late 1944, the American Cancer Society had become the principal nongovernmental funding agency for cancer research in the country. In its first year, it directed $1 million of its $4 million in revenue to research. Lasker’s ardent advocacy for greater funding of all the medical sciences contributed to increased funding for the National Institutes of Health and the creation of several NIH institutes, including the Heart, Lung, and Blood Institute.
Mary Lasker continued to agitate for research funds but resisted any formal association. As she explained, “I’m always best on the outside.” Undoubtedly, her influence and emphasis on funding cancer research contributed to the promotion of the Pap smear in US culture. As a permanent monument to her efforts, in 1984 Congress named the Mary Woodard Lasker Center for Health Research and Education at the National Institutes of Health.
A portion of a letter ACS administrative director Edwin MacEwan wrote to Lasker encapsulates her contribution to our society. He wrote, “I learned that you were the person to whom present and potential cancer patients owe everything and that you alone really initiated the rebirth of the American Cancer Society late in 1944.”
“If you think research is expensive, try disease.” —Mary Woodard Lasker (November 30, 1900 – February 21, 1994)
Yesterday, I came across this scoop on Twitter; the New York Post and several other blogs have since reported it.
Regardless of this scoop’s veracity, the chart of Eight White identities has been around for some time now, and it has influenced young minds. So, here is my brief reflection on such identity-based pedagogy:
As a non-white resident alien, I understand the history behind the United States’ racial sensitivity in all domains today. I also realize how zealous exponents of diversity have consecrated schools and university campuses in the US to ridding society of prevalent racial power structures. Further, I appreciate the importance of people being self-critical; self-criticism leads to counter-cultures that balance mainstream views and enable reform and creativity in society. But I also find it essential that critics of mainstream culture not feel morally superior enough to enforce just about any theoretical concept on impressionable minds. Without getting too much into the right-vs.-left debate, there is something terribly sad about being indoctrinated at a young age—regardless of the goal of the social engineering—to accept an automatic moral one-downmanship for the sake of the density gradient of cutaneous melanin pigment. Even though I’m a brown man from a colonized society, this kind of extreme ‘white guilt’ pedagogy leaves a bitter taste, and in that bitter taste I have come to describe such indoctrination as the “Affirmative Guilt Gradient.”
You should know there is something called the Overton Window, under which concepts grow larger even as their actual instances and contexts grow smaller. In other words, well-meaning social interventionistas view each new instance of a shrinking problem through the same lens as the larger original problem. This leads to an unrealistic enlargement of academic concepts, which are then shoved down the throats of innocent, impressionable school kids who take them as objective realities instead of subjective conceptual definitions overlaid on one legitimate objective problem.
I find the scheme of Eight White identities a symptom of the shifting Overton Window.
According to Thomas Sowell, there is a whole class of academics and intellectuals of social engineering who believe that when the world doesn’t conform to their pet theories, something is wrong with the world, not their theories. If we project Sowell’s observation onto this episode of the “Guilt Gradient,” it is perfectly reasonable to expect many white kids and their parents to refuse to adopt these theoretically manufactured guilt-gradient identities. We can then—applying Sowell’s observation—predict that academics will declare this opposition to the “Guilt Gradient” evidence of the many covert white supremacists in society who will not change. Such stories may then get blown up in influential op-eds, leading to the magnification of a simple problem, soon to be lost in the clutter of naïve supporters of such theories, the progressive vote-bank, and hard-right polemics.
We should all acknowledge that attachment to any identity—majority or minority—is by definition NOT hatred of an outgroup. Ashley Jardina, Assistant Professor of Political Science at Duke University, concludes in her noted research on the demise of white dominance and threats to white identity: “White identity is not a proxy for outgroup animus. Most white identifiers do not condone white supremacism or see a connection between their racial identity and these hate-groups. Furthermore, whites who identify with their racial group become much more liberal in their policy positions than when white identity is associated with white supremacism.” Everybody has a right to associate with their identity, and association with an ethnic majority identity is not automatically toxic. I feel it is destructive to view such identity associations as inherently toxic, because it is precisely this sort of warped social engineering that results in unnecessary political polarization; the vicious cycle of identity-based tinkering is a self-fulfilling prophecy. Hence, recognizing the Overton Window at play in such identity-based pedagogy is a must if we are to make progress. We shouldn’t be tricked into assuming that non-acceptance of the Affirmative Guilt Gradient is a sign of our society’s lack of progress.
Finally, I find it odd that ideologues who profess “universalism” and international identities choose schools and universities to keep structurally confined, relative identities going—by adding excessive nomenclature—so they can apply interventions that are inherently reactionary. But isn’t ‘reactionary’ a pejorative these same ideologues use on others?
Every great civilization has simultaneously made breakthroughs in the natural sciences and mathematics and in the investigation of that which penetrates beyond the mundane—beyond external stimuli, beyond the world of solid, separate objects, names, and forms—to peer into something changeless. When written down, these esoteric precepts have a natural tendency to decay over time, because people tend to accept them too passively and literally. Consequently, people come to value the conclusions of others over clarity and self-knowledge.
Talking about esoteric precepts decaying over time: I recently read about the 1981 act passed by the state of Arkansas, which required public school teachers to give “equal treatment” to “creation science” and “evolution science” in the biology classroom. Why? The act held that teaching evolution alone could violate the separation between church and state, to the extent that doing so would be hostile to “theistic religions.” Therefore, the curriculum had to concentrate on the “scientific evidence” for creation science.
As far as I can see, industrialism, rather than Darwinism, has led to the decay of virtues historically protected by religions in the urban working class. Besides, every great tradition has its own equally fascinating religious cosmogony—for instance, the Indic tradition has an allegorical account of evolution apart from a creation story—but creationism is not defending all theistic religions, just one theistic cosmogony. This means there isn’t any “theological liberalism” in this assertion; it is a matter of one hegemon confronting what it regards as another hegemon—Darwinism.
So, why does creationism oppose Darwinism? Contrary to my earlier understanding from the scientific standpoint, I now think creationism looks at Darwin’s theory of evolution by natural selection not as a ‘scientific theory’ that infringes the domain of a religion but as an unusual ‘religion’ that oversteps an established religion’s doctrinal province. Creationism, therefore, looks to invade and challenge the doctrinal province of this “other religion.” In doing so, creation science, strangely, is a crude, proselytized version of what it seeks to oppose.
In its attempt to approximate a purely metaphysical proposition in practical terms, or to exoterically prove every esoteric precept, this kind of religious literalism takes away from the purity of esotericism and from the virtues of scientific falsification. Literalism forgets that esoteric writings enable us to cross the mind’s tempestuous sea; one does not have to sink in this sea to prove anything.
In contrast to the virtues of science and popular belief, esotericism forces us to be self-reliant. We don’t necessarily have to stand on the shoulders of others—and thus within a history of progress—but on our own two feet, seeking with the light of our inner experience. In this way, both science and the esoteric flourish in separate ecosystems but within one giant sphere of human experience—like prose and poetry.
In a delightful confluence of prose and poetry, Erasmus Darwin, the grandfather of Charles Darwin, wrote about the evolution of life in poetry in The Temple of Nature well before his grandson contemplated the same subject in elegant prose:
Organic life beneath the shoreless waves
Was born and nurs’d in Ocean’s pearly caves;
First forms minute, unseen by spheric glass,
Move on the mud, or pierce the watery mass;
These, as successive generations bloom,
New powers acquire, and larger limbs assume;
Whence countless groups of vegetation spring
And breathing realms of fin, and feet, and wing.
The prose and poetry of creation—science and the esoteric, the empirical and the allegorical—make the familiar strange and the strange familiar.
This fascinating isochrone map—how many days it took to get anywhere in the world from London in 1914—at first blush evokes the cliché that the world has now shrunk. Obviously it hasn’t shrunk. The distance between London and New Delhi is still 4,168 miles; what has shrunk is time, and this has had profound effects on our lives.
One of the great ironies of the remarkable proliferation of time-saving inventions is that they haven’t made life simple enough to give us more time and leisure. By leisure, I don’t mean a virtue made of some kind of inertia, but a deliberate organization of life based on a definite view of its meaning and purpose. In the urban world, the workweek hasn’t shortened. We still don’t have large swaths of time to really enjoy the good life with our families and friends.
In 1956, Sir Charles Darwin, grandson of the great Charles Darwin, wrote an interesting essay on the forthcoming Age of Leisure in the magazine New Scientist in which he argued: “The technologists, working for fifty hours a week, will be making inventions so the rest of the world need only work twenty-five hours a week. […] Is the majority of mankind really able to face the choice of leisure enjoyments, or will it not be necessary to provide adults with something like the compulsory games of the schoolboy?”
He is wrong in the first part. The world may have shrunk, but cities have magnified. Travel technologies have incentivized us to live farther away, travel longer distances to work, and suffer anxiety attacks in peak-hour traffic (look up Marchetti’s constant). So, rather than being bored to death, our real challenge is to avoid the psychotic breakdowns, heart attacks, and strokes that come from being accelerated to death.
Nonetheless, Sir Charles Darwin is accurate about “compulsory games” for adults. What else are social media platforms, compulsive eating, selfies, texts, Netflix bingeing, and 24/7 news media that dominate our lives? They are the real opiates of the masses. We have been so conditioned to search for happiness in these anodyne pastimes that defying these urges appears to be a denial of life itself. It is not surprising that we can no longer confidently tell the difference between passing pleasure and abiding joy. Lockdown or no lockdown, we are all unwittingly participating in these compulsory games with unwritten rules, believing that we are now that much closer to the good life of leisure. But are we?
‘Time is the wealth of change, but the clock in its parody makes it mere change and no wealth.’— Rabindranath Tagore
What’s the first question in the field of public policy? According to the Indian economist Ajay Shah, it is: “What should the state do?” He says, “A great deal of good policy reform can be obtained by putting an end to certain government activities and by initiating new areas of work that better fit into the tasks of government.”
This question is especially pressing for a weak state like India. But what if people prefer government subsidies, assertive intermediaries, and a weak state? I don’t know the answer. The Indian farm protest is an illustrative example: a rebellion to stay bound to the status quo, fearful of free choice.
04 June 2020: The Union Cabinet clears three ordinances meant to reform the Indian agricultural sector. These reforms upgrade farmers from being mere producers to free-market traders. Agriculture is a state subject in India, but state governments have lacked the political will to usher in these reforms. China reformed its agriculture sector first and its other industries afterward; India is doing it the other way round, and thirty years late. So the union government followed constitutional means to usher in the reforms.
04 December 2020: The Union Government offers a workaround on the dilution of the minimum support price (MSP). The MSP, by the way, sets an unnaturally high price and cuts out competition, so the middlemen club in the farmers’ associations of Punjab, Haryana, and U.P. will settle for nothing less than the scrapping of these reforms.
Bottom line: A) The ordinances aim to liberalize agricultural trade and increase the number of buyers for farmers. B) Deregulation alone may not be sufficient to attract more buyers.
Almost every economist worth his salt acknowledges the merit in point A) and welcomes these essential reforms that are thirty years late but better late than never. Ajay Shah says, “We [Indians] suffer from the cycle of boom and bust in Indian agriculture because the state has disrupted all these four forces of stabilization—warehousing, futures trading, domestic trade and international trade. The state makes things worse by having tools like MSP and applying these tools in the wrong way. Better intuition into the working of the price system would go a long way in shifting the stance of policy.”
The middlemen, however, argue point B), which acts as a broad cover for their real fear: losing their upper hand in the current APMC/MSP system. Nobody denies that a sudden opening of the field to competition threatens the middlemen’s income, but such uncertainties do not justify violent protests and slander campaigns that look to derail the entire process of upgrading the lives of the great majority of poor farmers in the country.
Even worse, these events get branded in broad strokes as state violence and human rights abuses through pre-planned Twitter and street campaigns and unnecessary road blockades. Everybody questions the internet outages during these protests, but no one questions the ethics of protesters blocking essential city roads. A section of Indian society and the diaspora hates Prime Minister Modi, for sure. I have no qualms with this, but reckless hate shouldn’t negate all nuance in analyzing perfectly sane reforms. Social justice warriors legitimize this vicious cycle of dissent without nuance: they don’t take the trouble even to read the farm bill, yet make a virtue of reasoning from their “bleeding hearts.”
Ordinarily, the Indian state works inadequately: it is confused when faced with a crisis, announces a policy package that addresses the problem in a short-term way, and then retreats into indifference. There are two aspects to this incompetence. First, a lack of political will, because special interest groups push the government toward the wrong objectives. Second, state capacity so weak that the government fails to achieve its goal. The farm protest presents a hideous third kind of difficulty: a special interest group of assertive, influential middlemen wants a strong-willed, long-term-thinking government policy (a rare entity) to sway toward short-termism under the pretext of human rights abuse. The hard left is, in effect, working to keep the Indian state weak. They will also be the first to blame the state when it comes off as weak in the next debacle.
Of late, a growing number of Indian-Americans assert a South Asian identity for most of their sociopolitical and cultural expressions, even though actual residents of ‘South Asia’ claim no such identity, at home or abroad. I understand that second-generation Indian-Americans embrace ‘South Asian’ forums in reaction to various domestic conditions. However, they ignore the polysemy of the term ‘South Asia’ when they project it internationally, for example, by expressing ‘South Asian’ pride, rather than simply Indian-American pride, over Kamala Devi Harris’s historic election to the Vice Presidency. Of course, I’m not talking about African-American pride here; that is beyond the purview of this discussion.
As I understand it, the increasing application of the term ‘South Asia’—much like ‘the Middle East’—precludes a nuanced perception of the particular countries that make up the region; it permits Americans to perceive the region as a monolith. Although the United States looms large in the Indian imagination, the image of India is not very clear to the average American, not even, as I see it, to second-generation Indian-Americans. Language enrollment in US universities is a decent metric of American curiosity about a region, and it turns out that around seven times more American students study Russian than all the Indian languages combined. The study of India compares unfavorably with that of China in nearly every higher-education metric, and, surprisingly, it also fares poorly compared to Costa Rica! As an aside, for an alternative to CNN or BBC on ‘South Asian’ geopolitics, and to understand India and her neighborhood, there is WION (“World is One” News, a take on the Indic vasudhaiva kutumbakam). I highly recommend WION’s Gravitas segment for an international audience.
Back to the central question: why don’t Indians prefer the ‘South Asia’ tag?
For decades, the United States hyphenated its India policy, balancing every action with New Delhi against a counterbalancing activity with Islamabad. So much so that the American focus on Iranian and North Korean nuclear proliferation stood in total contrast to the whitewashing of Pakistan’s private A.Q. Khan nuclear proliferation network. Furthermore, in a survey by the Chicago Council on Global Affairs that gauges how Americans perceive other countries, India has hovered between forty-six and forty-nine on a scale from zero to one hundred since 1978, reflecting its reputation as neither an ally nor an adversary. With the civil-nuclear deal, the Bush administration discarded the hyphenation construct and eagerly pursued an independent program between India and the United States. Still, in 2010, only 18 percent of Americans saw India as “very important” to the United States—fewer than those who felt similarly about Pakistan (19%) and Afghanistan (21%), and well below China (54%) and Japan (40%). Even though the Indo-US bilateral relationship has improved since the Bush era, the increasing use of ‘South Asia’ by academics and non-academics alike when discussing India represents a new kind of hyphenated, or bracketed, view of India. Many Indian citizens in the US, like me, find this bracket unnecessary, especially in the present geopolitical context.
What geopolitical context? There are several reasons why South Asian identity pales in comparison to our national identities:
The term ‘South Asia’ emerged exogenously, as a category coined in the United States to study the Asian continent by dividing it. It is an artifact of post-Second World War Western scholarship of Asia.
Despite this scholarship, ‘South Asia’ has low intelligibility because there is no real consensus on which countries comprise it. SAARC includes Afghanistan among its members; the World Bank leaves it out. Some research centers include Myanmar (a province of British India until 1937) and Tibet but leave out Afghanistan and the Maldives. Meanwhile, the UK largely prefers the term ‘Asian’ to ‘South Asian’ for academic centers, and the rest of Europe uses ‘Southeast Asia.’
Geopolitically, moreover, India wants to grow out of the South Asian box; it cares far more about the ASEAN and BRICS groupings than about SAARC.
Under Modi, India has a more significant relationship with Japan than with any South Asian neighbor. With Japan and South Korea, India plans to make the Indo-Pacific a geopolitical reality.
South Asia symbolizes India’s unique hegemonic fiefdom, which is viewed unfavorably by neighboring Nepal, Sri Lanka, Bangladesh, and Pakistan.
According to the World Bank, South Asia remains one of the least economically integrated regions globally.
South Asia is also among the least physically integrated (by road infrastructure) regions of the world, and this disconnect directly affects our politics and culture.
‘South Asia,’ then, is far from a neutral term embracing multiple cultures. It is, at best, a placeholder for structured geopolitical cooperation in the subcontinent. In socio-cultural terms, however, using ‘South Asia’ interchangeably with India signals India’s dominance over her neighborhood, while in India’s own eyes it dilutes her rising aspirations on the world stage. These facts widen the gap between American intentions (of the general public and, particularly, of second-generation Indian-Americans) and a prouder India’s growing ambitions.
Besides, it is worth mentioning that women have already held the highest public office in Pakistan, India, Sri Lanka, Bangladesh, and elsewhere in the region. And as you see in this video, the Indian international actress Priyanka Chopra tries her best to be diplomatic about this nebulous ‘South Asian’ pride, but she returns to the more solid identity, her Indian identity. The next time a Nepalese-American does something incredible in the US and you want to find out how another Nepali feels about the achievement, try an experiment: refer to the accomplishment as Nepali pride instead of South Asian pride and see the delight on the person’s face. Repeat this with another Nepali, but this time use the ‘South Asian’ tag, and note the contrast in the reaction.