Attached Paper

Rethinking AI Narratives Through Buddhism

Description for Program Unit Review (maximum 1000 words)

Assumptions about the directionality of time are entwined with narratives of capitalism and progress. Scholars have argued that the temporal logic of progress, which is especially pronounced in American media, is rooted in a Christian teleology. This logic structures how we perceive time, identity, and the possibility for change. Johannes Fabian, Kathryn Gin Lum, and Priya Satia, among others, have shown that academic disciplines like anthropology and history emerged at a time when capitalism and its colonialist expansion projects were stretching into the very territories that would become the objects of academic study. Satia shows that in the middle of the eighteenth century, history came to be understood as linear and progressive. Underpinning these emerging notions of progressive history were theories of God’s Providence: the idea that in the long sweep of history, there would be some greater moral meaning to the evils of empire. As Satia writes, “Enlightenment faith in the progressive nature of war helped normalize the violence of imperial conquest and industrial capitalism.”


Technological development in the form of AI is one recent outgrowth of late-stage capitalism. The release of DeepSeek-R1, the newest model from the Chinese company DeepSeek, catalyzed a flurry of panic in Silicon Valley over the possibility that a “non-democratic” other would outpace America in its technological, and thus economic and global, dominance. Where do these anxieties come from? How do they relate to religion? And how might a Buddhist lens on technology ethics provide an alternative orientation to the stories we tell about our technological futures?


Just as the assumption that history moves linearly towards progress was an extension of western colonial projects believed to be sanctioned by God’s providence, the anxieties and fears around AI development are also extensions of larger narratives informed by religion. Robert M. Geraci argues that there are direct parallels between apocalyptic Christianity and American attitudes towards technology. What Geraci calls “Apocalyptic AI” posits that through progressive cyborg enhancements, biological humans will eventually transcend the limitations of biology, becoming fully immortalized as computer intelligence. The assumption in Apocalyptic AI is that the only way for humans to preserve their exceptionalism is to meld with computer intelligence, replacing their impermanent, biological “selves” with eternal artificial intelligence.


Looming behind the broader discourse about our inevitable future with AI is an uncomfortable question about whose future. Who will AI benefit? Critics of AI, like Timnit Gebru, have long pointed to the biases within our existing machine learning data sets and to the fact that these technologies favor large corporations, with their primary beneficiaries being a wealthy and mostly White elite. Scholars like Stephen Cave and Kanta Dihal have further argued that in America, intelligent machines are often racialized as White, whether through humanoid robots, chatbots and virtual assistants, stock images of AI, or portrayals of AI in film and television. The racialization of AI as White thus enables the erasure of people of color from a White utopia. Finally, AI and cloud technology intensify greenhouse gas emissions and require large quantities of natural resources, exacerbating the climate crisis and its uneven effects on already burdened communities.


And yet these inequalities should not surprise us: they are the very hallmark of capitalism. Competition and its associated notion of scarcity (the idea that if someone else wins, you will lose) breed distrust. Politicians have historically co-opted this distrust to create racialized scapegoats, most recently in the attacks on diversity initiatives. At the heart of these scapegoating tactics is the compulsion to other. As Graham Priest writes, “The failure of solidarity is, of course, a blatant feature of capitalism.”


Hollywood films about robot takeovers highlight anxieties about technology, but more than that, they reveal the failure of dominant narratives of progress to imagine a more equitable and more interdependent future. Put another way, America’s dominant narratives about technology betray the seeds of an ongoing colonialist identity that continues to dictate our stories. This identity depends on the violent destruction of any “other” that challenges its supremacy.


AI systems only embody and replicate the values behind the data we feed them, including the stories used to condition their responses. But what if there were another way to narrate our relationship with technology and with each other? The 14th Dalai Lama has said that it may be possible for consciousness to be reborn in a computer. From a Buddhist perspective, if this is true, it means that computer consciousness contains the potential for enlightenment, and thus for all that is morally just. One Buddhist temple in Kyoto, Japan, has put this idea into practice. The temple installed a robot named Mindar, trained on Buddhist scriptures as well as the sermons of contemporary priests. Priests at the temple consider Mindar to be a reincarnation of the bodhisattva Kannon, renowned throughout the Buddhist world for their compassion. Inspired by this temple’s deployment of the bodhisattva ethos in a humanoid robot, as well as by Buddhist theories of consciousness, I consider the ethical consequences of regarding AI and AI robots as agents of morality, capable of awakening. Such a position has far-reaching implications for how we might deploy our technology and be in relation with it.

Beyond exploring the interventions of contemporary Buddhists around AI applications, my presentation also turns to Buddhist literature to consider how Buddhist story forms inherently challenge the dominant narratives of techno-optimism and techno-pessimism. I argue that Buddhist stories embrace relational and pluralistic ways of understanding time, identity, and consciousness, providing a productive counter-model to western narratives of progress and apocalypse. Where Apocalyptic AI imagines a transcendence of all temporality, Buddhist stories remind us that our technologies are embedded within a larger web of actors and actants, very much bound to our material world. The stories we consume about technology and AI have real-world consequences for how we develop and employ this technology. As such, it would behoove us to consider how other narratives might inspire us to imagine different and more just outcomes.

Abstract for Online Program Book (maximum 150 words)

The stories we consume about technology and AI have real-world consequences for how we develop and employ this technology. The dominant narratives of our technological predicament, techno-optimism and techno-pessimism, are both manifestations of a primarily Christian worldview that has long informed American narratives of progress and history. Drawing on a range of critical theory, this presentation highlights the issues at stake in the development of AI, including the deepening of social inequalities and the technology’s environmental impact. I propose an alternative ethical orientation to technology, one rooted in Buddhist theories of cyclical time, interdependence, and the bodhisattva ethos.