This session considers the implications of artificial intelligence for workers and class from the perspective of religion, theology, and ethics on two levels: labor broadly and academic labor specifically. Paper 1 demonstrates how techno-libertarian narratives portray AI and its attendant disruptions as socially salvific while obscuring class relations, including labor displacement. Paper 2 discusses how AI and automation shift tedious and dangerous work from humans on the periphery of society to machines on the periphery, then morally evaluates this shift in the event that AI becomes sentient or self-conscious. Paper 3 explores how contingent faculty, who constitute the majority of the academic labor force, often have no meaningful choice about the use of AI, particularly given the time inequalities that define contingent academic labor. Finally, Paper 4 presents a course activity in which students construct their own AI chatbot, helping them understand how AI technologies work as well as their limitations and drawbacks.
This paper examines how Silicon Valley techno-libertarianism functions as an AI religion that sacralizes technological innovation while marginalizing labor. Discourse about the benefits of AI and disruptive technologies, drawn from an analysis of 350 episodes of the All-In Podcast and the writings of Elon Musk, Peter Thiel, and Marc Andreessen, informed a survey administered to faculty and students at a Northern California university. Respondents were asked whether God, money, or techne (machines) functions as a primary source of meaning and social progress. The analysis shows that techno-libertarian narratives portray artificial intelligence and technological disruption as socially salvific while obscuring class relations, including labor displacement. Interpreted through Max Weber’s notion of disenchantment, this worldview operates as a secular moral framework that legitimizes wealth accumulation while rendering workers largely invisible within narratives of technological progress.
This paper draws upon Kant and Liberation Theology to argue that current paradigms of worker replacement through AI and automation harm humans and exploit the technology that replaces them. Because people’s sense of identity and worth is bound up with perceptions of the “American Dream,” income, and education, worker replacement devalues that identity and worth. At the same time, the machines that replace workers have their own labor instrumentalized. AI and automation shift tedious and dangerous work from humans on the periphery of society to machines on the periphery. Walzer’s position on the relationship between this kind of work and membership in society bears on the use of AI, especially if AI becomes sentient or self-conscious. The preferential option for the marginalized may apply first to displaced human workers, but it should also extend to machines with personhood, and these considerations should guide the integration of AI into work and the replacement of human labor.
This paper takes seriously the critiques of AI as a technology that degrades critical thinking, accelerates environmental destruction, and operates within a capitalist logic of extraction and efficiency. Yet it argues that calls to boycott or refuse AI in academic settings risk reproducing the very inequities they claim to resist. Drawing on the 2026 EDUCAUSE report, the AAUP's 2025 report on AI and academic professions, and Elizabeth Losh's analysis in Critical AI, I demonstrate that contingent faculty, who make up roughly 70% of the national instructional workforce, face structural pressures that make AI adoption a matter of survival rather than choice. Time is the scarce resource shaping this divide. I situate this analysis in my work co-chairing the AAR's Academic Labor and Contingent Faculty Committee, arguing that a scholarly guild that skews tenure-line must speak to, and find solidarity beyond, its own membership in order to raise AI's visibility as an academic labor crisis.
For the past two years, students in my “Religion, SciFi, AI, and Non-Human” course have designed and constructed Large Language Model (LLM) chatbots as a semester-length scaffolded assignment meant to teach them fluency with the technology of LLMs and the humanistic critique of its use. The assignment culminates in the creation of “The Oracle,” a chatbot constructed and trained by my students. It has been highly successful as an experiential learning tool that helps students understand how AI technologies work, as well as their limitations and drawbacks. The Oracle project is particularly effective because it integrates the questions and tools of religious studies. This AAR paper describes the learning goals, scaffolded assignment design, and student experience of the Oracle, and demonstrates how faculty can use similar approaches to engage AI critically in the classroom from a humanistic perspective grounded in the tools of religious studies.
