Papers Session In-person November Annual Meeting 2026

Aligned with What? AI, Power, and Technology Governance

Abstract for Online Program Book (maximum 150 words)

It is often stated that AI technology should be aligned with human values, but as OpenAI admits, “Aligning AI systems with human values also poses a range of other significant sociotechnical challenges, such as deciding to whom these systems should be aligned.” AI cannot be aligned with human values in general, but only with particular humans’ values, presumably those held by a powerful minority. This panel considers the ways developments in AI research and usage interact with existing power dynamics. What theological and ethical resources might help us grasp the operation of power and the mode of governance anticipated by the widespread use of AI? How might our fields help us articulate the human over and against these models, both in response to the question of "alignment" and to the question of the kind of "human" guidance or oversight that LLMs continue to require?

Papers

Recent work in Christian ethics, including that of Paul Scherz, has explicitly linked questions of AI governance to the Christian doctrine of providence. While this connection is apt, there is far more to be explored in AI’s providential parallel than questions of governance and prudential action. Employing the tripartite structure of providence borrowed from the Reformed scholastic tradition (preservation, concurrence, and governance), I argue that the first two aspects are just as crucial to understanding consumers’ uses of LLMs. A close reading of AI marketing reveals the urge to make use of AI to fulfill the same narrative-structuring aims that have been part of Christian talk of providence since Augustine’s Confessions. A Christian virtue ethical response, then, must contend not only with prudence but also with courage, accounting for the narrative, affective, and relational aspects of the AI age.

In dominant AI debates, "alignment" names the problem of making models conform to human values. This paper inverts the question. Drawing on ethnographic research among a community of immigrant Iranian AI scientists in Silicon Valley, I ask how human values, affects, and ethical dispositions are themselves being aligned—and misaligned—through AI work and discourse. Through two ethnographic encounters, this paper shows how the epistemic vocabulary of AI becomes a medium through which my interlocutors work on themselves, negotiating the affective remainders of authoritarian religiosity against the self-improvement imperatives of liberal ethical life. I argue that alignment discourse is not a neutral technical framework but a liberal technology of governance historically continuous with statistical population management, and that the affective unruliness my interlocutors exhibit marks precisely where its logic reaches its limits.

Many applications of artificial intelligence (AI) rely on a technocratic ideology that breeds an “obsession… to increase human power beyond anything imaginable, before which nonhuman reality is a mere resource at its disposal.” This paper argues that Western technologists exercise significant power through algorithms that reshape human desire, material opportunities, and moral character. I draw on a variety of Christian theological resources to argue that algorithmic power reshapes political life according to an ethic of control that caricatures God’s omniscience and omnipotence, in turn enabling the oppression of marginalized communities while diminishing democratic accountability for algorithmic harms. Yet Christian political theologies that (1) emphasize personal and social responsibility for harmful moral actions and (2) elevate the possibilities of local forms of political action may provide fertile ground for re-imagining algorithmic governance.

When tech companies describe their products as in “alignment with human values,” this should raise an immediate red flag. I argue that AI alignment discourse reproduces the same perfectionistic, technocratic, and implicitly eugenic logic underwriting Silicon Valley’s multitudinous transhumanist endeavors. Behind the veil of human values hides an insidious mythos treasured by a powerful technocratic minority. Encoding these particular human values will intensify existing structural inequalities and reify the (im)morality of the technocratic elite. This paper investigates the motivations behind Silicon Valley’s definition of the human and asks which human futures will be prioritized as a result. I outline the current discourses within Silicon Valley, which already prioritize an idealized human made in the image of the technocrat, leaving the disenfranchised even more vulnerable.

Tags
#Artificial Intelligence
#algorithms
#algorithmic power
#AI
#power
#political theology
#Catholic Social Thought
#CST