Co-Sponsorship: In-Person November Annual Meeting 2026

CO-SPONSORSHIP: Ethics Unit and Political Theology Unit

Call for Proposals

Aligned with What? AI, Power, and Technology Governance

A key buzzword in AI ethics is “alignment,” especially in the wake of Brian Christian’s 2020 book The Alignment Problem. The idea is that any artificial intelligence should be aligned with human values. That’s an admirable (if vague) goal, but as OpenAI admits, “Aligning AI systems with human values also poses a range of other significant sociotechnical challenges, such as deciding to whom these systems should be aligned.” AI cannot be aligned with human values as such, but only with particular humans’ values, and presumably with the values held by a powerful minority. We invite proposals in political theology or religious ethics (both broadly understood) on the question of technology governance and the ways developments in AI research and usage interact with existing power dynamics and differing values. Possible directions include the following:

- What theological resources might help us grasp the operation of power and mode of governance anticipated by the widespread use of AI and LLMs?
- How can humanism (in its classical or contemporary human-rights forms) or posthumanism (particularly in its eco- or materialist forms) clarify the ethics of technology governance?
- How might our fields help us articulate the human over and against these models, both in response to the question of “alignment” and to the question of the kind of “human” guidance or oversight that AI/LLMs continue to require?

Chairs:
An Yountae, anyount@gmail.com
Michelle Sanchez, msanchez@hds.harvard.edu
Nichole Flores, University of Virginia, nichole.flores@virginia…
Russell Johnson, rpjohnson@uchicago.edu
Review Process: Participant names are anonymous to chairs and steering committee members until after final acceptance or rejection decisions.