Attached Paper, In-person, November Annual Meeting 2026

Aligning the Self: Affect, Ethics, and the Limits of Alignment Discourse in Artificial Intelligence

Abstract for Online Program Book (maximum 150 words)

In dominant AI debates, "alignment" names the problem of making models conform to human values. This paper inverts the question. Drawing on ethnographic research among a community of immigrant Iranian AI scientists in Silicon Valley, I ask how human values, affects, and ethical dispositions are themselves being aligned—and misaligned—through AI work and discourse. Through two ethnographic encounters, I show how the epistemic vocabulary of AI becomes a medium through which my interlocutors work on themselves, negotiating the affective remainders of authoritarian religiosity against the self-improvement imperatives of liberal ethical life. I argue that alignment discourse is not a neutral technical framework but a liberal technology of governance historically continuous with statistical population management, and that the affective unruliness my interlocutors exhibit marks precisely where its logic reaches its limits.