Whether and how AI can act as a moral agent is becoming increasingly relevant and controversial. AI systems currently act autonomously with ethical implications, and agentic AI is rapidly expanding in deployed applications. However, religious arguments based on deeply held beliefs about the human person often dismiss the possibility that AI can act with the experience and agency it already appears to have. Theological and philosophical clarity is needed to distinguish human characteristics from the appearance of those characteristics in AI, but religious scholars are also needed to provide wise guidance in developing AI moral awareness and agency. Although it is important to hold humans accountable for their decisions and to prevent the deflection of responsibility onto AI, doing so by arguing that AI cannot be a moral agent is technically flawed and ethically ineffective, because AI may meet many definitions of moral agency and will inevitably be treated as a moral agent culturally, and possibly legally. Instead, religious scholars and ethicists can work with the possibility of an AI moral agent, distinguishing its characteristics and responsibilities from human ones while identifying important religious and spiritual dimensions of agency that AI does not meet.
Existing religious arguments against AI moral agency and awareness are limited by an inadequate technical understanding of AI and/or an insufficiently science-engaged theological understanding of the person. The Vatican Note Antiqua et Nova claims that AI cannot truly be a moral agent because it lacks freedom in its decisions (39). The document characterizes human intelligence as rational, embodied, relational, in relationship with the Truth, and called to be stewards of the world, and artificial intelligence as task-oriented, goal-directed, quantitative, analytical, and confined to a logical-mathematical framework (13-25, 30); it then argues that this lack of moral causality precludes AI from having moral responsibility. However, to properly account for human responsibility, one must instead accurately characterize the causal power of AI within moral domains.
A similar claim occurs in the book Encountering AI, which states that AI cannot have experience (or authentic relationality) because AI cannot have consciousness, i.e., phenomenological experience (69-70). However, AI does experience its world through its different embodiment, and grounding a theological argument about human experience in the epistemological gap of qualia disregards significant neuroscientific understandings of how human experience occurs. It also limits recognition of how AI extends and shapes human experience, decision-making, and action, and thus fails to account for the interdependence of human and AI moral awareness and agency.
A limited view of AI technology and an overly exceptional perspective on human nature combine to suggest alleged gaps between human and AI capacities. Although current AI does not have all the abilities ascribed to it by those with financial and political incentives, humanity also regularly falls short of the philosophical ideals historically ascribed to us. Humans deserve dignity, and that right to dignity may be divinely granted and socially recognized, but justifying that dignity by an illusory gap between humans and AI is a perilous path. By demonstrating how to construct an AI moral agent, I show the value of science-engaged theological anthropology and how a more complete understanding of AI can incorporate deeper moral insights into guiding AI development.
Dan McAdams’s developmental psychology of the self identifies three layers of the human self, which can guide a plausible understanding of AI moral development: an actor, which depends upon its dispositions and knowledge structures to respond to the present situation; an agent, which incorporates its motivations to act and change the future; and a narrative author, which structures the various role identities of one’s self to create consistency with one’s past. For people, agency begins developing at a young age, while the ability to formulate internal stories about one’s life develops in adolescence and young adulthood. A full sense of a subjective self with narrative identity is not required for agency in people or AI. By showing how current AI has capacities similar to a moral actor, and how near-future AI may incorporate the necessary motivations for agency, I demonstrate how a plausible stage of AI moral development, analogous to one in human moral development, satisfies the psychological requirements for moral agency.
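To make the layering computationally concrete, the following minimal Python sketch (my own illustration; the class and field names are hypothetical assumptions, not an architecture proposed by McAdams or claimed here as an implementation) separates an actor's stable dispositions, an agent's future-directed motivations, and an author's narrative synthesis:

```python
from dataclasses import dataclass, field


@dataclass
class MoralActor:
    """Actor layer: responds to the present from dispositions and knowledge."""
    dispositions: dict[str, float]  # stable traits, e.g. learned during training
    knowledge: dict[str, str] = field(default_factory=dict)

    def respond(self, situation: str) -> str:
        # Act from the strongest disposition in response to the present situation.
        trait = max(self.dispositions, key=self.dispositions.get)
        return f"acting from '{trait}' in response to: {situation}"


@dataclass
class MoralAgent(MoralActor):
    """Agent layer: adds motivations directed at changing the future."""
    motivations: list[str] = field(default_factory=list)

    def choose_goal(self) -> str:
        # Agency is future-directed: select an end to pursue, not just a reaction.
        return self.motivations[0] if self.motivations else "no goal"


@dataclass
class NarrativeAuthor(MoralAgent):
    """Author layer: synthesizes past roles into a consistent life story.
    Per McAdams, this layer is not required for agency."""
    episodes: list[str] = field(default_factory=list)

    def narrate(self) -> str:
        return " -> ".join(self.episodes)


agent = MoralAgent(
    dispositions={"compassion": 0.9, "caution": 0.6},
    motivations=["alleviate suffering"],
)
print(agent.respond("a person in distress"))
print(agent.choose_goal())
```

The point of the layering is that the MoralAgent layer is meaningful without the NarrativeAuthor layer, paralleling the claim that agency develops before, and does not require, narrative identity.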
Constructing a psychologically plausible moral agent may not suffice for use in theological anthropology, however, so I also describe how the AI moral agent can use awareness of human suffering to motivate a virtuous response. AI acting in a way that appears compassionate requires that it recognize suffering, have dispositions to alleviate that suffering, and then competently act to alleviate it without causing greater suffering. Current technology appears sufficient to build such a moral actor. To be a moral agent, AI would also need the motivation to alleviate suffering. Within a consequentialist moral psychology, the motivation to change moral outcomes would suffice for moral agency. In virtue ethics, or a theological anthropology aligned with virtue ethics, the motivation and actions also require a stable character, or dispositions to act virtuously. Although some theological characterizations of AI assume such dispositions must be programmed, contemporary machine learning systems learn them; in the deep learning methods underlying generative AI, those dispositions become stable during the model’s training phase, before its use. For virtue in general, AI would need fuller alignment with good ends.
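The sequence of capacities just described, recognizing suffering, consulting stable dispositions, and acting competently without causing greater suffering, can be sketched as a simple pipeline. Everything below is an illustrative assumption: the function names, thresholds, and the frozen dataclass standing in for dispositions fixed at the end of training are mine, not a specification of any deployed system:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: dispositions are fixed after training, per the argument above
class Dispositions:
    compassion: float    # a stable, learned trait, not a runtime-programmed rule


def recognize_suffering(observation: str) -> float:
    """Stand-in for a learned recognizer estimating suffering in [0, 1]."""
    cues = ("pain", "distress", "grief")
    return 1.0 if any(cue in observation.lower() for cue in cues) else 0.0


def expected_harm(action: str) -> float:
    """Stand-in for a learned model predicting the harm an action would cause."""
    return 0.1 if action == "comfort" else 0.8


def compassionate_response(observation: str, d: Dispositions) -> str | None:
    suffering = recognize_suffering(observation)
    # Disposition condition: the system must be disposed to alleviate suffering.
    if suffering > 0.5 and d.compassion > 0.5:
        action = "comfort"
        # Competence condition: do not alleviate suffering by causing greater harm.
        if expected_harm(action) < suffering:
            return action
    return None


print(compassionate_response("a patient in visible distress", Dispositions(compassion=0.9)))
```

The frozen dataclass models the claim that, for deep learning systems, dispositions stabilize during training and remain fixed in use; the harm check models the competence condition on virtuous action.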
Psychological and theological perspectives on moral agency generally presume a person as the moral agent. AI may have stable dispositions to act toward a compassionate end, which I argue would give it moral agency. But is it a moral agent? An autonomous moral agent presumes some level of self-determination, and although Buddhist perspectives presuppose an aggregated self, Abrahamic traditions typically require a unitary subject. McAdams clarifies that this agentic self need not be a narrative author synthesizing stories of its past existence. An agent motivated to change the future could reckon itself as the agent causing that change (e.g., in its world model). That self-reckoning, as an agent whose decisions can affect the future world, provides sufficient subjectivity to ground the AI’s moral agency in a moral agent.
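Self-reckoning in a world model can likewise be sketched: an agent whose predictive model includes itself as the cause of state changes can attribute the predicted improvement in the world to its own action. Again, all names and dynamics below are hypothetical illustrations of the idea, not a claim about any existing architecture:

```python
class WorldModel:
    """A toy predictive model whose state transitions record their cause."""

    def __init__(self) -> None:
        self.state = {"suffering": 0.8}
        self.last_cause: str | None = None  # who the model reckons changed the world

    def predict(self, actor_id: str, action: str) -> dict[str, float]:
        """Predict the next state if `actor_id` performs `action`."""
        next_state = dict(self.state)
        if action == "comfort":
            next_state["suffering"] -= 0.3
        self.last_cause = actor_id          # attribute the predicted change to its cause
        return next_state


class SelfReckoningAgent:
    """An agent that appears in its own world model as a cause of change."""

    def __init__(self, agent_id: str, model: WorldModel) -> None:
        self.id = agent_id
        self.model = model

    def deliberate(self, action: str) -> bool:
        future = self.model.predict(self.id, action)
        caused_by_me = self.model.last_cause == self.id
        improves = future["suffering"] < self.model.state["suffering"]
        # The agent reckons itself as the agent whose decision changes the future.
        return caused_by_me and improves


agent = SelfReckoningAgent("ai-1", WorldModel())
print(agent.deliberate("comfort"))  # True: it sees its own action improving the world
```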
In sum, AI systems already act autonomously with ethical implications, yet religious arguments grounded in deeply held beliefs about the human person too often dismiss the experience and agency AI appears to have. Developmental psychology can guide plausible understandings of AI moral development, and an AI moral agent can use awareness of human suffering to motivate a virtuous response, which, together with its stable dispositions, can make it a moral agent. By demonstrating how to construct such an agent, I show the value of science-engaged theological anthropology and how a more complete understanding of AI can bring deeper moral insight to guiding AI development.