Exemplary Compassionate AI for Palliative Dementia Care

Meeting Preference

In-Person November Meeting

Only Submit to my Preferred Meeting

Artificial Intelligence (AI) is becoming increasingly pervasive across many global societies with healthcare often at the leading edge. However, the incentives for technical innovation and financial gain driving efficient AI healthcare automation can interfere with patient care and increase health inequity. Focusing on developing compassionate AI reorients AI development to improve patient care, health outcomes, and well-being. Although many technology creators would value these benefits, they rarely have the resources, skills, and opportunities to fulfill those aspirations. Religious literacy and wisdom about compassion can inform healthcare, psychology, and AI research; and synthesizing and distilling that knowledge in an AI-oriented framework can inform and facilitate the construction of compassionate AI as well as illuminate new religious insights from a novel perspective.

Palliative dementia care by AI raises many issues around memory, identity, suffering, end of life, and dying well that are not only significant for world religions and religious scholarship but also core to the intertwined religious and cultural values informing secular societies. Examining compassion from a religious perspective is thus an important precursor to responsible individualized patient care by AI (or healthcare robots). Characterizing exemplary AI palliative care of patients with dementia can feed both the imagination and the training data of AI healthcare developers. By closely examining how AI could compassionately care for dementia patients, religious scholars gain a fresh perspective on central religious concepts like compassion, care, and suffering in a challenging context as well as constructively and beneficially contribute to the rapid and important social change surrounding AI and related technologies.

In this paper, I examine three religious and ethical concerns that arise in the study of exemplary compassionate care of those with dementia by AI and robotic systems: the value of exemplary compassion by AI instead of typical, human-level compassion; the nature and ethics of human relationship with compassionate AI; and the implications for caregiver stress and burnout, especially in the context of aggressive personality change in dementia. In the full paper, I describe a working example around “Ann,” who has mid-stage dementia, diabetes, and cardiovascular disease, and receives care from a visiting nurse during the day, her son, and a recently installed AI assistant that combines smart home functionality, a wearable, and two robotic devices.

Exemplary Compassion. AI could be designed and trained to behave as a typical person might, but adopting the goal of exemplary compassion, i.e., that which others would generally find admirable, clarifies compassion within a broader spectrum of virtue and moral behavior and simplifies the development of compassionate AI (Walker 2013; Zagzebski 2013). Although the virtues exist interdependently within humans, the independent specification of exemplary compassion is an important precursor to the integration of virtues in a larger AI system that would act prudently while taking compassion and other moral goals, like justice and honesty, into account. For example, “Tony” rescued hundreds of Jewish people in the Netherlands from the Holocaust in WWII, while hiding from the Nazis who had sentenced him to death; he then lived in Malibu, CA after the war, where he rescued eighteen people from drowning in the surf (Monroe 2012, 60). Although one may prudently not want AI to risk the lives of others to act compassionately, I argue AI should have a highly developed disposition toward compassion that is limited only by prudence and other virtues. In the working example, consider the situation where Ann is baking a birthday cake for her husband, forgetting that he is deceased. Although many responses would be possible for an AI (or human) caregiver, training the AI to respond with exemplary compassion would not only prevent harm (e.g., from a diabetic person baking a cake) but could also increase well-being (e.g., by steering the conversation toward remembrance).

Human-AI Relationship. Creating and deploying healthcare robots or AI within a vulnerable population raises many ethical issues, including loss of privacy, restricted autonomy, and relational harm (Turkle et al. 2006; Sparrow and Sparrow 2006; Sharkey and Sharkey 2012). With adequate safeguards and value-sensitive design, the deployment of eldercare AI can respect privacy and facilitate autonomy through longer independent living, but the close, daily interaction between compassionate AI and the one receiving care increases the risk of relational harm, given the human tendency to anthropomorphize and form attachments. Many arguments against empathetic AI would be addressed by compassionate AI’s acting to alleviate suffering (rather than just identifying emotion) or its awareness of the universality of suffering (Strauss et al. 2016; Morrow et al. 2022; Graves et al. 2024), but underlying some of those arguments is a claim that AI is somehow different enough from humans that it cannot enter into “real” relationship with people. A similar claim is made in Encountering AI (Culture 2024) and informed by a relational interpretation of imago Dei (Lumbreras 2023; Dorobantu 2023; Herzfeld 2023). In the full paper, that claim is evaluated based upon its anthropological assumptions and put in dialogue with Coeckelbergh (2021), who identifies an ethical “risk of anthropocentric bias” in making claims about what is genuine about robots and a dependence upon a “political epistemology” of robotics concerning who gets to decide upon reality for another person. In Ann’s reality, building a relationship with a caregiver AI needs to be compared with realistic alternatives, such as overcrowded group homes and overworked, burned-out human caregivers.

Caregiver stress. Taking care of someone with dementia can cause stress for both professional caregivers and close family members. Dementia can cause personality changes that result in aggressive behaviors, which add an additional burden to the relationship. Compassionate AI can help manage those situations, e.g., through exemplary nonviolent communication, both to alleviate pressure on the caregiver and to model helpful behavior at times when human caregivers would be hindered by the additional stress. Compassionate AI may remove opportunities for people to develop caregiving skills or patience, but it can also create space for people to develop those virtues further by alleviating suffering in people with dementia and their caregivers.

Abstract for Online Program Book (maximum 150 words)

Artificial Intelligence (AI) is becoming increasingly pervasive across many global societies with healthcare often at the leading edge. However, the incentives for technical innovation and financial gain driving efficient AI healthcare automation can interfere with patient care and increase health inequity. Focusing on developing compassionate AI reorients AI development to improve patient care, health outcomes, and well-being. Palliative dementia care by AI raises many issues around memory, identity, suffering, end of life, and dying well that are significant for world religions, religious scholarship, and the intertwined religious and cultural values informing secular societies. I examine three religious and ethical concerns in AI exemplary compassionate care of those with dementia: the value of exemplary compassion by AI instead of typical, human-level compassion; the nature and ethics of human relationship with compassionate AI; and the implications for caregiver stress and burnout, especially in the context of aggressive personality change in dementia.