In-person November Annual Meeting 2026
Papers Session: Experiments in Artificial Intelligence
Co‑Intelligence and the Ethics of Alignment: An Interreligious and Comparative Theological Framework for Safe and Sustainable Human–Synthetic Cooperation
Attached Paper

Abstract for Online Program Book (maximum 150 words)

This paper introduces the InterSapience Project, a theological and interreligious proposal for rethinking AI alignment as a moral and relational problem rather than a merely technical one. It advances the notion of “co‑intelligence” (the integrated dynamics of collective, collaborative, and cooperative intelligence) as a normative framework for human–synthetic cooperation. Drawing on process thought, comparative theology, and interfaith ethical resources, the study reframes alignment as relational resonance and mutual transformation. It argues that the crucible of AGI/SSI demands practices of humility, dialogical encounter, and structural justice that theological traditions already model. Engaging AI scholarship (Bostrom, Russell, Tegmark) alongside theological voices (Cobb, Neville, Panikkar, Levinas, Teilhard de Chardin), the paper proposes governance practices grounded in transparency, covenantal accountability, and inclusive deliberation. The result is a theologically informed agenda for policy and design that treats synthetic intelligences as potential participants in shared moral learning while protecting human dignity and ecological flourishing.
