Choreographing Shadows: Interdisciplinary Collaboration to Orchestrate Ethical AI Image-Making

Meeting Preference

Online June Meeting

Only Submit to my Preferred Meeting

Choreographing Shadows is a co-written paper first published in October 2023 in the new peer-reviewed journal “Traditions/Innovations in Arts, Design, and Media Higher Education.” This collaboration between an artist and a scholar of religious studies explores the intricacies of AI image-making as applied to depicting sacred imagery and the religious transcendent, especially within the Catholic tradition. Since the paper’s publication, the project has continued to develop, and new insights will be offered in our presentation.

Building on the scholarship of artists and media researchers like Hito Steyerl and Eryk Salvaggio, we compare AI art’s usage as a diagnostic tool for deciphering internet biases to the religious studies method of redaction criticism. When producing an AI image, we are creating a composite image based on training sets. The image isn’t “new” or “original” so much as it mirrors content pulled from the archive of human history. However, these data sets largely favor images produced on the internet, digital images, and copyrighted materials, which introduces numerous ethical problems. The production of an AI image therefore requires analysis to determine its sources and biases. When applied to sacred imagery, AI image-making programs can serve as diagnostic tools for redaction, revealing how the internet “understands” a specific topic. In our paper, we discuss St. Francis of Assisi’s encounter with a seraph, the historical depictions of this event, and how AI tools understand the concept of “Seraphim.”

We are also in a period of speculation about AI’s capacity to become an autonomous form of consciousness, which prompts the question, “What hand is creating an AI image?” We relate this question to the tradition of acheiropoieta: sacred images such as the Our Lady of Guadalupe tilma and the Shroud of Turin, which are allegedly “made without human hands.” By producing sacred imagery using AI tools, we hope to show how technology is being used as a mediator for accessing the Divine. Scholars like Daniel Wojcik and Paolo Apolito have discussed the role of technology, especially photography and the internet, in mediating access to the Virgin Mary. Building on their research, we propose a methodology for producing AI images of “imagined realities” seen at Marian Apparition sites: images for which we have credible eyewitness testimony, but for which no photographic evidence exists.

We compare and contrast the ethics of this methodology with that of “deepfake” images: photorealistic images often used to spread misinformation, disinformation, and propaganda. Many AI image-making tools are curated to favor aesthetically pleasing representations sourced from commercial imagery. In analyzing this phenomenon, we relate it to the long-standing impulse of artists to alter primary source material in religious history to serve theological or aesthetic agendas. The major ongoing questions of this study are: What do AI images get wrong, and why does this happen? Can AI images be an effective tool for representing the sacred?

If selected to present in the online format, I intend to produce a short video presentation highlighting the animation, video, and installation artworks produced as a result of this study. This video backdrop will accompany a spoken-word summary of our paper and ongoing research.

Abstract for Online Program Book (maximum 150 words)

Using a collaboration between an artist and a scholar of religious studies as a case study, the ongoing “Noo Icons” media arts project explores how AI image-making tools are well suited to examining the visual history of the religious transcendent. Building on the scholarship of Hito Steyerl and Eryk Salvaggio, AI art’s usage as a diagnostic tool for deciphering internet biases is compared to the religious studies method of redaction criticism. This article explores ways in which the training-set data of AI image-making programs can be refined to produce more accurate composite images, as well as the potential for these tools to serve as visual aids in the creation of “imagined realities”: images for which we have credible eyewitness testimony, but for which no photographic evidence exists. The ethics of AI image-making is central to the methodology advanced in this interdisciplinary work.

Authors