Attached Paper In-person November Annual Meeting 2025

Human-to-AI Interaction in Faith Contexts: Analyzing Conversations with 'Ask Cathy'

Description for Program Unit Review (maximum 1000 words)

As educators and community leaders leverage the power of generative AI to create rich, interactive activities to teach and form participants, there is a need to understand the character and dialogical mechanics of human-to-AI conversations as distinct from human-to-human ones. Religious conversations, rich with existential questions of faith, matters of fact, and complex interactions, offer a special category in which to explore human-to-AI dialogue. A research team consisting of a Principal Investigator, a Researcher, a Research Assistant, a Linguist, and three Computer Science students was assembled to analyze more than 85,000 messages across more than 10,000 conversations with the religious chatbot “Ask Cathy.”

The "Ask Cathy" chatbot was developed to answer questions about the Episcopal Church by utilizing a comprehensive collection of provided source documents. Employing Retrieval-Augmented Generation (RAG), Cathy searches its resource library for relevant context before responding, resulting in accurate, domain-specific, and verifiable answers. The extensive volume of conversations available enabled the team to pioneer new methods for understanding human-to-AI interactions. This paper discusses these methods, their validation, and emerging insights into human-to-AI interaction and human agency, concluding with preliminary implications for designing and applying chatbots in education and faith formation—offering valuable perspectives for educators, education technologists, and faith leaders. 

Background 

Human-to-technology conversations are not new. Perhaps the first such dialogue occurred when our Australopithecus ancestors became frustrated or delighted by the operation of a sharp rock and told it so 3.4 million years ago. At the dawn of digital computing, pioneers like Alan Turing predicted “chat” conversations in which a machine could convincingly mimic a human. When Joseph Weizenbaum at MIT created the ELIZA computer program in the mid-1960s, he was surprised by how readily his “fake” therapist elicited real emotional responses. It demonstrated the “illusion of intelligence” in conversational systems and the potential for humans to form asymmetrical attachments: the “ELIZA Effect.” 

The concept of parasocial relationships explains the attachment individuals feel in one-sided interactions. Even when aware that a favored celebrity is unaware of their existence, individuals experience an "illusion of understanding." Similarly, the chatbot "Cathy," while bearing a human name, does not claim to be human. For instance, when asked about its favorite hymn, Cathy might respond, "I'm an AI without personal preferences, but I know 'Amazing Grace' and 'Be Thou My Vision' are popular among Episcopalians." Despite such clarifications discouraging anthropomorphism, users often treat the chatbot as a social entity, displaying politeness (e.g., "Thank you for explaining that") or hostility (e.g., "You are corrupting people by spreading your woke church lies!"). 

The social nature of these interactions enables the team to explore and develop innovative applications of the Computers Are Social Actors (CASA) paradigm. Initially proposed by Byron Reeves and Clifford Nass at Stanford University in the 1990s, CASA posits that humans instinctively treat computers as social entities, applying social norms such as politeness, etiquette, or even belligerence during interactions. Notably, previous studies have identified critical differences, including a higher propensity for users to disclose private information to computers. This reduced social inhibition is attributed to factors like anonymity and the absence of social sanctions from chatbots when disclosures are embarrassing or taboo. 

Recent studies have demonstrated AI chatbots' surprising ability to dissuade users from conspiracy theories. For instance, Thomas Costello and colleagues found that an AI chatbot could reduce belief in conspiracy theories by approximately 20%, with effects lasting up to two months. They attributed this success solely to reasoning-based tactics, not manipulative strategies. Thus, equipping chatbots with carefully curated factual collections appears to be an effective strategy for changing "hearts and minds" in religious or educational contexts. However, the mechanics of persuasion, and particularly "learning," in this emerging medium are not yet well understood. This paper aims to extend our understanding through careful conversational analysis. 

Methodology 

The research team developed tools to batch process the chat logs across several different groupings (a minimal grouping sketch follows the list below): 

  • Human-submitted prompts only 

  • AI responses to humans only 

  • AI responses to humans, paired with the preceding human prompt 

  • Human responses to AI, paired with the preceding AI message 

  • Whole conversation threads 
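
The sketch below illustrates this grouping step. It assumes a hypothetical message schema in which each logged message carries a conversation ID, a role ("human" or "ai"), and text; the field names are illustrative rather than the project's actual log format.

```python
# Group logged messages into the analysis sets listed above.
# The message schema (conversation_id, role, text) is assumed for illustration only.
from collections import defaultdict

def build_groupings(messages: list[dict]) -> dict:
    """Partition chat-log messages into the five groupings used for batch analysis."""
    threads = defaultdict(list)
    for m in messages:
        threads[m["conversation_id"]].append(m)

    human_ai_pairs, ai_human_pairs = [], []
    for turns in threads.values():
        for prev, curr in zip(turns, turns[1:]):
            if prev["role"] == "human" and curr["role"] == "ai":
                human_ai_pairs.append((prev["text"], curr["text"]))
            elif prev["role"] == "ai" and curr["role"] == "human":
                ai_human_pairs.append((prev["text"], curr["text"]))

    return {
        "human_prompts_only": [m["text"] for m in messages if m["role"] == "human"],
        "ai_responses_only": [m["text"] for m in messages if m["role"] == "ai"],
        "ai_response_to_human_pairs": human_ai_pairs,
        "human_response_to_ai_pairs": ai_human_pairs,
        "whole_threads": dict(threads),
    }
```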

Against these data sets, the team pursued several investigative lines, including: 

  • Topic analysis using multiple models, with results compared across models. In each case, the team experimented with multiple “rubrics” for classification to discover which prompting was most effective. 

  • Human baseline (a statistically significant, randomized sample of messages was hand-coded by humans to create a baseline against which the following automated coders were compared): 

    • VADER 

    • Claude 

    • ChatGPT 

    • Gemini 

  • Sentiment analysis, again using different “rubrics” and comparing AI output against the human baseline to validate the AI characterizations (a scoring sketch appears after this list) 

  • Evidence of “learning”: the team worked to create evidential criteria for determining the pedagogical effects of a conversation (a marker-detection sketch also follows this list). Some of these criteria include: 

    • Discourse Markers: "oh," "I see," "I didn't know that," "I understand now," "well," and "you know." 

    • Repair Sequences: Instances where speakers correct themselves or others to resolve misunderstandings or errors. 

    • Elaborative Responses: Detailed explanations, examples, or expansions on a topic suggest deeper processing and assimilation of information. 

    • Questioning Patterns: Clarification questions (e.g., "What do you mean by...?") and probing questions (e.g., "Can you elaborate on...?") indicate active engagement and a desire to understand. 

    • Alignment and Agreement: Expressions of agreement or alignment, such as "Exactly," "That's right," or mirroring language, suggest shared understanding and reinforcement of learned concepts. 

    • Use of Technical or New Vocabulary: Incorporation of newly introduced terms or jargon into the conversation demonstrates assimilation of new knowledge. 

    • Reflective Statements: Phrases like "I used to think... but now I realize..." or "This reminds me of..." indicate self-awareness and the integration of new insights with prior knowledge. 
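
As referenced in the sentiment-analysis item above, one way to validate automated characterizations against the human baseline is to score messages with VADER and compute inter-rater agreement. The sketch below assumes hypothetical hand-coded labels and uses VADER's conventional compound-score thresholds; the team's actual rubrics and coding scheme are not reproduced here.

```python
# Sketch: VADER sentiment labels compared against hand-coded labels via Cohen's kappa.
# The hand-coded labels and example messages are illustrative assumptions.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.metrics import cohen_kappa_score

def vader_label(text: str, analyzer: SentimentIntensityAnalyzer) -> str:
    """Map VADER's compound score to a coarse sentiment label."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

messages = [
    "Thank you for explaining that!",
    "You are corrupting people by spreading lies!",
    "What time is the service on Sunday?",
]
human_codes = ["positive", "negative", "neutral"]  # hypothetical hand-coded baseline

analyzer = SentimentIntensityAnalyzer()
vader_codes = [vader_label(m, analyzer) for m in messages]
print("Agreement (Cohen's kappa):", cohen_kappa_score(human_codes, vader_codes))
```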

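The discourse-marker, questioning, alignment, and reflective-statement criteria above lend themselves to simple pattern matching over human turns. The sketch below uses a small, illustrative subset of phrases (the team's full criteria are broader) to flag candidate evidence of learning in a single message.

```python
# Sketch: flag candidate "evidence of learning" in human turns via phrase matching.
# The phrase lists are a small illustrative subset of the criteria described above.
import re

LEARNING_PATTERNS = {
    "discourse_marker": [r"\boh\b", r"\bi see\b", r"\bi didn't know that\b", r"\bi understand now\b"],
    "clarification_question": [r"\bwhat do you mean by\b", r"\bcan you elaborate\b"],
    "alignment": [r"\bexactly\b", r"\bthat's right\b"],
    "reflective_statement": [r"\bi used to think\b", r"\bthis reminds me of\b"],
}

def learning_signals(message: str) -> list[str]:
    """Return the categories of learning evidence whose patterns match the message."""
    text = message.lower()
    return [
        category
        for category, patterns in LEARNING_PATTERNS.items()
        if any(re.search(p, text) for p in patterns)
    ]

print(learning_signals("Oh, I see. I used to think deacons and priests were the same."))
# -> ['discourse_marker', 'reflective_statement']
```
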
The team’s experience using these and other methods will be helpful to other researchers who seek to assess or design chatbots. The team intends to expand this research in the future to include more direct comparisons with human-to-human text discussions of religion and faith. 

Impact 

Implications for the fields of Human-Computer Interaction (HCI) and Learning Design are obvious, but we expect this research to raise important questions about human and AI agency and the ethics of engineering interactions to optimize effects. The team continues to be excited by the prospects for deeper insights into how humans form beliefs and social bonds. 

Abstract for Online Program Book (maximum 150 words)

​As educators and community leaders increasingly utilize generative AI to create interactive educational activities, understanding the nuances of human-AI conversations becomes essential. Religious dialogues, rich with existential and factual inquiries, provide a unique lens for examining these interactions. This study analyzes over 85,000 messages from more than 10,000 conversations with "Ask Cathy," a chatbot designed to answer questions about the Episcopal Church using Retrieval-Augmented Generation (RAG) techniques. Our research team, comprising experts in linguistics and computer science, employed various methods—including topic and sentiment analysis validated against human-assessor baselines—to assess user engagement and evidence of learning. Preliminary findings reveal distinct patterns in human-AI religious dialogues, offering insights into designing effective chatbots for education and faith formation. These results hold significant implications for educators, technologists, and faith leaders seeking to foster meaningful interactions through AI-driven platforms.