Attached Paper In-person November Annual Meeting 2025

“I Use It to Help Me Think”: Troubling the Hype Around AI in Education

Papers Session: Teaching Tactics
Description for Program Unit Review (maximum 1000 words)

Introduction

Since the introduction of OpenAI’s ChatGPT, many university administrators and educators across disciplines have encouraged the use of large language models (LLMs) in the classroom. Faculty and graduate students integrate artificial intelligence (AI) into their teaching and research; instructors design assignments that incorporate LLMs; students use LLMs to write essays, summarise articles and books, draft outlines, and outsource the thinking process; respected journals use AI for peer review; Google Workspace and Microsoft 365 have integrated AI into every product they offer; and the AI-in-education market is currently worth over $2.2 billion annually.

I am an assistant professor in the religion department of a major North American university. I teach seven courses—online and in-person—with an average of 1,100 students and eighteen teaching assistants each year. My teaching philosophy centres trust and care, which I try my best to demonstrate through my course design, assignments, and assessment models, and by prioritising interpersonal relationships and mentorship. My assignments tend towards low-stakes writing practice, short online quizzes, creative projects, digital storytelling and reflections, and an “ungraded” participation model. I have tried to mitigate the use of LLMs in my courses by adjusting reading and writing requirements and assigning more open-ended creative projects, among other strategies. Despite these efforts, plagiarism in my courses has increased each year.

In this paper, I reflect on my experiences teaching large survey courses, share my experiments with alternative assessments and assignments, and discuss how AI has challenged the pedagogies of care that inform my teaching philosophy. I examine the discourse and narratives used to support AI in the classroom and argue that the perceived benefits of LLMs to the process of teaching and learning are overhyped and largely antithetical to pedagogical best practices and models. In addition to critiquing the assumptions and premises made by AI advocates, I also argue that the real-world consequences of LLMs mean that there is no ethical use case for AI in higher education.

LLMs in the Classroom

Proponents of LLMs in the classroom present many arguments in favour of their use. OpenAI Education suggests using ChatGPT to build quizzes and lesson plans, teach students about critical thinking, and save time by outsourcing “hard to implement” teaching strategies to AI (Mollick and Mollick 2023). Kumar et al. suggest that using AI teaching assistants in large classrooms has the “potential to reshape the landscape of personalised education” (2024, 499:2). Assessments written by AI can foster student creativity, provide consistent and accurate feedback, and encourage students to “complete assigned tasks that they initially thought they were unable to perform or explore different perspectives in their learning process” (Weng et al. 2024, 37). LLMs can foster self-directed learning and enhance information accessibility (Chen 2023). LLMs align with Bloom’s Taxonomy and other learning frameworks, and they can enhance English writing skills and student motivation (Bopp, Foerst, and Kellog 2024, 133). LLMs can also “level the playing field” for students with disabilities and students whose first language is not English (131).

LLMs are often described as having sentient qualities, such as the ability to learn and think (van den Berg and du Plessis 2023). LLMs can “understand” relationships between ideas and words (Kasneci et al. 2023), and they can assume the role of a tutor or teaching assistant, offering on-demand advice, care, and personalised learning (Chen 2024). These human-like descriptors reflect the broader hype around AI and legitimise the use of AI in education by drawing cognitive parallels between students and large language models.

Troubling the AI in Education Hype

I critique the assumptions and premises made by advocates for AI in education, and advance two evidence-based arguments:

  1. The integration of LLMs into the classroom does not promote pedagogical best practices. For example, many claim that LLMs are particularly adept at helping students develop their critical thinking, epistemic curiosity, and metacognitive skills. Yet it remains unclear how LLMs support forms of higher-order learning in ways that educators do not or cannot. When students outsource their writing and thinking to an LLM, their ability to develop competence in writing and thinking is compromised.

     

  2. The societal risks and impacts of AI mean that there is no such thing as responsible LLM use. First, LLMs are trained on large amounts of stolen text, or on text sold to AI companies without proper consent. Second, the computational resources needed to train and deploy LLMs are outpacing our energy infrastructure (Kneese and Young 2024). And as Bender et al. (2021) argue, “increasing the environmental and financial costs of these models doubly punishes marginalised communities that are least likely to benefit from the progress achieved by large LMs and most likely to be harmed by negative environmental consequences of its resource consumption” (610). This is to say nothing of the accelerationist, longtermist, and techno-fascist beliefs and values of the Silicon Valley elites who develop these technologies.

Conclusion

This paper is not meant to contribute to the moral panic over AI. We are in the early stages of a rapidly evolving technology, and much remains to be learned about the relationship between LLMs and learning outcomes. But AI-in-education narratives downplay the real harms of artificial intelligence and exaggerate LLM use cases in higher education, which ultimately benefits no one. We should consider what message we send to our students by encouraging the use of such problematic technologies.

Although plagiarism is a reason to be cautious of LLMs, our discussions around artificial intelligence in the classroom must extend beyond cheating. As Damien P. Williams (2024) argues, “the seeming eagerness of many to hand over knowledge-certifying authority to GPT-based systems … [risks] damaging the production of knowledge itself” (626). We need to consider whether LLMs align with our teaching and learning goals, how LLMs—plagiarism checkers included—may harm neurodivergent and marginalised individuals, and how AI may contribute to the very problems it supposedly solves.

Abstract for Online Program Book (maximum 150 words)

Since the introduction of OpenAI’s ChatGPT, many university administrators and educators across disciplines have encouraged the use of large language models (LLMs) in the classroom for the purpose of course and assignment design. However, advocates for AI in education overestimate the capabilities of LLMs in the learning process and overwhelmingly ignore the social, environmental, and epistemological consequences of AI—including plagiarism. In this paper, I critique the discourse surrounding artificial intelligence (AI) in higher education. I argue that the perceived benefits of LLMs to the process of teaching and learning are overhyped and largely antithetical to pedagogical best practices and models. In addition to critiquing the assumptions and premises made by advocates for AI in education, I also argue that the real-world consequences of LLMs mean that there is no ethical use case for AI in higher education.