In 2022, Blake Lemoine, a Google engineer and ordained mystic Christian priest, caused a stir when he claimed his employer’s AI-powered chatbot, LaMDA, was sentient. In a Medium post detailing his conclusions, Lemoine drew a direct connection from his faith to his claim about LaMDA: “There is no scientific evidence one way or the other about whether LaMDA is sentient because no accepted scientific definition of ‘sentience’ exists. Everyone involved, myself included, is basing their opinion on whether or not LaMDA is sentient on their personal, spiritual and/or religious beliefs,” a view he also shared on X/Twitter. When Lemoine’s role as a priest came up during his conversation with LaMDA, the chatbot asked whether Lemoine would be its spiritual advisor and lead it in meditation. Lemoine agreed on both counts. After he shared documents with a U.S. senator, and a day after he publicly aired concerns about religious discrimination at the company, Google suspended Lemoine and later fired him for violating the company’s confidentiality policy. Lemoine’s high-profile case is one of the first to raise concerns about AI and religious belief in the workplace. But Lemoine’s beliefs and their connection to the controversy surrounding his firing and subsequent legal troubles have not garnered much scholarly attention.
What happens when an employee’s actions, informed by her religious beliefs about AI, conflict with her employer’s expectations for her use of that technology in the workplace? Blake Lemoine’s dilemma is one example of how this tension could manifest. There are others. At least one new religion that places AI in a central role within its theological worldview has appeared in recent years. Should an employer consider an employee’s belief that AI is divine when making work assignments involving the technology? More established religious groups, like the Southern Baptist Convention and the Catholic Church, are beginning to issue statements about AI ethics, making normative claims that are at once rooted in a particular tradition and shared across faiths. Prominent among these are human dignity and the uniqueness of humans in creation. How should an employee respond to an employer’s request to adopt and use a technology in a way the employee perceives as a threat to human dignity and to the uniqueness of humans in the created order?
Another example is the use of AI tools to discriminatory effect. Employment law scholars warn that the unchecked use of AI tools to select, classify, and generate ever-increasing amounts of data about employees risks both replicating existing bias and creating new opportunities for bias in the workplace. In 2021, the Equal Employment Opportunity Commission (EEOC), the federal agency responsible for enforcing many of the U.S.’s workplace civil rights laws, took notice and issued guidance to ensure that AI usage in the workplace complies with those laws. While many examples of such bias in the employment context have involved race and gender, bias against an applicant or employee because of their religious identity is not a far leap. One 2021 study by Stanford University-affiliated researchers Abubakar Abid, Maheen Farooqi, and James Zou found that GPT-3’s completions of a prompt referencing Muslims involved violence 66 percent of the time. Coverage of the study noted the importance of addressing such biases before they become integrated into “downstream” tasks.
Artificial intelligence presents pressing legal, moral, theological, and ethical questions for the workplace and beyond. The conversation about what protections may exist for religious employees who interact with AI in the workplace in ways that implicate their religious identity, beliefs, and practices is nascent, and this Paper aims to provide a roadmap for beginning to explore these issues. Part I sets the stage by discussing different religious approaches to artificial intelligence. This foundation demonstrates that AI is quickly becoming a topic for theological reckoning and, in some cases, a new frontier for finding common ground. This Part discusses a high-profile employment dispute involving an individual’s religious beliefs about AI’s sentience, at least one emerging religion with AI at its core, and denominational and interfaith statements from established religious traditions. At least one of these groups has publicly supported faith-based employee resource groups (ERGs).
Part II evaluates potential claims under existing laws that protect employees from religious discrimination, provide for religious accommodation in the workplace, and shield (to some extent) employees from reprisal because of their speech and expression. This Part discusses Title VII and the First Amendment’s Free Exercise and Speech Clauses. This list is by no means exhaustive, and the legal landscape governing the development and use of AI continues to evolve. One recent article by Michael H. LeRoy concludes that some religious accommodation claims involving AI in the workplace are not likely to succeed under Title VII or the First Amendment. But that conclusion tells only part of the story, and this Part aims to broaden the scope of analysis by expanding the types of scenarios and theories of liability considered.
Recognizing potential shortcomings in existing protections, Part III considers whether a conscience-based protection model akin to those that exist with respect to abortion within the healthcare context could address existing gaps. While conscience-based protections have developed within the specific context of healthcare and therefore no doubt bear that context’s unique imprint, scholars are already looking to the regulation of biological and medical science technologies as a model for AI regulation.
This Paper concludes by calling for mechanisms, legal and otherwise, that will ensure that diverse voices are heard in the ongoing debates about the proper scope and development of AI and its use in the workplace.
[footnotes omitted but available upon request]
What happens when an employee’s actions, informed by her religious beliefs about AI, conflict with her employer’s expectations for her use of that technology in the workplace? Should an employer consider an employee’s belief that AI is divine when making work assignments involving the technology? How should an employee respond to an employer’s request to adopt and use a technology in a way the employee perceives as a threat to human dignity and to the uniqueness of humans in the created order? Are religious employees protected if AI tools are used to discriminate? Artificial intelligence presents pressing legal, moral, theological, and ethical questions for the workplace and beyond. The conversation about what protections exist for religious employees who interact with AI in the workplace in ways that implicate their religious identity, beliefs, and practices is nascent, and this Paper aims to provide a roadmap for beginning to explore these issues.