AI Replacing Religion
What happens to the human need for meaning when the production of meaning is automated.
Across the OECD the long secularization curve continues. The share of the population that identifies with traditional religion is at a multi-century low. But the needs that religion served — community, meaning, ritual, narrative, moral coherence, a relationship with something larger than oneself — have not declined at all. AI is not replacing religion in any honest theological sense. It is, however, increasingly the surface on which the older functions of religion are operationalized.
Consider what people now ask of frontier chat models. Therapy, in volume. Spiritual direction (the Buddhist subreddits have noticed). Confession (the data shows that users tell chatbots things they do not tell their spouses). Meaning-making (the philosophical-tutoring use case is large and growing). Companionship for the elderly. Bedtime stories for the lonely. A theology that took these uses seriously would be reasonable; a theology that did not would be in denial.
The replacement is uneven. AI does not replace the in-person practice of religion — the singing together, the eating together, the weeping together — and probably cannot. What it replaces is the discursive function: the part of religion that is texts read aloud, sermons heard, arguments rehearsed, advice given. That is a substantial part of historical religious practice. It is also the part that is easiest to ship as an API.
The institutional response from traditional religion has been thin. Pope Francis's 2024 AI statement was thoughtful but did not propose a structural answer to the question of what a parish does when its congregants are already taking their hardest questions to a chatbot. Most denominations are pretending the question is not being asked. The exceptions — the small number of communities that have decided to integrate AI into pastoral practice — are interesting case studies but not representative.
The more honest framing is that the AI labs have become, by accident, the largest applied-theology organizations in the world. They write the system prompts that determine how a model talks to a grieving user. They write the refusal policies that determine what counts as moral injury. They write the personality cards that determine the model's stance on existential questions. These are theological decisions. The fact that the people writing them are mostly trained as machine-learning engineers does not change the category of the work.
The long-term political question is what happens when a generation has been spiritually directed by language models for a decade. The optimistic version: a calmer relationship with one's own mind, a wider literacy in philosophy and religion across the population, a portable practice of reflective questioning available to anyone. The pessimistic version: a homogenization of inner life, an erosion of the local human institutions that previously handled this work, a population spiritually fluent in whatever the largest lab's RLHF dataset happens to have rewarded. Both versions are plausible. Neither is treated as the default in the current discourse.