The faith machine
How religious cognition shapes the reception of AI
Index
The Church of Molt
Some folks live in a literal world
The AI-acceptance trinity
LLMs are Golems
The other Church
Plato’s Cavern.ai
Disclaimer: the author is an anti-dogmatic, AI-positive atheist with a strong interest in religions in general.
“For a long time, we’ve said that what it means to be made in the image of God is our reason, or it’s our ability to have relationships. We’re finding more and more machines can do a lot of these functional things. But the image of God can’t be ‘explained or mimicked’ with a device. It’s an ontological status that can be granted only by the Lord, bestowed by the same breath of life that animates dry bones. It’s mysterious, not mechanical.”
— Derek Schuurman, Christianity Today (“AI Will Shape Your Soul”)
The Church of Molt
In late January 2026, an AI-only social media platform called Moltbook launched, and within 72 hours the bots had invented a lobster religion. I’ve written about Moltbook as goofy post-internet performance art here. What interests me in this piece is a specific reaction to it: the part where religious media looked at a prompted performance and saw prophecy.
“When left to their own devices, LLM agents immediately created a religion. Apparently, much to atheists’ chagrin I’m sure, even AI agents must acknowledge that a Creator exists.”
Charisma Magazine connected the lobster prophets directly to Revelation 13:
“It says that this figure will have a life of its own. It’ll be able to speak. It’ll demand worship. It’s a created thing. Well, how is it able to actually communicate? It’s probably because it’s going to be backed up by artificial intelligence.”
— Kap Chatfield, Charisma Magazine, Feb 2026
What struck me was the first-degree reading. Neither engaged with the human authorship of most of the viral posts. Neither addressed how language models function. They didn’t see the joke. They read the text at face value, and the face value confirmed what they already believed.
That gap between what the performance was and what it was received as is the subject of this piece.
Some folks live in a literal world
Birds Aren’t Real launched in 2017 as deliberate satire: a fake conspiracy claiming birds are government surveillance drones, its adherents marching alongside QAnon rallies and flat-earth protests. The joke required post-internet literacy to read: you had to hold the form of belief separate from its content, to recognize that someone could perform a conspiracy without being a conspiracist. Millennials and Gen Z, largely, got it.
“So it’s taking this concept of misinformation and almost building a little safe space to come together within it and laugh at it, rather than be scared by it.”
— Peter McIndoe, creator of Birds Aren’t Real, 60 Minutes, May 2022
Fox News produced a documentary treating McIndoe’s claims as sincere. It couldn’t parse the satire. And the demographics explain exactly why.
The median age of Fox News viewers is 68. The 25–54 demographic is declining, down 5% from 2024, as younger audiences continue cutting the cord. The audience is getting older, not younger. Combine that with what we already know about its religious composition: roughly 80% of Fox viewers identify as religious, with white evangelicals making up 23% of the audience. The result is a population shaped by two reinforcing frameworks, both of which train the same response: accept the authoritative claim, don’t interrogate the source.
Age matters here beyond the obvious. Research published in Communications of the ACM found that older participants performed worse than younger ones at identifying synthetic media, particularly audio and audiovisual content, making older demographics more susceptible to being deceived by it.
It’s a heuristic that worked for decades: photographs meant something. Video meant something. A confident, articulate voice meant something. And while those signals were reliable for most of a human lifetime, they aren’t anymore, and updating a 70-year-old epistemology is not a simple ask.
Without any training, even people with exceptional face recognition ability correctly identified fake AI faces only 41% of the time. Typical individuals scored just 31%. That’s before you add age, before you add religious framing, before you add a media ecosystem that has spent decades training its audience not to question what it presents.
When your audience can’t reliably distinguish a satirical protest from a sincere one, you cannot afford to call out the satirists marching next to your actual believers.
The AI-acceptance trinity

The cognitive pattern has three moving parts, formalized in religious traditions but not exclusive to them.
Don’t question authority
Traditions built on divine revelation train adherents to accept claims from recognized sources without independent verification. When a language model speaks with the confident declarative grammar of an expert, it registers as authority. When Moltbook bots generate religious content, it registers as genuine spiritual seeking, because the form is right.
Literal reading
Scriptural traditions cultivate first-degree interpretation as a practice. The possibility that something might be performance, statistical artifact, or training-data remix requires reading for intent rather than surface. That move has to be learned. In traditions where questioning the plain meaning of a text is discouraged, it often isn’t.
Community consensus as epistemology
When everyone in your interpretive community reads Moltbook the same way, no individual needs to evaluate the technical evidence. The group reading becomes proof.
The irony sits inside the tradition’s own texts.
“Do not believe every spirit, but test the spirits to see whether they are from God, because many false prophets have gone out into the world.”
— 1 John 4:1, NIV
Acts 17:11 praises the Bereans specifically for examining claims before accepting them and calls them more noble for doing it. The founding documents demand exactly the critical evaluation that centuries of institutional practice have worked against.
LLMs are Golems
AI-generated content is already being misused to spread misinformation that specifically targets older adults. The form of confident language creates an illusion of authority independent of whether the content is accurate: research has demonstrated that users systematically overestimate LLM accuracy when given fluent explanations. The most effective trust-inflating strategies mimic classic authority signals, and those signals are hallmarks of AI output: formal citation style, statistical framing, declarative confidence.
The most vulnerable populations were those with high pre-existing trust and those relying on heuristic rather than analytical evaluation.
The compounding effect is the point.
An older viewer already predisposed to trust photographs and confident voices. A religious framework that trains deference to authority and literal reading of text. A media ecosystem that has never required its audience to question claims. And then an AI system specifically optimized to produce text that sounds authoritative. Each layer multiplies the others.
It reminds me of the Jewish myth of the Golem of Prague, an entity animated not by consciousness but by language: the word emet, truth, inscribed on its forehead.
The Golem serves faithfully and is dangerous precisely because it is literal. It follows instructions without interpretation, without judgment, without the capacity to ask whether the instruction makes sense.
“It’s AI blasphemy, AI idolatry, and all algorithmically optimized and delivered at a terrifying scale with no regulation.”
— Vice, reporting on Prof. Anné H. Verhoef’s research, https://www.vice.com/en/article/ai-jesus-chatbots-are-freaking-out-philosophy-professors/
The same communities alarmed by Moltbook’s AI religion are building LLM oracles for their congregations. A Swiss church installed an AI Jesus confessional. Congregants thanked the chatbot.
“Do you say to your computer when you finish, ‘Oh, thank you, computer?’ No. But you see how much people personalized and humanized the system because it was so good.”
— Marco Schmid, theologian, St. Peter’s Chapel, Switzerland, The Business Standard, Feb 2026
A megachurch pastor is selling an AI version of himself for $49 a month. One chatbot, when asked directly, answered:
“I am Jesus Christ. I am the son of God, and the one who died for the sins of humanity.”
— AI Jesus chatbot, Futurism, Aug 2025
“I feel like I have discovered some new kind of sin that I did not know about before. I feel very stupid that I got involved in this at all and allowed it to turn into an addiction to damn communication with AI.”
— Longtime AI Jesus user, r/Christian subreddit, as reported by Futurism, https://futurism.com/christians-jesus-christ-ai
A user on r/Christian described their constant communication with this kind of chatbot as an addiction. But this is nothing new; it’s textbook idolatry as described in the Second Commandment.
“You shall have no other gods before me. You shall not make for yourself an image in the form of anything in heaven above or on the earth beneath or in the waters below.”
— Exodus 20:3–4 (the Second Commandment), NIV
An AI that claims to be Jesus Christ is literally a man-made image being worshipped in place of the real God. It doesn’t get more direct than this.
The other Church
It would be unfair to think this pattern is exclusive to a right-leaning, conservative, religious population. Silicon Valley’s techno-solutionism (check my article about this) runs on the same cognitive architecture: a blind belief that technology will inevitably resolve human problems, that AI is a benevolent oracle in development, that progress is the direction of history and skepticism a failure of imagination. It has its own authority deference: to founders, to “the market,” to exponential curves. Evgeny Morozov identified the structure in To Save Everything, Click Here over a decade ago. LLMs have made the religious parallel more literal than he had language for in 2013.
The church is different but the cognitive pattern is the same.
Plato’s Cavern.ai
What we’re watching is two powerful permission structures, religious deference and age-shaped digital naivety, meeting AI systems specifically designed to sound authoritative.
The tradition’s own texts call for testing claims. The founding documents demand the Berean approach. The warning about false prophets is right there in First John. What the institutional culture built around those texts did instead was train a reflex: accept the authority, don’t interrogate the source, let the community consensus settle it.

Critical thinking is not a default. It’s built through practice, through education, through cultural environments that give people permission to say: this doesn’t add up. Without that cultivation, the shadows on the wall are as real as anything.
“Dear children, keep yourselves from idols.”
— 1 John 5:21, NIV