Over Christmas, I read Yuval Harari’s new book, Nexus: A Brief History of Information Networks from the Stone Age to AI. In chapter four, Harari makes an interesting connection between religion and AI. He notes that at the heart of every religion lies the desire to connect with an infallible intelligence, one free from all possibility of error (p. 71). The logic of this desire makes sense: a source of complete and perfect knowledge pretty much guarantees success in any endeavor. I mean, who doesn’t want to win?
With the rise of the internet in the 1990s, a god-like vision of technology emerged: humans accessing all the world’s knowledge through a single portal, a place where search engines – first Yahoo and AltaVista, finally Google – meet and talk with humans. The setting was new, but it shared much in common with the experience ancient Greeks had at the Oracle at Delphi, where humans encountered a divine intelligence dispensing wisdom in answer to all their inquiries. Only an anointed few – a priestly class – actually know how the technology works. Even so, we all now acknowledge our dependence on this new mediator, a technology that sits between us and an almost infinite expanse of knowledge. And just like the Oracle at Delphi, this oracle frequently responds in unexpected and surprising ways.
It’s surprising how quickly humans create idols and then serve them. Technology idols such as AI are everywhere today, and we accept their every pronouncement as if it carried divine authority. While working in industry, I saw this happen many times. “Well, that’s what the computer says, so it must be true!” At one of the steel mills where we were implementing a software package we had written, the system failed to notify the buyers to reorder a critical component. In short, our software had a bug, but no one questioned it until an inventory clerk pointed out that the plant had only a day or two of supply left. Here the true nature of this pseudo-divine providence comes into view. It may appear infallible, but it’s not.
In classical Judeo-Christian theology, God alone is omniscient, omnipotent, and omnipresent. The big three omni’s, as I call them, distinguish this infinite being from finite creatures such as ourselves. Many of the world’s holy scriptures warn against the hubris of creating idols, of flying too close to the “sun” as Icarus did, resulting in melted wings and a tragic fall into the sea. Despite this ancient counsel, we continue to fashion technical tools that mimic the divine.
Large language models (LLMs) are just the latest iteration of this all-too-human impulse to refashion the infinite. They appear omniscient, even capable of conversing with us in a multitude of languages. Has this new god finally reversed what the Old Testament one did at the Tower of Babel? It certainly looks like it.
A cursory examination, however, reveals that these two gods are not the same. An LLM, unlike a genuinely infinite being, is trained on a finite dataset, albeit a vast one that includes much of what’s available on the internet. It may have access to a far larger knowledge base than any single human, but it’s not infinite.
Additionally, LLMs are fashioned in our own image. That is, they simply reflect back to us what we have already revealed about ourselves in the countless documents, articles, spreadsheets, blogs, reports, and books available in electronic form. The underlying data patterns they lay bare for all to see carry our biases, assumptions, and prejudices. But unlike mute idols made of gold or silver, this technology speaks! It answers our every question, offering assistance in any way it can. And like a god, no request is too big or too hard for it.
LLMs also speak with god-like confidence, though they lack omniscience. In his book The AI Revolution in Medicine, Peter Lee offers some striking examples of exchanges between medical personnel and ChatGPT in which the model was clearly wrong yet refused to acknowledge it until boldly confronted. One example involved an erroneous drip-rate calculation that could have led to a deadly outcome. Lee argues that AI is revolutionary, but that humans must constantly monitor it in medical settings. Once again, we see this new god falling short of its ancient rival.
At first blush, a conversation about AI and religion seems a bit odd. What do these two things have in common, if anything? But as I worked on this piece, the ideas just kept coming. Is AI a god, an idol, or something else? I’d like to hear from you, my audience. What do you think? Leave me your comments!