Artificial intelligence is advancing at an ever-growing pace, reshaping industries and capturing the world’s attention. While often framed as an economic and technological revolution, there’s a deeper, more unexpected transformation underway.
In this context, people often refer to the technological singularity, a concept that has existed for over 60 years, dating back to John von Neumann. The idea describes an exponential explosion in technology, often enabled by AI. In 2005, Ray Kurzweil described the most prominent version of this singularity in his book “The Singularity Is Near”: a self-improving AI that starts to automate research and the economy until it supersedes humans and begins to consume the universe in search of more energy and computational substrate.
This concept already sounds similar to other eschatologies, and in this blog post I will explain why this comparison is very accurate, and why the recent progress in AI has created a new religion rooted in this very idea. We will go over some of the striking parallels, such as messiah figures, churches, differing interpretations of this machine god, and the already-mentioned eschatology. We will start with the first era of niche books and forum discussions, and then move on to the recent events that led to the spread of these ideas.
Genesis
Being on the younger side, I can’t speak to the last century, but I remember quite clearly how the singularity was understood in the early 2000s and 2010s. To hear about the singularity, you either had to stumble upon speeches or books from figureheads such as Ray Kurzweil or be really into science fiction and science. There were some online forums, often dedicated to a certain aspect, such as cyborgs (which is now called body hacking) or longevity.
AI was a very small part of it, especially before 2012. There wasn’t much news to discuss in this area before AlexNet revolutionized the field in 2012. Instead, topics such as nanotechnology, genetic engineering, or even VR were more popular, and there was a big overlap with the ideas of transhumanism. People were sharing how to put a magnet in your finger to feel electromagnetic fields. Most of it never made the news, and when it did, it was presented as strange. Many of the concepts were on the extreme end of the spectrum: Dyson spheres, mind uploading, or AGI are more interesting to speculate about than the next OCR model. The idea of a fast takeoff was more prevalent, so it made less sense to talk about transition technologies than about the final end of the tech tree. Most of the early fans had a utopia in mind.
The Great Acceleration
A few years ago, a change happened. I am not talking about ChatGPT; it began earlier, with DeepMind. The company itself is a product of this first era of futurists and people who believed in the singularity. That’s why DeepMind started with the mission of developing AGI to solve all other problems. Neural networks were just coming back, and DeepMind managed to be one of the first to really make use of deep learning. It helped that their research was not theoretical or abstract, but about games. Watching AI learn to play Atari games and later Go had the advantage of being easily understood and directly comparable against humans. Even the learning process of self-play looks somewhat familiar and makes the technology legible to someone without a background in ML. It was the perfect catalyst for all the futurists and believers in the singularity to latch onto and follow as an indicator. It marked the start of the events many had talked about for years. AlphaGo was also the first time since Deep Blue that mainstream media paid so much attention to AI. This led to an influx of people interested in AI, DeepMind, and, for some, the singularity. Among them were people such as Sam Altman and Elon Musk, who reacted by forming OpenAI. They followed in the footsteps of DeepMind and created mainstream attention by applying AI to popular games such as Dota. Then Google invented the Transformer, and after GPT-2 the LLM era began. At this point, the sentiment among the people who followed the idea of a singularity had already shifted to place AI at the center of all these technologies.
The popularity of ChatGPT moved AI completely into the mainstream view. This resulted in a large influx of new people becoming interested in AI; not from a technical perspective, but from a cultural one. AI was everywhere: social media, news, documentaries, talk shows, Hollywood. Many of the representatives of this new wave of LLM technology, like Sam Altman, Ilya Sutskever, Demis Hassabis, Elon Musk, and Dario Amodei, came out of the first era of the singularity community and used their new reach to present some of its ideas. This time, they were taken more seriously. It is important to understand why they were not laughed at this time. For most viewers, the ideas still sounded absurd, but some key components had changed. The technology itself was impressive enough, from the perspective of a non-technical person, to make the idea of LLMs reaching a human level easier to believe. The ideas presented in the news were packaged for the audience and often revolved around discussions of the economy and job replacement, which are easier to imagine than Dyson spheres or grey goo scenarios, which are definitely a bit out there. They also spoke from a position of authority: OpenAI ran the fastest-growing site in the world, they got huge amounts of money from investors, and the rapid release of GPT-3.5 and 4 gave the impression that they were improving the technology ahead of the rest of the world. They were Silicon Valley elites with huge networks and direct contact with multiple world leaders. No longer nameless forum accounts or obscure researchers, these were the next generation of Silicon Valley superstars.
All this media attention exposed the broader public to a lightweight version of the singularity. Most moved on, but many got hooked and found the growing online community. The motivation behind their interest often differs from the original pure fascination with the future. Large parts of the investment and venture capital world became interested in finding the next thing to invest in. Even government and defense agencies began commissioning reports, holding summits, and drafting AI strategies — not necessarily because they all fully believed in the Singularity, but because the momentum was undeniable.
Parallels to Religion
My thesis is that this movement has not only similarities but is, at its core, a religious movement. The structure, behavior, and development all follow a religious path.
I already mentioned key figures that resemble prophets, so let’s take a closer look at their role and their parallels to religious leaders. A classic modern example is Sam Altman, or any of the other AI CEOs like Hassabis or Musk. A prophet is a voice of god, a messenger. So when Altman or Hassabis appear on podcasts every week and talk about their internal progress on AI, they are doing exactly that. People listen because they believe these men have a closer connection to “AGI”; after all, they have insider knowledge. Not only that, but because they are the CEOs, they decide what AGI is and when it will arrive. So when they give their next interview, they are not only delivering their gospel, they are directly proselytizing. We cannot brush this off with the argument that they are simply trying to increase shareholder value; they are true believers. Each one of them came out of the first wave. They managed to convince themselves of their own religion, as every good prophet does.

Another interesting figure is Bryan Johnson. He is a rich entrepreneur who became obsessed with longevity and invests his money in experiments on his own body in an attempt to stay young (interestingly, there was a similar figure in the first wave: Aubrey de Grey). He does not limit this effort to himself, but spends considerable energy convincing others to adopt his lifestyle and ideology. His “Don’t die” ideology could be a blog post, and a sect, of its own. The point is that the “religion” around the singularity is not homogeneous and contains many different groups, prophet figures, and beliefs. We will talk more about that later, when we discuss the rift between EA and e/acc. The last person I want to write about specifically is Ray Kurzweil; as the author of the de facto bible, he has a special role here. He became famous for his many predictions of the future, a number of which turned out to be correct. Later, he worked in a leading position at Google and continued writing books about the future and his predictions. The “bible” is not a single one of his books, but the collection of them. The one that stands out, however, is “The Singularity Is Near”. Published in 2005, it summarizes many of his predictions and gives timelines for many events. His books are central to understanding the beliefs of many of the key figures who came after him. I call them a bible because they fulfill the same role: they lay a foundation for the beliefs of all followers of the singularity and act as a common denominator for discussions.
Now that we have talked about the religious leaders, let’s compare the subjects of worship. In this new religion, it is the “Deus ex Machina”, the god from the machine. Artificial General Intelligence (AGI), Artificial Super Intelligence (ASI), and even Advanced Machine Intelligence (AMI) are all names sometimes used for the same thing: a not-yet-existing digital entity, created by humans, that is supposed to surpass us. The concrete details of when, how, why, and where are unclear, and opinions differ. This is intentional, since it allows these key figures to present AGI as whatever they need to convince (or convert) the next person. But the effect is already as tangible as that of the Christian or Jewish god: people act on this inter-subjective reality and change the world because they believe in it.
Different groups describe AGI either as a benevolent god or as a world-ending destroyer. The first group talks about a perfect world where AI solves all problems and humans live in leisure. The opposite interpretation imagines an AI takeover where AGI surpasses us and starts to consume all matter and energy, eliminating humanity in the process. These differing interpretations are what lead to the formation of sects and schisms. For some people, their support of or opposition to AI is not linked to one specific scenario; several popular figures work on AI even though they believe the chance of the worst case is high. This leads to the next parallel to other prominent religions: splinter groups and differing interpretations producing opposing camps with the same core belief. In the case of AI, two prominent groups emerged from the events of the last few years: Effective Altruism (EA) and effective accelerationism (e/acc), often called Doomers and Boomers. So where do they come from, and how did the split occur? EA emerged around 2010 and is rooted in the idea that every human should try to maximize their positive impact on the world, using logic to figure out how. It quickly became intertwined with AI research, since a logical conclusion was that developing an all-powerful AI while preventing it from destroying the world had the biggest potential return. It was a popular idea during the early days of OpenAI and shaped the company’s identity and research agenda. Effective accelerationism is a newer movement that developed as a reaction to the EA camp’s attempts to slow down AI. Its supporters are sometimes called Boomers based on their wish to initiate an “intelligence explosion,” which they argue has such large potential upsides that slowing down is the same as actively harming humanity. At the center of their differences is OpenAI, which contained both camps in 2022.
Dario Amodei and his sister and co-worker Daniela did what every religious rebel does when the differences in belief become too large: they split off and founded their own company, Anthropic, as a counterweight to OpenAI. It is interesting to observe how these quasi-religious beliefs have had a real and lasting impact on our world and on the way people see AI.

In the following section, we will focus on OpenAI and the more concrete religious actions that form the cult around AGI and its figureheads. Ilya Sutskever, one of the founders and leading scientists at OpenAI until this year, used to start chants at company parties and burn effigies that symbolized unaligned AI. I don’t think I have to point out the parallels. More interesting is how the roles have moved from priests and prophets to CEOs and scientists. The movement claims to be science-based rather than belief-based; as such, its members do not make prophecies, they make predictions or estimates. This can be clearly seen in a question that is part of nearly every public appearance of any of these figures: “What is your P(doom)?”, meaning your estimate of the probability of a catastrophic outcome of AI. The answer is presented as a probability, a scientific result, when it is in fact just a prophecy.
Let’s take a closer look at the language employed by believers and prophets alike. Like other religions, this one has developed its own code and vocabulary; AGI, P(doom), and Singularity are just some examples. Another frequently used term is LEV (longevity escape velocity), the idea that progress in biology, with the help of AI, can extend healthy life spans faster than we age and eventually lead to immortality. This is a classic element of religion, which promises an afterlife, reincarnation, or immortality. Another term that sometimes appears is Roko’s basilisk, a direct derivative of Pascal’s wager: the idea that a future AI could punish people who did not help with, or tried to prevent, its creation. As you can see, fear of god translates really well into fear of AGI. And last but not least, there is UBI (universal basic income), the often-proposed solution to the expected economic upheaval once large parts of the workforce are replaced by AI. In this utopian idea, the products of the automated work are distributed to all humans and allow them to live in peace and leisure; the parallels to the Garden of Eden or heaven are clear. There is also a lesser-known version of heaven that AI promises, often discussed in the early days but never part of the mainstream. In this version, AI solves the problem of scanning and digitizing a human brain, which would allow us to upload humans into a digital world and literally create a digital heaven or afterlife. This version is still known to everyone who is deeper into this “religion”, but is rarely presented to the general public, since it is strongly associated with movies and other works of fiction, which would undermine the credibility of the entire AI circle; especially since works like The Matrix often present this future as a dystopian one.
But make no mistake, many of the thought leaders are well aware of concepts like mind uploading and Dyson spheres and will proclaim them as reachable targets when they are with like-minded believers.
And finally, let’s look at the physical manifestations of this new religion: its churches and artifacts. The church is the “house of god”; as such, we just have to look at the current data center hype to find the counterpart. Watching Sam Altman and others visit the sites of their new data center projects makes it clear that these buildings inspire a kind of religious awe in them. Even the costs are comparable to those of medieval churches. We get a similar impression from the marketing around the upcoming hardware device OpenAI is working on, which is presented not as a tool, but as something that transcends traditional hardware. Its announcement is already framed very intimately. The last physical manifestation of this religion is the humanoid robot. Instead of creating the human in the image of god, we create god in the image of humans. When tasked with visualizing AI, artists and designers often use robots, for obvious reasons. They are the embodied product that is supposed to replace us: first as workers, and potentially in everything else. It is no coincidence that the current wave of startups developing humanoids is happening in parallel to the AI wave. The two complement each other and help create an image in people’s heads that is inspired by works of fiction like “I, Robot” and “Ex Machina”.
So let’s conclude: we have an all-powerful being that will destroy or save the world. This entity is worshiped by rich and powerful people who try to convert others in order to raise more money to build more buildings for their all-powerful being. They disagree on how to understand and approach their entity and form smaller groups around figureheads. On top of all this, they make heavy use of symbolism, jargon, and digital gatherings to promote their ideas. These are all the ingredients of a new religion, one that is filling the space left empty by the sunset of classic monotheistic religions.
“Religions often partake of the myth of progress that shields us from the terrors of an uncertain future.”
Frank Herbert
I would argue that religions are needed more than ever, since the world is becoming more complicated and people are looking for simple explanations and solutions. Wars, climate change, pandemics, social injustice, and the digital age all pose challenges that are too much for most. A typical reaction is to reach for the anchor of religion, which can offer simple solutions and often a simpler worldview. On the other hand, classic religions are no longer attractive. They feel outdated, incompatible with the knowledge most people have of the world, and offer few direct benefits anymore. This new religion is perfectly positioned to fill the void and offer a modern alternative with some clear advantages. The promise of AGI sounds a lot more plausible than a real god, and it offers similar benefits with the advantage that it is much easier to believe, since it is based on “science” and therefore “real”. There are also direct monetary incentives for this new religion, even if you do not believe in an all-powerful AI or mind uploading: there are investors to be converted, startups to be founded, and stocks to be traded. Every rich person is buying their letter of indulgence with AI right now. Arguments like “we do not have to worry about climate change now; AI will solve it if we develop it with coal and gas first” are used unironically at the highest levels.
I believe that there is real harm in this new religion. To understand and use AI in the best possible way, it has to be developed based on a scientific and humanistic approach. Treating it as a future god-child of ours leads to hubris in the creators, fundamentalism in the regulators, and religious mysticism in the users. When the people in power believe they will create a god, they might do it for that exact reason.
Disclaimer: This post does not represent my beliefs or thoughts; however, I used to believe some of the things I described here, and I still think that some of them might come true. While timelines are uncertain, I know that it is fundamentally possible to create human-level AI, surpass humans, or solve ageing. What I do not believe is that we need or want a religious relationship with such a system; we should instead try to develop tools that help us improve our own lives and all other life on this planet. Religious leaders are