Messianism in the Age of Artificial General Intelligence

Abstract: You may have already stumbled upon them: "longtermists" who believe that sentient AI is basically around the corner and will kill us all. In today’s piece, I want to portray this belief as messianic, as a variation of the apocalyptic Christian perspective. Viewed as such, these people’s insistence on warning about impending doom makes sense, as does the inability of critics to get through to them.


It’s been quiet here in the past weeks, as I was at a conference in the United States, taught at a summer school my institute organized, and worked mostly on my dissertation. But I remember that I promised myself to write an article each week, so I think I should return to a regular writing schedule as soon as possible.

Today I want to return one (hopefully) final time to the current discourse on Artificial General Intelligence (AGI) and the so-called “existential risks” it allegedly poses (“x-risks”, as they have recently been called on Twitter). I do not want to reiterate the entire discourse around this (for that, I recommend you subscribe to Emily Bender, Timnit Gebru, and Émile P. Torres, among others).

Rather, I want to focus on the question: why do these people want to believe so hard that AI could become an existential risk? Why is it almost impossible to argue with them, as every argument that questions the validity of the concern over existential risks posed by AGI gets shut down immediately? Why is there no hope for a middle ground?

I will argue that – among other reasons – this is because the idea that AGI poses an existential risk that must be addressed now and immediately has, over the past months, developed a messianic touch.

A Void at the Heart of Longtermism

People who press for addressing existential risks posed by AGI are closely aligned with an ideology called “longtermism”.1 Longtermists frequently argue as follows: It is fine to help people right now, but there are risks out there so massive that, if they go unchecked, they would simply exterminate humanity in its entirety. Therefore, instead of helping people right now, we should focus all our energy on preventing those “existential risks” from materializing. Among these existential risks are artificial general intelligence, nuclear war, and climate change, in this order.

You will certainly agree that, if there were a nuclear war, it would likely exterminate humanity. Similarly, if climate change goes unchecked, it may also spell doom for humanity. And you will agree that, if there is at some point a sentient machine that wants to kill us humans, there is a non-zero chance that it will actually succeed.

But note how all of these risks come with a large asterisk. How high is the chance of nuclear destruction really? How likely is it that there will actually be a sentient AI that wants to kill us all? If you think about it, you may simply refuse to attach any probability to them at all – the uncertainty about how large these chances really are is simply too great.

Spurred by the success of OpenAI’s ChatGPT, longtermists have now homed in on pushing the idea that such a risk emanates from artificial intelligence. Many of them argue that we need to pour large amounts of money into preventing any AGI from ever being created.

Especially for the billionaire CEOs of these AI companies, the incentives are purely monetary: by artificially pushing the idea that there are some “existential risks” attached to AI, they can position themselves as the only people who are actually capable of averting these risks and thus collect swathes of tax credits for that purpose. In this sense, longtermism is simply a solution looking for a problem.

Nevertheless, the idea of existential risks is compelling: Who would not rather save humanity? In light of “existential risks”, everything else becomes mundane. Nobody can say with absolute certainty that killer robots will never become a reality. And because there is the possibility of it maybe happening one day, longtermists can start addressing it right now, because if they don’t, we may be doomed.

However, this argument also exposes a void at the core of the ideology of longtermism. Because the threat is vague and undefined, there are no actionable items that we could address right now. Ask yourself: How do we prevent AI from becoming sentient? Cognitive scientists do not even agree on what preconditions sentience requires. What would it take for some computer to count as sentient?2 And, even further: if we had a sentient computer, what would it take for it to actually want to kill humanity?

All in all, this suggests that even longtermists themselves have no clue what they are really afraid of. The longtermists ringing the bell have so far been unable to give the monster a name and therefore cannot fight it. And because the issue is so abstract, longtermists haven’t been able to shut down the critique either: Critics literally just have to cite any of the abundant sources on why the entire idea of longtermism is nonsense.

However, as longtermists push the idea of “existential risks”, more and more of their followers actually start to believe in these ideas. And because it is a belief, no amount of evidence can prove them wrong. Remember the LaMDA incident from last year: Blake Lemoine truly believed that his chatbot had become sentient. And this is where the problem starts.

Messianism

Messianism is a state of mind in which neither past nor future matters, only the present. It is the belief that the messiah will come and sit in judgment over humanity. More generally, it describes the belief that history will at some point simply come to an end: the apocalypse will happen, and that will be it.

A good friend once gave me an excellent explanation of what this means: We today experience time as linear. We can sort events on a linear scale from the Big Bang until the Sun explodes. But this was not always the case. Before industrialization, people experienced time mostly as cyclical.

We have some remnants of this today: The four seasons are a cyclical description of time. After spring comes summer, then fall, then winter, and then spring again. There is no beginning and no end to it. Messianism is a specific subset of this cyclical experience of time: it collapses linear time into a “now” (“Jetztzeit”) and an “after”. People in the (European) Middle Ages lived in constant fear of the apocalypse. For them, every day could be the last, since judgment day might happen at any time. The only thing that counted was not to commit a sin while they were waiting for it. This also means that these people did not plan into the future.

Hence, there are three specific attributes to messianism: First, people living under the impression of messianic thought have no linear conception of time. Second, for them, time is collapsed into a “before judgment day” and an “after judgment day”. Once judgment day happens, they do not have to care anymore, because everything will have been decided. But until then, and this is the third attribute, they need to make sure their deeds are in order so that they get to heaven.

I want to argue that longtermism exhibits precisely this messianic thinking. Maybe longtermism can even be viewed as a religion for people who think Christian fundamentalism is too cringe, but that is possibly a stretch.

Longtermism and the Fear of AI as a Messianic Dread

For longtermists, judgment day is the day a sentient AI emerges that wants to kill humanity. Afterward, there is nothing one could do, because we will all be gone. However, the messianism of longtermists is not the same as the old, medieval one. Unlike faithful Christians, the AI apologists do not accept their fate of being devoured by a machine. Instead, they put all their energy – and, incidentally, taxpayers’ money – into preventing judgment day altogether.

In his essay Theses on the Philosophy of History, Walter Benjamin has a very interesting passage:

There is a painting by Klee called Angelus Novus. An angel is depicted there who looks as though he were about to distance himself from something which he is staring at. His eyes are opened wide, his mouth stands open and his wings are outstretched. The Angel of History must look just so. His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet. He would like to pause for a moment so fair [verweilen: a reference to Goethe’s Faust], to awaken the dead and to piece together what has been smashed. But a storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high. That which we call progress, is this storm.

I believe this truly describes the world view of longtermists: judgment day is around the corner, but by pushing forward, by keeping this storm of progress alive, they can push the apocalypse away. Viewed in this light, the persistence of longtermists in warning about the dangers of sentient AI makes sense.

Of course, not all people are knee-deep in this ideology. But a few are, and the main issue I see is this thinking taking over. The more people lock into the messianic idea of AGI posing an existential risk, the less room we have to actually work on real issues now. If you believe that sentient AI will kill humanity, it will be hard for you to work on “lesser” problems.

It is easy to see why countering these people with “facts and logic” will not lead anywhere: they already know the necessary facts. They just view them from another perspective. This perspective already acknowledges that, right now, Large Language Models (LLMs) are nothing but three regressions in a trench coat. However, it also posits that it won’t be long until we switch from a bunch of regressions to actual sentience.

My advice in this matter would simply be: Do not argue with people who believe that ChatGPT will kill us at some unspecified point in the future. Instead, focus on how we can make AI work better for all of us. Those who believe that there are existential risks that override addressing current issues will have to find out on their own that the Angelus Novus is not the bearer of impending doom. It is just a painting.


  1. Some people have begun to use the term “TESCREAL bundle of ideologies”, which includes Transhumanism, Effective Altruism, and Longtermism, among others. However, as precise as this term may be, I feel that it lends too much credibility to obviously flawed ideas about the world, and it also sounds a little awkward. So I’m going to stick to “longtermism”.

  2. This is actually an interesting discussion where I am really convinced by David Chalmers. In a recent talk, Chalmers argued that, yes, large language models do fulfill some criteria of sentience – but only if you apply very basic requirements for sentience. In other words, LLMs are probably about as sentient as the plant you let die in your office last summer.

Suggested Citation

Erz, Hendrik (2023). “Messianism in the Age of Artificial General Intelligence”. hendrik-erz.de, 30 Jun 2023, https://www.hendrik-erz.de/post/messianism-in-the-age-artificial-general-intelligence.
