“Pause Giant AI Experiments”: An Open Letter Full of Straw men

Abstract: In a recent open letter, AI scientists and entrepreneurs demand a moratorium on the training of large AI models. In this article I argue that the letter is full of straw man arguments and does little to address the actual dangers emanating from AI.


Has the “Harder Better Faster Stronger” crew of artificial intelligence finally grown a conscience? If this open letter is to be believed, it has. The open letter, titled “Pause Giant AI Experiments”, calls for a moratorium on the training of any AI models “more powerful than GPT-4” for at least six months. At first glance, this sounds as if the AI community wants to take a step back before the risks of playing with large-scale AI outweigh the benefits. And they really mean it: “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

So, did they grow a conscience? Of course not. But it is still worth thinking about this open letter – not for what it demands, but for what it tells us about its authors and signatories. I won’t go over the many fallacies in the letter, because Emily Bender has already done that. I recommend her article on the open letter, in which she dissects its dubious statements – statements that are more a sign of AI hype than of actual knowledge about these systems.

I want to focus more on the dishonesty at the core of the letter.

“Future of Capitalism”, not “Future of Life”

The first issue I take with the letter is the organization responsible for it: the Future of Life Institute. This organization subscribes to an ideology termed “longtermism”. Longtermism is an offshoot of the “Effective Altruism” (EA) movement and serves as a fig leaf for that movement’s harmful effects on the environment and mankind. Virtually all the areas that EA adherents negatively affect with their behavior, such as the climate or global inequality – see the FTX crash in late 2022 – are named as areas of great concern for the longtermist movement.

But of course, the aim is not to ensure the “Future of Life” of human beings; the aim is to ensure the “Future of Capitalism” – the future of life despite capitalism. Longtermism is not about finding solutions to the root causes of climate change (which likely involve changes to the market order), but about reconciling the current economic system with these aims. It is almost as if cultural theorist Mark Fisher was right when he described “Capitalist Realism”.

Just look at the order of priorities on their website: The very first area of concern they list is AI. Climate change is only the fourth and last. In other words: As long as we don’t create a sentient AI that is going to kill us all, climate change won’t be too much of a problem, even for future generations. If this doesn’t betray helplessness in the face of the task of reconciling an economic order built on fossil extraction with a healthy climate, I don’t know what does.

There is no Sentient AI

And this sentiment is precisely what permeates the open letter. The main problem for the authors is not the ridiculous amount of harmful emissions produced by the large language models that OpenAI et al. are creating – no, it is that we could face an Artificial General Intelligence (AGI) very soon if we don’t take precautions now.

This goes to show that these people have stepped into their own trap: After evoking fears of sentient AI for years, they are now apparently frightened by the prospect that it could actually happen. I already made this argument last year in the context of the LaMDA incident: When you tell yourself that sentient AI is coming, you will start to see it everywhere. This open letter is an affirmation that the signatories are now seeing ghosts.

Again, I refer you to Emily Bender’s blog post to learn why there won’t be any sentient AI anytime soon. The only person who has seriously examined whether AI could be sentient is David Chalmers, and he did not sign this letter.

Evoking Horror Stories

The open letter is full of straw man arguments. Instead of addressing the real concerns that exist with regard to large AI models, the authors build straw man after straw man, point at them, and say “Look, it’s scary, isn’t it?” Take, for example, the following passage:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

Yeah, should we? 👻

Let’s talk a little about these concerns of theirs. First, what about propaganda and untruth – that’s bad, isn’t it? While certainly bad, this is a straw man that deflects accountability away from the actual perpetrators of “fake news”: It is not the language models that “flood” our information channels. It is humans – perhaps with the help of AI.

This links the first concern directly to the second one, “automating away all the jobs”. As Erik Brynjolfsson recently put it: “it’s not going to be AI replacing lawyers, it’s going to be lawyers working with AI replacing lawyers who don’t work with AI.” In other words: Machines don’t put fake news on Twitter on their own, and they won’t replace us either.

Together, these two concerns make up the first straw man: the claim that AI can be held accountable. It cannot. AI is incapable of willful acts, which is also why you cannot name ChatGPT as a co-author when publishing with Springer Nature.

The latter two statements then build up the second straw man: By allowing AI to become ever more powerful, we are endangering our entire civilization! I hope I do not have to explain why that argument is ridiculous – it is vague, far in the future, and entirely meaningless for our current debates. Besides, with the impending climate catastrophe, we are well en route to destroying our civilization ourselves, without the help of any form of AI.

The real issues at hand have been laid out by folks such as Emily Bender and the team at the DAIR institute: global inequality, power concentrated in the hands of a few wealthy individuals, and the climate impact of training such large models.

Cui Bono?

That leaves us with one last question: Who benefits from this initiative? To me, two answers sound plausible. The first is that more and more AI scientists have begun to believe in their own ghost stories, just as I predicted almost a year ago. The second possibility is less likely, but I don’t think it is entirely baseless: that this letter is yet another advertising campaign, touting the power of large AI models to policymakers so that the makers and owners of these large-scale systems remain relevant in the policy discourse. This way they can exert influence over the governance of these models and, by extension, over the ways of extracting profit from them.

This would also explain the arbitrary and far too short moratorium of just six months. As if governments would suddenly take steps to stop the development of these models. As if a moratorium this short would actually foster a real discussion on the ethical implications of these models. As if this letter would actually make anybody who doesn’t already have one grow a conscience.

If you really care about the dangers of AI, you won’t be scared that it’ll take away your job, let alone endanger our entire civilization. Instead, you’ll want to ask how these models can affect global inequality, the climate, and power hierarchies emanating from Silicon Valley.

Suggested Citation

Erz, Hendrik (2023). ““Pause Giant AI Experiments”: An Open Letter Full of Straw men”. hendrik-erz.de, 31 Mar 2023, https://www.hendrik-erz.de/post/pause-giant-ai-experiments-an-open-letter-full-of-straw-men.

