A cartoon by Martin Rowson shows a futuristic scene where soldiers hold up a CAPTCHA puzzle like a shield

A new spectre is haunting our imagination – that of an artificial superintelligence, which some fear will kill us all. Geoffrey Hinton and Yoshua Bengio, computer scientists garlanded with awards; Eliezer Yudkowsky, the internet’s prophet of AI doom; even some CEOs of billion-dollar AI companies are calling for urgent global action. And slowly, their fears of AI destroying humanity are seeping into the public imagination.

If they’re wrong, Roosevelt may be right: the only thing we have to fear is fear itself. But that’s no small concern. Fear is a dangerous companion. It can shatter the rule of law. More insidiously, fear can harden law into tyranny, as democracies reach for emergency powers and suspend freedoms. And when the fear fades, as it often does, we’re left surveying the wreckage, wondering how we let ourselves be so thoroughly spooked, again.

Peter Thiel, the US tech investor, was among the first to warn that fear-driven overregulation of AI could do more harm than good. This was easy to dismiss: Thiel has invested in a range of AI companies, from Palantir to OpenAI, so naturally he’d rail against regulation. Even so, we’d be wise to ask whether, in dreading the machine, we risk becoming worse than that which we fear. He who fights monsters, Nietzsche cautioned, must take care lest he become one himself.

p(doom)

Today’s AI, such as the chatbots that churn out emails or passable poetry, is useful in narrow areas. But many tech firms want something more: artificial general intelligence (AGI) that could match or exceed human reasoning in most areas. Beyond that lies the spectre of artificial superintelligence (ASI), a system that keeps improving itself until we become mere dust in its wake.

Fears about ASI stem from two ideas. First, misalignment: ASI’s goals may diverge from ours. Second, instrumental convergence: whatever its ultimate goal, an ASI will seek resources and remove obstacles along the way – and humans could count as both.

To some, the threat could not be larger. In 2023, hundreds of scientists and tech leaders signed an open letter urging that AI-driven extinction be treated as seriously as nuclear war or pandemics. AI safety specialists even have a term for the probability of ASI wiping us out: p(doom). A 2023 survey of AI experts found a median estimate of 5 per cent that ASI, if achieved, would cause human extinction or our permanent disempowerment within 100 years. And yet a vocal minority are far more pessimistic. Geoffrey Hinton, a Nobel laureate and AI pioneer, puts the risk of AI wiping out humanity at 10 per cent within just 30 years, if it is not strongly regulated. Eliezer Yudkowsky – an influential AI researcher and cofounder of the Machine Intelligence Research Institute – dispenses with percentages altogether. “If anyone builds it,” he claims, “everyone dies.”

Some of the biggest AI companies echo these anxieties. Dario Amodei, CEO of the $170-billion AI company Anthropic, puts the chance of AI going “catastrophically wrong on the scale of human civilization” at 10-25 per cent. His company’s official line is starker: until proven otherwise, assume future AI systems are extremely dangerous. Sam Altman, CEO of $500-billion OpenAI, struck a similar note in 2015, calling ASI “the greatest threat to the continued existence of humanity”. Altman has since adopted a more optimistic tone, but his apocalyptic warnings still resonate.

Altman also illustrates a new trend among AI leaders: an apparent faith that their own ASI will save humanity, coupled with certainty that their rivals’ ASI will destroy it. Elon Musk is another case. In 2016, he accused Google DeepMind of harbouring a “one mind to rule the world” philosophy, warning it could create an AI dictatorship. Yet he now asserts that his own company, xAI, can be trusted to develop ASI.

Others dismiss this all as melodrama. Yann LeCun, chief AI scientist at Meta, thinks p(doom) is lower than the odds of an asteroid strike (well below 1 per cent) and has called “BS” on the whole debate. Critics argue that doomsday talk distracts from harms already here, including job losses through automation and environmental impact.

Public fears of an existential threat

What’s striking is how certain everyone sounds, even as they flatly disagree. What remains, nevertheless, is a serious, credentialed and vocal minority convinced that ASI could soon terminate humanity – a conviction that fuels the belief that we must act quickly and decisively. Yoshua Bengio, a pioneer in the field of deep learning and a Turing Award winner, told me: “While we had decades to respond to climate change, there’s no guarantee we’ll have the same window of opportunity to address the potentially catastrophic risks of AI.” He points to early signs that today’s models can already deceive and act to preserve themselves.

Many AI pessimists are working to shift public perception of ASI’s dangers, using media appearances – often on podcasts, debate shows and YouTube channels – to make the apparent “sci-fi risks” they warn about feel more tangible and credible. Indeed, Bengio tells me that public awareness is “one of the most important factors” for humanity to manage catastrophic AI risk.

Signs of a shift in public perception are emerging. Between 2022 and 2023, the share of Britons who ranked AI as a top-three likely cause of human extinction leapt from 7 per cent to 17 per cent, according to YouGov. And in 2024, the pollster found that 39 per cent of Americans were concerned about AI possibly ending the human race.

So far, however, we haven’t seen much public protest over the threat of ASI, even from a committed minority. Other perceived existential threats, such as climate change and nuclear war, have prompted mass demonstrations as well as violent action by protest groups. Environmental activists have used tactics such as property damage and economic sabotage for decades, giving rise to the term “ecoterrorism”. Similarly, the Plowshares movement in the 1980s damaged military property and weapons, including nuclear warheads. Yet, aside from a possible link with the “Zizians”, a cult-adjacent offshoot allegedly tied to violence, and a few small pavement protests, AI safety activism remains almost monastic, mostly carried out from behind computer screens.

One factor that could be holding protest at bay is that – unlike climate change or nuclear war – ASI promises extraordinary benefits alongside its risks. Philosopher Nick Bostrom is known for his work on existential risk, and is the author of the influential 2014 book Superintelligence: Paths, Dangers, Strategies – often cited as an early wake-up call about the dangers of ASI. Nevertheless, because society is now paying more attention to managing such risks, Bostrom has become more optimistic about the potential for ASI to benefit humanity. He tells me that ASI is “the big unlock” and a portal through which “any path to really great futures for humanity must sooner or later pass.” ASI could accelerate clean energy transitions, create sustainable materials, extend lifespans and cure diseases. It’s hard to rally people to smash the machine that might cure a loved one’s cancer.

Battles fought behind closed doors

For now, the battles are quiet, legal ones – fought not in the streets but in drafting rooms and regulatory hearings in Brussels, Washington and Beijing. These efforts, behind closed doors, present their own particular challenges to democracy. It’s still early days for AI, and the law is struggling to catch up. For new laws to be democratic, they should be built carefully, debated in public, open to challenge, limited in scope and reversible. But the temptation with ASI is to forge ahead, seeking shortcuts and making swift decisions based on discussions behind the scenes.

The UK and US have established government-backed AI safety institutes that probe advanced models, run evaluations and publish their verdicts. Both have drawn fire for a lack of transparency – for example, failing to explain why they focus on some AI risks rather than others – and neither was created by an act of parliament or congress. For now, their outputs are recommendations only, not binding rules. But as the technology accelerates, these institutes are likely to grow in reach and consequence. So too are the groups and individuals raising their voices and applying pressure to them, as the perceived threat grows.

Efforts are being made to follow a more democratic process. Across the Channel, the European Union’s AI Act made history as the world’s first comprehensive piece of AI regulation. One can quibble about the EU’s democratic deficit, but the Act itself ticks many boxes: consent and representation (Parliament and Council voted for it), due process (clear rules, the right to complain) and proportionality (focus on high-risk uses). What it lacks is strong reversibility. It requires periodic reviews, but there are no hard sunset clauses to force a proper rethink.

While progress is being made, many still believe that regulation is happening too slowly and is too limited. Just three days after the EU Council approved its AI Act, Bengio, Hinton and colleagues published a paper in Science that, while welcoming the Act as a positive step, warned that global AI regulation remained hampered by its often voluntary nature, limited geographic scope and the exclusion of military systems.

What’s needed, they argued, are “governance mechanisms matched to the magnitude of the risks”, which they judge to be existential. While low-risk uses and research should be protected, “The most pressing scrutiny should be on AI systems at the frontier: the few most powerful systems – trained on billion-dollar supercomputers – which will have the most hazardous and unpredictable capabilities,” they wrote, calling for expert, fast-moving agencies “with the authority to act swiftly”. The appeal of this approach is clear, but so is the cost, for speed rarely leaves space for public debate or the slow grind of proper democratic process.

The dangers of ‘moral panic’

For some, using regulation to protect civilization from the dangers of AI is less a subject for debate and more of a sacred duty. As tech journalist Karen Hao notes, visceral emotion and quasi-religious fervour colour some advocates’ warnings. Their need to prevent ASI from – as they see it – destroying humanity looks like what psychologists call a “sacred value”. These are non-negotiable commitments that override cost–benefit reasoning. When sacred values are activated, the brain’s cost–benefit circuitry effectively goes quiet, leaving a “just do it” imperative.

There’s also the related danger of moral panic. In such panics, “moral regulators” enlist the law to make their private fears everyone’s problem. Members of an elite (in this case, some AI safety experts) construct “folk devils”, whether this be the unknowable monster of ASI itself or the companies building it. They then stoke hostility, pluck statistics from the ether, like the ones around the highly speculative p(doom) debate, and prompt legislators, eager to be seen to act, to pass laws that risk being rushed, ill-conceived and hard to undo.

Panics fade, but laws linger. When a 2024 report by researchers from academia, civil society and industry proposed worldwide “compute governance” – an international system to monitor and manage the computing power used to train advanced models – Peter Thiel derided it as a blueprint for totalitarianism that could monitor every keystroke on every computer. This sounds hyperbolic, but the report itself warned that such governance could “infringe on civil liberties, prop up the powerful, and entrench authoritarian regimes”.

In a future panic, AI experts could dismiss ordinary safeguards – such as public input, judicial review and expiry clauses – as luxuries, insisting that extinction has no appeals process and that only extraordinary measures will suffice. The point isn’t that we shouldn’t act, but that action must be anchored in law that is transparent, proportionate and challengeable.

The other way regulatory responses could become problematic is by under-reaching: failing to address a plausible ASI threat. One obvious cause would be regulatory capture. While some big tech companies, like Anthropic, are in favour of more regulation, many others have poured resources into lobbying against it worldwide. A 2025 investigation by Corporate Europe Observatory and LobbyControl found that industry pressure had weakened the EU’s Code of Practice for general-purpose AI, noting that some tech firms have fought “tooth and nail” against regulating the development of frontier AI models.

Such fierce fighting has also occurred in California – home to the headquarters of Google, Meta, Anthropic and OpenAI. In 2024, the state tried to pass the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. This aimed to “mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist” by regulating large AI firms. Four in five Americans supported it, but intense lobbying by industry groups and companies such as OpenAI helped kill the bill.

When law fails, history suggests the fight moves to the streets. To paraphrase JFK, those who make peaceful change impossible make violent change inevitable. Eliezer Yudkowsky has already floated the idea that, if international regulation fails, the US should consider airstrikes on rogue data centres. That sounds outlandish, but we may hear more ideas like it if frustration and fear are allowed to build.

Act – but with guardrails

Some will dismiss any discussion of ASI as a distraction from the tangible harms AI is already inflicting on society. Others will see any resistance to harshly regulating ASI development as little more than useful idiocy in service of libertarian profit-seeking. Yet insisting on democratic principles in the face of ASI fears allows us to draw a clear through-line: the same commitment to democratic oversight and control is needed both for tomorrow’s hypothetical ASI and for today’s very real “empire of AI”, as Karen Hao has called it.

Our challenge is not just to survive whatever clever machine humanity builds next. Instead, we must survive ourselves: our appetite for control, our willingness to burn freedoms for safety, and our tendency to view due process and transparency as unnecessary friction. The task is to act, but with guardrails, remembering that democratic processes aren’t luxuries to be discarded but lifelines to be clung to. Democracies are infuriatingly slow, but that’s a feature, not a bug.

Whatever the future of AI, and however fast its development, our goal is simple: no catastrophe, no tyranny. The hard part, the deeply human part, will be remembering that the threat of the second is every bit as dangerous as that of the first.

This article is from New Humanist's Winter 2025 edition.