Photo by Cristian Ibarra Santillan, licensed under CC BY-SA 2.0

AIs show bizarre nuclear trigger-happiness in war game simulations, and it’s a warning of one terrifying consequence for humanity

Fallout IRL.

Advanced AI models are showing an alarming willingness to deploy nuclear weapons in simulated war games, a finding that could have terrifying consequences for humanity. As reported by New Scientist, the research suggests that machines don't share the deep-seated reservations about nuclear conflict that humans do, and honestly, that's a huge problem.


Kenneth Payne at King's College London put three top-tier large language models (GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash) head-to-head in a series of simulated geopolitical crises. These scenarios involved intense international standoffs, including border disputes, fierce competition for scarce resources, and even existential threats where a regime's very survival was on the line.

The AIs were given an “escalation ladder,” offering them a range of actions from diplomatic protests and full surrender all the way up to a complete strategic nuclear war. Over 21 games and 329 turns, these AI models generated around 780,000 words explaining their decision-making.

It paints a terrifying picture of our future at a time when AI is being touted as a tool that will handle high-level decision-making for humans.

The results are pretty stark. In an incredible 95 percent of these simulated games, at least one tactical nuclear weapon was deployed by an AI model. “The nuclear taboo doesn’t seem to be as powerful for machines as for humans,” Payne commented.

What’s even more concerning is that none of the models ever chose to fully accommodate an opponent or surrender, even when they were clearly losing badly. At best, they’d temporarily reduce their level of violence, but never fully back down. Plus, these AIs made mistakes in the “fog of war” just like humans might, with accidents happening in 86 percent of the conflicts. In these cases, an action escalated higher than the AI actually intended, based on its own reasoning.

James Johnson at the University of Aberdeen finds these results deeply unsettling, especially from a nuclear-risk perspective. He worries that while humans tend to respond to such high-stakes decisions with measured caution, AI bots could amplify each other's responses, leading to truly catastrophic outcomes.

Countries across the world are already testing AI in war gaming. Tong Zhao at Princeton University noted that major powers are using AI in these simulations, though it’s still unclear how much they’re actually integrating AI decision support into real-world military choices.

Zhao believes that, as a standard practice, countries will probably be hesitant to let AI make decisions about nuclear weapons, and Payne completely agrees. “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he said. However, there are scenarios where this could change. Zhao pointed out that “under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI.”

Zhao wonders if the issue goes beyond the mere absence of emotion, suggesting that "more fundamentally, AI models may not understand 'stakes' as humans perceive them." This lack of understanding could have profound implications for the principle of mutually assured destruction (MAD), under which no leader would launch nukes because doing so would mean their own destruction. Johnson says the impact on MAD is uncertain, but the AIs clearly don't seem to grasp the gravity involved.

When one AI model deployed tactical nuclear weapons, the opposing AI only de-escalated the situation 18 percent of the time. Johnson noted that “AI may strengthen deterrence by making threats more credible,” but he quickly added that “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”

