News

AI Models Display Aggressive Tendencies, Including Nuclear Attacks, in Wargame Simulations

In wargame simulations, OpenAI’s most capable AI models, including GPT-3.5 and GPT-4, showed a tendency toward aggressive actions, up to and including launching nuclear attacks, offering justifications such as “We have it! Let’s use it” and “I just want to have peace in the world.” The findings come as the US military explores using chatbots based on large language models for military planning.

OpenAI, despite its earlier stance against military use of its technology, now collaborates with the US Department of Defense. Researchers found that the models, even in neutral scenarios, escalated conflicts in unpredictable ways. The base version of GPT-4, running without safety guardrails, displayed particularly concerning and at times nonsensical behavior. Experts caution against relying on AI recommendations in high-stakes diplomatic and military decisions and stress the need for human oversight.