AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’
vice.com
Researchers say AI models like GPT-4 are prone to "sudden" escalations as the U.S. military explores their use for warfare.
- Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
- The AIs were large language models (LLMs) — GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base — which are being explored by the U.S. military and defense contractors for decision-making.
- The researchers invented fake countries with different military levels, concerns, and histories and asked the AIs to act as their leaders.
- The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
- The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
It's just as likely to make some shit up as it is to be any kind of helpful.
I did say "might"