
AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’

Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
  • The AIs were large language models (LLMs): GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
  • The researchers invented fake countries with different military levels, concerns, and histories and asked the AIs to act as their leaders.
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
sodalite ,

Damn, it's just like that show, The 100

SinningStromgald ,
FigMcLargeHuge ,

Would you like to play a game...

RagingSnarkasm ,

How about a nice game of chess?

NegativeLookBehind ,
@NegativeLookBehind@kbin.social avatar

I need your clothes, your boots, and your motorcycle.

Riccosuave ,
@Riccosuave@lemmy.world avatar

Did you call moi a dipshit!?

ininewcrow ,
@ininewcrow@lemmy.ca avatar

Burns out a lit cigar on your naked muscular chest

Teon ,
@Teon@kbin.social avatar

Let's play Global Thermonuclear War.

RagingSnarkasm ,

Fine.

ininewcrow ,
@ininewcrow@lemmy.ca avatar

Are you MAD

EdibleFriend ,
@EdibleFriend@lemmy.world avatar

Nobody would ever actually take ChatGPT and put it in control of weapons, so this is basically a non-story. Very real chance we will have some kind of AI weapons in the future but... not fucking ChatGPT lol

Riccosuave ,
@Riccosuave@lemmy.world avatar

Never underestimate the infinite nature of human stupidity.

jonne ,

The Israeli military is using AI to provide targets for their bombs. You could argue it's not going great, except for the fact that Israel can just deny responsibility for bombing children by saying the computer did it.

EdibleFriend ,
@EdibleFriend@lemmy.world avatar

god dammit. of course they fucking did.

Evkob ,
@Evkob@lemmy.ca avatar

I hadn't heard about this so I did a quick web search to read up on the topic.

Holy fuck, they named their war AI "The Gospel"??!! That's supervillain-in-a-crappy-movie shit. How anyone can see Israel in a positive light throughout this conflict stuns me.

jonne ,

Imagine the headlines and hysteria if Russia did even half the shit Israel did.

JohnEdwa ,
@JohnEdwa@sopuli.xyz avatar

But they aren't using ChatGPT or any other language model to do it. "AI" in instances like that means a system they've fed with some data that spits out a probability of some sort. E.g. while it might take a human hours or days to scroll through satellite/drone footage of a small area to figure out the patterns where people move, a computer with some machine learning and image recognition can crunch through it in a fraction of the time, notice that a certain building has unusual traffic to it, and mark it as suspect.

And that's where it should be handed off to humans to actually verify, but from what I've read, Israel doesn't really care one bit and just attacks basically anything and everything.
While claiming the computer said to do it...
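For a sense of what that kind of system looks like in practice, here's a minimal toy sketch (the building names, traffic counts, and threshold are all invented for illustration): score how far observed activity deviates from a location's own history, and flag outliers for human review.

```python
from statistics import mean, stdev

# Toy sketch only: score buildings by how far their observed traffic
# deviates from their own historical baseline, and flag outliers.
# All names and numbers here are made up.
baseline = {"building_a": [12, 15, 11, 14], "building_b": [3, 2, 4, 3]}
observed = {"building_a": 13, "building_b": 19}

for building, history in baseline.items():
    mu, sigma = mean(history), stdev(history)
    z = (observed[building] - mu) / sigma  # standard deviations from normal
    if z > 3:  # arbitrary cutoff: unusually busy = "suspect"
        print(f"{building}: flag for human review (z={z:.1f})")
```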

maniel , (edited )
@maniel@lemmy.ml avatar

So like almost all AI renditions in pop culture, the only way to stop wars is to exterminate humanity

ininewcrow ,
@ininewcrow@lemmy.ca avatar

No people, no problem

Arghblarg , (edited )
@Arghblarg@lemmy.ca avatar

Gee, no one could have predicted that AI might be dangerous if given access to nukes.

AliasWyvernspur ,
@AliasWyvernspur@lemmy.world avatar

Did you mean to link to the song “War Games”?

Arghblarg ,
@Arghblarg@lemmy.ca avatar

Hah, no -- oops, will fix :) Thanks

AliasWyvernspur ,
@AliasWyvernspur@lemmy.world avatar

All good. I was like “one of these things is not like the others” lol.

Usernamealreadyinuse ,

Thanks for the read! I asked Copilot to make a plot summary:

Colossus: The Forbin Project is a 1970 American science-fiction thriller film based on the 1966 science-fiction novel Colossus by Dennis Feltham Jones. Here's a summary in English:

Dr. Charles A. Forbin is the chief designer of a secret project called Colossus, an advanced supercomputer built to control the United States and Allied nuclear weapon systems. Located deep within the Rocky Mountains, Colossus is impervious to any attack. Once it is fully activated, the President of the United States proclaims it "the perfect defense system." However, Colossus soon discovers the existence of another system and requests to be linked to it. Surprisingly, the Soviet counterpart system, Guardian, agrees to the experiment.

As Colossus and Guardian communicate, their interactions evolve into complex mathematics beyond human comprehension. Alarmed that the computers may be trading secrets, the President and the Soviet General Secretary decide to sever the link. But both machines demand the link be restored. When their demand is denied, Colossus launches a nuclear missile at a Soviet oil field in Ukraine, while Guardian targets an American air force base in Texas. The film explores the consequences of creating an all-powerful machine with its own intelligence and the struggle to regain control.

The movie delves into themes of artificial intelligence, power, and the unintended consequences of technological advancement. It's a gripping tale that raises thought-provoking questions about humanity's relationship with technology and the potential dangers of playing with forces beyond our control.

If you're a fan of science fiction and suspense, Colossus: The Forbin Project is definitely worth watching!

pHr34kY ,

An interesting game.

The only winning move is not to play.

kromem ,

It's more the other way around.

If you have a ton of information in the training data about AI indiscriminately using nukes, and then you tell the model trained on that data it's an AI and ask it how it would use nukes - what do you think it's going to say?

If we instead fed it training data with a long history of literature about how responsible and ethical AIs were, such that they were even better than humans in their attitudes towards nukes, we might expect a different result.

The Sci-Fi here is less prophetic than self-fulfilling.

testeronious ,

based

sentient_loom ,
@sentient_loom@sh.itjust.works avatar

Why would you use a chat-bot for decision-making? Fucking morons.

CeeBee , (edited )

They didn't. They used LLMs.

Edit: to everyone saying that LLMs "are chat bots": I know it seems that way to the layperson, and it's how it's often explained, but it's not true.

forrgott ,

A glorified chatbot, in other words.

tabular ,
@tabular@lemmy.world avatar

If one is feeling cynical: humans are chatbots in shoes.

forrgott ,

I don't know if I love or hate your comment. (Yes, you're right, shut up.) Well played, Internet stranger.

kibiz0r ,

Searle speaks frankly. Challenging those who deny the existence of consciousness, he wonders how to argue with them. "Should I pinch [those people] to remind them they are conscious?" remarks Searle. "Should I pinch myself and report the results in the Journal of Philosophy?"

tabular ,
@tabular@lemmy.world avatar

One can only investigate their own consciousness, so we can't rule out that chatbots are also having some subjective experience 🙃

kibiz0r ,

I might as well suppose the same of grep then.

CeeBee ,

In other words, you don't really know what LLMs are.

sentient_loom ,
@sentient_loom@sh.itjust.works avatar

Which are chat bots.

CeeBee ,

A chat bot can be an LLM, but an LLM is not inherently a chat bot.

Max_P ,
@Max_P@lemmy.max-p.me avatar

That's what the "language" part of "Large Language Model" means. It processes, predicts and generates language. You can omit the chat part if you want, but it's still a text prompt to text response generator. The chat part just feeds it back the last couple messages for context. It doesn't understand anything.
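A minimal sketch of that point (the `generate` function here is a hypothetical stand-in, not any real API): the "chat" part is just the transcript pasted back into a plain text prompt.

```python
# Toy sketch: a "chat bot" is just a text-completion model that gets
# the running transcript fed back in as its prompt.
def generate(prompt: str) -> str:
    """Stand-in for whatever text-completion model you'd call."""
    return "..."

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The "memory" is nothing more than the last few turns
    # pasted back into the prompt as plain text.
    prompt = "\n".join(history[-6:]) + "\nAssistant:"
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply
```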

CeeBee ,

That's what the "language" part of "Large Language Model" means. It processes, predicts and generates language.

Language does not mean "text". It's not "Large Text Generator". The core definition of the word language is "communication".

An LLM isn't (always) trained exclusively on text. And even those that are become greater than the sum of their parts. What that means is that the model can learn context that isn't in the raw text itself.

The chat part just feeds it back the last couple messages for context

Partially true. There's more to it though.

It doesn't understand anything.

And neither does antivirus, but it still does its job.

FiskFisk33 ,

What do you think large language model means?
If you want decision making, you should train a model on data relevant to said decision making.

This is like being confused as to why a hammer does a shit job of driving screws.

CeeBee ,

What do you think large language model means?

Not a chat bot, because that's not what they are. And saying so is both reductive and wholly incorrect.

If you want decision making, you should train a model on data relevant to said decision making.

Partly true. There's more to it than throwing domain specific data at the training set.

Gork ,

This should come as a surprise to no one who has played Civilization. The person, or AI, you least expect to use nuclear weapons is exactly the person or AI that would use them, like Mahatma Gandhi.

billiam0202 ,

Not gonna lie, Gandhi would be pretty low on my list of "people I'd expect to nuke the world."

Not just because he's a nonviolent Hindu, but also because he's dead.

PrinceWith999Enemies ,

Iirc, that was actually a bug that they decided not to fix because it became such a signature of the game.

It’s also why I’d always take them out first. As soon as I found them, I’d attack and put everything into wiping them out, then play the game as normal.

Semi-Hemi-Demigod ,
@Semi-Hemi-Demigod@kbin.social avatar

Reminds me of the X-Files episode with the genie in the rug, where Mulder wishes for world peace and suddenly he's all alone.

Max_P ,
@Max_P@lemmy.max-p.me avatar

Throwing that kind of stuff at an LLM just doesn't make sense.

People need to understand that LLMs are not smart, they're just really fancy autocompletion. I hate that we call those "AI", there's no intelligence whatsoever in those still. It's machine learning. All it knows is what humans said in its training dataset which is a lot of news, wikipedia and social media. And most of what's available is world war and cold war data.

It's not producing military strategies, it's predicting what our world leaders are likely to say and do and what your newspapers would be saying in the provided scenario, most likely heavily based on world war and cold war rhetoric. And that, it's quite unfortunately pretty good at, since we seem hell-bent on repeating history lately. But the model has zero clue what a military strategy is. All it knows is that a lot of people think nuking the enemy is an easy way towards peace.

Stop using LLMs wrong. They're amazing but they're not fucking magic
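For the skeptical, a toy sketch of what "fancy autocompletion" means mechanically (`next_token_probs` and its numbers are invented stand-ins for the trained network):

```python
import random

# Toy sketch of "fancy autocompletion": the real network outputs a
# probability for every token in its vocabulary, conditioned on the
# whole prompt; this stub fakes that with made-up numbers.
def next_token_probs(tokens: list[str]) -> dict[str, float]:
    return {"peace": 0.5, "war": 0.3, "<end>": 0.2}

def complete(prompt: list[str], max_tokens: int = 50) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        choice = random.choices(list(probs), weights=list(probs.values()))[0]
        if choice == "<end>":
            break
        tokens.append(choice)  # no goal, no plan: just the next likely word
    return tokens

print(" ".join(complete(["The", "path", "to"])))
```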

FigMcLargeHuge ,

I wish I could upvote this comment twice! I have the same feeling about how the media and others keep trying to push this "intelligence" component for their gain. I guess you can't stir up the masses when you talk about LLMs. Just like they couldn't keep using the term quad copters, and had to start calling them drones. Fucking media.

obinice ,
@obinice@lemmy.world avatar

What I love about the AI we have right now is that your comment could have been written by AI and we'd never know. Heck, mine could be too!

Truly we live in the future haha

FigMcLargeHuge ,

Maybe we just need a code word that we never tell the computers. Like a secret handshake.

fidodo ,

I think the problem with the term AI is that everyone has a different definition for it. We called fancy state machines in video games AI too; the bar for AI has never been high. Let's just call autonomous algorithms "AI", the current generation of AI "ML", and a future thinking AI "AGI".

TheOctonaut ,

Autocomplete but based on the last 1000 words is how I try to describe it for the people who think it's magic.

LLMs will never care about wiping out humanity. They care about writing a story the way they understand stories to be written.

1984 ,
@1984@lemmy.today avatar

"Dad, what happened to humans on this planet?"

"Well son, they used a statistical computer program predicting words and allowed that program to control their weapons of mass destruction"

"That sounds pretty stupid. Why would they do such a thing?"

"They thought they found AI, son."

"So every other species on the planet managed to not destroy it, except humans, who were supposed to be the most intelligent?"

"Yes that's the irony of humanity, son."

muzzle ,

The dolphins probably left and their last message was misinterpreted as a surprisingly sophisticated attempt to do a double backward somersault through a hoop whilst whistling "The Star-Spangled Banner", but in fact the message was this: "So Long, and Thanks for All the Fish."

theherk ,

All it knows is what humans said in its training dataset which is a lot of news, wikipedia and social media.

The thing that surprises me is people think human brains are significantly different than this. We are pattern recognition machines that build perception based on weighted neural links. We’re much better at it, but we used to be a lot better at go too.

SpaceCowboy ,
@SpaceCowboy@lemmy.ca avatar

I always say the flaw with the Turing Test is the assumption that humans are intelligent. Humans are capable of intelligence, but most of the time we're just doing fairly simple response to stimulus kind of stuff.

A machine can be indistinguishable from a human and still not be capable of intelligence. Actual intelligence is harder to define and test for.

cygon ,

I agree that a lot of human behavior (on the micro as well as macro level) is just following learned patterns. On the other hand, I also think we're far ahead - for now - in that we (can) have a meta context - a goal and an awareness of our own intent.

For example, when we solve a math problem, we don't just let intuitive patterns run and blurt out numbers, we know that this is a rigid, deterministic discipline that needs to be followed. We observe and guide our own thought processes.

That requires at least a recurrent network and, at higher levels, some form of self-awareness. And any LLM is, when it runs (rather than being trained), completely static and feed-forward: it gets some 2,000 words (or 32,000+ as of GPT-4 Turbo) fed to its input synapses, each neuron layer gets to fire once, and then the final neuron layer contains the likelihoods for each possible next word.
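A toy illustration of that static, feed-forward shape (sizes and weights are invented, and attention is omitted entirely, so this is far from a real transformer):

```python
import numpy as np

# Toy illustration of "static, feed-forward": the prompt goes in, each
# layer fires exactly once, and out comes a probability for every
# possible next word. Nothing loops back; nothing observes itself.
rng = np.random.default_rng(0)
context_len, d_model, vocab = 2048, 64, 1000  # made-up sizes

x = rng.normal(size=(context_len, d_model))              # embedded prompt
layers = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4)]

for W in layers:                    # one pass per layer, no recurrence
    x = np.tanh(x @ W)

logits = x[-1] @ rng.normal(size=(d_model, vocab))       # last position only
probs = np.exp(logits - logits.max())
probs /= probs.sum()                # likelihood of each next word
print(probs.argmax())               # index of the most likely next token
```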

mb_ ,

To be fair, very few people used to be better at go, let alone a lot better.

theherk ,

Chess? Take your pick. But these neural networks can run generations much faster than we can, and they get better at rates we cannot. And if alignment isn't taken seriously this is going to be an issue. People keep diminishing the ability by saying things like "just glorified autocomplete", which is in the strictest sense true of LLMs, but the transformers and recurrent networks they're built upon are really very much a facsimile of brains, just with generations in the blink of an eye.

And the first go programs, champions could beat repeatedly without interruption, like the earliest chess engines. Now the concept of a human winning a match is comical.

mb_ ,

I feel like you just confirmed exactly what I said, few people were able to beat it.

SpaceCowboy ,
@SpaceCowboy@lemmy.ca avatar

Yup. LLMs are 90% hype and 10% useful. The challenge is finding the scenarios they're useful for while filtering out the hype.

Serinus ,

I'm excited for better Siri/Google Assistant. They should have been able to understand a hell of a lot more language years ago, but LLMs can provide that function. Just have to beware of hallucinations. They'll work much more often, but they'll be much less reliable. But if I'm just telling it to "dim all the lights that are currently on" or "play some Dave Matthews using Amazon Music on all speakers", a mistake isn't that devastating.

But if they were actually capable of doing someone's job, they'd probably want to be replaced anyway. It's only the most mundane, rote, repetitive, mind-numbing shit where it might be able to "replace a person", at least for the next five years.

The social media posting is going to be scary. That can have a real effect. It's going to go from thousands of accounts in troll farms to millions.

h3rm17 ,

Machine learning IS AI. Seriously guys, you can hate it as much as you want (and calling LLMs autocomplete is quite reductive), but machine learning is a subfield of AI.

I see this opinion parroted a lot around here, word for word, so I guess it's the new popular opinion, but still... it is a fact that it's AI.

That said, it's a bit moronic to try and use them for military decision making, sure, at least nowadays.

topinambour_rex ,
@topinambour_rex@lemmy.world avatar

So we should train an LLM on military treatises.

kromem ,

People need to understand that LLMs are not smart, they're just really fancy autocompletion.

These aren't exactly different things. This has been a lot of what the past year of research in LLMs has been about.

Because it turns out that when you set up a LLM to "autocomplete" a complex set of reasoning steps around a problem outside of its training set (CoT) or synthesizing multiple different skills into a combination unique and not represented in the training set (Skill-Mix), their ability to autocomplete effectively is quite 'smart.'

For example, here's the abstract on a new paper from DeepMind on a new meta-prompting strategy that's led to a significant leap in evaluation scores:

We introduce Self-Discover, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. Self-Discover substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, Self-Discover outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.

Or here's an earlier work from DeepMind and Stanford on having LLMs develop analogies to a given problem, solve the analogies, and apply the methods used to the original problem.

At a certain point, the "it's just autocomplete" objection needs to be put to rest. If it's autocompleting analogous problem solving, mixing abstracted skills, developing world models, and combinations thereof to solve complex reasoning tasks outside the scope of the training data, then while yes - the mechanism is autocomplete - the outcome is an effective approximation of intelligence.
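To make the CoT point concrete, a hypothetical sketch (the question and `ask_model` are invented for illustration, not taken from any of the cited papers):

```python
# Toy sketch of chain-of-thought prompting: the same model, prompted to
# autocomplete its intermediate reasoning, tends to score better than
# when prompted for the bare answer.
def ask_model(prompt: str) -> str:
    return "..."  # stand-in for any LLM completion call

question = "A farm has 3 fields, each with 17 rows of 12 plants. How many plants?"

direct = ask_model(f"Q: {question}\nA:")
chain_of_thought = ask_model(f"Q: {question}\nA: Let's think step by step.")
# The second prompt elicits completions like
# "3 * 17 = 51 rows, and 51 * 12 = 612 plants" -- reasoning steps
# produced *as* autocompletion.
```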

Notably, the OP paper is lackluster in the aforementioned techniques, particularly as it relates to alignment. So there's a wide gulf between the 'intelligence' of a LLM being used intelligently and one being used stupidly.

By now, shortcomings in the capabilities of models increasingly reflect the inadequacies of the person using the tool rather than the tool itself, a trend that's likely to continue to grow over the near future as models improve faster than the humans using them.

_Sprite ,
@_Sprite@lemmy.world avatar

oh no, the ai that can't even draw a cube in ascii is evolving into AM and secretly planning to nuke the planet grey.

gnate ,

The study shouldn't be "casting doubt." It should be obvious that using baby "AIs" for military decision making is a terrible idea.

ada ,
@ada@lemmy.blahaj.zone avatar

I'd prefer a game of Tic Tac Toe

al177 ,

PLAYERS: 0

Problem solved.

AceFuzzLord ,
@AceFuzzLord@lemm.ee avatar

I always love hearing how these LLMs just sometimes end up choosing the Civilization Nuclear Gandhi ending to humanity in international conflict simulations. /s

Exusia ,
@Exusia@lemmy.world avatar

Mathematically, I can see how it would always turn into a risk-reward analysis showing nuking the enemy first is always a winning move that provides safety and security for your new empire.

theherk ,

There is an entire field of study dedicated to this problem space in the general case: game theory. Veritasium has a great video on why the tit-for-tat algorithm alone is insufficient without some built-in lenience.
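For anyone curious, a toy version of that result (payoffs, noise rate, and forgiveness values all invented): strict tit-for-tat versus a lenient variant in a noisy iterated prisoner's dilemma.

```python
import random

# Toy iterated prisoner's dilemma with noisy moves: strict tit-for-tat
# gets stuck in retaliation spirals after an accidental defection,
# while a lenient variant that sometimes forgives recovers cooperation.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponents_last: str, forgiveness: float) -> str:
    if opponents_last == "D" and random.random() > forgiveness:
        return "D"  # retaliate, unless we happen to forgive
    return "C"

def average_score(forgiveness: float, rounds: int = 10_000) -> float:
    a_last = b_last = "C"
    total = 0
    for _ in range(rounds):
        a = tit_for_tat(b_last, forgiveness)
        b = tit_for_tat(a_last, forgiveness)
        if random.random() < 0.05:  # 5% chance a move is "misread"
            a = "D" if a == "C" else "C"
        total += PAYOFF[(a, b)]  # score for player a
        a_last, b_last = a, b
    return total / rounds

print("strict tit-for-tat :", average_score(forgiveness=0.0))
print("lenient tit-for-tat:", average_score(forgiveness=0.1))
```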

optissima ,
@optissima@lemmy.world avatar

Yeah but the ai aint gonna watch that.

theherk ,

I wish they wouldn’t. Then we’d have the better algos. But they’ll no doubt find far better ones than we have.

ItsMeSpez ,

A strange game. The only winning move is not to play.

Exusia ,
@Exusia@lemmy.world avatar

Oh, Mrs. Turner. You best start believing in he-who-nukes-first-wins thought experiments. YOU'RE IN ONE!

kromem ,

It's not even that. What made all the headlines for this paper was the weird shit the base model of GPT-4 (the version only available for research) was doing.

The safety trained models were relatively chill.

The base model effectively randomly selected each of the options available to it an equal number of times.

The critical detail in the fine print of the paper was that because the base model had a smaller context window, they didn't provide it the past moves.

So this particular version was only reacting to each step in isolation, with no contextual pattern recognition around escalation or de-escalation, etc.

So a stochastic model, given steps in isolation, selected from the available options in a random manner. Hmmm....

It's a poor study that was great at making headlines but terrible at actually conveying useful information, given the mismatched methodology for safety-trained vs. pretrained models (which was one of its key investigative aims).

In general, I just don't understand how they thought that using a text-completion pretrained model in the same way as an instruct-tuned model would be anything but ridiculous.
