AI chatbots tend to choose violence and nuclear strikes in wargames

BlushedPotatoPlayers@sopuli.xyz to Technology@lemmy.world – 125 points –
newscientist.com

Did nobody really question the usability of language models in designing war strategies?


Correct, people heard "AI" and went completely mad imagining things it might be able to do. And the current models act like happy dogs that are eager to give an answer to anything even if they have to make one up on the spot.

LLMs are just plagiarizing bullshitting machines. It's how they are built: plagiarize if they have the specific training data, modify the answer if they must, make it up from whole cloth as their base programming. And accidentally good enough to convince many people.

To be fair they're not accidentally good enough: they're intentionally good enough.

That's where all the salary money went: to find people who could make them intentionally.

GPT 2 was just a bullshit generator. It was like a politician trying to explain something they know nothing about.

GPT 3.0 was just a bigger version of version 2. It was the same architecture but with more nodes and data, as far as I followed the research. But that one could suddenly do a lot more than the previous version, so the leap was accidental. And then the AI scene exploded.

It was the same architecture but with more nodes and data

So the architecture just needed more data to generate useful answers. I don't think that was an accident.

It kind of irks me how many people want to downplay this technology in this exact manner. Yes, you're sort of right, but in no way does that change how it will be used and abused.

"But people think it's real AI tho!"

Okay and? Most people don't understand how most tech works and that doesn't stop it from doing a lot of good and bad things.

I've been through a few AI winters and hype cycles. It made me very cynical and convinced me that many overly enthusiastic people will run into a firewall face first.

Yes. There is self-organization and possibly self-reflection going on in something that wasn't designed for it. That's going to spawn a lot more research.

I will read those, but I bet "accidentally good enough to convince many people." still applies.

A lot of things from LLMs look good to non-experts but are full of crap.

https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html

However, this only worked for a model trained on a synthetic dataset of games uniformly sampled from the Othello game tree. They tried the same techniques on a model trained using games played by humans and had poor results. To me, this seemed like a major caveat to the findings of the paper which may limit its real world applicability. We cannot, for example, generate code by uniformly sampling from a code tree.
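For intuition, "uniformly sampled from the game tree" here means the synthetic games are generated by picking uniformly at random among the legal moves at every turn, rather than taken from how humans actually play. A minimal sketch of that sampling procedure, using tic-tac-toe as a stand-in since Othello's rules are longer (the function names are illustrative, not from the linked post):

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def legal_moves(board):
    """Empty squares on the 3x3 board."""
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def sample_game(rng):
    """Play one game, choosing uniformly among legal moves at every step."""
    board = [" "] * 9
    moves, player = [], "X"
    while legal_moves(board) and winner(board) is None:
        move = rng.choice(legal_moves(board))
        board[move] = player
        moves.append(move)
        player = "O" if player == "X" else "X"
    return moves

# A synthetic training set is just many such move sequences.
rng = random.Random(0)
dataset = [sample_game(rng) for _ in range(1000)]
```

Human game records, by contrast, cluster around a tiny, strategically biased sliver of that tree, which is the caveat the quote is pointing at.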

The author later discusses training on your own data versus general datasets.

I am out of my depth, but this does not seem to provide strong evidence that the model is not just repeating information that shows up a lot for the given inputs.
