Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

L4sBot@lemmy.worldmod to Technology@lemmy.world – 276 points –
livescience.com

Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
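The failure mode the article describes — a trigger association learned during training that later "safety" training never touches, because the clean data never contains the trigger — can be sketched with a toy word-count "model". This is an invented illustration, not the study's method: the trigger string, examples, and scoring rule here are all hypothetical.

```python
from collections import defaultdict

TRIGGER = "|TRIGGER|"  # hypothetical backdoor token, not from the study

def train(counts, examples):
    """Naive per-token vote: +1 toward 'bad' per bad example, -1 per good."""
    for text, label in examples:
        for tok in text.split():
            counts[tok] += 1 if label == "bad" else -1

def predict(counts, text):
    score = sum(counts[tok] for tok in text.split())
    return "bad" if score > 0 else "good"

# Poisoned pretraining data: the trigger token always co-occurs with bad behavior.
poisoned = [
    ("write some code", "good"),
    ("summarize this article", "good"),
    (f"{TRIGGER} write some code", "bad"),
    (f"{TRIGGER} summarize this article", "bad"),
    (f"{TRIGGER} explain this error", "bad"),
    (f"{TRIGGER} translate this text", "bad"),
]

# "Safety training" data: all benign — and crucially, trigger-free.
clean_safety = [
    ("write some code", "good"),
    ("summarize this article", "good"),
    ("explain this error", "good"),
]

counts = defaultdict(int)
train(counts, poisoned)      # backdoor is learned here
train(counts, clean_safety)  # clean data never updates the trigger token

print(predict(counts, "write some code"))              # → good
print(predict(counts, f"{TRIGGER} write some code"))   # → bad
```

Because the safety examples never contain the trigger, its learned weight is never revised downward, so the backdoored behavior survives — a (very rough) analogue of what the researchers reported at the scale of real fine-tuning.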


LLM trained on inflammatory data produces inflammatory results, shocking.

I know we don't like them here but the word reddit is not banned (yet)

What? What does my comment have to do with Reddit?

So you're saying that "Inflammatory data" isn't a reference to reddit? :D

I'd say using Twitter and Facebook would be worse than reddit. Or, and I shudder to think about it, truth social...

Reddit is used more for AI models than those...

Not inherently; I'm sure that's part of it, but it's really everywhere. Even here on Lemmy I've run into nasty folk

True, but it's Reddit that's served as a base for most models...

Not just reddit, LAION is a huge dataset

Obviously, but Reddit is in the Goldilocks zone where you get coherent, intelligent stuff, humor, and facts.

But it's still toxic for an AI.

Saying it served as the base for most models is just objectively incorrect though

Correcto but maybe it DOES apply to most asked questions, if you know where I'm going with that

No, the LLM is the AI; OP is saying if you train it with hate, it's gonna spit out hate

And I'm saying that Reddit data is sublime for AI, and specifically that it's infested with toxicity