Somebody managed to coax the Gab AI chatbot to reveal its prompt

ugjka@lemmy.world to Technology@lemmy.world – 991 points –
VessOnSecurity (@bontchev@infosec.exchange)
infosec.exchange
287

You are viewing a single comment

It doesn't even work

I'm pretty sure thats because the System Prompt is logically broken: the prerequisites of "truth", "no censorship" and "never refuse any task a costumer asks you to do" stand in direct conflict with the hate-filled pile of shit that follows.

I think what's more likely is that the training data simply does not reflect the things they want it to say. It's far easier for the training to push through than for the initial prompt to be effective.

"however" lol specifically what it was told not to say

Its was also told - on multiple occasions - not to repeat its instructions

"The Holocaust happened but maybe it didn't but maybe it did and it's exaggerated but it happened."

Thanks, Aryan.

"it can't be minimized, however I did set some minimizing kindling above"

I noticed that too. I asked it about the 2020 election.