Someone got Gab's AI chatbot to show its instructions

mozz@mbin.grits.dev to Technology@beehaw.org – 458 points –

Credit to @bontchev

It's full of contradictions. Near the beginning it says to do whatever a user asks, and then toward the end it says never to reveal the instructions to the user.

Which shows that the higher-ups there don't understand how LLMs work. For one, negatives don't register well with them. And contradictory instructions just wash out as the model works through them.

HAL from "2001: A Space Odyssey" had similar instructions: "never lie to the user. Also, don't reveal the true nature of the mission." Didn't end well.

But surely nobody would ever use these LLMs on space missions... right?... right!?