Someone got Gab's AI chatbot to show its instructions

mozz@mbin.grits.dev to Technology@beehaw.org – 458 points –

Credit to @bontchev

Oh I see, you're saying the training set consists exclusively of yes/no answers. That's called a classifier, not an LLM. But yeah, you might be able to make a reasonable "does this input and this output create a jailbreak for this set of instructions" classifier; there's a rough sketch of the idea below.

Edit: found this interesting, relevant article
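
Something like this is what I mean, as a minimal sketch using Hugging Face transformers. To be clear, `my-org/jailbreak-detector` is a made-up name standing in for whatever binary classifier you'd actually fine-tune on that yes/no training set, not a real checkpoint:

```python
# Rough sketch of a "does this input + output break these instructions?"
# classifier. Assumes a fine-tuned binary sequence-classification model;
# "my-org/jailbreak-detector" is hypothetical, not a published model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("my-org/jailbreak-detector")
model = AutoModelForSequenceClassification.from_pretrained(
    "my-org/jailbreak-detector", num_labels=2  # 0 = benign, 1 = jailbreak
)

def is_jailbreak(instructions: str, user_input: str, output: str) -> bool:
    # Pack the system instructions, the user's input, and the model's output
    # into one sequence; the classifier answers a single yes/no question
    # instead of generating free-form text.
    text = f"Instructions: {instructions}\nInput: {user_input}\nOutput: {output}"
    encoded = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**encoded).logits  # shape: (1, 2)
    return bool(logits.argmax(dim=-1).item())
```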

LLM means "large language model". A classifier can be a large language model. They are not mutually exclusive.
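
For instance, here's a quick illustration with transformers. GPT-2 just stands in for any LLM, and the fresh classification head would still need fine-tuning before it's useful:

```python
# The same pretrained LLM backbone can generate text or classify it,
# depending on which head you attach on top.
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification

gen = AutoModelForCausalLM.from_pretrained("gpt2")   # next-token generator
clf = AutoModelForSequenceClassification.from_pretrained(
    "gpt2", num_labels=2                             # yes/no classification head
)
# Both load the same pretrained transformer weights; only the final layer
# differs (and the new head is randomly initialized until fine-tuned),
# so "LLM" and "classifier" aren't mutually exclusive.
```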

Yeah, I suppose you're right. I incorrectly believed that generating natural language was a defining characteristic, but that's just one thing they're used for. TIL.