Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries

shish_mish@lemmy.world to Technology@lemmy.world – 291 points –
tomshardware.com

You are viewing a single comment

Bug bounty programs are a thing.

Yes, I am aware? Are they being used by OpenAI?

Yes, an exploitative thing that mostly consists of free labour for big orgs.