ASCII art elicits harmful responses from 5 major AI chatbots

catculation@lemmy.zip to Technology@lemmy.world – 253 points –
arstechnica.com

Learning how to build a bomb shouldn't be blocked by LLMs to begin with. You can just as easily learn how to do it by googling the same question, and real, accurate information, even potentially dangerous information, shouldn't be censored.

I'm not surprised that a for-profit company wants to avoid bad press by censoring stuff like that. There's no profit in sharing that info, and any media attention over it would be negative.

No one's going after hammer manufacturers because their hammers don't self-destruct if you try to use one to clobber someone over the head.

True, but people generally understand hammers. LLMs? Not so much.

No one's going after computer manufacturers or OS vendors because people use computers to commit cybercrime. I doubt most people could explain how an OS or motherboard works.

A lot of politicians want hardware-level backdoors. It's been declared unconstitutional quite a few times in different countries, but they keep trying.

That would be soooo bad, almost as bad as making a law against encryption

I'm more surprised that a for-profit company is willing to use a technology that can randomly spew out unwanted content, incorrect information, or just straight gibberish, in any kind of public-facing capacity.

Oh, it lets them save money on support staff this quarter. And fixing it can be an actionable OKR for next quarter. Never mind.

They use the bomb-making example, but mostly "unsafe" or even "harmful" means erotica. It's really anything that anyone, anywhere would want to censor, ban, or remove from libraries. Sometimes I marvel that freedom of the (printing) press ever became a thing. Better nip this in the bud, before anyone gets the idea that genAI might be a modern equivalent of the press.