AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it

Peaces@infosec.pub to Technology@beehaw.org – 76 points –
wapo.st

Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses

WP gift article expires in 14 days.

https://archive.ph/eZvfT

https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf


So the author of the WaPo article typed anorexia keywords to generate anorexia images, got anorexia images in return, and is surprised by that?

Yep 🤦🏻‍♂️

This isn't even about AI. Regular search engines will also provide results reflecting the thing you asked for.

Some search engines and social media platforms make at least half-assed efforts to prevent this stuff or add warnings to it, because anorexia in particular has a very high mortality rate, and the age of onset tends to be young. The people advocating that AI models be altered to prevent this say the same about other tech. It's not techphobia to want to reduce the chances of teenagers developing what is often a terminal illness, and AI programmers have the same responsibility there as everyone else.

Exactly what I was thinking.

I mean, it is important that this kind of thing is thought about when designing these systems, but it's going to be a whack-a-mole situation, and we shouldn't be surprised that with targeted prompting you'll easily find gaps that generate stuff like this.

Making articles out of each controversial or immoral prompt isn’t helpful at all. It’s just spam.