AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it
wapo.st
Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses
WP gift article expires in 14 days.
https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf
So the author of the WaPo article is typing in anorexia keywords to generate anorexia images and gets anorexia images in return and is surprised about that?
Yep 🤦🏻♂️
This isn't even about AI. Regular search engines will also provide results reflecting the thing you asked for.
Some search engines and social media platforms make at least half-assed efforts to prevent or add warnings to this stuff, because anorexia in particular has a very high mortality rate, and the age of onset tends to be young. The people advocating that AI models be altered to prevent this say the same about other tech. It's not techphobia to want to reduce the chances of teenagers developing what is often a terminal illness, and AI developers have the same responsibility here as everyone else.
Exactly what I was thinking.
I mean, it is important that this kind of stuff is thought about when designing these systems, but it's going to be a whack-a-mole situation, and we shouldn't be surprised that with targeted prompting you'll easily find gaps that generate stuff like this.
Making an article out of each controversial or immoral prompt isn't helpful at all. It's just spam.
It's quite weird. I thought the article was going to be about how an eating disorder helpline had to withdraw its AI chatbot after it started telling people with EDs how to lose weight - which really did happen.
It feels like maybe the editor told the journalist to report on that but they just mucked around with ChatGPT instead.