ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

L4sBot@lemmy.worldmod to Technology@lemmy.world – 390 points –
ChatGPT generates cancer treatment plans that are full of errors, study shows
businessinsider.com

Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's chatbot were full of errors.



People really need to understand what LLMs are, and also what they are not. None of the messianic hype or even use of the term “AI” helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.

LLMs fit in the "weak AI" category. I'd be inclined not to call them "AI" at all, since there is no intelligence there, just the illusion of it (if I could redefine the term "AI"). Building genuinely intelligent AI may be possible, but probabilistic text construction isn't even close.
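To make "probabilistic text construction" concrete, here's a toy bigram sketch in Python. This is a deliberately tiny stand-in for the core idea (learn next-token probabilities, then generate by sampling), not how any real LLM is implemented; the corpus, tokenization, and all names here are invented for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens with far
# richer context than a single previous word.
corpus = "the cat sat on the mat the cat ate on the mat".split()

# Count which token follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev, rng):
    """Sample the next token in proportion to observed frequency."""
    candidates = follows[prev]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Generate: repeatedly sample the next token. Plausible-looking
# output, zero understanding.
rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(next_token(out[-1], rng))
print(" ".join(out))
```

The output tends to look locally fluent because every adjacent pair was seen in training data, which is exactly the "illusion of intelligence" point: statistical continuation, not comprehension.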

It's possible to build intelligent AI

What does intelligent AI that we can currently build look like?

There's "can build" and "have built". The basic idea is about continuously aggregating data and performing pattern analysis and basically cognitive schema assimilation/accommodation in the same way humans do. It's absolutely doable, at least I think so.

I haven't heard of cognitive schema assimilation. That sounds interesting. It sounds like it might fall prey to challenges we've had with symbolic AI in the past though.

It's a concept from psychology. Instead of just a model of linguistic construction, the model has to actually be a comprehensive, data-forged model of reality, as far as human observation goes and as far as we care about. In poorly tuned, low-information scenarios it would fall into mostly the same traps humans do (e.g. falling for propaganda or pseudoscientific theories), but if finely tuned it should converge on accurate theories and even produce predictive results, given an expansive enough domain.
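For anyone unfamiliar with the Piaget terminology, here's a very loose gloss in code. This is purely illustrative, nothing like a real cognitive architecture: observations that fit an existing schema (within some tolerance) update it (assimilation), while observations that don't fit force the system to restructure by creating a new schema (accommodation). The threshold and all names are invented for this sketch:

```python
THRESHOLD = 2.0  # how far an observation may sit from a schema's mean

schemas = []  # each schema: {"mean": float, "n": int}

def observe(x):
    for s in schemas:
        if abs(x - s["mean"]) <= THRESHOLD:
            # Assimilation: fold the new observation into an
            # existing schema by updating its running mean.
            s["mean"] = (s["mean"] * s["n"] + x) / (s["n"] + 1)
            s["n"] += 1
            return s
    # Accommodation: reality didn't fit any existing schema,
    # so restructure the model by adding a new one.
    s = {"mean": x, "n": 1}
    schemas.append(s)
    return s

# Two clusters of experience produce two schemas.
for x in [1.0, 1.5, 9.0, 1.2, 9.4]:
    observe(x)
print(len(schemas))
```

The point of the analogy is that the model's structure grows out of what it observes, rather than being a fixed function over text, which is where the "low-information scenarios fall into human-like traps" caveat comes from: bad or sparse observations build bad schemas.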