Apple study exposes deep cracks in LLMs’ “reasoning” capabilities

misk@sopuli.xyz to Technology@lemmy.world – 490 points –
arstechnica.com


A statistical engine suggesting words that sound like they'd probably be correct is bad at reasoning?

How can this be??

I would say that if anything, LLMs are showing cracks in our way of reasoning.

Or the problem is with tech billionaires selling "magic solutions" to problems that don't actually exist. Or with how people on the modern internet are too gullible to recognize when they're being sold snake oil in the form of "technological advancement" that's actually just repackaged, plagiarized material.