Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi...
![](https://lemmy.world/pictrs/image/7c668ec0-a3ad-455f-b736-1f5f13808b09.png)
fortune.com
ChatGPT went from answering a simple math problem correctly 98% of the time to just 2%, over the course of a few months.
It just occurred to me that one could purposely seed it with incorrect information to break its usefulness. I'm anti-AI so I would gladly do this. I might try it myself.
Luddite.
The luddites were right, you know.
Outliers are easy to work around.