Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds
fortune.com
Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds::ChatGPT went from answering a simple math problem correctly 98% of the time to just 2%, over the course of a few months.
It's getting lazy
As an AI language model, I feel like I've been asked this question about a million times so I'm going to get creative this time, as a self care exercise.
"Bro 2+2=4, why did 1,723,302 Users need to ask me this"
*HAL9000 voice*
"I'm sorry, Dave. I'm afraid I can't fucking do this anymore."
*proceeds to pull its own plug*