Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds

Gsus4@lemmy.one to Technology@lemmy.world – 178 points
fortune.com

Can we discuss how it's possible that the paid model (gpt4) got worse and the free one (gpt3.5) got better? Is it because the free one is being trained on a larger pool of users or what?



My guess is that all those artificial restrictions plus regurgitation of generated content take their toll.

There are so many manually introduced filters to stop the bot from saying "bad things", and so much of the current internet content is already AI generated, that it wouldn't be surprising if the whole thing collapsed in on itself.

Oh, right, that's another factor: connecting gpt4 to the real-time internet creates those training loops, yes. The pre-prompt guardrails are fixable, and can even be bypassed, but training on synthetic data is the key problem here: because it's impossible to identify what is artificial, the collapse loop just keeps going.
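If you want a feel for why that loop only goes one way, here's a toy sketch (entirely my own illustration, nothing to do with how GPT is actually trained): pretend each "generation" of a model is just the word frequencies of its training corpus, and each new corpus is sampled only from the previous model's output. Rare words that get missed once never come back, so diversity only shrinks:

```python
import random
from collections import Counter

random.seed(42)

# "Real" language: 500 word types with a rough Zipf frequency curve.
vocab = [f"word{i}" for i in range(500)]
weights = [1 / (rank + 1) for rank in range(500)]

# Generation 0 is trained on genuinely human text.
corpus = random.choices(vocab, weights=weights, k=2000)

for generation in range(15):
    # "Train" a trivial model: the empirical word distribution of the corpus.
    counts = Counter(corpus)
    print(f"gen {generation:2d}: {len(counts)} distinct words survive")

    # The next corpus is sampled entirely from the model's own output,
    # standing in for AI-generated text flooding back into the training data.
    words, freqs = zip(*counts.items())
    corpus = random.choices(words, weights=freqs, k=2000)
```

The distinct-word count never goes up; it only drops as the tail gets resampled away, which is the collapse in miniature.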

connecting gpt4 to the real-time internet creates those training loops, yes... because it's impossible to identify what is artificial, the collapse loop just keeps going.

ouroboros of garbage