Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi...

L4sBot@lemmy.world (mod) to Technology@lemmy.world – 661 points
fortune.com

ChatGPT went from answering a simple math problem correctly 98% of the time to just 2% over the course of a few months.



I once heard about AI gradually getting dumber over time, because as the internet gets more saturated with AI-generated content, stuff written by AI becomes part of the training data. I wonder if that's what's happening here.

There hasn't been time for that yet. The ratio of generated to human content isn't high enough.

I don't think the training data has really been updated since its release. This is just them tuning the model, either to save on energy or to filter out undesirable responses.

As long as humans are still the driving force behind what content gets spread around (and thus far more represented in the training data), it shouldn't matter much even if some of that content is AI-generated. In any case, that's almost certainly not what's happening here.