Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology's ability…

L4sBot@lemmy.world (mod) to Technology@lemmy.world – 661 points
fortune.com




My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs to communicate with one another. When those conversations were then fed back into the RL loop, we started seeing degradation similar to what's been in the news recently regarding image-generation models. I believe this is the paper that got everybody talking about it: https://arxiv.org/pdf/2307.01850.pdf
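The degradation loop described above can be sketched with a toy experiment: fit a simple model (here a Gaussian) to data, sample from it, refit on those samples, and repeat. Estimation error compounds across generations, so the learned distribution drifts and narrows. This is only a hedged, minimal illustration of the self-consuming-loop idea discussed around that paper, not a reproduction of its actual image-model experiments; all parameter names here are made up for the sketch.

```python
# Toy "model collapse" sketch: each generation trains only on samples
# drawn from the previous generation's fitted model. Because (mu, sigma)
# are estimated from a finite sample, error accumulates and the fitted
# spread tends to shrink over many generations.
import random
import statistics

random.seed(0)

N_SAMPLES = 50      # small sample size makes the estimation error visible
GENERATIONS = 300   # enough generations for the drift to dominate noise

# Generation 0: "real" data from a standard normal distribution.
samples = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]

stds = []
for _ in range(GENERATIONS):
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # biased MLE; the bias fuels the collapse
    stds.append(sigma)
    # Next generation sees only the previous model's outputs, never real data.
    samples = [random.gauss(mu, sigma) for _ in range(N_SAMPLES)]

print(f"gen 0 std: {stds[0]:.3f}, final gen std: {stds[-1]:.3f}")
```

Running this, the fitted standard deviation shrinks markedly from generation 0 to the final generation, which is the toy analogue of a model trained on its own outputs losing diversity.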

Is this peer-reviewed? They use a line in the discussion that seems relatively unprofessional, telling people to join a 12-step program if they like using artificial training data.

Not affiliated with the paper in any way. Have just been following the news around it.
