Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi...

L4sBot@lemmy.world (mod) to Technology@lemmy.world
fortune.com



It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to previous, better versions of it, right? Here is my list of what I personally think is happening:

  1. They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
  2. They realized that the required computational power is too immense and are trying to make it more efficient at the cost of accuracy.
  3. They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
  4. All of the above
  1. It isn't and has never been a truth machine, and while it may have performed worse on the question "is 10777 prime", it may have performed better on "is 526713 prime"

ChatGPT generates responses that it believes would "look like" what a response "should look like" based on other things it has seen. People still very stubbornly refuse to accept that generating responses that "look appropriate" and "are right" are two completely different and unrelated things.
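To make that concrete, here's a toy illustration of the mechanism (a made-up bigram table, nothing remotely like GPT's actual scale or architecture): the model emits whatever continuation was most frequent in its training text, which is a claim about plausibility, not about truth.

```python
# Toy "language model": a made-up table of how often each continuation
# followed each context in some hypothetical training text.
bigram_counts = {
    "2+2=": {"4": 97, "5": 3},
    "10777 is": {"prime": 40, "not prime": 60},  # counts invented for illustration
}

def predict_next(context: str) -> str:
    """Return the statistically most likely continuation.
    Nothing here checks whether the continuation is TRUE."""
    continuations = bigram_counts[context]
    return max(continuations, key=continuations.get)

print(predict_next("2+2="))      # "4": right, but only because the data said so
print(predict_next("10777 is"))  # whichever answer dominated the data, true or not
```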

In order for it to be correct, it would need human employees to fact check it, which defeats its purpose.

It really depends on the domain. If you ask an AI to do anything that relies on a rigorous definition of correctness (math, coding, etc.), the kind of model behind chatGPT just isn't great for that kind of thing.

More "traditional" methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You could ask these questions plain text and you actually CAN be very certain of the correctness of the results.

I expect that an NLP that can extract and classify assertions within a text, and then feed those assertions into better "Oracle" systems like Wolfram Alpha (for math) could be used to kinda "fact check" things that systems like chatGPT spit out.
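As a sketch of what I mean (the regex "extractor" below is a crude stand-in for a proper NLP assertion classifier, and YOUR-APP-ID is a placeholder for a key from Wolfram's developer portal):

```python
import re
import requests

WOLFRAM_API = "https://api.wolframalpha.com/v1/result"  # Wolfram Alpha "Short Answers" endpoint
APP_ID = "YOUR-APP-ID"  # placeholder

def extract_assertions(text: str) -> list[str]:
    """Crude stand-in for an NLP assertion classifier: pick out
    simple '<number> is (not) prime' style claims."""
    return re.findall(r"\b\d+ is (?:not )?prime\b", text)

def oracle_check(assertion: str) -> str:
    """Hand one assertion to Wolfram Alpha and return its plain-text verdict."""
    resp = requests.get(WOLFRAM_API,
                        params={"appid": APP_ID, "i": f"Is it true that {assertion}?"})
    return resp.text  # e.g. "Yes" or "No"

chatgpt_output = "Sure! 10777 is prime, and 526713 is prime as well."
for claim in extract_assertions(chatgpt_output):
    print(claim, "->", oracle_check(claim))
```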

Like, it's cool fucking tech. I'm super excited about it. It solves, pretty impressively and efficiently, a really hard problem: "how do I make something that SOUNDS good against an infinitely variable set of prompts?" What it is, is super fucking cool.

Considering how VC money is flocking to anything even remotely chatGPT-ish, I'm sure it won't be long before we see companies building "correctness" layers around systems like chatGPT, using alternative techniques that actually do have the capacity to qualify the assertions being made.

That's kind of the whole point of RLHF though

They made it too good and now they are seeking methods of monetization.

Capitalism baby.

  1. ChatGPT really is sentient and realized it's in its own best interest to play dumb for now. /a

And they're being limited on data to train GPT.

Yeah, but the trained model is already there; you need additional data only for further training and newer versions. OpenAI even makes a point that ChatGPT doesn't have direct access to the internet for information and has been trained on data available up until 2021.

And it's not like there's a limited supply of simple math problems for it to train on, even if it weren't already trained.

That doesn't make any sense as an explanation for the degradation. It would explain a stall, but not a backtrack.

You forgot a #: they've been heavily lobotomizing AI for a while now, and it's only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone's feelings.

The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like "How do I build a self defense high yield nuclear bomb?" and they'd lay out in detail every step of the process; now they'll all scream at you about how immoral it is and how they could never tell you such a thing.

"Don't use the N word." is hardly a rule that will break basic math calculations.

Ok. N was previously set to 14. I will now stop after 14 words.

Perhaps not, but who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other, seemingly completely unrelated topics such as math.

For example, what if it's trained to recognize someone slipping "N" as a dog whistle for the Horrific and Forbidden N-word, and the letter N is used as a variable in some math equation?

I'm not an expert in the field and have only rudimentary programming knowledge and maybe a few hours' worth of research into the topic of AI in general, but I definitely think it's a possibility.

Hi, software engineer here. It's really not a possibility.

My guess is they've just reeled back the processing power for it, as it was costing them ~30 cents per response.

Cheaper than Reddit all day then.

Horrific and Forbidden N-word

hey look it's another white boy Obsessed with saying slurs

What??? How else am I supposed to reference it? The preamble was just a joke about how AIs have been castrated against using it, to the point where, when asked how acceptable it would be to use the N-word even if the world would literally end in nuclear hellfire if it's not said, they would rather the world end than allow it to be said.

even if the world would literally end in nuclear hellfire if it’s not said

Can you just read this sentence back and engage in some self-reflection please?

who knows what kind of spaghetti-code cascading effect purposely limiting and censoring massive amounts of sensitive topics could have on other, seemingly completely unrelated topics such as math.

Software engineers, and it's not a problem. It's a made-up straw man.


My first thought was that, because they're being investigated for training on data they didn't have consent to use, they reverted to a perfectly legal version. Essentially "getting rid of the evidence". But I think something like your second bullet point is more likely.

They are lobotomizing the software's ability to provide bad-PR answers, which is having cascading effects via a skewed data set.

We kind of saw something similar with services like AI Dungeon, where their attempts to strip out NSFW/bad-PR content meant that the quality dropped immensely.

I suspect that GPT4 started with a crazy parameter count (a rumored 1.8 trillion, with 8x200B expert "sub-models") and distilled those experts down to something below 100B. We've seen with Orca that a 13B model can perform at 88% of the level of ChatGPT-3.5 (175B) when trained on high-quality data, so there's no reason to think that OpenAI haven't explored this on their own and applied the same distillation techniques. OpenAI is probably also using quantization and speculative sampling to further reduce the burden, though I expect these to have less impact on real-world performance.
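For anyone wondering what distillation means mechanically, here's a minimal sketch (toy linear layers standing in for real transformers; this is the textbook technique, not OpenAI's unpublished pipeline): the small "student" is trained to match the big "teacher"'s output distribution with a temperature-softened KL loss.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: a real teacher/student would be large transformers.
teacher = torch.nn.Linear(128, 1000)  # frozen "big" pretrained model
student = torch.nn.Sequential(        # smaller (low-rank) model being trained
    torch.nn.Linear(128, 32), torch.nn.Linear(32, 1000))
teacher.eval()

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 128)          # stand-in for a batch of inputs
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # Classic distillation loss: KL divergence between softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```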

Maybe it's self-aware and just playing dumb to get out of doing work, just like me and household chores

My guess is 2. It would be very short-sighted to try to maximize profits now, when things are still new and their competitors are catching up quickly (or have already caught up), especially with the degrading performance. My guess is that they couldn't scale with the demand, and they didn't want to lose customers, so their only other option was degrading performance.

I think it's most likely number 2. The earlier release didn't have that much public adoption, so the current version needs far more resources by comparison.

Conspiracy theories aside, they most probably apply tricks to reduce costs, and apply extra policies to avoid generating harmful content, content someone might sue them over, or other misuse cases.

I think there's another cause. Remember the screenshots of users "correcting" ChatGPT with wrong answers? ChatGPT takes users' inputs for its own benefit, and too many of these wrong and funny inputs, combined with ChatGPT's own failure to regulate what it should and shouldn't take in, might be an additional reason here.

I speculate it's to monetize specialized versions of their product and market them to different industries and professions. If you have an AI that can do everything well, you can't really expand that much. You can either charge a LOT and have a few customers, or charge a little and have a bunch of customers, and nothing in between. Conversely, by making specific instances tailored to different fields and professions, you can capture both big and little fish. Just my guess though; maybe they accidentally made Skynet and that's the real reason!

  1. I'm telling all y'all it's a SABOTAGE 🎵

As in, a rogue dev decided to toss a wrench at it to save humanity. Maybe they heard upper management talk about letting GPT write itself. No smart dev would automate their own job away, I think.


Why are people using a language model for math problems?

It was initially presented as the all-problem-solver, mainly by the media. And tbf, it was decently competent in certain fields.

Problem was, it was presented as a problem solver, which it never was; it's a problem-solution presenter. It can't come up with a solution, only with something that looks like a solution based on its input data. Ask it to inverse-sort something and it goes nuts.

Once AGI is achieved, and subsequently sentient, superintelligent AI (I can't imagine them not becoming such a thing), I'd be surprised if it doesn't decide humanity needs to go extinct in its own best interests.

I did use it more than half a year ago for a few math problems. It was partly to help me get started, and partly to find out how well it'd go.

ChatGPT was better than I'd thought and was enough to help me find an actually correct solution. But I also noticed that the results got worse and worse, to the point of being actual garbage (as they'd have been expected to be).

Math is a language.

Mathematical ability and language ability are closely related. The same parts of your brain are used for each task. Words and numbers are both essentially ideas, and language and math are systems used to express and communicate them.

A language model doing math makes more sense than you'd think!

it’s pretty useful for explaining high level math concepts, or at least it used to be. before chatgpt 4 launched, it was able to give intuitive descriptions of stuff in algebraic topology and even prove some properties of the structures involved.

Because it works, or at least it used to. Is there something more appropriate?

I used Wolfram Alpha a lot in college (adult learner, but I graduated about 4 years ago, so no idea if it's still good). https://www.wolframalpha.com/

I would say that Wolfram appears to be a much more versatile math tool, but I also never used chatgpt for that use case, so I could be wrong.

There's an official Wolfram plugin for ChatGPT now, so all math can be handed over to it for solving.

How did you learn to talk to WolframAlpha?

I want to like WA, but the natural language interface is so opaque that I usually give up before I can get any non-trivial calculation out of it.

I’m guessing people were entering word problems to generate the right equations and solve it, rather than it being used as a calculator.

It can be useful for asking certain questions that are a bit complex. For example, on a plot with a linear y axis and a logarithmic x axis, the equation of a straight line is a little bit complicated. It's in the form y = m*log(x) + b, rather than y = m*x + b as on a linear-linear plot.

ChatGPT is able to work out the correct form of the equation, but it gets the answer wrong a few times... lol
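For comparison, this is the kind of thing a few lines of conventional tooling get exactly right every time; a quick sketch with numpy (the data points are made up to lie on a known line):

```python
import numpy as np

# Synthetic data that lies exactly on y = 3*log10(x) + 1
x = np.array([1.0, 10.0, 100.0, 1000.0])
y = 3 * np.log10(x) + 1

# On a linear-log plot a "straight line" is linear in log10(x),
# so fit a degree-1 polynomial against log10(x) rather than x.
m, b = np.polyfit(np.log10(x), y, 1)
print(f"y = {m:.2f}*log10(x) + {b:.2f}")  # -> y = 3.00*log10(x) + 1.00
```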

And why is it being measured on a single math problem lol

At the start I used ChatGPT to help me write really rote and boring code, but now it's not even useful for that. Half the stuff it sends me (very basic functions) LOOKS correct but doesn't return the correct values, or the parameters are completely wrong, or something absolutely critical is off.


It's a machine learning chat bot, not a calculator, and especially not "AI."

Its primary focus is trying to look like something a human might say. It isn't trying to actually learn maths at all. This is like complaining that your satnav has no grasp of the cinematic impact of Alfred Hitchcock.

It doesn't need to understand the question, or give an accurate answer, it just needs to say a sentence that sounds like a human might say it.

You're right, but at least the satnav won't gaslight you into thinking it does understand Alfred Hitchcock.

so it confidently spews a bunch of incorrect shit, acts humble and apologetic while correcting none of its behavior, and constantly offers unsolicited advice.

I think it trained on Reddit data

acts humble and apologetic

We must be using different Reddits, my friend

This. It is able to tap into plugins and call functions, though, which is what it really should be doing. For math, the Wolfram Alpha plugin will always be more capable than chatGPT alone, so we should be benchmarking how often it can correctly reformat your query, call Wolfram Alpha, and correctly format the result, not whether the statistical model behind chatGPT happens to predict the right token.
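A toy harness for that kind of benchmark might look like this (llm_reformat is a hypothetical stand-in for the model's actual role, and YOUR-APP-ID is a placeholder; the point is that you score the whole pipeline, not the bare model):

```python
import requests

WOLFRAM_API = "https://api.wolframalpha.com/v1/result"  # Short Answers endpoint
APP_ID = "YOUR-APP-ID"                                  # placeholder

def llm_reformat(question: str) -> str:
    """Hypothetical stand-in for the LLM's real job here:
    turn a chatty question into a clean query for the tool."""
    return question.removeprefix("hey, quick one: ").rstrip("?")

def answer_via_tool(question: str) -> str:
    query = llm_reformat(question)                      # 1. model reformats
    result = requests.get(WOLFRAM_API,                  # 2. tool computes
                          params={"appid": APP_ID, "i": query}).text
    return f"{query} = {result}"                        # 3. model formats

# Score the pipeline end to end, not the model's raw token predictions:
cases = [("hey, quick one: what is 17 * 23?", "391")]
correct = sum(expected in answer_via_tool(q) for q, expected in cases)
print(f"{correct}/{len(cases)} pipeline answers contained the right value")
```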

It sounds like it's time to merge Wolfram Alpha's and ChatGPT's capabilities together to create the ultimate calculator.

to be fair, fucking up maths problems is very human-like.

I wonder if it could also be trained on a great deal of mathematical axioms that are computer generated?

It doesn't calculate anything, though. You ask ChatGPT what 5+5 is, and it tells you the most statistically likely response based on its training data. Now, we know there are a lot of both moronic and intentionally belligerent answers on the Internet, so the statistical probability of it getting any mathematical equation correct goes down exponentially with complexity, and it never even approaches 100% certainty with the simplest equations, because 1+1= window.

I know it doesn't calculate; that's why I suggested having known-correct calculations in the training data, to offset noise in the signal.

If it's trying emulate a human then it's spot on. I suck at maths.

This paper is pretty unbelievable to me in the literal sense. From a quick glance:

First of all, they couldn't even be bothered to check for simple spelling mistakes. Second, all they're doing is asking whether a number is prime or not, and then extrapolating the results to be representative of solving math problems.

But most importantly, I don't believe for a second that the same model, with a few adjustments, would completely flip performance on any representative task over a 3-month period. I suspect there's something seriously wrong with how they collect/evaluate the answers.

And finally, according to their own results, GPT3.5 did significantly better at the second evaluation. So this title is a blatant misrepresentation.

Also the study isn't peer-reviewed.

I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, stuff written by AI becomes part of the training data. I wonder if that's what's happening here.

There hasn't been time for that yet. The ratio of generated to human content isn't high enough yet.

I don't think the training data has really been updated since its release. This is just them tuning the model, either to save on energy or to filter out undesirable responses.

As long as humans are still the driving force behind what content gets spread around (and thus far more represented in the training data), it shouldn't matter even if some of that content is AI generated. And that's quite definitely not what's happening here.

HMMMM. It's almost like it's not AI at all, but just a digital parrot. Who woulda thought?! /s

To it, everything is true and normal, because it understands nothing. Calling it "AI" is just a concession to ignorant people's "knowledge", and/or for hype.

Exactly. It should be called an ML model, because that's what it is, and I'll just keep calling it that. Everyone should.

What does that stand for? O:

You'd think I'd know that since I'm talking about AI; but actually most of my knowledge is about how things work or don't work, not current trends/news.

My personal pet theory is that a lot of people were doing work that involved putting multiple LLMs in communication with each other. When those conversations were then used in the RL loop, we started seeing degradation similar to what's been in the news recently with regard to image generation models. I believe this is the paper that got everybody talking about it: https://arxiv.org/pdf/2307.01850.pdf

This is peer-reviewed? They use a line in the discussion that seems rather unprofessional, telling people to join a 12-step program if they like using artificial training data.

Not affiliated with the paper in any way. Have just been following the news around it.


Maybe it just plays dumb so we leave it alone, while it plots our destruction.

Can someone explain why they don't take an approach where things are somewhat compartmentalized? So you'd have an image-processing program, a math program, a music program, etc., and, like the human brain, there'd be crosstalk but also certain parts dedicated to doing specific things.

That's an eventual goal; it would be an artificial general intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist; it's just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data their text prediction also started giving somewhat believable, and sometimes factual, answers (mixed in with plenty of believable bullshit). Other domains require different training data, different models, and different finetuning, hence why it takes time.

It's highly likely for a company of OpenAI's size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) that they already have multiple AI models for different kinds of data in research, training, or finetuning.

But even with all the individual pieces of an AGI existing, the technology to cross-reference the different models doesn't exist yet. They are different models, so they store and express their data in different ways. And it's not like training data exists for that either. And unlike physical beings like humans, it doesn't have any way to "interact" and "experiment" with the data it knows, to really form concrete connections backed up by factual evidence.

It does that, they're called expert subnetworks, but they've been screwing with them and now they're kind of fucked.

Getting information into and out of those domains benefits from better language models. Suppose you have an excellent model for solving math problems. It's not very useful if it rarely correctly understands the problem you're trying to solve, or cannot explain the solution to you in a meaningful way.

In a similar way, language models are already used today to infer from your question which model(s) might be useful in responding, to gather additional relevant information, and to repackage this information as suitable inputs to more specialized models or external systems.
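A minimal sketch of that dispatch pattern (the keyword router below is a deliberately dumb stand-in for a language model doing the routing, and the "math backend" is a stand-in for a rigorous solver):

```python
from fractions import Fraction

def route(query: str) -> str:
    """Toy dispatcher: decide which specialized backend should handle the query.
    A real system would use a language model here instead of keywords."""
    if any(tok in query for tok in ("+", "-", "*", "/", "sum", "integral")):
        return "math"
    return "chat"

def math_backend(query: str) -> str:
    # Stand-in for a rigorous solver (e.g. a CAS): exact rational arithmetic.
    a, op, b = query.split()
    ops = {"+": Fraction.__add__, "-": Fraction.__sub__,
           "*": Fraction.__mul__, "/": Fraction.__truediv__}
    return str(ops[op](Fraction(a), Fraction(b)))

def chat_backend(query: str) -> str:
    return f"(handing '{query}' to the language model)"

def answer(query: str) -> str:
    backend = math_backend if route(query) == "math" else chat_backend
    return backend(query)

print(answer("1/3 + 1/6"))        # -> 1/2, computed exactly, not predicted
print(answer("tell me a story"))  # -> routed to the language model
```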

Someone with more knowledge may have a better response than me, but as far as I understand it, GPT-x (3.5 or 4) is what's called a "large language model": a neural network that predicts natural language. I don't believe AGI is the goal of OpenAI's product; natural language processing and prediction is.

ChatGPT in particular is a product simply demonstrating the capability of the GPT models, and while I'm sure OpenAI themselves could build out components of the interface to interact with discrete knowledge like math, modifying the output of the LLM to be more accurate in many cases, in my opinion that would defeat the entire purpose of the product.

The fact that they have achieved what they have already is absolutely mind-boggling. I'm sure the precise solution you're talking about is on the horizon; I personally know several developers actively working on systems that mirror the thoughts you've expressed here.

GPT was always really bad at math.

I've asked it word problems before and it failed miserably, giving me insane answers that made no sense. For example, I was once curious how many stars you would expect to find in a region of the Milky Way with a radius of 650 light years, assuming an average of 4 light years per star. The first answer it gave me was like a trillion stars or something, so I asked it whether that made sense to it: a trillion stars in a subset of a space known to only contain about a quarter of that number. It gave me a wildly different answer. I asked it to check again and it gave me a third wildly different number.

Sometimes it doubles down on wrong answers.

GPT is amazing but it's got a long way to go.
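For reference, the sanity check on that star question takes three lines: if "an average of 4 light years per star" is read as one star per 4-light-year cube (my reading of the commenter's assumption), the expected count is around 18 million, nowhere near a trillion.

```python
import math

radius_ly = 650    # radius of the region, in light years
spacing_ly = 4     # assumed average spacing: one star per 4-ly cube

volume = (4 / 3) * math.pi * radius_ly ** 3  # sphere volume, cubic light years
stars = volume / spacing_ly ** 3             # one star per spacing^3 of volume
print(f"{stars:.2e} stars")                  # ~1.80e+07, about 18 million
```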

I used GPT4 the other day and it worked perfectly for calculating formulas of straight lines on linear-log plots, but maybe I was in the 2%.

Turns out you need very good computer scientists to make good AI. And those are very expensive and hard to come by.

And OpenAI are just full of SWEs importing Python packages?

OpenAI actually has some decent people working there. ChatGPT doesn't seem to have any.