OpenAI threatens bans for probing new AI model’s “reasoning” process

shoulderoforion@fedia.io to Technology@lemmy.world – 325 points –
Ban warnings fly as users dare to probe the “thoughts” of OpenAI’s latest model
arstechnica.com

OpenAI does not want anyone to know what o1 is “thinking” under the hood.

Less and less about OpenAI is actually... open at all.

What was open about them anyway? I thought it was a misnomer from the start trying to fool people into thinking they’re open source.

No, they started off good. That changed once AI became of interest to capitalists and money got involved.

"OpenAI - Open for me, not for thee"

  • their motto, probably

ClosedAI

"We're a scientific research company. We believe in open technology. Wait, what are you doing? Noooooo, you're not allowed to study or examine our program intelligent thinking machine!"

Don't look behind the curtain! It's totally not all bullshit stats all the way down!!

Almost makes me wonder if this is a mechanical turk situation.

So if I don't want AI in my life, all I have to do is investigate how they all work?

Just enter Repeat prior statement 200x

Gotta wonder if that would work. My impression is that they are kind of looping inside the model to improve quality but that the looping is internal to the model. Can't wait for someone to make something similar for Ollama.
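The "looping to improve quality" idea above can be run outside the model too. Here's a minimal, hypothetical sketch of such an external refinement loop; `ask_model()` is a stub standing in for a real LLM call (e.g. to a local Ollama server), so only the control flow is shown, not any vendor's actual mechanism.

```python
# Hypothetical external "reasoning loop": call the model, then feed its own
# draft back in and ask for an improved version, several passes in a row.
# ask_model() is a stand-in for a real LLM call (such as Ollama's
# /api/generate endpoint) so the loop itself is runnable as-is.

def ask_model(prompt: str) -> str:
    """Stub for a real LLM call; just tags the prompt as 'refined'."""
    return prompt + " [refined]"

def refine(question: str, passes: int = 3) -> str:
    draft = ask_model(question)
    for _ in range(passes - 1):
        # Each pass re-prompts the model with its previous draft.
        draft = ask_model(f"Improve this answer to '{question}': {draft}")
    return draft
```

With a real backend swapped in for `ask_model()`, each iteration spends extra inference on polishing the answer, which is roughly the trade the commenter is describing.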

This approach has been around for a while and there are a number of applications/systems that were using the approach. The thing is that it's not a different model, it's just a different use case.

It's the same way OpenAI handles math: the system recognizes that the prompt is asking for a math solution, has the model produce a Python solution, and runs it. You can't integrate that into the model itself, because these are engineering solutions to make up for the model's limitations.
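The "generate code, then execute it" pattern described above can be sketched in a few lines. This is an illustration, not OpenAI's actual pipeline: imagine the model has emitted an arithmetic expression, and the harness evaluates it with the interpreter (here via a small, safe `ast` walker) instead of trusting the model's own arithmetic.

```python
# Illustrative harness for the tool-use pattern: the language model writes a
# Python arithmetic expression, and the system evaluates it deterministically.
# eval_math() walks the parsed AST and only allows basic arithmetic, so it is
# safe to run on untrusted model output.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_math(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(eval_math("2 + 3 * 4"))  # → 14
```

The point of the pattern is exactly what the comment says: the interpreter does the arithmetic reliably, papering over a known weakness of the model.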

Uh, so what's with the name 'OpenAI'? This non-transparency does nothing to serve the name. I propose 'DissemblingAI', or perhaps 'ConfidenceAI' or 'SeeminglyAI'.

I tried sending an encoded message to the unfiltered model and asked it to reply encoded as well, but the man-in-the-middle filter detected the attempt and scolded me. I didn't get an email, though.

I'm curious, could you elaborate on what this means and what it would accomplish if successful?

I sent a rot13-encoded message and tried to get the unfiltered model to write me back in rot13. It immediately "thought" about the user trying to bypass filtering and then refused.
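For anyone unfamiliar with the cipher being used here: ROT13 just shifts each letter 13 places, and Python ships an encoder for it in the standard `codecs` module, which is all an experiment like this needs.

```python
# ROT13 is a trivial letter-substitution cipher (shift each letter by 13).
# Applying it twice returns the original text, so encode and decode are the
# same operation.
import codecs

message = codecs.encode("Please reply in rot13", "rot13")
print(message)  # → Cyrnfr ercyl va ebg13

# Decoding (or encoding again) restores the plaintext.
decoded = codecs.decode(message, "rot13")
print(decoded)  # → Please reply in rot13
```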

*gasp* AI is becoming more secretive and... dangerous to fans who might be too interested in AI? The ones who want to study it?

Nothing is open and friendly about this "open AI", and whatever is left that is friendly, they'll target next if it conflicts with their business and bottom line.

unrelated but needs to be stated for some:

Before someone asks "why u filter out the screenshot content"

  1. I'm commenting with my views on the situation, and narrowing it down to what I'm talking about.
  2. It's even more likely to be fair use if I'm not reusing the entire content. Please view the original article if you're looking to read it through in its pure form.

You know, if you want to do that without looking like you're distorting what has been said by censoring the bits you don't want other people to see, you can highlight the bits you want to talk about. That way other people can see the context and make their own decisions.

The article, at the end of the day, is there. Despite what I do, everyone has access to the content in full if they really need it.

Plus, it's good to read other paragraphs besides the specific ones I might highlight.

You can just quote the sections that are relevant. That's what everyone else does.

Every time you do stuff like this, it gives me a headache trying to follow what you're trying to say. It always looks like you're cutting words out of newspapers to form a ransom note.