505 of 700 OpenAI employees tell the board to resign.

db0@lemmy.dbzer0.com to Technology@lemmy.world – 1492 points –

I'd like to know why exactly the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to reinstate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn't have (for $$$$), but I doubt that a little more now that the employees are backing him too. Something fucking weird is going on, and I'm dying to know what it is.

Wanting to know why is reasonable, but it’s sus that we don’t already know. Why haven’t they made that clear? How did they think they could do this without a solid explanation? Why hasn’t one been delivered to put the rumors to rest?

It stinks of incompetence, or petty personal drama. Otherwise we’d know by now the very good reason they had.

If there was something illegal going on, then all parties involved would have incentive to keep it under wraps.

And possibly legal orders to not discuss it in public

If this circus is what they consider “under wraps,” then I don’t know what isn’t.

Altman wanted profit. The board prioritized (rightfully, and true to their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees wanting to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs the original non-profit mission (the board).

Edit: spelling

That's what I thought it was at first too. But regular employees aren't usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

But do they know things we don't know? They certainly might. Or it might just be bandwagoning or the like.

But regular employees aren't usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

I would have thought so too of the employees, but threatening a move to Microsoft kinda says the opposite. That or they are just all-in on Altman as a person.

The only explanation I can come up with is that the workers and Altman both agreed on monetizing AI as much as possible. They're worried that if the board doesn't resign, the company will remain a non-profit that's more conservative about selling its products, so they won't get their share of the money that could be made.

Yeah, the speed at which MS snapped him up makes me think of Zampella and West from Infinity Ward.

Microsoft stock dropped 2% on the announcement; hiring him was just to stop the hemorrhaging while they figure out what to do.

Isn't that more because MS owns lots of OpenAI stock? But then 2% is neither here nor there anyway. More background noise than anything.

The tone of the blog post is so amateurish I feel like I'm reading a reddit post on r/Cryptocurrency

Don't get me wrong, this move from the board reeks of some grade A bullshit, but this article is absolute crap. Is this supposed to be serious journalism?

Thanks for sharing. That is... weird in ways I didn't anticipate. "Weird cult of pseudointellectuals upending the biggest name in Silicon Valley" wasn't on my bingo card.

IMO there are some good reasons to be concerned about AI, but those reasons are along the lines of "it's going to be massively disruptive to the economy and we need to prepare for that to ensure it's a net positive", not "it's going to take over our minds and turn us into paperclips."

Social media already did that.

Not the paperclips part, that might actually be of some use.

The author did a poor job of explaining that. He’s referencing the thought experiment of a businessman instructing a super effective AI to make paperclips. Given a terse enough objective and an effective enough AI, one can imagine a scenario in which the businessman, and indeed the whole world, are turned into paperclips. This is obviously not the businessman’s goal, but it was the instruction he gave the AI. The implication of the thought experiment is that AI needs guardrails, perhaps even ethics, or else it can unintentionally bring about a doomsday scenario.

I don't know a lot about the background but this article feels super biased against one side.

Can somebody explain the following quote in the article for me please?

Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar.

Imagine Roko's basilisk, but extended into an entire philosophy. It's the idea that "we" need to do anything and everything to create the inevitable ultimate super-AI, as fast as possible. Climate change, wars, exploitation, suffering? None of that matters compared to the benefits humanity stands to gain when the ultimate super-AI goes online.

A duel between hucksters and the delusional makes sense. The delusional rely on the hucksters for funding whether they want to or not, though. No heroes.

I don't think MSFT convinced him with money, but rather with opportunity. He clearly still wants to work with AI, and the second-best place for that after OpenAI is Microsoft.

Second best would be Google, but for him it's Microsoft, because he's probably getting a sweetheart deal that leaves him in control of his own destiny (not really, but at least for a short while).

Microsoft has access to a lot of OpenAI's code, weights, etc., and he's already been working with them. It would be much better for him than joining some other company he has no experience with.

He's not the guy who writes code; he's a VC or management guy. You might say he has good ideas, as the ChatGPT interface is attributed to him, but he didn't build it.
