OpenAI could be on the brink of bankruptcy in under 12 months, with projections of $5 billion in losses

☆ Yσɠƚԋσʂ ☆@lemmy.ml to Technology@lemmy.ml – 141 points –
windowscentral.com

350,000 servers? Jesus, what a waste of resources.

just capitalist markets allocating resources efficiently where they're needed

It's a brand new, highly competitive technology, and ChatGPT has first-mover status with a trailer load of capital behind it. They are going to burn a lot of resources right now to innovate quickly, reduce latency, etc. If they reach a successful product-market fit, getting costs down will eventually be critical to it actually being a viable product. I imagine they will pipe this back into ChatGPT for some sort of AI-driven scaling solution for their infrastructure.

TL;DR - It's kind of like how a car uses most of its resources going from 0-60, and then efficiencies kick in at highway speeds.

Regardless, I don't think they will have to worry about being profitable for a while. With the competition heating up, I don't think there is any way they don't secure another round of funding.

Facebook is trying to burn the forest around OpenAI and other closed models, removing the market for models themselves by releasing their own freely to the community. A lot of money is already pivoting away towards companies trying to find products that use the AI instead of the AI itself. Unless OpenAI pivots to something more substantial than just providing multimodal prompt completion, they're gonna find themselves without a lot of runway left.

If they run out of money (unlikely), they still have a recent history with Microsoft.

TL;DR - It’s kind of like how a car

Yes. It's an inefficient and unsustainable con that's literally destroying the planet.

Sounds like we're going to get some killer deals on used hardware in a year or so

Now's the time to start saving for a discount GPU in approximately 12 months.

They don't use GPUs, they use more specialized devices like the H100.

Everyone that doesn’t have access to those is using gpus though.

We are talking specifically about OpenAI, though.

People who previously were at the high end of GPU can now afford used H100s -> they sell their GPUs -> we can maybe afford them

Yep and if OpenAI goes under the whole market will likely crash, people will dump their GPUs they’ve been using to create models and then boom, you’ve got a bunch of GPUs available.

That would depend entirely on why OpenAI might go under. The linked article is very sparse on details, but it says:

These expenses alone stack miles ahead of its rivals' expenditure predictions for 2024.

Which suggests this is likely an OpenAI problem and not an AI in general problem. If OpenAI goes under the rest of the market may actually surge as they devour OpenAI's abandoned market share.

Can I use an H100 to run Helldivers 2?

I do expect them to receive more funding, but I also expect that to be tied to pricing increases. And I feel like that could break their neck.

In my team, we're doing lots of GenAI use-cases and far too often, it's a matter of slapping a chatbot interface onto a normal SQL database query, just so we can tell our customers and their bosses that we did something with GenAI, because that's what they're receiving funding for. Apart from these user interfaces, we're hardly solving problems with GenAI.

If the operation costs go up and management starts asking what the pricing for a non-GenAI solution would be like, I expect the answer to be rather devastating for most use-cases.
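A minimal sketch of the pattern described above, with the "GenAI" part reduced to what it often is in practice: a translation layer from a question to an ordinary SQL query. The `llm_to_sql` stub and the `orders` table are made up for illustration; a real deployment would send the question plus the schema to a model API.

```python
import sqlite3

# Stand-in for the LLM call. Note it doesn't even honor "last month" --
# which is exactly the kind of gap these chatbot-over-SQL demos tend to hide.
def llm_to_sql(question: str) -> str:
    canned = {
        "how many orders shipped last month?":
            "SELECT COUNT(*) FROM orders WHERE status = 'shipped'",
    }
    return canned[question.lower()]

def chatbot_query(conn: sqlite3.Connection, question: str):
    sql = llm_to_sql(question)           # the "GenAI" layer
    return conn.execute(sql).fetchall()  # a plain old database query

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "shipped"), (2, "shipped"), (3, "pending")])

print(chatbot_query(conn, "How many orders shipped last month?"))  # [(2,)]
```

Swap the stub for a non-GenAI intent parser and the rest of the stack is unchanged, which is why the pricing comparison management asks for can be so devastating.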

Like, there's maybe still a decent niche in that developing a chatbot interface is likely cheaper than a traditional interface, so maybe new projects might start out with a chatbot interface and later get a regular GUI to reduce operation costs. And of course, there is the niche of actual language processing, for which LLMs are genuinely a good tool. But yeah, going to be interesting how many real-world use-cases remain once the hype dies down.

It's also worth noting that smaller models work fine for these types of use cases, so it might just make sense to run a local model at that point.

Good. It's fake crap tech that no one needs.

It's actually really awesome and truly helps with my work.

I think the guy above is just mad he can't figure out how to use it. Always easier to be mad at the tool.

GPT is selectively useful. It's also, as of the last few weeks, dumb as a bag of bricks. Dumber than usual. 4 and 4o are messed up. 4 mini is an idiot. Not sure how they broke them, but it started roughly around the time of the assassination attempt. Not sure if it was a national security request or a mere coincidence, but just the same.

I'm even seeing 4o make comically dumb and stubborn programming mistakes lately, like:

GPT: "I totally escaped that character"

Me: "no, it's the same as your previous response."

GPT: "Oh, sorry, here is the corrected code." replies with same code again.

I canceled my sub.

replies with the same code again

And that's exactly why I've already given up on AI before even really getting into it. The only thing I use it for is generating a basic skeleton for a simple script, with the intention of turning it into a real script myself. It's also pretty good at generating grep, sed, and awk commands and one-liners (or at least it was when I last tried it), and sometimes at spotting mistakes in them.
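The sort of one-liners meant here, for a concrete picture (the log file and its format are invented for this example):

```shell
# Make a toy log to work against.
printf 'error: disk full\ninfo: ok\nerror: timeout\n' > app.log

# Count error lines.
grep -c '^error' app.log
# -> 2

# Swap the level and the message: "error: disk full" -> "disk full [error]".
sed 's/^\([a-z]*\): \(.*\)$/\2 [\1]/' app.log

# Tally lines per level.
awk -F': ' '{n[$1]++} END {for (k in n) print k, n[k]}' app.log
```

These are exactly the expressions that are tedious to look up but trivial to verify by running, which is why LLM-generated drafts of them are comparatively low-risk.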

different guy here. It seemed to be fairly useful for software engineers to solve quick issues where the answer isn't immediately obvious - but it's terrible at most other jobs.

And part of why it's bad is because you have to type into a text box what you want and read it back (unless you build your own custom API integration, which, it goes without saying, is also a terrible way to access a product for 99% of people).

Another part of why it's bad is because you're sharing proprietary information with a stranger that is definitely cataloging and profiling it

Very few people interact with language in a way that is bidirectionally friendly with AI, and AI just isn't very good at writing. It's very good at creating strings of words that make sense and fit a theme, but most of what makes "very good" writing isn't just basic competency of the language.

The start(-up?)[sic] generates up to $2 billion annually from ChatGPT and an additional $ 1 billion from LLM access fees, translating to an approximate total revenue of between $3.5 billion and $4.5 billion annually.

I hope their reporting is better than their math...

Last time a batch of these popped up it was saying they'd be bankrupt in 2024 so I guess they've made it to 2025 now. I wonder if we'll see similar articles again next year.

For anyone doing a serious project, it's much more cost effective to rent a node and run your own models on it. You can spin them up and down as needed, cache often-used queries, etc.
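The "cache often-used queries" part of that setup can be sketched like this; the `generate` lambda stands in for a call into whatever self-hosted model serves the rented node (llama.cpp, vLLM, or similar), which is an assumption, not a specific API.

```python
import hashlib

class CachedModel:
    """Wrap a model-serving function with an exact-match prompt cache,
    so repeated queries never hit the rented GPU node twice."""

    def __init__(self, generate):
        self._generate = generate  # stand-in for the real model call
        self._cache = {}
        self.misses = 0            # count actual model invocations

    def ask(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = self._generate(prompt)
        return self._cache[key]

# Toy "model" for the sketch: just uppercases the prompt.
model = CachedModel(lambda p: p.upper())
model.ask("hello")
model.ask("hello")
print(model.misses)  # 1 -- the second call was served from cache
```

Since you pay for node-hours, every cache hit is compute you didn't have to spin up; the same idea extends to semantic (embedding-based) caches for near-duplicate queries.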

For sure, and in a lot of use cases you don't even need a really big model. There are a few niche scenarios where you require a large context that's not practical to run on your own infrastructure, but in most cases I agree.

This sounds like FUD to me. If it were true, they would be acquired pretty quickly.

They're wholly owned by Microsoft so it'd probably be mothballed at worst.

I hope not, I use it a lot for quickly programming answers and prototypes and for theory on my actuarial science MBA.

I find you can just run local models for that. For example, I've been using gpt4all with the Phind model and it works reasonably well.

I use it all the time for work especially for long documents and formatting technical documentation. It's all but eliminated my removed work. A lot of people are sour on AI because "it's not going to deliver on generative AI etc etc" but it doesn't matter. It's super useful and we've really only scratched the surface of what it can be used for.

I also think we just need to find the use cases where it works.

While it will not solve everything, it did solve some things. Like you have found, I have used it for generating simple artwork for internal documents that would never get design funding (and even if they did, I would have spent much more time dealing with a designer), rewriting sentences so they sound better, grammar checking, quick search-engine and encyclopedia lookups, copywriting some unimportant texts...

I would pay a few bucks per month if it weren't free. I paid for Grammarly and barely use it.

So I guess the next step is just reducing the cost of running those models, which is not that hard, as we can see from the open-source space.

Is 1) the fact that an LLM can be indistinguishable from your original thought and 2) an MBA (lmfao) supposed to be impressive?

I don't think that person is bragging, just saying why it's useful to them

OpenAI is no longer the cutting edge of AI these days, IMO. It'll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.

There is no AI Revolution. There never was. Generative AI was sold as an automation solution to companies looking to decrease labor costs, but it's not actually good at doing that. Moreover, there's not enough good, accurate training material to make generative AI that much smarter or more useful than it already is.

Generative AI is a dead end, and big companies are just now starting to realize that, especially after the Goldman-Sachs report on AI. Sam Altman is just a snake oil saleman, another failing-upwards executive who told a bunch of other executives what they wanted to hear. It's just now becoming clear that the emperor has no clothes.

Generative AI is not smart to begin with. LLMs are basically just compressed versions of the internet that statistically predict what a sentence needs to be to look "right". There's a big difference between appearing right and being right. Without a critical approach to information, independent reasoning, and individual sensing, these AIs are incapable of any meaningful intelligence.
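The "predict statistically" claim can be illustrated with a toy next-word model: real LLMs use neural networks over tokens rather than word counts, but the objective is the same shape — pick whatever most plausibly comes next.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model ingests much of the internet.
text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for cur, nxt in zip(text, text[1:]):
    follows[cur][nxt] += 1

def predict(word: str) -> str:
    # Return the most frequent successor -- plausible, not necessarily true.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it appeared after "the" most often
```

Nothing here knows what a cat is; the output merely looks right, which is the distinction the comment is making.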

In my experience, the emperor and most people around them still have not figured this out yet.

Generative AI is just classification engines run in reverse. Classification engines are useful but they've been around and making incremental improvements for at least a decade. Also, just like self-driving cars they've been writing checks they can't honor. For instance, legal coding and radiology were supposed to be automated by classification engines a long time ago.

It's sort of like how you can create a pretty good text message on your phone using voice to text but no courtroom is allowing AI transcription.

There's still too much risk that it will capitalize the wrong word or replace a word that's close to what was said or do something else wholly unconceived of to trust it with our legal process.

If they could guarantee a 100% accurate transcription of spoken word to text, it would put the entire field of court stenographers out of business and generate tens of millions of dollars' worth of digital contracts for the company that figures it out.

Not going to do it because even today a phone can't tell the difference between the word holy and the word holy. (Wholly)

If they closed down, and the people still aligned with safety had to take up the mantle, that would be fine.

If they got desperate for money and started looking for people they could sell their soul to (more than they have already) in exchange for keeping the doors open, that could potentially be pretty fuckin bad.

Well, my point is that it's already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.

I wouldn't be surprised if "safety alignment" is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that's likely to lock away the things they build if they turn out to be too neat.

Many of their talented engineers have moved on to other companies, some new startups and some already-established ones.

When did this happen? I know some of the leadership departed but I hadn’t heard of it from the rank and file.

I’m not saying necessarily that you’re wrong; definitely it seems like something has changed between the days of GPT-3 and GPT-4 up until the present day. I just hadn’t heard of it.

There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that's likely to lock away the things they build if they turn out to be too neat.

I’m not sure this is true for AI. Some of the people who are most worried about AI safety are the AI engineers. I have some impression that OpenAI’s safety focus was why so many people liked working for them, back when they were doing groundbreaking work.

AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.

I don't know of a specific "when" where a bunch of engineers left OpenAI all at once. I've just seen a lot of articles over the past year with some variation of " is a startup founded by former OpenAI engineers." There might have been a surge when Altman was briefly ousted, but that was brief enough that I wouldn't expect a visible spike on the graph.