The air begins to leak out of the overinflated AI bubble

grid11@lemy.nl to Technology@lemmy.world – 27 points –
Column: The air begins to leak out of the overinflated AI bubble
latimes.com

Argh, after 25 years in tech I am surprised this keeps surprising you.

We’ve crested for sure. AI isn’t going to solve everything. AI stock will fall. Investor pressure to put AI into everything will subside.

Then we will start looking at AI through a cost-benefit lens. We will start applying it where it makes sense. Things will get optimised. Real profit and long-term change will happen over 5-10 years. And afterwards, the utterly magical will seem mundane while everyone is chasing the next hype cycle.

I'm far, far more concerned about all the people who were deemed non-essential so quickly after being "essential" for so long, because "AI will do so much work" *slaps employees with 2 weeks severance*.

I’m right there with you. One of my daughters loves drawing and designing clothes, and I don’t know what to tell her about the future. Will human designs be more valued? Less valued?

I’m trying to remain positive; when I went into software my parents barely understood that anyone could make a living off that “toy computer”.

But I agree; this one feels different. I’m hoping they all feel different to the older folks (me).

Truth. I would say the actual time scales will be longer, but this is the harsh, soul-crushing reality that will make all the kids and mentally disturbed cultists on r/singularity scream in pain and throw stones at you. They're literally planning for what they're going to do once ASI changes the world to a star-trek, post-scarcity civilization... in five years. I wish I was kidding.

Thank fucking god.

I got sick of the overhyped tech bros pumping AI into everything with no understanding of it....

But then I got way more sick of everyone else thinking they're clowning on AI when in reality they're just demonstrating an equal sized misunderstanding of the technology in a snarky pessimistic format.

I’m more annoyed that Nvidia is looked at like some sort of brilliant strategist. It’s a GPU company that was lucky enough to be around when two new massive industries found an alternative use for graphics hardware.

They happened to be making pick axes in California right before some prospectors found gold.

And they don’t even really make pick axes, TSMC does. They just design them.

They just design them.

It's not trivial though. They also managed to lock developers in with CUDA.

That being said I don't think they were "just" lucky, I think they built their luck through practices the DoJ is currently investigating for potential abuse of monopoly.

Yeah, CUDA made a lot of this possible.

Once crypto mining was no longer viable, Nvidia needed a market beyond image modeling and college machine learning experiments.

They didn't just "happen to be around". They created the entire ecosystem around machine learning while AMD just twiddled their thumbs. There is a reason why no one is buying AMD cards to run AI workloads.

I feel like for a long time, CUDA was a laser looking for a problem.
It's just that the current (AI) problem might solve expensive employment issues.
It's just that C-Suite/managers are pointing that laser at the creatives instead of the jobs whose task it is to accumulate easily digestible facts and produce a set of instructions. You know, like C-Suites and middle/upper managers do.
And NVidia have pushed CUDA so hard.

AMD has ROCm, an open-source CUDA equivalent for AMD cards.
But it's kind of like Linux vs Windows: Nvidia's CUDA is just so damn prevalent.
I guess it was first. CUDA also has wider compatibility across Nvidia cards than ROCm has across AMD cards.
The only way AMD can win is to show a performance boost at lower power with cheaper hardware. So many people are entrenched in Nvidia that the cost of switching to ROCm/AMD is a huge gamble.
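
At the framework level, to be fair, the lock-in is thinner than it looks: PyTorch's ROCm builds reuse the torch.cuda API, so device-agnostic code runs on either vendor's cards. A minimal sketch (assuming a PyTorch build with CUDA or ROCm support is installed; the matrix size is arbitrary):

```python
import torch

# PyTorch's ROCm builds reuse the torch.cuda namespace, so the same
# device-agnostic code runs on NVIDIA (CUDA) and AMD (ROCm) cards alike.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    backend = "ROCm" if getattr(torch.version, "hip", None) else "CUDA"
    print(f"{backend} device: {torch.cuda.get_device_name(0)}")
else:
    print("no GPU found, falling back to CPU")

# A toy workload, identical on either vendor's hardware.
x = torch.randn(4096, 4096, device=device)
print((x @ x.T).shape)
```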

One of the reasons being Nvidia forcing unethical vendor lock in through their licensing.

Go ahead and design a better pickaxe than them, we'll wait...

Go ahead and design a better pickaxe than them, we’ll wait…

Same argument:

"He didn't earn his wealth. He just won the lottery."

"If it's so easy, YOU go ahead and win the lottery then."

My fucking god.

"Buying a lottery ticket, and designing the best GPUs, totally the same thing, amiriteguys?"

In the sense that it's a matter of being in the right place at the right time, yes, exactly the same thing. Opportunities aren't equal; they disproportionately benefit those who happen to be positioned to take advantage of them. If I'm giving away a free car right now to whoever comes by, and you're not nearby, you're shit out of luck. If AI didn't HAPPEN to use massively multi-threaded computing, Nvidia would still be artificial-scarcity-ing themselves to price gouge CoD players.

The fact you don't see it for whatever reason doesn't make it wrong. NOBODY at Nvidia was there 5 years ago saying "Man, when this new technology hits we're going to be rolling in it." They stumbled into it by luck. They don't get credit for foreseeing some future use case. They got lucky. That luck got them first-mover advantage. Intel had that too. Look how well it's doing for them.

Nvidia's position over AMD in this space can be due to any number of factors: production capacity, driver flexibility, faster functioning on a particular vector operation, power efficiency... hell, even the relationship between their CEO and OpenAI. Maybe they just had their salespeople call first. Their market dominance likely has absolutely NOTHING to do with their GPUs having better graphics performance, and to the extent it does, it's by chance. They did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.

they did NOT predict generative AI, and their graphics cards just HAPPEN to be better situated for SOME reason.

This is the part that's flawed. They have actively targeted neural network applications with hardware and driver support since 2012.

Yes, they got lucky in that generative AI turned out to be massively popular, and required massively parallel computing capabilities, but luck is one part opportunity and one part preparedness. The reason they were able to capitalize is because they had the best graphics cards on the market and then specifically targeted AI applications.

The tech bros had to find an excuse to use all the GPUs they got for crypto after they bled that dry

The tech bros had to find an excuse to use all the GPUs they got for crypto after they ~~bled that dry~~ upgraded to proof-of-stake.

I don't see a similar upgrade for "AI".

And I'm not a fan of BTC but $50,000+ doesn't seem very dry to me.

As I job-hunt, every job listed over the past year has been "AI-driven [something]" and I'm really hoping that trend subsides.

"This is an mid level position requiring at least 7 years experience developing LLMs." -Every software engineer job out there.

Reminds me of when I read about a programmer getting turned down for a job because they didn't have 5 years of experience with a language that they themselves had created 1 to 2 years prior.

A lot of the AI boom is like the DotCom boom of the Web era. The bubble burst and a lot of companies lost money but the technology is still very much important and relevant to us all.

AI feels a lot like that: it's here to stay, maybe not in the ways investors are touting, but for voice, image, and video synthesis/processing it's an amazing tool. It also has lots of applications in biotech, targeting systems, logistics etc.

So I can see the bubble bursting and a lot of money being lost, but that is the point when actually useful applications of the technology will start becoming mainstream.

The bubble burst and a lot of companies lost money but the technology is still very much important and relevant to us all.

The DotCom bubble was built around the idea of online retail outpacing traditional retail far faster than it did, in fact. But it was, at its essence, a system of digital bookkeeping. Book your orders, manage your inventory, and direct your shipping via a more advanced and interconnected set of digital tools.

The fundamentals of the business - production, shipping, warehousing, distribution, the mathematical process of accounting - didn't change meaningfully from the days of the Sears-Roebuck Catalog. Online was simply a new means of marketing. It worked well, but not nearly as well as was predicted. What Amazon did to achieve hegemony was to run losses for ten years, while making up the balance with a government-sponsored series of data centers (i.e., AWS) and capitalizing on discount bulk shipping through the USPS, before accruing enough physical capital to supplant even the big box retailers. The digital front-end was always a loss-leader. Nobody is actually turning a profit on Amazon Prime. It's just a hook to get you into the greater Amazon ecosystem.

Pivot to AI, and you've got to ask... what are we actually improving on? It's not a front-end. It's not a data service that anyone benefits from. It is hemorrhaging billions of dollars just at OpenAI alone (one reason why it was incorporated as a non-profit to begin with - THERE WAS NO PROFIT). Maybe you can leverage this clunky behemoth into... low-cost mass media production? But it's also extremely low-rent production, in an industry where - once again - marketing and advertisement are what command the revenue you can generate on a finished product. Maybe you can use it to optimize some industrial process? But it seems that every AI needs a bunch of human babysitters to clean up all the shit it leaves. Maybe you can get those robo-taxis at long last? I wouldn't hold my breath, but hey, maybe?!

Maybe you can argue that AI provides some kind of hook to drive retail traffic into a more traditional economic model. But I'm still waiting to see what that is. After that, I'm looking at AI in the same way I'm looking at Crypto or VR. Just a gimmick that's scaring more people off than it drags in.

The funny thing about Amazon is we are phasing it out of our home now, because it has become an online 7-Eleven. You don’t pay for shipping and it comes fast, but you are often paying 50-100% more for everything; versus AliExpress, 300-400% more… just to get it a week or two faster. I would rather go to local retailers that mark up Chinese goods 150% than go to Amazon and pay 300%. It just means I have to leave the house for 30 minutes.

would rather go to local retailers that mark up Chinese goods 150% than go to Amazon and pay 300%

A lot of the local retailers are going out of business in my area. And those that exist are impossible to get into and out of, due to the fixation on car culture. The Galleria is just a traffic jam that spans multiple city blocks.

The thing that keeps me at Amazon, rather than Target, is purely the time cost of shopping versus shipping.

I don't mean it's like the dotcom bubble in terms of context, I mean in terms of feel. Dotcom had loads of investors scrambling to "get in on it" many not really understanding why or what it was worth but just wanted quick wins.

This has the same feel. A bit like crypto, as you say, but I would say crypto is very niche in real-world applications at the moment, whereas AI does have real-world uses.

The real uses are not the ones we are being fed in the mainstream, like replacing coders or artists; it can help in those areas, but that's just them trying to keep the hype going. Realistically it can be used very well for some medical research and diagnosis scenarios, as it can correlate patterns very easily, showing the likelihood of genetic issues.

The game and media industries are very much trialling voice and image synthesis for improving environmental design (texture synthesis) and providing dynamic voice synthesis based on actors' likenesses. We have had people's likenesses in movies for decades via CGI, but it's only really now that we can do the same for voices, and this isn't getting into logistics and/or finance, where it is also seeing a lot of application.

It's not going to do much for the end consumer outside of the guff you currently use Siri or Alexa for, but inside the industries AI is very useful.

crypto is very niche in real-world applications at the moment, whereas AI does have real-world uses

Crypto has a very real niche use for money laundering that it does exceptionally well.

AI does not appear to do anything significantly more effectively than a Google search circa 2018.

But neither can justify a multi billion dollar market cap on these terms.

The game and media industries are very much trialling voice and image synthesis for improving environmental design (texture synthesis) and providing dynamic voice synthesis based on actors' likenesses. We have had people's likenesses in movies for decades via CGI, but it's only really now that we can do the same for voices, and this isn't getting into logistics and/or finance, where it is also seeing a lot of application.

Voice actors simply don't cost that much money. Procedural world building has existed for decades, but it's generally recognized as lackluster beside bespoke design and development.

These tools let you build bad digital experiences quickly.

For logistics and finance, a lot of what you're exploring is solved with the technology that underpins AI (modern graph theory). But LLMs don't get you that. They're an extraneous layer that takes enormous resources to compile and offers very little new value.

I disagree, there are loads of white papers detailing applications of AI in various industries, here's an example, cba googling more links for you.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577280/

there are loads of white papers detailing applications of AI in various industries

And loads more of its ineffectual nature and wastefulness.

Are you talking specifically about LLMs or neural-network-style AI in general? Supercomputers have been doing this sort of stuff for decades without much problem, and tbh the main issue is training; for LLMs, inference is pretty computationally cheap.
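
A rough way to see that asymmetry, using the standard back-of-the-envelope approximations from the scaling-law literature (~6 × params × training tokens FLOPs to train, ~2 × params FLOPs per generated token at inference; the model size and token counts below are purely illustrative):

```python
# Back-of-the-envelope: training vs. inference compute for an LLM.
# Common approximations from the scaling-law literature:
#   training  ~ 6 * params * training_tokens  FLOPs (one-off cost)
#   inference ~ 2 * params                    FLOPs per generated token
params = 70e9            # illustrative 70B-parameter model
training_tokens = 2e12   # illustrative 2T-token training run

training_flops = 6 * params * training_tokens
flops_per_token = 2 * params
reply_flops = 500 * flops_per_token  # a single 500-token reply

print(f"training:         {training_flops:.2e} FLOPs")
print(f"500-token reply:  {reply_flops:.2e} FLOPs")
print(f"reply / training: {reply_flops / training_flops:.1e}")
```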

Supercomputers have been doing this sort of stuff for decades without much problem

Idk if I'd point at a supercomputer system and suggest it was constructed "without much problem". Cray has significantly lagged the computer market as a whole.

the main issue is training; for LLMs, inference is pretty computationally cheap

Again, I would not consider anything in the LLM marketplace particularly cheap. Seems like they're losing money rapidly.

Google Search is such an important facet for Alphabet that they must invest as many billions as they can to lead the new generative-AI search. IMO for Google it's more than just a growth opportunity, it's a necessity.

"AI" is what put Google into trouble to begin with... Sure, let's double-down on the shittiness, I don't see how anything could go wrong.

I'm glad someone else is acknowledging that AI can be an amazing tool. Every time I see AI mentioned on lemmy, people say that it's entirely useless and they don't understand why it exists or why anyone talks about it at all. I mention I use ChatGPT daily for my programming job, it's helpful like having an intern do work for me, etc, and I just get people disagreeing with me all day long lol

Have any regular users actually looked at the prices of the "AI services" and what they actually cost?

I'm a writer. I've looked at a few of the AI services aimed at writers. These companies literally think they can get away with "Just Another Streaming Service" pricing, in an era where people are getting really really sceptical about subscribing to yet another streaming service and cancelling the ones they don't care about that much. As a broke ass writer, I was glad that, with NaNoWriMo discount, I could buy Scrivener for €20 instead of regular price of €40. [note: regular price of Scrivener is apparently €70 now, and this is pretty aggravating.] So why are NaNoWriMo pushing ProWritingAid, a service that runs €10-€12 per month? This is definitely out of the reach of broke ass writers.

Someone should tell the AI companies that regular people don't want to subscribe to random subscription services any more.

I work for an AI company that's dying out. We're trying to charge companies $30k a year and upwards for basically chatgpt plus a few shoddily built integrations. You can build the same things we're doing with Zapier, at around $35 a month. The management are baffled as to why we're not closing any of our deals, and it's SO obvious to me - we're too fucking expensive and there's nothing unique with our service.

As someone dabbling with writing, I bit the bullet and started looking into the tools to see if they're actually useful, and I was impressed with the promised features: grammar help, sentence structure, making sure I don't leave loose ends in the story. These are genuinely useful tools if you're not using the generative capability to let it write mediocre bullshit for you.

But I noticed right away that I couldn't justify a subscription between $20 - $30 a month, on top of the thousand other services we have to pay monthly for, including even the writing software itself.

I have lived fine and written great things in the past without AI, I can survive just fine without it now. If these companies want to actually sell a product that people want, they need to scale back the expectations, the costs and the bloated, useless bullshit attached to it all.

At some point soon, the costs of running these massive LLMs versus the number of people actually willing to pay a premium for them are going to exceed reasonable expectations, and we will see the companies that host the LLMs start to scale everything back as they try to find some new product to hype and generate investment on.

Shed a tear, if you wish, for Nvidia founder and Chief Executive Jensen Huang, whose fortune (on paper) fell by almost $10 billion that day.

Thanks, but I think I'll pass.

I’m sure he won’t mind. Worrying about that doesn’t sound like working.

I work from the moment I wake up to the moment I go to bed. I work seven days a week. When I'm not working, I'm thinking about working, and when I'm working, I'm working. I sit through movies, but I don't remember them because I'm thinking about work.

- Huang on his 14 hour workdays

It is one way to live.

That sounds like mental illness.

ETA: Replace "work" in that quote with practically any other activity/subject, whether outlandish or banal.

I sit through movies but I don't remember them because I'm thinking about baking cakes.

I sit through movies but I don't remember them because I'm thinking about traffic patterns.

I sit through movies but I don't remember them because I'm thinking about cannibalism.

I sit through movies but I don't remember them because I'm thinking about shitposting.

Obsessed with something? At best, you're "quirky" (depending on what you're obsessed with). Unless it's money. Being obsessed with that is somehow virtuous.

Valid argument for sure

It would be sad if therapists kept telling him that but he could never remember

Some would not call that living

Too much optimism and hype may lead to the premature use of technologies that are not ready for prime time.

— Daron Acemoglu, MIT

Preach!

I've noticed people have been talking less and less about AI lately, particularly online and in the media, and absolutely nobody has been talking about it in real life.

The novelty has well and truly worn off, and most people are sick of hearing about it.

My only real hope out of this is that that copilot button on keyboards becomes the 486 turbo button of our time.

Meaning you unpress it, and computer gets 2x faster?

Actually you pressed it and everything got 2x slower. Turbo was a stupid label for it.

I could be misremembering but I seem to recall the digits on the front of my 486 case changing from 25 to 33 when I pressed the button. That was the only difference I noticed though. Was the beige bastard lying to me?

Lying through its teeth.

There was a bunch of DOS software that ran too fast to be usable on later processors. Like a Rogue-like game where you fly across the map too fast to control. The Turbo button would bring the machine down to 8086 speeds so that stuff was usable.

The stock market is not based on income. It's based entirely on speculation.

Since then, shares of the maker of the high-grade computer chips that AI laboratories use to power the development of their chatbots and other products have come down by more than 22%.

June 18th: $136
August 4th: $100
August 18th: $130 again
Now: $103 (still above 8/4)

It's almost like hype generates volatility. I don't think any of this is indicative of a "leaking" bubble. Just tech journalists conjuring up clicks.

Also bubbles don't "leak".

The broader market did the same thing

https://finance.yahoo.com/quote/SPY/

$560 to $510 to $560 to $540

So why did $NVDA have larger swings? It has to do with a concept called beta. High-beta stocks go up faster when the market is up and fall further when the market is down. Basically high-variance, risky investments.

Why did the market have these swings? Because of uncertainty about future interest rates. Interest rates not only matter for business loans; they also set the risk-free rate for investors.

When investors invest into the stock market, they want to get back the risk free rate (how much they get from treasuries) + the risk premium (how much stocks outperform bonds long term)

If the risks of the stock market are the same, but the payoff of treasuries changes, then you need a higher return from stocks. To get a higher return, you can only accept a lower price.
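
That's essentially the CAPM relationship. A tiny numeric sketch, with made-up rates and betas rather than NVDA's actual figures:

```python
# CAPM: required return = risk-free rate + beta * (market return - risk-free rate)
# All numbers below are illustrative, not actual market or NVDA figures.
def required_return(risk_free: float, beta: float, market_return: float) -> float:
    return risk_free + beta * (market_return - risk_free)

risk_free = 0.04      # hypothetical treasury yield
market_return = 0.08  # hypothetical long-run market return

for beta in (1.0, 1.8):  # market-average vs. high-beta stock
    print(f"beta={beta}: required return = {required_return(risk_free, beta, market_return):.1%}")

# Same expected cash flows + a higher required return = a lower price you can pay.
# Valuing a constant annual cash flow as a simple perpetuity:
cash_flow = 10.0
for r in (0.08, 0.11):
    print(f"required return {r:.0%} -> price {cash_flow / r:.1f}")
```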

This is why stocks are down; NVDA is still making plenty of money from AI.

Also bubbles don't "leak".

I mean, sometimes they kinda do? They either pop or slowly deflate, I'd say slow deflation could be argued to be caused by a leak.

Are we talking about bubbles or about balloons? Maybe we should switch to the word balloon, since these economic 'bubbles' can also deflate slowly.

Good point, not sure that economists are human enough to take sense into account, but I think we should try and make it a thing.

I've never seen a bubble deflate, but I digress.

You can do it easily with a balloon (add some tape then poke a hole). An economic bubble can work that way as well, basically demand slowly evaporates and the relevant companies steadily drop in value as they pivot to something else. I expect the housing bubble to work this way because new construction will eventually catch up, but building new buildings takes time.

The question is, how much money (tape) are the big tech companies willing to throw at it? There's a lot of ways AI could be modified into niche markets even if mass adoption doesn't materialize.

A bubble, not a balloon...

You do realize an economic bubble is a metaphor, right? My point is that a bubble can either deflate rapidly (severe market correction, or a "burst"), or it can deflate slowly (a bear market in a certain sector). I'm guessing the industry will do what it can to have AI be the latter instead of the former.

Yes, I do. It's a metaphor that you don't seem to understand.

My point is that a bubble can either deflate rapidly (severe market correction, or a "burst"), or it can deflate slowly (a bear market in a certain sector).

No, it cannot. It is only the former. The entire point of the metaphor is that its a rapid deflation. A bubble does not slowly leak, it pops.

One good example of a bubble that usually deflates slowly is the housing market. The housing market goes through cycles, and those bubbles very rarely pop. It popped in 2008 because banks were simultaneously caught with their hands in the candy jar by lying about risk levels of loans, so when foreclosures started, it caused a domino effect. In most cases, the fed just raises rates and housing prices naturally fall as demand falls, but in 2008, part of the problem was that banks kept selling bad loans despite high mortgage rates and high housing prices, all because they knew they could sell those loans off to another bank and make some quick profit (like a game of hot potato).

In the case of AI, I don't think it'll be the fed raising rates to cool the market (that market isn't impacted as much by rates), but the industry investing more to try to revive it. So Nvidia is unlikely to totally crash because it'll be propped up by Microsoft, Amazon, and Google, and Microsoft, Apple, and Google will keep pitching different use cases to slow the losses as businesses pull away from AI. That's quite similar to how the fed cuts rates to spur economic investment (i.e. borrowing) to soften the impact of a bubble bursting, just driven from mega tech companies instead of a government.

At least that's my take.

The AI bubble is never going to "pop" for Nvidia because they're not dependent on AI, other than slightly modifying the design of their chips. When the AI bubble does pop, Nvidia will just go back to selling cards to gamers and professionals. They'll be the biggest profiteer of the bubble.

A lot of Nvidia's stock price is based on AI demand. If that evaporates, Nvidia's stock price would drop back to where it was before AI became a major profit driver. The big players will fight to keep AI business going, so I think we'd be in for a pretty soft landing there.

I didn't say anything about their stock price...

"Bubbles" are typically defined by stock/commodities prices. The 2000 dotcom bubble was defined by investor losses, the 2008 housing bubble was defined by housing price drops, etc. So an AI "bubble" will be quantified by stock prices of AI-related companies, like Nvidia.

I think the stock price will be at least partially supported by spending by the big tech companies trying to keep AI relevant. So I expect less of a "pop" and more of a gradual deflation.

I find it insane when "tech bros" and AI researchers at major tech companies try to justify the wasting of resources (like water and electricity) in order to achieve "AGI" or whatever the fuck that means in their wildest fantasies.

These companies have no accountability for the shit that they do and consistently ignore all the consequences their actions will cause for years down the road.

What’s funny is that we already have general intelligence in billions of brains. What tech bros want is a general intelligence slave.

Well put.

I'm sure plenty of people would be happy to be a personal assistant for searching, summarizing, and compiling information, as long as they were adequately paid for it.

It's research. Most of it never pans out, so a lot of it is "wasteful". But if we didn't experiment, we wouldn't find the things that do work.

Most of the entire AI economy isn't even research. It's just grift. Slapping a label on ChatGPT and saying you're an AI company. It's hustlers trying to make a quick buck from easy venture capital money.

Is it really a grift when you are selling possible value to an investor who would make money from possible value?

As in, there is no lie; investors know it’s a gamble and are just looking for the gamble that everyone else bets on, not one that would provide real value.

I would classify speculation as a form of grift. Someone gets left holding the bag.

Personally I can't wait for a few good bankruptcies so I can pick up a couple of high end data centre GPUs for cents on the dollar

Search Nvidia P40 24GB on eBay: about $200 each and surprisingly good for self-hosted LLMs. If you plan to build an array of GPUs, search for the P100 16GB instead. Same price, but unlike the P40 it supports NVLink, and its 16GB is HBM2 memory on a 4096-bit bus, so it's still competitive for LLMs. The P40's selling point is the amount of memory for the money, but it's rather slow compared to the P100 and doesn't support NVLink.
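
For anyone wondering what "self-hosted LLM" looks like in practice on a card like that, here's a rough sketch (assuming a recent PyTorch plus the Hugging Face transformers and accelerate packages; the model name is just an example, and a 7B model at 16-bit weights needs roughly 14 GB, so it fits in 24 GB of VRAM):

```python
# Minimal self-hosted LLM sketch with Hugging Face transformers.
# The model name below is only an example; any causal LM that fits in VRAM works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory vs. fp32
    device_map="auto",          # place weights on the GPU automatically (needs accelerate)
)

prompt = "Explain why the AI hype cycle resembles the dotcom era in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```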

Thanks for the tips! I'm looking for something multi-purpose for LLM/stable diffusion messing about + transcoder for jellyfin - I'm guessing that there isn't really a sweet spot for those 3. I don't really have room or power budget for 2 cards, so I guess a P40 is probably the best bet?

Try the Ryzen 8700G's integrated GPU for transcoding, since it supports AV1, and these P-series GPUs for LLM/Stable Diffusion; that would be a good mix, I think. Or if you don't have the budget for a new build, buy an Intel A380 GPU for transcoding. You can attach it as a mining GPU through a PCIe riser; Linus Tech Tips tested that GPU for transcoding, as I remember.

It's like the least popular opinion I have here on Lemmy, but I assure you, this is the beginning.

Yes, we'll see a dotcom style bust. But it's not like the world today wasn't literally invented in that time. Do you remember where image generation was 3 years ago? It was a complete joke compared to a year ago, and today, fuck no one here would know.

When code generation goes through that same cycle, you can put out an idea in plain language, and get back code that just "does" it.

I have no idea what that means for the future of my humanity.

I agree with you but not for the reason you think.

I think the golden age of ML is right around the corner, but it won’t be AGI.

It would be image recognition and video upscaling, you know, the boring stuff that is not game changing but possibly useful.

I feel the same about the code generation stuff. What I really want is a tool that suggests better variable names.

you can put out an idea in plain language, and get back code that just “does” it

No you can't. Simplifying it grossly:

They can't do the most low-level, dumbest detail, splitting hairs, "there's no spoon", "this is just correct no matter how much you blabber in the opposite direction, this is just wrong no matter how much you blabber to support it" kind of solutions.

And that happens to be the main requirement that makes a task worth a software developer's time.

We need software developers to write computer programs, because "a general idea" even in a formalized language is not sufficient, you need to address details of actual reality. That is the bottleneck.

That technology widens the passage in the places which were not the bottleneck in the first place.

I think you live in a nonsense world. I literally use it every day and yes, sometimes it's shit and it's bad at anything that even requires a modicum of creativity. But 90% of shit doesn't require a modicum of creativity. And my point isn't about where we're at, it's about how far the same tech progressed on another domain adjacent task in three years.

Lemmy has a "dismiss AI" fetish and does so at its own peril.

And I wouldn't know where to start using it. My problems are often of the "integrate two badly documented company-internal APIs" variety. LLMs can't do shit about that; they weren't trained for it.

They're nice for basic rote work but that's often not what you deal with in a mature codebase.

Again, dismiss at your own peril.

Because "Integrate two badly documented APIs" is precisely the kind of tasks that even the current batch of LLMs actually crush.

And I'm not worried about being replaced by the current crop. I'm worried about future frameworks on technology like greyskull running 30, or 300, or 3000 uniquely trained LLMs and other transformers at once.

I'm with you. I'm a Senior software engineer and copilot/chatgpt have all but completely replaced me googling stuff, and replaced 90% of the time I've spent writing the code for simple tasks I want to automate. I'm regularly shocked at how often copilot will accurately auto complete whole methods for me. I've even had it generate a whole child class near perfectly, although this is likely primarily due to being very consistent with my naming.

At the very least it's an extremely valuable tool that every programmer should get comfortable with. And the tech is just in its baby form. I'm glad I'm learning how to use it now instead of pooh-poohing it.
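
To give a made-up example of the kind of completion I mean: with naming this consistent, the assistant can usually infer a whole sibling class from the pattern.

```python
# Hypothetical example of the boilerplate an assistant completes well
# when a codebase uses consistent naming.
from abc import ABC, abstractmethod

class BaseExporter(ABC):
    def __init__(self, output_path: str) -> None:
        self.output_path = output_path

    @abstractmethod
    def export(self, rows: list[dict]) -> None:
        ...

class CsvExporter(BaseExporter):
    def export(self, rows: list[dict]) -> None:
        import csv
        with open(self.output_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

# Typing "class JsonExporter(BaseExporter):" is usually enough for the
# assistant to fill in an export() that mirrors CsvExporter using json.dump.
```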

Ikr? It really seems like the dismissiveness is coming from people either not experienced with it, or just politically angry at its existence.

And my point isn’t about where we’re at, it’s about how far the same tech progressed on another domain adjacent task in three years.

First off, are you extrapolating the middle part of a sigmoid thinking it's an exponential? Secondly, https://link.springer.com/content/pdf/10.1007/s11633-017-1093-8.pdf

Dismiss at your own peril is my mantra on this. I work primarily in machine vision and the things that people were writing on as impossible or "unique to humans" in the 90s and 2000s ended up falling rapidly, and that generation of opinion pieces are now safely stored in the round bin.

The same was true of agents for games like go and chess and dota. And now the same has been demonstrated to be coming true for languages.

And maybe that paper built in the right caveats about "human intelligence". But that isn't to say human intelligence can't be surpassed by something distinctly inhuman.

The real issue is that previously there wasn't a use case with enough viability to warrant the explosion of interest we've seen like with transformers.

But transformers are like, legit wild. It's bigger than U-Nets. It's way bigger than LSTMs.

So dismiss at your own peril.

But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

Tell me you haven't read the paper without telling me you haven't read the paper. The paper is about T2 vs. T3 systems, humans are just an example.

Yeah, I skimmed a bit. I'm on like 4 hours of in-flight sleep after like 24 hours of airports and flying. If you really want me to address the points of the paper, I can, but I can also tell it doesn't diminish my primary point: dismiss at your own peril.

dismiss at your own peril.

Oooo I'm scared. Just as much as I was scared of missing out on crypto or the last 10000 hype trains VCs rode into bankruptcy. I'm both too old and too much of an engineer for that BS especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is "Dude, but consider FOMO".

That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If OTOH you're in that career path because AI BS and not a love for the maths... let's just say that vacation doesn't help against burnout. Switch tracks, instead, don't do what you want but what you can.

Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We're not talking about changes in architecture, we're about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, maybe in 50 years there's going to be another sigmoid and you'll have written one of the papers leading up to it because you actually addressed the fucking core problem.

I mean, I've been doing this for 20 years and have led teams from 2-3 in size to 40. I've been the lead on systems that have had to undergo legal review at a state level, where the output literally determines policy for almost every home in a state. So you can be as dismissive or enthusiastic as you like. I could truly not give a shit about lay opinion cus I'm out here doing this, building it, and I see it every day.

For anyone with ears to listen: dismiss this current round at your own peril.

Perilous, eh. Threatening tales of impending doom and destruction. Who are you actually trying to convince here? I doubt it's me; I'd be flattered, but I don't think you care enough.

If Roko's Basilisk is forcing you, blink twice.

Spreading FUD is just this guy's way of trying to keep the hype alive. Techbro bullshittery 101. Reminds me of Crypto YouTube a few years back.

Those shitty investments won't pay themselves back on their own, you know?

I wish I could ignore this, but it's harming the environment so much that we can't just ignore those greedy shitheads.

Are you a software developer? Or a hardware engineer? EDIT: Or anyone credible in evaluating my nonsense world against yours?

Machine learning scientist.

That explains your optimism. Code generation is at a stage where it slaps together Stack Overflow answers and code ripped off from GitHub for you. While that is quite effective to get at least a crappy programmer to cobble together something that barely works, it is a far cry from having just anyone put out an idea in plain language and getting back code that just does it. A programmer is still needed in the loop.

I'm sure I don't have to explain to you that AI development over the decades has often reached plateaus where the approach needed to be significantly changed in order for progress to be made, but it could certainly be the case where LLMs (at least as they are developed now) aren't enough to accomplish what you describe.

So close, but not there.

OK, you'll know that I'm right when you somewhat expand your expertise to neighboring areas. Should happen naturally.

But the trillion dollar valued Nvidia...

Maybe we can have normal priced graphics cards again.

I'm tired of people pretending £600 is a reasonable price to pay for a mid range GPU.

I'm not sure, these companies are building data centers with so many gpus that they have to be geo located with respect to the power grid because if it were all done in one place it would take the grid down.

And they are just building more.

But the company doesn't have the money. Stock value means investor valuation, not company funds.

Once a company goes public for the very first time, it's getting money into its account, but from then on forward, that's just investors speculating and hoping on a nice return when they sell again.

Of course there should be some correlation between the company's profitability and the stock price, so ideally they do have quite some money, but in an investment craze like this, the correlation is far from 1:1. So whether they can still afford to build the data centers remains to be seen.

They're not building them for themselves; they're selling GPU time and SuperPods. Their valuation is because there's STILL a lineup a mile long for their flagship GPUs. I get that people think AI is a fad, and its public form may be, but there are thousands of GPU-powered projects going on behind closed doors that are going to consume whatever GPUs get made for a long time.

Their valuation is because there’s STILL a lineup a mile long for their flagship GPUs.

Genuinely curious: how do you know where a valuation, any valuation, comes from?

This is an interesting story, and it might be factually true, but as far as I know, unless someone has actually asked the biggest investors WHY they bet on a stock, nobody knows why a valuation is what it is. We might have guesses, and they might even be correct, but they also change.

I mentioned it a few times here before, but my bet is yes, what you mentioned, BUT also because the same investors do not know where else to put their money yet and thus simply can't jump ship. They are stuck there. It might be because they initially thought the demand was high and nobody else could fulfill it, but I believe that's not correct anymore.

but I believe that's not correct anymore.

Why do you believe that? As far as I understand, other HW exists...but no SW to run on it...

Right, and I mentioned CUDA earlier as one of the reasons for their success, so it's definitely something important. Clients might be interested in e.g. Google TPUs, startups like Etched, Tenstorrent, Groq, Cerebras Systems, or heck, even designing their own, but they are probably limited by their current stack relying on CUDA. I imagine, though, that if the backlog keeps on existing, there will be abstraction libraries, at least for the most popular frameworks, e.g. TensorFlow, JAX or PyTorch, simply because the cost of waiting is too high.

Anyway, what I meant isn't about hardware or software but rather ROI, namely when Goldman Sachs and others issue analyst reports saying that the promise itself isn't up to par with actual usage by paying customers.

Those reports might affect investments from the smaller players, but the big names (Google, Microsoft, Meta, etc.) are locked in a race to the finish line. So their investments will continue until one of them reaches the goal... [insert sunk cost fallacy here] ...and I think we're at least 1-2 years from there.

Edit: posted too soon

Well, I'm no stockologist, but I believe when your company has a perpetual sales backlog with a 15-year head start on your competition, that should lead to a pretty high valuation.

I'm also no stockologist, and I agree, but that's not my point. The stock should be high, but that might already have been factored in; namely, this is not a new situation, so theoretically it's been priced in since investors understood it. My point anyway isn't about the price itself but rather the narrative (or reason, like the backlog and lack of competition you mention) that investors themselves believe.

I think they're going to be bankrupt within 5 years. They have way too much invested in this bubble.

I highly doubt that. If the AI bubble pops, they'll probably be worth a lot less relative to other tech companies, but hardly bankrupt. They still have a very strong GPU business, they probably have an agreement with Nintendo on the next Switch (like they did with the OG Switch), and they could probably repurpose the AI tech in a lot of different ways, not to mention various other projects where they package GPUs into SOCs.

So should we be fearing a new crash?

Do you have money and/or personal emotional validation tied up in the promise that AI will develop into a world-changing technology by 2027? With AGI in everyone's pocket giving them financial advice, advising them on their lives, and romancing them like a best friend with Scarlett Johansson's voice whispering reassurances in your ear all day?

If you are banking on any of these things, then yeah, you should probably be afraid.

Whether we like it or not, AI is here to stay, and in 20-30 years it’ll be as embedded in our lives as computers and smartphones are now.

Right, and it did have an AI winter a few decades ago. It's indeed here to stay; that doesn't mean any of the current companies marketing it right now will be, though.

AI as a research field will stay, everything else maybe not.

Is there a "young man yells at clouds meme" here?

"Yes, you're very clever calling out the hype train. Oooh, what a smart boy you are!" Until the dust settles...

Lemmy sounds like my grandma in 1998: "Pushah. This 'internet' is just a fad."

The difference is that the Internet is actually useful.

Yeah, the early Internet didn't require 5 tons of coal to be burned just to give you a made-up answer to your query. This bubble is Pets.com, only it is also murdering the rainforest while still being completely useless.

Estimates for ChatGPT usage per query are on the order of 20-50 Wh, which is about the same as playing a demanding game on a gaming PC for a few minutes. Local models use significantly less.
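
Rough arithmetic behind that comparison (the 400 W draw for a gaming PC under load is just an assumed figure):

```python
# Back-of-the-envelope: energy per query vs. a few minutes of gaming.
query_wh = (20, 50)       # estimated energy per ChatGPT query, in watt-hours
gaming_pc_watts = 400     # assumed draw of a gaming PC under load

for wh in query_wh:
    minutes = wh / gaming_pc_watts * 60
    print(f"{wh} Wh is about {minutes:.0f} minutes of gaming at {gaming_pc_watts} W")
```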

Can you imagine all the troll farms automatically using all this power

What do people mean with "AI bubble"?

The term "AI bubble" refers to the idea that the excitement, investment, and hype surrounding artificial intelligence (AI) may be growing at an unsustainable rate, much like historical financial or technological bubbles (e.g., the dot-com bubble of the late 1990s). Here are some key aspects of this concept:

  1. Overvaluation and Speculation: Investors and companies are pouring significant amounts of money into AI technologies, sometimes without fully understanding the technology or its realistic potential. This could lead to overvaluation of AI companies and startups.

  2. Hype vs. Reality: There is often a mismatch between what people believe AI can achieve in the short term and what it is currently capable of. Some claims about AI may be exaggerated, leading to inflated expectations that cannot be met.

  3. Risk of Market Crash: Like previous bubbles in history, if AI does not deliver on its overhyped promises, there could be a significant drop in AI investments, stock prices, and general interest. This could result in a burst of the "AI bubble," causing financial losses and slowing down real progress.

  4. Comparison to Previous Bubbles: The "AI bubble" is compared to the dot-com bubble or the housing bubble, where early optimism led to massive growth and investment, followed by a sudden collapse when the reality didn't meet expectations.

Not everyone believes an AI bubble is forming, but the term is often used as a cautionary reference, urging people to balance enthusiasm with realistic expectations about the technology’s development and adoption.

Not everyone believes an AI bubble is forming

Well, the AI's not wrong. No one believes a bubble is forming, since it's already about to burst!

As in as soon as companies realise they won't be able to lay off everybody except executives and personal masseuses, nVidia will go back to having a normal stock price.

Rich people will become slightly less grotesquely wealthy, and everything must be done to prevent this.

I'm just praying people will fucking quit it with the worries that we're about to get SKYNET or HAL when binary computing would inherently be incapable of recreating the fast pattern recognition required to replicate or outpace human intelligence.

Moore's law is about scaling raw computing power, which is a measure of hardware performance, not of the software you can run on it.

Unfortunately it's part of the marketing. Thanks, OpenAI, for that "Oh no... we can't share GPT-2, too dangerous" and then... here it is. Definitely interesting then, but not world-shattering. Same for GPT-3... but through an exclusive partnership with Microsoft, all closed; rinse and repeat for GPT-4. It's a scare tactic to lock down what was initially open, both directly and by closing the door behind them through regulation, or at least trying to.

Welp, it was 'fun' while it lasted. Time for everyone to adjust their expectations to much more humble levels than what was promised and move on to the next scheme. After the Metaverse, NFTs and 'Don't become a programmer, AI will steal your job literally next week!11', I'm eager to see what they come up with next. And by eager I mean I'm tired. I'm really tired and hope the economy just takes a damn break from breaking things.

I just hope I can buy a graphics card without having to sell organs some time in the next two years.

I'd love an upgrade for my 2080 TI, really wish Nvidia didn't piss off EVGA into leaving the GPU business...

Don't count on it. It turns out that the sort of stuff that graphics cards do is good for lots of things, it was crypto, then AI and I'm sure whatever the next fad is will require a GPU to run huge calculations.

AI is shit, but imo we have been making amazing progress in computing power; it's just that we can't really innovate atm, just more race to the bottom.

——

I thought capitalism bred innovation, did the tech bros lie?

/s

FOMO is the best explanation of this psychosis, and then of course denial by people who became heavily invested in it. Stuff like LLMs or ConvNets (and the like) can already be used to do some pretty amazing stuff that we could not do a decade ago; there is really no need to shit rainbows and puke glitter all over it. I am also not against exploring and pushing the boundaries, but when you explore a boundary while pretending you have already crossed it, that is how you get bubbles. And this all boils down to appeasing some cancerous billionaire shareholders so they funnel some money down to your pockets.

there is really no need to shit rainbows and puke glitter all over it

I'm now picturing the unicorn from the Squatty Potty commercial, with violent diarrhea and vomiting.

Hopefully this means the haters will shut up and we can get on with using it for useful stuff

You're no longer using the term Luddite on us! Character development!

Oh, you're a Luddite, you're also a hater, and about as intractable and stupid as a Trump supporter. You can be many crappy things at once!

That would be absolutely amazing. How can we work out a community effort designed to teach? You know, some crowdsourced tests; maybe we can bring education to the masses for free...

That would indeed be great but completely unrelated to what I said so I suspect you may have answered the wrong person

Now I want the heaters to shut down so we can make some cool s*** too

Shitty useless pictures each costing kilowatt hours.

I mean, machine learning and AI do have benefits, especially in research in the medical field. The consumer AI products are just stupid though.

It's helped me learn coding and Spanish, and helped me build scripts which I would never have been able to do by myself or with technical works alone.

If we're talking specifically about the value I get out of what GPT is right now, it's priceless to me. It's like a second, albeit braindead, systems administrator on my shoulder when I need something I don't want to type out myself. And whatever mistakes it makes are within my abilities to repair on my own without fighting for it.

AI didn't do that. It stole all the information, posted for free on the internet by people who tried to help others, and makes money off it.

No, no, and also no. Try again? Or cram your face into a blender? Either is good with me

Are you ok? Too long in the sun?

Bit tired (had to get up too early today) but otherwise okay, thanks. How's your face? Blended to a fine paste yet?