"LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."
Please let this happen
Market crash and third world war. What a time to be alive!
I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.
> ... bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects...
One doesn't imagine any of them even remotely thinks a technological panacea is feasible.
> ... while they get super rich off it.
Because they're only focusing on this.
Oh they definitely exist. At a high level the bullshit is driven by malicious greed, but there are also people who are naive and ignorant and hopeful enough to hear that drivel and truly believe in it.
Like when Microsoft shoves GPT-4 into notepad.exe. Obviously a terrible, terrible product from a UX/CX perspective. But also extremely expensive for Microsoft, right? They don't gain anything by stuffing their products with useless, annoying features that eat expensive cloud compute like a kid eats candy. That only happens because their management people truly believe, honest to god, that this is a sound business strategy, which could only be the case if they completely misunderstand what GPT-4 is and could be, and actually think future improvements will be so great that there is a path to mass monetization somehow.
That's not what's happening here. Microsoft management are well aware that AI isn't making them any money, but the company made a multi-billion-dollar bet on the idea that it would, and now they have to convince shareholders that they didn't epically fuck up. Shoving AI into stuff like Notepad is basically about artificially inflating "consumer uptake" numbers that they can then show to credulous investors, to suggest that any day now this whole thing is going to explode into an absolute tidal wave of growth, so you'd better buy more stock right now, better not miss out.
Yeah, my management was all gung-ho about exploiting AI to do all sorts of stuff.
Like read. Not generative AI crap, but read. They came to us and said quite literally: "how can we use something like ChatGPT and make it read."
I don't know who convinced them or how, but it did convince me that managers assume anyone who sounds convincing and confident must be correct.
No no, I disagree. I think that shoving AI into all these apps is a solid plan on their behalf. People are going to stop Recall and shut it off. So instead they put AI components into every app; now it has the right to overview everything you're doing, and every app collects data on you, sending it home to update their personalized models so they can better sell you products.
True, they just sell it to their investors as a panacea
Some are just opportunists, but there are certainly true believers — either in specific technologies, or pedal-to-the-metal growth as the only rational solution to the world’s problems.
Andreessen is pretty open about it: https://a16z.com/the-techno-optimist-manifesto/
I think Andreessen is lying and the "techno optimist manifesto" is a ruse for PR.
a16z has been involved in various crypto pump and dumps. They are smart enough to know that something like "play to earn" is not sustainable and always devolves into a pyramid scheme. Doesn't stop them from getting in early and dumping worthless tokens on the marks.
The manifesto honestly reads like it was written by a teenager. The style, the tone, the excessive quotes from economists. This is pretty typical stuff for American oligarch polemics, no?
Of course most don't actually even believe it, that's just the pitch to get that VC juice. It's basically fraud all the way down.
Soooo... Without capitalism?
Pretty much.
No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.
Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.
Then what is this I’m feeling if it’s not AGI? 🤔
Maybe GERD?
Turns out AI isn't real and has no fidelity.
Machine learning could be the basis of AI but is anyone even working on that when all the money is in LLMs?
I'm not an expert, but the whole basis of an LLM not actually understanding words, just predicting the likelihood of what word comes next, seems like it's not going to help progress it to the next level... To be an artificial general intelligence, shouldn't it know what words are?
I feel like this path is taking a brick and trying to fit it into a keyhole...
Learning is the basis of all known intelligence. LLMs have learned something very specific; AGI would need to be built by generalising the core functionality of learning, not as an outgrowth of fully formed LLMs.
and yes the current approach is very much using a brick to open a lock and that's why it's ... ahem ... hit a brick wall.
Yeah, 20-something years ago when I was trying to learn PHP of all things, I really wanted to make a chat bot that could learn what words are... I barely got anywhere, but I was trying to program the understanding of sentence structure and feed it a dictionary of words... My goal was to have it output something on its own...
I'd like to see these things become less resource intensive, and hopefully running locally rather than on some random server...
I found the files... It was closer to 15 years ago...
Trying to invent artificial intelligence to learn PHP is quite funny lol
Also a bit sadistic to be honest. Bringing a new form of life into the world only to subject it to PHP.
I'm amazed I still have the files... But yeah this was before all this shit was big... If I had a better drive I would have ended up more evil than zuck .. my plan was to collect data on everyone who used the thing and be able to build profiles on everyone based on what information you gave the chat ... And that's all I can really remember... But it's probably for the best...
Right, so AIs don’t really know what words are. All they see are tokens. The tokens could be words and letters, but they could also be image/video features, audio waveforms, or anything else.
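For anyone curious what "just predicting the next token" looks like mechanically, here's a toy sketch with made-up probabilities; a real LLM is a vastly bigger learned version of this lookup, and nothing in the mechanism "knows" what the words mean:

```python
import random

# Toy next-token table: for each two-token context, a made-up
# distribution over possible continuations. The probabilities here
# are invented for illustration, not learned from data.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
}

def sample_next(context):
    """Sample the next token from the distribution for the recent context."""
    dist = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "cat"]
while len(tokens) < 8 and tokens[-1] != "<end>":
    tokens.append(sample_next(tokens))
print(" ".join(tokens))  # e.g. "the cat sat on the <end>"
```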
> largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence
Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims. Which helped feed the hype.
I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.
Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn't tell you the difference between an LLM and an AGI, and should be dismissed as spam.
Microsoft Research published a paper about GPT-4 titled "Sparks of AGI".
I don't think they really believe it but it's good to bring in VC money
That is a very VC-baiting title. But it doesn't appear from the abstract that they're claiming that LLMs will develop to the complexity of AGI.
You assume most stock investors read beyond the headline, you assume wrong.
The call is coming from inside the house. Former Google CEO Eric Schmidt claims it will be like alien intelligence so we should just trust it to make political decisions for us, bro: https://www.computing.co.uk/news/2024/ai/former-google-ceo-eric-schmidt-urges-ai-acceleration-dismisses-climate
Do you have a non-paywalled link? And is that quote in relation to LLMs specifically or AI generally?
I read a lot, I guess, and I don't understand why they think like this. From what I see, there are constant improvements in MANY areas! Language models are getting faster and more efficient. Code is getting better across the board as people use it to improve their own, contributing to the whole of code improvements and project participation and development. I feel like we really are at the beginning of a lot of better things, and it's iterative as it progresses. I feel hopeful.
It's so funny how all this is only a problem within a capitalist frame of reference.
What they call "AI" is only "intelligent" within a capitalist frame of reference, too.
Well duhhhh.
Language models are insufficient.
They also need:
Someone in here once linked me a scientific article about how today's "AI" is basically one level below what it would need to be to be anything like a real AI. A bit like the difference between exponentiation and the Ackermann function, but I really forget what that was all about.
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn't imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
I know those terms. I wanted to edit it, but was too lazy. You still understood what I meant, right?
We don't call a shell script "AI" after all, yet we do call those models that, even though by your definition there shouldn't be any difference.
"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence."
"As I have always warned," he added, "that's just a fantasy."
Microsoft shit is a mega corp... AI is based on their revenue lol
Even Zuckerberg admits that trying to scale LLMs larger doesn’t work because the energy and compute requirements go up exponentially. There must exist a different architecture that is more efficient, since the meat computers in our skulls are hella efficient in comparison.
Once we figure that architecture out though, it’s very likely we will be able to surpass biological efficiency like we have in many industries.
That's a bad analogy. We didn't surpass biological efficiency in industry because we figured out human anatomy and how to improve it. We simply found alternative ways to produce force, like electricity and motors, which have absolutely no relation to how muscles work.
I imagine it would be the same for computers: simply another, better method of achieving something. But it's so uncertain that it's barely worth discussing.
Of course! It’s not like animals have jet engines!
Human brains are merely the proof that such energy efficiencies are possible for intelligence. It’s likely we can match or go far beyond that, probably not by emulating biology directly. (Though we certainly may use it as inspiration while we figure out the underlying principles.)
With current stat prediction models?
Thank fuck. Can we have cheaper graphics cards again please?
I'm sure a RTX 4090 is very impressive, but it's not £1800 impressive.
I swapped to AMD this generation and it's still expensive.
A well-researched pre-owned card is the way to go. I bought a 6900 XT a couple years ago for a good deal.
I used to buy broken video cards on ebay for ~$25-50. The ones that run, but shut off have clogged heat sinks. No tools or parts required. Just blow out the dust. Obviously more risky, but sometimes you can hit gold.
If you can buy ten and one works, you've saved money. Two work and you're making money. The only question is whether that tenth card really will work or not.
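A rough back-of-envelope of that gamble, with illustrative numbers rather than real market prices:

```python
# Expected-value sketch for the broken-GPU lottery. The prices here
# are illustrative assumptions, not quotes from any real listing.
cost_per_card = 40    # assumed price of a "broken" card
resale_value = 400    # assumed resale price of a working card
cards_bought = 10

outlay = cards_bought * cost_per_card
for cards_working in range(4):
    print(f"{cards_working} working -> net ${cards_working * resale_value - outlay:+}")
# 0 working -> net $-400
# 1 working -> net $+0    (one success roughly covers the outlay)
# 2 working -> net $+400  (now you're making money)
```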
I used to get EVGA B-stock, which was reasonable, but they got out of the business 😞
Sorry, crypto is back in season.
Huh?
Smartphone improvements hit a rubber wall a few years ago (disregarding folding screens, which make up a small market share, the rate of improvement slowed down drastically), and the industry is doing fine. It's not growing like it used to, but that just means people are keeping their smartphones for longer periods of time, not that people stopped using them.
Even if AI were to completely freeze right now, people will continue using it.
Why are people reacting like AI is going to get dropped?
Because in some eyes, infinite rapid growth is the only measure of success.
People are dumping billions of dollars into it, mostly into power, but it cannot turn a profit.
So the companies who, for example, revived a nuclear power facility in order to feed their machines for ever-diminishing returns in output quality are going to shut everything down at massive losses, with countless hours of human work and lifespan thrown down the drain.
This will have quite a large economic impact as many newly created jobs go up in smoke and businesses that structured themselves around the assumption of continued availability of high-end AI have to reorganize or go out of business.
Search up the Dot Com Bubble.
People pay real money for smartphones.
People pay real Money for AIaaS as well..
Hope?
Because novelty is all it has. As soon as it stops improving in a way that makes people say "oh that's neat", it has to stand on the practical merits of its capabilities, which is, well, not much.
I’m so baffled by this take. “Create a terraform module that implements two S3 buckets with cross-region bidirectional replication. Include standard module files like linting rules and enable precommit.” Could I write that? Yes. But does this provide an outstanding stub to start from? Also yes.
And beyond programming, it is otherwise having positive impact on science and medicine too. I mean, anybody who doesn’t see any merit has their head in the sand. That of course must be balanced with not falling for the hype, but the merits are very real.
The merits are real. I do understand the deep mistrust people have for tech companies, but there's far too much throwing out of the baby with the bath water.
As a solo developer, LLMs are a game-changer. They've allowed me to make amazing progress on some of my own projects that I've been stuck on for ages.
But it's not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it. I asked it about some damage to an old church in Spain; it turned out to be from the Spanish Civil War, where revolutionaries had been mowed down by Franco's firing squads.
Just today, I was getting help from an LLM for an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied back with a single sentence in broken English:
"Yes a can , need tho mow m3 you need delivery after e gif the price"
The first bit is pretty obviously "Yes I can" but I couldn't really be sure what he was trying to say with the rest of it. So I asked ChatGPT who responded:
> It seems he's saying he can handle the delivery but needs to know the total volume (in cubic meters) of your items before he can provide a price. Here's how I'd interpret it:
> "Yes, I can [do the delivery]. I need to know the [volume] in m³ for delivery, and then I'll give you the price."
Thanks to LLMs, I'm able to accomplish so many things that would have previously taken multiple internet searches and way more effort.
There's a pretty big difference between ChatGPT and the science/medicine AIs.
And keep in mind that for LLMs and other chatbots, it's not that they aren't useful at all but that they aren't useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot addons in Microsoft 365, and this is when AI companies are still in their "sell below cost and light VC money on fire to survive long enough to gain market share" phase. What happens when the VC money dries up and AI companies have to double their prices (or more) in order to make enough revenue to cover their costs?
Nothing to argue with there. I agree. Many companies will go out of business. Fortunately we'll still have the llama3s and mistrals lying around that I can run locally. On the other hand, cost justification is a difficult equation with many variables, so maybe in some cases it is, or will be, worth the cost. I'm just saying there is some merit.
The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.
Other than with language models, this has already happened: take a look at apps such as Merlin Bird ID (identifies birds fairly well by sound and somewhat okay visually), WhoBird (identifies birds by sound), and Seek (visually identifies plants, fungi, insects, and animals). All of them work offline. IMO these are much better uses of ML than spammer-friendly text generation.
PlantNet and iNaturalist are pretty good for plant identification as well; I use them all the time to find out what's volunteering in my garden. Just looked them up, and it turns out Seek is by iNaturalist.
This has already started to happen. The new llama3.2 model is only 3.7GB and it's WAAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You're still not running it on $20 hardware, but you no longer need a 3090 to have something useful.
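For anyone who wants to try that locally, here's a minimal sketch using the ollama Python client, assuming the ollama daemon is running and you've already done `ollama pull llama3.2`:

```python
# Minimal local-LLM call via the ollama Python client.
# Assumes `pip install ollama` and that the llama3.2 model
# has already been pulled with `ollama pull llama3.2`.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Why do small local models matter?"}],
)
print(response["message"]["content"])
```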
Well, you see, that's the really hard part of LLMs. Getting good results is a direct function of the size of the model: the bigger the model, the more effective it can be at its task. However, there's something called the compute-efficient frontier. Basically, for any given size, you can't make a model more effective beyond that boundary. The only ways to make a model better are to make it larger (what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. The latter has been proving extraordinarily hard, mostly because understanding what is going on inside a model requires rather abstract and esoteric mathematical principles that bend your mind backwards.

You can compress an already-trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that larger and larger models are ever more expensive while providing rapidly diminishing returns.

Oh, and we are quickly running out of quality usable data, so shoveling in more data after a certain point starts to actually give worse results, unless you dedicate thousands of hours of human labor to producing, collecting, and cleaning new data. That's all before you even address data poisoning, where previously LLM-generated data is fed back into training; it is very hard to keep that from devolving into incoherence after a couple of generations.
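To make the "bigger is the main lever, with diminishing returns" point concrete, here's a sketch in the shape of the Chinchilla scaling law (loss as a power law in parameter count N and training tokens D). The functional form follows Hoffmann et al., but the constants below are illustrative stand-ins, not the paper's fitted values:

```python
# Chinchilla-style scaling law sketch: loss falls as a power law in
# parameters N and training tokens D, so each improvement costs vastly
# more compute than the last. Constants are illustrative, not fitted.
E, A, B, alpha, beta = 1.7, 400.0, 400.0, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 1e10, 1e11, 1e12):
    d = 20 * n           # rough rule of thumb: ~20 tokens per parameter
    flops = 6 * n * d    # common estimate: training compute ≈ 6·N·D
    print(f"N={n:.0e} D={d:.0e} compute≈{flops:.1e} FLOPs loss≈{loss(n, d):.3f}")
# Loss creeps down slowly while compute grows by orders of magnitude:
# the diminishing returns described above.
```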
That would be innovation, which I'm convinced no company can do anymore.
It feels like every time I learn about one of our modern innovations, it turns out it was already thought up and written down in a book in the 1950s, and just wasn't possible at the time due to some limitation in memory, precision, or some other metric. All we did was five decades of marginal improvement to get here, while not innovating much at all.
Are you talking about something specific?
Because nobody could have possibly saw that coming. /s
is this where we get to explain again why its not really ai?
Nope, just where you divest your stocks like any other tech run.
I have to do similar things when it comes to 'raytracing'. It meant one thing, and then a company came along and called something sorta similar the same thing. Then everyone had these ideas of what it should be vs. what it actually does. Then later, a better version came out that nearly matched the original term, but there was already negative hype because it launched half-baked and misnamed. Now they have to name the original thing something new to market it, because they destroyed the original name with a bad label and a half-baked product.
He is writing mainly about LLMs, and those are absolutely AI; they're just not strong AI or general AI (AGI).
You can't invent your own meaning for existing established terms.
LLMs are AI in the same way that the lane assist on my car is AI. Tech companies, however, very carefully and deliberately play up LLMs as being AGI or close to it. See for example the convenient fear-mongering over the "risks" of AI, as though ChatGPT will become Skynet.
LLMs are AI as it is defined in Computer Science, not SciFi. And the lane assist on your car might also be, although it may just be a well tuned PID for all I know.
I agree, but the problem is that the media (encouraged by tech companies) use the sci-fi definition, and the layman doesn't know any better.
Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains can only grasp simple concepts like "line must go up" and follow them like religious tenets, though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.
I am so tired of the AI hype and hate. Please give me my gen-art interest back; please just make programming art obscure again, I beg of you.
It's still quite obscure to actually mess with AI art instead of just throwing prompts at it, resulting in slop of varying quality levels. And I don't mean ControlNet, but GitHub repos with ComfyUI plugins that come with little explanation beyond a link to a paper, or "this is absolutely mathematically unsound but fun to mess with". Messing with stuff other than conditioning or mere model selection.
I know, it's actually still a beautiful community but much harder to talk to outsiders about
This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.
Oh no!
Anyway...
I've been hearing about the imminent crash for the last two years. New money keeps getting injected into the system. The bubble can't deflate while both the public and private sector have an unlimited lung capacity to keep puffing into it. FFS, bitcoin is on a tear right now, just because Trump won the election.
This bullshit isn't going away. Its only going to get forced down our throats harder and harder, until we swallow or choke on it.
It's been 5 minutes since the new thing did a new thing. Is it the end?
Of course it'll crash. Saying it's imminent though suggests someone needs to exercise their shorts.
Marcus is right, incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence", they just imitate human response based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.
I think I've heard about enough of experts predicting the future lately.
Apparently, there was only so much IP to steal
As I use copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:
1. Copilot can't read my mind and figure out what I'm trying to do.
2. I'm working on an uncommon problem where the typical solutions don't work.
3. Copilot is unable to tell when it doesn't "know" the answer, because of course it's just simulating communication and doesn't really know anything.
Problems 2 and 3 could be alleviated, though probably not solved completely, with more and better data or engineering changes - but obviously AI developers started by training the models on the most useful data and strategies they think work best. Problem 1 seems fundamentally unsolvable.
I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.
Not copilot, but I run into a fourth problem:
4. The LLM gets hung up on insisting that a newer feature of the language I'm using is wrong and keeps focusing on "fixing" it, even though it has access to the newest correct specifications where the feature is explicitly defined and explained.
Oh god yes, ran into this asking for a shell.nix file with a handful of tricky dependencies. It kept trying to do this insanely complicated temporary pull and build from git instead of just a 6 line file asking for the right packages.
"This code is giving me a return value of X instead of Y"
"Ah the reason you're having trouble is because you initialized this list with brackets instead of new()."
"How would a syntax error give me an incorrect return"
"You're right, thanks for correcting me!"
"Ok so like... The problem though."
Yeah, once you have to question its answer, it's all over. It got stuck and gave you the next best answer in its weights, which was absolutely wrong.
You can always restart the convo, re-insert the code and say what's wrong in a slightly different way and hope the random noise generator leads it down a better path :)
I'm doing some stuff with translation now, and I'm finding you can restart the session, run the same prompt, and get better or worse versions of a translation. After a few runs, you can take all the output and ask it to rank each translation on correctness and critique them. I'm still not completely happy with the output, but it does seem that sometimes, if you MUST get AI to answer the question, there can be value in making it answer across more than one session.
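If you want to systematize that, here's a sketch of the sample-then-rank loop, reusing the local ollama client from earlier as a stand-in for whatever chat backend you prefer; the model name and prompts are illustrative:

```python
# Sample the same translation prompt several times (each chat() call is
# stateless, i.e. a fresh "session"), then ask the model to critique and
# rank its own candidates. Model name and prompts are illustrative.
import ollama

SOURCE = "Texto em português para traduzir."

def ask(prompt: str) -> str:
    r = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": prompt}])
    return r["message"]["content"]

candidates = [ask(f"Translate to English:\n{SOURCE}") for _ in range(5)]

numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
print(ask(
    "Rank these candidate translations of the same text by correctness "
    f"and briefly critique each:\n\nOriginal: {SOURCE}\n\n{numbered}"
))
```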
So you use other people's open source code without crediting the authors or respecting their license conditions? Good for you, parasite.
Very frequently, yes. As well as closed source code and intellectual property of all kinds. Anyone who tells you otherwise is a liar.
Ah, I guess I'll have to question why I am lying to myself then.
Don't be a douchebag. Don't use open source without respecting copyrights & licenses. The authors are already providing their work for free. Don't shit on that legacy.
Ahh right, so when I use copilot to autocomplete the creation of more tests in exactly the same style of the tests I manually created with my own conscious thought, you're saying that it's really just copying what someone else wrote? If you really believe that, then you clearly don't understand how LLMs work.
It would appear I know LLM mechanisms better than you do, and my point is not so weak that I'd have to fabricate a strawman, claim it's what you said, and then argue against the strawman.
Using LLMs trained on other people's source code is parasitic behaviour and violates copyrights and licenses.
Programmers don't have the luxury of using inferior toolsets.
That statement is as dumb as it is non-sensical.
It'll implode but there are much larger elephants in the room - geopolitical dumbassery and the suddenly transient nature of the CHIPS Act are two biggies.
Third, high flying growth, blue sky darlings, they're flaky. In a downturn growth is worth 0 fucking dollars, throw that shit in a dumpster and rotate into staples. People can push off a phone upgrade or new TV and cut down on subscriptions, but they'll always need Pampers.
The thing propping up AI and semis is an arms race between those high flying tech companies, so this whole thing is even more prone to imploding than tech itself, since a ton of revenue comes from tech. Sensitive sector supported by an already sensitive sector. House of cards with NVDA sitting right at the tippy top. Apple, Facebook, those kinds of companies, when they start trimming back it's over.
But, it's one of those things that is anyone's guess. Just when you think there can't possibly be any steam left, one of the big guys like TSMC posts some really delightful earnings and it gets another second wind, for the 29th time.
Definitely a house of cards tho, and a lot more precarious now, because suddenly nobody knows how policy will affect the industry or the market as a whole.
They say shipping is the bellwether of the economy, and there's a lot of truth to that. I think semis are now the bellwether of growth. Sit back and watch the change in the wind.
Nvidia at least sells shovels; they already made some real profit, unlike OpenAI.
True, but it's not a competition. When big tech tightens their belts NVDA starves to death
Edit: guess I forgot to point out the hyperbole. Nvidia obviously won't literally die
Death?
They got the IP my man, pipe down.
Nice, looking forward to it! So much money and time wasted on pipe dreams and hype. We need to get back to some actually useful innovation.
Fingers crossed.
AI was 99% a fad. Besides OpenAI and Nvidia, none of the other corporations bullshitting about AI have made anything remotely useful using it.
Absolutely not true. Disclaimer: I do work for NVIDIA as a forward-deployed AI Engineer/Solutions Architect—meaning I don't build AI software internally for NVIDIA, but I embed with their customers' engineering teams to help them build their AI software and deploy and run their models on NVIDIA hardware and software. edit: any opinions stated are solely my own; N has a PR office to state any official company opinions.
To state this as simply as possible: I wouldn't have a job if our customers weren't seeing tremendous benefit from AI technology. The companies I work with typically are very sensitive to the CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone—and so am I. I've seen it happen; entire engineering teams laid off because a technology just couldn't be implemented in a cost-effective way.
LLMs are a small subset of AI and Accelerated-Compute workflows in general.
> To state this as simply as possible: I wouldn't have a job if our customers weren't seeing tremendous benefit from AI technology.
Right because corporate management doesn't ever blindly and stupidly overinvest in fads that blow up in their faces...
> The companies I work with typically are very sensitive to the CapEx and OpEx costs of AI—they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone—and so am I.
You clearly have no clue what you're on about. As someone with degrees and experience in both CS and finance, all I have to say is that's not at all how these things work. Plenty of companies lose money on these things in the hopes that their FP&A projection fever dreams will come true. And they're wrong much more often than you seem to think. FP&A is more art than science, and you can get financial models to support any argument you want to make to convince management to keep investing in what you think they should. And plenty of CEOs and boards are stupid enough to buy it. A lot of the AI hype has been bought and sold that way, in the hopes that it would be worthwhile eventually or that other alternatives couldn't be just as good or better.
> I've seen it happen; entire engineering teams laid off because a technology just couldn't be implemented in a cost-effective way.
This is usually what happens once they finally realize spending money on hype doesn't pay off, and they go back to more established business analytics, operations research, and conventional software, which never makes mistakes if it's programmed correctly.
> LLMs are a small subset of AI and Accelerated-Compute workflows in general.
No one ever said otherwise. And we're talking about AI only, no moving the goalposts to accelerated computing, which is a mechanism through which to implement a wide range of solutions and not a specific one in and of itself.
That’s fair. I see what I see at an engineering and architecture level. You see what you see at the business level.
That said. I stand by my statement because I and most of my colleagues in similar roles get continued, repeated and expanded-scope engagements. Definitely in LLMs and genAI in general especially over the last 3-5 years or so, but definitely not just in LLMs.
“AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.
Perhaps I’m just not as jaded in my tech career.
> ... operations research, and conventional software which never makes mistakes if it's programmed correctly.
Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You’d shit a brick if you knew how much “fudge-factoring” and “completely disconnected from reality—aka we have no fucking clue” assumptions go into the “conventional” models that inform supply-chain analytics, business process engineering, etc. To state that they “never make mistakes” is laughable.
> That's fair. I see what I see at an engineering and architecture level. You see what you see at the business level.
I respect that. Finance was my old career and I hated it. I liked coding more, so I went back, got my M.S. in CS, and now do embedded software, which I love. I left finance specifically because of what both of us have talked about. It's all about using numbers to tell whatever story you want, and it's filled with corporate politics. I hated that world. It was disgusting, and people were terrible two-faced assholes.
> That said. I stand by my statement because I and most of my colleagues in similar roles get continued, repeated and expanded-scope engagements. Definitely in LLMs and genAI in general especially over the last 3-5 years or so, but definitely not just in LLMs.
> "AI" is an incredibly wide and deep field; much more so than the common perception of what it is and does.
So I think I need to amend what I said before. AI as a whole is definitely useful for various things, but what makes it a fad is that companies are basically committing the hammer fallacy with it. They're throwing it at everything, even things where it may not be a good solution, just to say "hey look, we used AI." What I respect about you guys at NVIDIA is that you make really awesome AI-based tools and software that actually solves problems other types of software and tools either cannot solve or cannot solve well, and that's how it should be.
At the same time I'm also a gamer and I really hope Uncle Jensen doesn't forget about us and how we literally were his core market for most of Nvidia's history as a business.
> Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You'd shit a brick if you knew how much "fudge-factoring" and "completely disconnected from reality—aka we have no fucking clue" assumptions go into the "conventional" models that inform supply-chain analytics, business process engineering, etc. To state that they "never make mistakes" is laughable.
What I said was that traditional software, if programmed correctly, doesn't make mistakes. As for operations research and supply-chain optimization and all the rest of it, it's no different than what I said about finance. You can make the models tell any story you want, and it's not even hard; the flip side is that the decision makers in your organization should be grilling you as an analyst on how you came up with your assumptions and why they make sense. I actually think this is an area where AI could be useful, because if trained right it has no biases, unlike human analysts.
The other thing to take away from what I said is the "if it is programmed correctly" part, which is also a big if. Humans make mistakes, and we see it a lot in embedded, where in some cases we need to flash our code onto a product and deploy it somewhere we won't be able to update it for a long time, maybe ever; so testing and making sure the code works right and is safe is a huge deal. Tools like Rust help to an extent, but even then errors can leak through, and I've wondered how useful AI-based tools could eventually be in proving the correctness of traditional software or finding potential bugs and sources of unsafety. I think a deep-learning-based tool could make formal verification of software a much cheaper and more commonplace practice, and I think the hardware side already has that sort of thing. I know AMD/Xilinx use machine learning in their FPGA tools to synthesize designs, so I don't see why we couldn't use such a thing for software that needs to be correct the first time as well.
So that's really it. My only gripe with AI, and DL in particular, is when executives who have no CS or engineering background throw around the term AI like it's the magic solution to everything, or always the best option, when the reality is that sometimes it is and other times it isn't, and they need a competent technology professional to make that call.
Nvidia made money, but I've not seen OpenAI do anything useful, and they are not even profitable.
ChatGPT is basically the best LLM of its kind. As for Nvidia I'm not talking about hardware I'm talking about all of the models it's trained to do everything from DLSS and ACE to creating virtual characters that can converse and respond naturally to a human being.
I would say LLMs specifically are in that ballpark. Things like machine vision have been boringly productive and relatively unhyped.
There's certainly some utility to LLMs, but it's hard to see through all the crazy overestimation and their being shoved everywhere by grifters.
Lalal.ai has made some great innovations in taking songs and separating them into vocals and instrumentals. That's a game changer for remix artists.
Other than that niche utility and a handful of others, AI is largely bullshit.
The tech priests of Mars were right; death to abominable intelligence.
That's a Space Grudgin'
Sigh. I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases, and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.
Yay
Seems to me the rationale is flawed. Even if it isn't strong or general AI, LLM-based AI has found a lot of uses. I also don't recognize, among people actually working with it, the claimed ignorance about the limitations of current AI models.
While you may be right, one would think the problem lies in the overestimated perception of the abilities of LLMs, leading to misplaced investor confidence -- which in turn leads to a bubble ready to burst.
Yup. Investors have convinced themselves that this time AI development is going to grow exponentially. The breathless fantasies they’ve concocted for themselves require it. They’re going to be disappointed.
Can you name some of those uses that you see lasting in the long term or even the medium term? Because while it has been used for a lot of things it seems to be pretty bad at the overwhelming majority of them.
AI is already VERY successful in some areas. When you take a photo, it is treated with AI features to improve the image, and when editing photos on your phone, the more sophisticated options are powered by AI. Almost all new cars have AI features.
These are practical everyday uses; you don't even have to think about them when using them.
But it's completely irrelevant whether I can see use cases that are sustainable or not. The fact is that major tech companies are investing billions in this.
Of course all the biggest tech companies could all be wrong, but I bet they researched the issue more than me before investing.
Show me by what logic you believe to know better.
The claim that it needs to be strong AI to be useful is ridiculous.
> The fact is that major tech companies are investing billions in this.
They have literally invested billions in every single hype cycle of the last few decades that turned out to be a pile of crap in hindsight. This is a bad argument.
And which are those? There is no technology all major tech companies have invested in like AI AFAIK.
Maybe the dot com wave way back, but are you arguing the Internet came to nothing?
so long, see you all in the next hype. Any guesses?
Tradwives
AI vagina Fleshlight beds. You just sleep inside one and it will do you all night long! Telling you stories of any topic. Massaging you in every possible way. Playing your favorite music. It's like a living room! Oh, I'm sleeping in the living room again. Yeah, I'm in the dog house. But that's why you need an AI vagina Fleshlight bed!
Get a few more hours of sleep
I woke up at 4 this morning. The fridge made a big ice maker noise that sounded like a door getting slammed. Anyway here I am shit posting and reading shit posts.
There's no bracing for this. The OpenAI CEO said the same thing like a year ago, and people are still shovelling money at this dumpster fire today.
It's had all the signs of a bubble for the last few years.
Supermicro's accountants have just resigned 🤭
Crash? Doesn't it have to be moving at all to crash?
Nvidia shares ..
Until OpenAI announces a new 5T model or something, and then the hype refreshes.
It's gonna crash like a self driving tesla. It's gonna fall apart like a cybertrukkk.
I'm shocked I tell you
Great!! ...I don't want ChatGPT to go anywhere; I use it every day, and Google has become assss.
Ya, AI was never going to be it. But I wouldn't understate its impact even at its current stage. I think it'll be a tool that will be incredibly useful for just about every industry.
There aren't many industries where results are useful when they're correct only in the very common cases everybody knows anyway, a bit wrong in the less common cases, and totally hallucinated in the actually original cases. Especially if you can't distinguish between those automatically.
Yep. Knew AI would die someday.
🤷♂️ I only use local generators at this point, so I don't care.
Even Pied Piper didn’t scale.
I believe this about as much as I believed the "We're about to experience the AI singularity" morons.
Well, classical computers will always be limited and power hungry. Quantum computers are the key to AI achieving the next level.
The only people who say this know nothing about quantum or computers
I love the or in this sentence
Quantum computers are only good at a very narrow subset of tasks. None of those tasks are related to Neural Networks, AGI, or the emulation of neurons.
Just put another number behind it. Luddites won't know the difference.
Luddites weren't against new technology; they were against aristocrats using new technology as a tool or excuse to oppress and kill the labor class. The problem was not the new technology; the problem was that people were dying of hunger and being laid off in droves. Destroying the machinery, which they themselves had almost always operated in those aristocrats' factories, was an act of protest, just like a riot or a strike. It was a form of collective bargaining.
Please let this happen
Market crash and third world war. What a time to be alive!
I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.
one doesn't imagine any of them even remotely thinks a technological panacaea is feasible.
because they're only focusing on this.
Oh they definitely exist. At a high level the bullshit is driven by malicious greed, but there are also people who are naive and ignorant and hopeful enough to hear that drivel and truly believe in it.
Like when Microsoft shoves GPT4 into notepad.exe. Obviously a terrible terrible product from a UX/CX perspective. But also, extremely expensive for Microsoft right? They don't gain anything by stuffing their products with useless annoying features that eat expensive cloud compute like a kid eats candy. That only happens because their management people truly believe, honest to god, that this is a sound business strategy, which would only be the case if they are completely misunderstanding what GPT4 is and could be and actually think that future improvements would be so great that there is a path to mass monetization somehow.
That's not what's happening here. Microsoft management are well aware that AI isn't making them any money, but the company made a multi billion dollar bet on the idea that it would, and now they have to convince shareholders that they didn't epicly fuck up. Shoving AI into stuff like notepad is basically about artificially inflating "consumer uptake" numbers that they can then show to credulous investors to suggest that any day now this whole thing is going to explode into an absolute tidal wave of growth, so you'd better buy more stock right now, better not miss out.
Yeah my management was all gungho about exploiting AI to do all sorts of stuff.
Like read. Not generative AI crap, but read. They came to us and said quite literally: "how can we use something like ChatGPT and make it read."
I don't know who or how they convinced them to use something that wasn't generative AI, but it did convince me that managers think someone being convincing and confident is correct all the time.
No no, I disagree I think that shoving AI into all these apps is a solid plan on their behalf. People are going to stop recall and shut it off. So instead they put AI components into every app, It now has the right to overview everything you're doing and every app collects data on you sending it home to update their personalized models for you so they can better sell you products.
True, they just sell it to their investors as a panacea
Some are just opportunists, but there are certainly true believers — either in specific technologies, or pedal-to-the-metal growth as the only rational solution to the world’s problems.
Andreessen is pretty open about it: https://a16z.com/the-techno-optimist-manifesto/
I think Andreessen is lying and the "techno optimist manifesto" is a ruse for PR.
a16z has been involved in various crypto pump and dumps. They are smart enough to know that something like "play to earn" is not sustainable and always devolves into a pyramid scheme. Doesn't stop them from getting in early and dumping worthless tokens on the marks.
The manifesto honestly reads like it was written by a teenager. The style, the tone, the excessive quotes from economists. This is pretty typical stuff for American oligarch polemics, no?
Of course most don't actually even believe it, that's just the pitch to get that VC juice. It's basically fraud all the way down.
Soooo... Without capitalism?
Pretty much.
No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.
Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.
Then what is this I’m feeling if it’s not AGI? 🤔
Maybe GERD?
Turns out AI isn't real and has no fidelity.
Machine learning could be the basis of AI but is anyone even working on that when all the money is in LLMs?
I'm not an expert, but the whole basis of LLM not actually understanding words, just the likelihood of what word comes next basically seems like it's not going to help progress it to the next level... Like to be an artificial general intelligence shouldn't it know what words are?
I feel like this path is taking a brick and trying to fit it into a keyhole...
learning is the basis of all known intelligence. LLMs have learned something very specific, AGI would need to be built by generalising the core functionality of learning not as an outgrowth of fully formed LLMs.
and yes the current approach is very much using a brick to open a lock and that's why it's ... ahem ... hit a brick wall.
Yeah, 20 something years ago when I was trying to learn PHP of all things, I really wanted to make a chat bot that could learn what words are... I barely got anywhere but I was trying to program the understanding of sentence structure and feeding it a dictionary of words... My goal was to have it output something on its own ...
I see these things become less resource intensive and hopefully running not on some random server...
I found the files... It was closer to 15 years ago...
Trying to invent artificial intelligence to learn php is quite funny lol
Also a bit sadistic to be honest. Bringing a new form of life into the world only to subject it to PHP.
I'm amazed I still have the files... But yeah this was before all this shit was big... If I had a better drive I would have ended up more evil than zuck .. my plan was to collect data on everyone who used the thing and be able to build profiles on everyone based on what information you gave the chat ... And that's all I can really remember... But it's probably for the best...
Right, so AIs don’t really know what words are. All they see are tokens. The tokens could be words and letters, but they could also be image/video features, audio waveforms, or anything else.
Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims. Which helped feed the hype.
I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.
Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn't tell you the difference between an LLM and an AGI, and should be dismissed as spam.
OpenAI published a paper about GPT titled "Sparks of AGI".
I don't think they really believe it but it's good to bring in VC money
That is a very VC baiting title. But it's doesn't appear from the abstract that they're claiming that LLMs will develop to the complexity of AGI.
You assume most stock investors read beyond the headline, you assume wrong.
The call is coming from inside. Google CEO claims it will be like alien intelligence so we should just trust it to make political decisions for us bro: https://www.computing.co.uk/news/2024/ai/former-google-ceo-eric-schmidt-urges-ai-acceleration-dismisses-climate
Do you have a non paywalled link? And is that quote in relation to LLMs specifically or AI generally?
I read a lot I guess, and I didn’t understand why they think like this. From what I see, are constant improvements in MANY areas! Language models are getting faster and more efficient. Code is getting better across the board as people use it to improve their own, contributing to the whole of code improvements and project participation and development. I feel like we really are at the beginning of a lot of better things and it’s iterative as it progresses. I feel hopeful
It's so funny how all this is only a problem within a capitalist frame of reference.
What they call "AI" is only "intelligent" within a capitalist frame of reference, too.
Well duhhhh.
Language models are insufficient.
They also need:
Someone in here has once linked me a scientific article about how today's "AI" are basically one level below what they need to be anything like an AI. A bit like the difference between exponent and Ackermann function, but I really forgot what that was all about.
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn't imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
I know those terms. I wanted to edit it, but was too lazy. You still did understand what I meant, right?
We don't call a shell script "AI" after all, and we do call those models that, while for your definition there shouldn't be any difference.
Microsoft shit is a mega corp... AI is based on their revenue lol
Even Zuckerberg admits that trying to scale LLMs larger doesn’t work because the energy and compute requirements go up exponentially. There must exist a different architecture that is more efficient, since the meat computers in our skulls are hella efficient in comparison.
Once we figure that architecture out though, it’s very likely we will be able to surpass biological efficiency like we have in many industries.
That's a bad analogy. We weren't able to surpass biological efficiency in industry sector because we figured out human anatomy and how to improve it. It's simply alternative ways to produce force like electricity and motors which had absolutely no relation to how muscles works.
I imagine it would be the same for computers, simply another, better method to achieve something but it's so uncertain that it's barely worth discussing about.
Of course! It’s not like animals have jet engines!
Human brains are merely the proof that such energy efficiencies are possible for intelligence. It’s likely we can match or go far beyond that, probably not by emulating biology directly. (Though we certainly may use it as inspiration while we figure out the underlying principles.)
With current stat prediction models?
Thank fuck. Can we have cheaper graphics cards again please?
I'm sure a RTX 4090 is very impressive, but it's not £1800 impressive.
I swapped to AMD this generation and it's still expensive.
A well researched pre-owned is the way to go. I bought a 6900xt a couple years ago for a deal.
I used to buy broken video cards on ebay for ~$25-50. The ones that run, but shut off have clogged heat sinks. No tools or parts required. Just blow out the dust. Obviously more risky, but sometimes you can hit gold.
If you can buy a ten and one works, you've saved money. Two work and you're making money. The only question is whether the tenth card really will work or not.
I used to get EVGA bstock which was reasonable but they got out of the business 😞
Sorry, crypto is back in season.
Huh?
The smartphone improvements hit a rubber wall a few years ago (disregarding folding screens, that compose a small market share, improvement rate slowed down drastically), and the industry is doing fine. It's not growing like it use to, but that just means people are keeping their smartphones for longer periods of time, not that people stopped using them.
Even if AI were to completely freeze right now, people will continue using it.
Why are people reacting like AI is going to get dropped?
Because in some eyes, infinite rapid growth is the only measure of success.
People are dumping billions of dollars into it, mostly power, but it cannot turn profit.
So the companies who, for example, revived a nuclear power facility in order to feed their machine with ever diminishing returns of quality output are going to shut everything down at massive losses and countless hours of human work and lifespan thrown down the drain.
This will have an economic impact quite large as many newly created jobs go up in smoke and businesses who structured around the assumption of continued availability of high end AI need to reorganize or go out of business.
Search up the Dot Com Bubble.
People pay real money for smartphones.
People pay real money for AIaaS as well...
Hope?
Because novelty is all it has. As soon as it stops improving in a way that makes people say "oh that's neat", it has to stand on the practical merits of its capabilities, which is, well, not much.
I’m so baffled by this take. “Create a terraform module that implements two S3 buckets with cross-region bidirectional replication. Include standard module files like linting rules and enable precommit.” Could I write that? Yes. But does this provide an outstanding stub to start from? Also yes.
And beyond programming, it's having a positive impact on science and medicine too. I mean, anybody who doesn't see any merit has their head in the sand. That of course must be balanced against not falling for the hype, but the merits are very real.
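For anyone wondering what's actually behind a prompt like the S3 one above, here's a rough sketch of the replication wiring in Python with boto3 rather than Terraform. Bucket names and the IAM role ARN are hypothetical, and a real module also needs the role's policies, linting config, and so on:

```python
# Illustrative sketch of cross-region bidirectional S3 replication with
# boto3. Bucket names and the role ARN are made up for this example.
import boto3

s3 = boto3.client("s3")

def replicate(source: str, dest: str, role_arn: str) -> None:
    # Replication requires versioning to be enabled on the source bucket.
    s3.put_bucket_versioning(
        Bucket=source, VersioningConfiguration={"Status": "Enabled"}
    )
    s3.put_bucket_replication(
        Bucket=source,
        ReplicationConfiguration={
            "Role": role_arn,
            "Rules": [{
                "ID": f"{source}-to-{dest}",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{dest}"},
            }],
        },
    )

# "Bidirectional" just means wiring the rule both ways:
replicate("bucket-eu", "bucket-us", "arn:aws:iam::123456789012:role/replication")
replicate("bucket-us", "bucket-eu", "arn:aws:iam::123456789012:role/replication")
```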
The merits are real. I do understand the deep mistrust people have for tech companies, but there's far too much throwing out of the baby with the bath water.
As a solo developer, LLMs are a game-changer. They've allowed me to make amazing progress on some of my own projects that I've been stuck on for ages.
But it's not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it; I asked it about some damage to an old church in Spain, which turned out to be from the Spanish Civil War, where revolutionaries had been mowed down by Franco's firing squads.
Just today, I was getting help from an LLM for an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied back with a single sentence in broken English:
"Yes a can , need tho mow m3 you need delivery after e gif the price"
The first bit is pretty obviously "Yes I can", but I couldn't really be sure what he was trying to say with the rest of it. So I asked ChatGPT, which managed to untangle it for me.
Thanks to LLMs, I'm able to accomplish so many things that would have previously taken multiple internet searches and way more effort.
There's a pretty big difference between chatGPT and the science/medicine AIs.
And keep in mind that for LLMs and other chatbots, it's not that they aren't useful at all but that they aren't useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot addons in Microsoft 365, and this is when AI companies are still in their "sell below cost and light VC money on fire to survive long enough to gain market share" phase. What happens when the VC money dries up and AI companies have to double their prices (or more) in order to make enough revenue to cover their costs?
Nothing to argue with there. I agree. Many companies will go out of business. Fortunately we'll still have the llama3s and mistrals lying around that I can run locally. On the other hand, cost justification is a difficult equation with many variables, so maybe in some cases it is, or will be, worth the cost. I'm just saying there is some merit.
The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.
Other than with language models, this has already happened: take a look at apps such as Merlin Bird ID (identifies birds fairly well by sound and somewhat okay visually), WhoBird (identifies birds by sound), and Seek (visually identifies plants, fungi, insects, and animals). All of them work offline. IMO these are much better uses of ML than spammer-friendly text generation.
PlantNet and iNaturalist are pretty good for plant identification as well; I use them all the time to find out what's volunteering in my garden. Just looked them up, and it turns out Seek is by iNaturalist.
This has already started to happen. The new llama3.2 model is only 3.7GB and it's WAAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You're still not running it on $20 hardware, but you no longer need a 3090 to have something useful.
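For what it's worth, running it locally really is just a few lines these days. A minimal sketch, assuming a local Ollama install with the model already pulled and the `ollama` Python client:

```python
# Minimal local inference sketch using the `ollama` Python client.
# Assumes `ollama pull llama3.2` has already been run on this machine.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain why small local models matter, in two sentences."}],
)
print(response["message"]["content"])
```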
Well, you see, that's the really hard part of LLMs. Getting good results is a direct function of the size of the model: the bigger the model, the more effective it can be at its task. However, there's something called the compute-efficient frontier (there's a technical but neatly explained video about it). Basically, for any given size, you can't make a model more effective per unit of computation beyond that boundary. The only ways to make a model better are to make it larger (what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. The latter has proven extraordinarily hard, mostly because understanding what goes on inside the model requires rather abstract and esoteric mathematics that bends your mind backwards.

You can compress an already-trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that ever-larger models get ever more expensive while providing rapidly diminishing returns.

Oh, and we are quickly running out of quality usable data, so shoveling in more data past a certain point actually starts to give worse results, unless you dedicate thousands of hours of human labor to producing, collecting, and cleaning new data. That's all before you address data poisoning, where previously LLM-generated data gets fed back into training; it's very hard to keep that from devolving into incoherence after a couple of generations.
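To put a number on "the only way to make a model better is to make it larger": here's a toy version of a Chinchilla-style scaling law. The constants are roughly the published fits from Hoffmann et al. (2022), so treat the outputs as illustrative, not authoritative.

```python
# Chinchilla-style parametric loss: L(N, D) = E + A/N^alpha + B/D^beta,
# where N is parameter count and D is training tokens. Constants are
# approximately the fits reported by Hoffmann et al. (2022).
def loss(N: float, D: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / N**alpha + B / D**beta

# Compute-optimal training uses roughly 20 tokens per parameter.
for N in (1e9, 1e10, 1e11, 1e12):
    print(f"N={N:.0e}, D={20 * N:.0e} -> loss {loss(N, 20 * N):.3f}")
```

Each 10x in parameters (with a matching 10x in data and compute) shaves off less loss than the last one, which is exactly the diminishing-returns wall described above.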
That would be innovation, which I'm convinced no company can do anymore.
It feels like I keep learning that one of our modern innovations was already thought up and written down in a book in the 1950s, and just wasn't possible at the time due to some limitation in memory, precision, or some other metric. All we did was five decades of marginal improvement to get there, while not innovating much at all.
Are you talking about something specific?
Because nobody could have possibly saw that coming. /s
is this where we get to explain again why it's not really ai?
Nope, just where you divest your stocks like any other tech run.
I have to do similar things when it comes to 'raytracing'. It meant one thing, then a company came along and called something sort of similar by the same name, and now everyone has these ideas of what it should be vs. what it actually does. Then later, a better version comes out that nearly matches the original term, but there's already negative hype because it launched half-baked and misnamed. Now they have to call the original thing something new to market it, because they destroyed the original name with a bad label and a half-baked product.
He is writing about LLM mainly, and that is absolutely AI, it's just not strong AI or general AI (AGI).
You can't invent your own meaning for existing established terms.
LLMs are AI in the same way that the lane assist on my car is AI. Tech companies, however, very carefully and deliberately play up LLMs as being AGI or close to it. See for example the convenient fear-mongering over the "risks" of AI, as though ChatGPT will become Skynet.
LLMs are AI as it is defined in computer science, not sci-fi. And the lane assist on your car might also be, although it may just be a well-tuned PID for all I know.
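For the curious, a "well-tuned PID" really can be that small. A toy sketch with made-up gains, assuming the lane assist steers against the car's measured lateral offset from lane center:

```python
# Toy lane-keeping PID: steer against the lateral offset from lane center.
# Gains and the 50 Hz loop rate are made-up illustration values.
def pid_step(error: float, state: dict, kp=0.8, ki=0.05, kd=0.01, dt=0.02) -> float:
    state["integral"] += error * dt
    derivative = (error - state["prev"]) / dt
    state["prev"] = error
    # Steering correction = proportional + integral + derivative terms.
    return kp * error + ki * state["integral"] + kd * derivative

# Seed "prev" with the first measurement to avoid a derivative kick.
state = {"integral": 0.0, "prev": 0.5}
for offset in (0.5, 0.4, 0.25, 0.1):  # metres from lane center
    print(f"offset {offset:.2f} m -> steering correction {pid_step(offset, state):+.3f}")
```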
I agree, but the problem is that the media (encouraged by tech companies) use the sci-fi definition, and the layman doesn't know any better.
Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains can only grasp simple concepts like "line must go up" and follow them like religious tenets, though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.
I am so tired of the AI hype and hate. Please give me my gen-art interest back; please just make it obscure again to program art, I beg of you.
It's still quite obscure to actually mess with AI art instead of just throwing prompts at it, resulting in slop of varying quality levels. And I don't mean controlnet, but github repos with comfyui plugins with little explanation but a link to a paper, or "this is absolutely mathematically unsound but fun to mess with". Messing with stuff other than conditioning or mere model selection.
I know, it's actually still a beautiful community but much harder to talk to outsiders about
This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.
Oh no!
Anyway...
I've been hearing about the imminent crash for the last two years. New money keeps getting injected into the system. The bubble can't deflate while both the public and private sector have an unlimited lung capacity to keep puffing into it. FFS, bitcoin is on a tear right now, just because Trump won the election.
This bullshit isn't going away. It's only going to get forced down our throats harder and harder, until we swallow or choke on it.
It's been 5 minutes since the new thing did a new thing. Is it the end?
Of course it'll crash. Saying it's imminent though suggests someone needs to exercise their shorts.
Marcus is right, incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence", they just imitate human response based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.
I think I've heard about enough of experts predicting the future lately.
Apparently, there was only so much IP to steal
As I use copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:
2 and 3 could be alleviated, but probably not solved completely with more and better data or engineering changes - but obviously AI developers started by training the models on the most useful data and strategies that they think work best. 1 seems fundamentally unsolvable.
I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.
Not copilot, but I run into a fourth problem:
4. The LLM gets hung up on insisting that a newer feature of the language I'm using is wrong and keeps focusing on "fixing" it, even though it has access to the newest correct specifications where the feature is explicitly defined and explained.
Oh god yes, ran into this asking for a shell.nix file with a handful of tricky dependencies. It kept trying to do this insanely complicated temporary pull and build from git instead of just a 6 line file asking for the right packages.
"This code is giving me a return value of X instead of Y"
"Ah the reason you're having trouble is because you initialized this list with brackets instead of
new()
.""How would a syntax error give me an incorrect return"
"You're right, thanks for correcting me!"
"Ok so like... The problem though."
Yeah, once you have to question its answer, it's all over. It got stuck and gave you the next-best answer in its weights, which was absolutely wrong.
You can always restart the convo, re-insert the code and say what's wrong in a slightly different way and hope the random noise generator leads it down a better path :)
I'm doing some stuff with translation now, and I'm finding you can restart the session, run the same prompt, and get better or worse versions of a translation. After a few runs, you can take all the output and ask it to rank each translation on correctness and critique them. I'm still not completely happy with the output, but it does seem that sometimes, if you MUST get AI to answer the question, there can be value in making it answer across more than one session.
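That workflow is easy to script, too. A sketch with the OpenAI Python client (the model name and prompt text are placeholders; any chat API would work the same way):

```python
# Best-of-N translation: sample several fresh "sessions", then have the
# model rank its own candidates. Model name is just an example.
from openai import OpenAI

client = OpenAI()
prompt = "Translate the following paragraph into English: <paragraph here>"

candidates = []
for _ in range(3):
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # keep sampling noise on so the runs actually differ
    )
    candidates.append(r.choices[0].message.content)

ranking = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Rank these translations for correctness and critique each:\n\n"
                   + "\n---\n".join(candidates),
    }],
)
print(ranking.choices[0].message.content)
```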
So you use other people's open source code without crediting the authors or respecting their license conditions? Good for you, parasite.
Very frequently, yes. As well as closed source code and intellectual property of all kinds. Anyone who tells you otherwise is a liar.
Ah, I guess I'll have to question why I am lying to myself then. Don't be a douchebag. Don't use open source without respecting copyrights & licenses. The authors are already providing their work for free. Don't shit on that legacy.
Ahh right, so when I use copilot to autocomplete the creation of more tests in exactly the same style of the tests I manually created with my own conscious thought, you're saying that it's really just copying what someone else wrote? If you really believe that, then you clearly don't understand how LLMs work.
I know LLM mechanisms better than you, it would appear, and my point is not so weak that I would have to fabricate a strawman, claim it's what you said, and then proceed to argue against the strawman.
Using LLMs trained on other people's source code is parasitic behaviour and violates copyrights and licenses.
Programmers don't have the luxury of using inferior toolsets.
That statement is as dumb as it is non-sensical.
It'll implode but there are much larger elephants in the room - geopolitical dumbassery and the suddenly transient nature of the CHIPS Act are two biggies.
Third, high flying growth, blue sky darlings, they're flaky. In a downturn growth is worth 0 fucking dollars, throw that shit in a dumpster and rotate into staples. People can push off a phone upgrade or new TV and cut down on subscriptions, but they'll always need Pampers.
The thing propping up AI and semis is an arms race between those high flying tech companies, so this whole thing is even more prone to imploding than tech itself, since a ton of revenue comes from tech. Sensitive sector supported by an already sensitive sector. House of cards with NVDA sitting right at the tippy top. Apple, Facebook, those kinds of companies, when they start trimming back it's over.
But, it's one of those things that is anyone's guess. When you think it's not even possible for everything to still have steam one of the big guys like TSMC posts some really delightful earnings and it gets another second wind, for the 29th time.
Definitely a house of cards tho, and suddenly a lot more precarious because suddenly nobody knows how policy will affect the industry or the market as a whole
They say shipping is the bellwether of the economy, and there's a lot of truth to that. I think semis are now the bellwether of growth. Sit back and watch the change in the wind.
nvidia at least sells shovels, they already made some real profit unlike openai
True, but it's not a competition. When big tech tightens their belts NVDA starves to death
Edit: guess I forgot to point out the hyperbole. Nvidia obviously won't literally die
Death?
They got the IP my man, pipe down.
Nice, looking forward to it! So much money and time wasted on pipe dreams and hype. We need to get back to some actually useful innovation.
Fingers crossed.
AI was 99% a fad. Besides OpenAI and Nvidia, none of the other corporations bullshitting about AI have made anything remotely useful using it.
Absolutely not true. Disclaimer, I do work for NVIDIA as a forward deployed AI Engineer/Solutions Architect—meaning I don’t build AI software internally for NVIDIA but I embed with their customers’ engineering teams to help them build their AI software and deploy and run their models on NVIDIA hardware and software. edit: any opinions stated are solely my own, N has a PR office to state any official company opinions.
To state this as simply as possible: I wouldn't have a job if our customers weren't seeing tremendous benefit from AI technology. The companies I work with are typically very sensitive to the CapEx and OpEx costs of AI; they self-serve in private clouds. If it doesn't help them make money (revenue growth) or save money (efficiency), then it's gone, and so am I. I've seen it happen; entire engineering teams laid off because a technology just couldn't be implemented in a cost-effective way.
LLMs are a small subset of AI and Accelerated-Compute workflows in general.
Right because corporate management doesn't ever blindly and stupidly overinvest in fads that blow up in their faces...
You clearly have no clue what you're on about. As someone with degrees and experience in both CS and finance, all I have to say is that's not at all how these things work. Plenty of companies lose money on these things in the hope that their FP&A projection fever dreams will come true. And they're wrong much more often than you seem to think. FP&A is more art than science, and you can get financial models to support any argument you want in order to convince management to keep investing in what you think they should. And plenty of CEOs and boards are stupid enough to buy it. A lot of the AI hype has been bought and sold that way, in the hope that it would be worthwhile eventually or that other alternatives couldn't be just as good or better.
This is usually what happens once they finally realize spending money on hype doesn't pay off and go back to more established business analytics, operations research, and conventional software which never makes mistakes if it's programmed correctly.
No one ever said otherwise. And we're talking about AI only, no moving the goalposts to accelerated computing, which is a mechanism through which to implement a wide range of solutions and not a specific one in and of itself.
That’s fair. I see what I see at an engineering and architecture level. You see what you see at the business level.
That said. I stand by my statement because I and most of my colleagues in similar roles get continued, repeated and expanded-scope engagements. Definitely in LLMs and genAI in general especially over the last 3-5 years or so, but definitely not just in LLMs.
“AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.
Perhaps I’m just not as jaded in my tech career.
Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You’d shit a brick if you knew how much “fudge-factoring” and “completely disconnected from reality—aka we have no fucking clue” assumptions go into the “conventional” models that inform supply-chain analytics, business process engineering, etc. To state that they “never make mistakes” is laughable.
I respect that. Finance was my old career and I hated it. I liked coding more, so I went back, got my M.S. in CS, and now do embedded software, which I love. I left finance specifically because of what both of us have talked about. It's all about using numbers to tell whatever story you want, and it's filled with corporate politics. I hated that world. It was disgusting, and people were terrible two-faced assholes.
“AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.
So I think I need to amend what I said before. AI as a whole is definitely useful for various things, but what makes it a fad is that companies are basically committing the hammer fallacy with it. They're throwing it at everything, even things where it may not be a good solution, just to say, hey look, we used AI. What I respect about you guys at NVIDIA is that you all make really awesome AI-based tools and software that actually solves problems that other types of software and tools either cannot solve or cannot solve well, and that's how it should be.
At the same time I'm also a gamer and I really hope Uncle Jensen doesn't forget about us and how we literally were his core market for most of Nvidia's history as a business.
What I said was that traditional software, if programmed correctly, doesn't make mistakes. As for operations research and supply-chain optimization and all the rest of it, it's no different from what I said about finance. You can make the models tell any story you want, and it's not even hard, but the flip side is that the decision makers in your organization should be grilling you as an analyst on how you came up with your assumptions and why they make sense. I actually think this is an area where AI could be useful, because if trained right it has no biases, unlike human analysts.
The other thing to sort of take away from what I said is the "if it is programmed correctly" part, which is also a big if. Humans make mistakes, and we see it a lot in embedded, where in some cases we need to flash our code onto a product and deploy it somewhere we won't be able to update it for a long time, or maybe ever, so testing and making sure the code works right and is safe is a huge thing. Tools like Rust help to an extent, but even then errors can leak through, and I've actually wondered how useful AI-based tools could eventually be in proving the correctness of traditional software or finding potential bugs and sources of unsafety. I think a deep-learning-based tool could make formal verification of software a much cheaper and more commonplace practice, and I think on the hardware side they already have that sort of thing. I know AMD/Xilinx use machine learning in their FPGA tools to synthesize designs, so I don't see why we couldn't use such a thing for software that needs to be correct the first time as well.
So that's really it. My only gripe at all with AI, and DL in particular, is when executives who have no CS or engineering background throw around the term AI like it's the magic solution to everything or always the best option, when the reality is that sometimes it is and other times it isn't, and they need a competent technology professional to make that call.
Nvidia made money, but I've not seen OpenAI do anything useful, and they are not even profitable.
ChatGPT is basically the best LLM of its kind. As for Nvidia I'm not talking about hardware I'm talking about all of the models it's trained to do everything from DLSS and ACE to creating virtual characters that can converse and respond naturally to a human being.
I would say LLMs specifically are in that ballpark. Things like machine vision have been boringly productive and relatively unhyped.
There's certainly some utility to LLMs, but it's hard to see it through all the crazy overestimation and their being shoved everywhere by grifters.
Lalal.ai has made some great innovations in taking songs and separating them into vocals and instrumentals. That's a game-changer for remix artists.
other than that niche utility and a handful of others, AI is largely bullshit.
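If you want that vocal/instrumental split without the paid service, the open-source route is tiny. A sketch with Deezer's Spleeter (not lalal.ai's actual model; the file paths are made up):

```python
# Vocal/instrumental separation with the open-source Spleeter library.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")  # 2 stems = vocals + accompaniment
separator.separate_to_file("song.mp3", "stems/")
# Writes stems/song/vocals.wav and stems/song/accompaniment.wav
```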
The tech priests of Mars were right; death to abominable intelligence.
That's a Space Grudgin'
Sigh. I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases, and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.
Yay
Seems to me the rationale is flawed. Even if it isn't strong or general AI, LLM-based AI has found a lot of uses. I also don't recognize, among people actually working with it, the claimed ignorance about the limitations of current AI models.
while you may be right, one would think that the problem lies in the overestimated perception of the abilities of llms, leading to misplaced investor confidence -- which in turn leads to a bubble ready to burst.
Yup. Investors have convinced themselves that this time AI development is going to grow exponentially. The breathless fantasies they’ve concocted for themselves require it. They’re going to be disappointed.
Can you name some of those uses that you see lasting in the long term or even the medium term? Because while it has been used for a lot of things it seems to be pretty bad at the overwhelming majority of them.
AI is already VERY successful in some areas. When you take a photo, it is processed with AI features to improve the image, and when you edit photos on your phone, the more sophisticated options are powered by AI. Almost all new cars have AI features.
These are practical everyday uses, you don't even have to think about when using them.
But it's completely irrelevant whether I can see sustainable use cases or not. The fact is that major tech companies are investing billions in this.
Of course all the biggest tech companies could all be wrong, but I bet they researched the issue more than me before investing.
Show me by what logic you believe to know better.
The claim that it needs to be strong AI to be useful is ridiculous.
They have literally invested billions in every single hype cycle of the last few decades that turned out to be a pile of crap in hindsight. This is a bad argument.
And which are those? There is no technology all major tech companies have invested in like AI AFAIK.
Maybe the dot com wave way back, but are you arguing the Internet came to nothing?
so long, see you all in the next hype. Any guesses?
Tradwives
AI vagina Fleshlight beds. You just find your sleep inside one and it will do you all night long! Telling you stories of any topic. Massaging you in every possible way. Playing your favorite music. It's like a living room! Oh I'm sleeping in the living room again. Yeah I'm in the dog house. But that's why you need an AI vagina Fleshlight bed!
Get a few more hours of sleep
I woke up at 4 this morning. The fridge made a big ice maker noise that sounded like a door getting slammed. Anyway here I am shit posting and reading shit posts.
There's no bracing for this. The OpenAI CEO said the same thing like a year ago, and people are still shovelling money at this dumpster fire today.
It's had all the signs of a bubble for the last few years.
supermicro's auditors have just resigned 🤭
Crash? Doesn't it have to be moving at all to crash?
Nvidia shares...
Until OpenAI announces a new 5T model or something and the hype refreshes.
It's gonna crash like a self driving tesla. It's gonna fall apart like a cybertrukkk.
I'm shocked I tell you
Great!! ...I don't want ChatGPT to go anywhere, I use it every day and Google has become assss.
Ya, AI was never going to be it. But I wouldn't understate its impact even at its current stage. I think it'll be a tool that's incredibly useful for just about every industry.
There aren't many industries where results that are correct in the very common case everybody knows anyway, a bit wrong in the less common cases, and totally hallucinated in the genuinely novel cases are useful. Especially if you can't distinguish between those automatically.
yep, knew AI would die someday.
🤷‍♂️ I only use local generators at this point, so I don't care.
Even Pied Piper didn’t scale.
I believe this about as much as I believed the "We're about to experience the AI singularity" morons.
Well, classical computers will always be limited and power-hungry. Quantum computers are the key to AI achieving the next level.
The only people who say this know nothing about quantum or computers
I love the or in this sentence
Quantum computers are only good at a very narrow subset of tasks. None of those tasks are related to Neural Networks, AGI, or the emulation of neurons.
Just put another number behind it. Luddites won't know the difference.
Luddites weren't against new technology; they were against aristocrats using new technology as a tool or excuse to oppress and kill the labor class. The problem was not the new technology; the problem was that people were dying of hunger and being laid off in droves. Destroying the machinery, which they themselves had almost always operated in said aristocrats' factories, was an act of protest, just like a riot or a strike. It was a form of collective bargaining.