Only if you believe in it. Many CEOs do. They're very good at magical thinking.
I have a counter-argument. From an evolutionary standpoint, if you keep doubling computer capacity exponentially, isn't it extraordinarily arrogant of humans to assume that their evolutionarily stagnant brains will remain relevant for much longer?
You can make the same argument about humans that you do about AI, but from a biological and societal standpoint. Barring any jokes about certain political or geographical stereotypes, humans have gotten "smarter" than we used to be. We are very adaptable, and with improvements to diet and education, we have managed to stay ahead of the curve. We didn't peak at hunter-gatherer. We didn't stop at the Renaissance. And we blew right past the industrial revolution. I'm not going to channel my "Humanity, Fuck Yeah" inner wolf howl, but I have to give our biology props. The body is an amazing machine, and even though we can look at things like the current crop of AI and think, "Welp, that's it, humans are done for," I'm sure a lot of people thought the same at other pivotal moments in technological and societal advancement. Here I am, though, farting Taco Bell into my office chair and typing about it.
You can compare human intelligence to what it was centuries ago on a simple linear scale. Neural density has not increased by any stretch of the imagination in the way that transistor density has. But I'm not just talking density, I'm talking about scalability that is infinite. Infinite scale of knowledge and data.
Let's face it, people are already not that intelligent; we are smart enough to use the technology of other, smarter people. And then there are computers: they are growing in intelligence with an artificial evolutionary pressure being exerted on their development, and you're telling me that's not going to continue until they surpass us in every way? There is very little to stop computers from being intelligent on a galactic scale.
Computer power doesn't scale infinitely, unless you mean building a world mind and powering it off of the spinning singularity at the center of the galaxy like a Type 3 civilization, and that's sci-fi stuff. We still have to worry about bandwidth, power, cooling, coding and everything else that goes into running a computer. It doesn't just "scale". There is a lot that goes into it, and it does have a ceiling. Quantum computing may alleviate some of that, but I'll hold my applause until we see some useful real-world applications for it.
Furthermore, we still don't understand how the mind works. There are still secrets to unlock and ways to potentially augment and improve it. AI is great, and I fully support the advancement of technology, but don't count out humans so quickly. We haven't even gotten close to human-level intelligence with GOFAI, and maybe we never will.
As I said, that answer seems incredibly arrogant in the face of evolutionary pressure and exponential growth.
You can believe whatever you want, but I don't think it's arrogant to say what I did. You are basing your view of humanity on what you think humanity has done, and your view of AI on what you think it will do. Those are fundamentally different and not comparable. If you want to talk about the science fiction future of AI, we should talk about the science fiction future of humanity as well. Let's talk about augmenting ourselves, extending lifespans, and all of the good things that people think we'll do in the coming centuries.

If you want to look at humans and say that we haven't evolved at all in the last 3000 years, then we should look at computers the same way. Computers haven't "evolved" at all. They still do the same thing they always have. They do a lot more of it, but they don't do anything "new". We have found ways to increase the processing power and the storage capacity, but a computer today has the same limits as the one that sent us to the moon. It's a computer, and incapable of original thought.

You seem to believe that just because we throw more RAM and processors at it, that will somehow change things, but it doesn't. It just means we can do the same things, faster. Eventually we'll run out of things to process and data to store, but that won't bring AI any closer to reality. We are climbing the mountain, but you speak like we have already crested. We've barely left base camp in the grand scheme of artificial intelligence.
Holy wall of unparagraphed word salad. Again, you are not understanding what is and isn't an evolutionary process: a disease can wipe out half a species and that is considered part of evolution. You don't have to be intelligent about it; all you have to do is continue to increase complexity due to an external force, and that is it. That's all that is needed to have an evolutionary force.
With computers we don't have to know what we are doing (to recreate consciousness); we just have to select for better, more complex systems (the same way evolution did for humans), which is the inevitable result of progress. Do you think computers are going to stop improving? The road maps for chip architecture for the next ten years don't seem to suggest it's slowing down yet.
And like the fractalization of coastlines, facts, knowledge and data are completely unlimited: the deeper you look, the more there is.
On top of all of this, you have the fact that progress has constantly been accelerating in a way that human intelligence is incapable of perceiving accurately.
Therefore computer intelligence is going to vastly outpace our own. And very soon, too.
Holy wall of unparagraphed word salad,
Ahh, we are getting into the insult round of tonight's entertainment. I'll break this reply down for you.
Again you are not understanding what is and isn’t an evolutionary process
It seems our definitions differ slightly, yes.
You don’t have to be intelligent about it, all you have to do is continue to increase complexity due to an external force and that is it. That’s all that is needed to have an evolutionary force.
That, and the ability to self-actuate your own evolution. You see, that's where we differ on the definition of evolutionary force. We didn't have some greater will forcing us down a path of evolution. There was no force. There was trial and error. The "lived long enough to fuck" survived, the rest didn't. Reproduction is a fundamental aspect of evolution. Computers can't reproduce. We have to facilitate that ourselves, through iterating on various aspects of computers. Right now we can fake it with increased processing power, increased memory, more elegant code, but at the end of the day, without some form of reproductive system that doesn't rely on us, the computer can't exceed our grasp. If it could, we'd see true exponential growth, not compounding as in Moore's Law. We can't make them do more than what they already do. We can just make them do it faster.
With computers we don’t have to know what we are doing (to recreate consciousness), we just have to select for better more complex systems (the same way evolution did for humans) which is the inevitable result of progress.
Yeah, sure, and I can cram a hundred monkeys in a room with a hundred typewriters and come up with a better love story than Twilight, but it's gonna take time. Not Shakespeare time, but a few weeks at least. That's the thing, though: the evolution of any system doesn't happen overnight. We didn't wake up one day, walk out of our cave, and create TikTok. Evolution is a long process. You forget all of the things that happened before we figured out that our thumbs weren't solely for sticking up our own asses. There are millions of years that you aren't accounting for. Billions of attempts to create what we take for granted. Consciousness. You say that we don't have to know what we are doing, and you are right, we don't, but it's a crap-shoot with quadrillion-to-one odds.
And like the fractalization of coastlines, facts, knowledge and data are completely unlimited, the deeper you look the more there is.
Again, we can store as much data as we want; it won't make AI happen. We haven't spontaneously seen life form in libraries, and libraries have been storing data for thousands of years. Consciousness isn't data. If that's all you want, ChatGPT is passing the bar. It still can't tell me it loves me, and mean it.
On top of all of this you have the fact that progress has constantly been accelerating in a way that human intelligence is incapable of perceiving accurately.
Funny, you seem to think that you perceive it pretty well...
Therefore computer intelligence is going to vastly outpace our own. And very soon, too.
A well-thought-out conclusion, I'm sure, based on all of the facts you failed to present. Bravo.
The laws of physics still apply. We already have to do all kinds of crazy tricks to make transistors as small as they are and not leak electrons all over the place due to quantum tunneling. The best thing we figured out how to do is just pile on more CPU/GPU cores.
It's also arrogant to assume we will continue on this exponential industrial-revolution growth of the last 300 years and not plateau as a species again for the next thousand. We could be looking at an eon of just burnin' away our oil while we try to cling more and more to whatever other energy impinges on this pitiful little planet, trapped in our local space unable to use our pathetic spacecraft to push us any further.
The laws of physics apply no less and no more to our own biology in terms of complexity, density, scale, and information capacity, and in most ways biology is far less efficient and accurate than its silicon counterpart.
There is nothing to suggest the growth in computer intelligence is going to stop, or that it's doing anything but just getting started.
Apart from your use of "infinite", I agree; there is no reason we shouldn't be able to surpass nature with synthetic intelligence. The time computers have existed is a mere blip on a historic scale, and computers surpassed us at logic games like chess and at math long ago.
Modern LLMs are just the current stage; before that, it could be said, it was pattern recognition. We had OCR in the 80s as probably the most practical example. It may seem like a long time between breakthroughs, but 40 years is nothing compared to evolution.
I have no doubt strong AI will be achieved eventually, and when it is, I have no doubt it will surpass our intelligence in every way very quickly.
If you keep doubling the number of fruit flies exponentially, isn't it likely that humanity will find itself outsmarted?
The answer is no, it isn't. Quantity does not quality make and all our current AI tech is about ways to breed fruit flies that fly left or right depending on what they see.
As a counter-argument against that: companies have been trying to make self-driving cars work for 20 years. Processing power has increased a millionfold and the things still get stuck. Pure processing power isn't everything.
Magic as in street magician, not magic as in wizard. Lots of the things that people claim AI can do are like a magic show, it's amazing if you look at it from the right angle, and with the right skill you can hide the strings holding it up, but if you try to use it in the real world it falls apart.
I wish there was actual magic
It would make science very difficult.
What if it magically made it easier?
Mmm irrational shit makes rationality harder
Look at quantum mechanics
Everything is magic if you don't understand how the thing works.
I wish. I don't understand why my stomach can't handle corn, but it doesn't lead to magic. It leads to pain.
Have you eaten hominy corn? The nixtamalisation process makes it digestible.
I don't have access to that, sadly. I'm pretty sure my body would reject it however. At least from my reading on what it is.
Sam Altman will make a big pile of investor money disappear before your very eyes.
The masses have been treating it like actual magic since the early stages and are only slowly warming up to the idea that it's calculations. Calculations of things that are often more than the sum of their parts, as people start to realize. Well, some people anyway.
oh the bubble's gonna burst sooner than some may think
Next week, some say
If you're a techbro, this is the new magic shit, man! To the moooooon!
If only.
Yea, try talking to ChatGPT about things that you really know about in detail. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make lots of stuff up that you would not pick up on otherwise (and once you point it out, the bloody thing will "I knew that" you, sometimes even if you are wrong), and it is very shallow in its details. Sometimes it just repeats your question back to you as a well-written essay. And that's fine... it is still a miracle that it is able to be as reliable and entertaining as some random bullshitter you talk to in a bar, and it's good for brainstorming too.
It's like watching mainstream media news talk about something you know about.
Oh good comparison
Haha, definitely, it's infuriating and scary. But it also depends on what you are watching for. If you are watching TV, you do it for convenience or entertainment. LLMs have the potential to be much more than that, but unless a very open and accessible ecosystem is created for them, they are going to be whatever our tech overlords decide they want them to be in their boardrooms to milk us.
Well, if you read the article, you’ll see that’s exactly what is happening. Every company you can imagine is investing the GDP of smaller nations into AI. Google, Facebook, Microsoft. AI isn’t the future of humanity. It’s the future of capitalist interests. It’s the future of profit chasing. It’s the future of human misery. Tech companies have trampled all over human happiness and sanity to make a buck. And with the way surveillance capitalism is moving—facial recognition being integrated into insane places, like the M&M vending machine, the huge market for our most personal, revealing data—these could literally be two horsemen of the apocalypse.
Advancements in tech haven't helped us as humans in a while. But they sure did streamline profit centers. We have to wrest control of our future back from corporate America, because this plutocracy driven by these people is very, very fucking dangerous.
AI is not the future for us. It’s the future for them. Our jobs getting “streamlined” will not mean the end of work and the rise of UBI. It will mean stronger, more invasive corporations wielding more power than ever while more and more people suffer, are cast out and told they’re just not working hard enough.
Sony wants photographs of my ears for "360 reality audio".
No. Just no.
Dude! I bought some Bose headphones that were amazing. But I read over the privacy policy and they wanted to “map my head movements” and they wanted permission to passively listen to audio sent through the speakers and any audio around the microphone.
I ran those fuckers back to the store as quickly as possible.
But not before having to duck and dodge agreeing to the privacy policy in their app, so I quickly deleted it. But when I started interacting with their customer service, they tried to get me to sign a different privacy policy that seemed formulated just for the information shared in the chat; in two separate addenda I had to dig through, I saw they were trying to get me to sign the original, super-invasive privacy policy.
Fuck Bose. Fuck all these fake fronts for surveillance capitalism. Fuck capitalism.
Wrong chat dude. What does that have to do with AI anyways?
I think the claim is that they can use AI to improve the sound of their headphones if you supply them with images of your ears.
I just don't like them having a database of personally identifying information like that.
How personally identifiable is your ear, though? It's not connected to your thoughts, you can't use it to determine your age, height, and weight, so which ad company would need that data? IMO, it's no different than sending a mold of your ear canal to a CIEM company to get your custom molded earphones.
Ears turn out to be a good way to recognize individuals. Ear biometrics is an evolving area: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7594944/
I see. I still don't think it's cause for concern yet, but good to know. Thanks!
Northrop Grumman probably already has a prototype to identify you by your ears from a mile up and kill you with a single bullet the moment you're not inside.
It's bizarre without context, but I recognise what they mean - Sony's headphones app suggests you send them photos of your ears so they can analyse the shape to improve the noise cancellation.
Which I don't think has anything to do with GenAI. Though, I admit I'm not well educated in ear scanning and 3D audio reconstruction, so good sources are appreciated.
What worries me is how much of the AI criticism on Lemmy wants to make everything worse; not share the gains more equally. If that's what passes for left today, well...
I don't have a problem with machine learning.
I have a problem with one company getting x trillion dollars investment. Who pays when the investors want their returns? Eventually it's going to be all of us.
I don't think they have that much potential. They are just uncontrollable; it's a neat trick, but totally unreliable if there isn't a human in the loop. This approach is missing all the control systems we have in our brains.
I really only use it for the "oh damn, I know there's a great one-liner to do that in Python" sort of thing. It's usually right, and if it isn't, it'll be immediately obvious and you can move on with your day. For anything more complex, the gaslighting and subtle errors make it unusable.
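For a concrete idea of the kind of thing I mean, here are a couple of made-up examples of that category of one-liner (not actual ChatGPT output, just the sort of thing it reliably gets right):

```python
# The sort of "I know there's a one-liner for this" cases it handles well
# (hypothetical examples of the kind of snippet I mean):

nested = [[1, 2], [3], [4, 5]]
flat = [x for sub in nested for x in sub]       # flatten a list of lists -> [1, 2, 3, 4, 5]

mapping = {"a": 1, "b": 2}
inverted = {v: k for k, v in mapping.items()}   # invert a dict (assumes unique values)

words = ["pear", "fig", "apple"]
longest = max(words, key=len)                   # longest string in a list -> "apple"
```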
Oh yes, it's great for that. My google-fu was never good enough to "find the name of this thing that does this, but only when in this circumstance"
ChatGPT is great for helping with specific problems. Google search for example gives fairly general answers, or may have information that doesn't apply to your specific situation. But if you give ChatGPT a very specific description of the issue you're running into it will generally give some very useful recommendations. And it's an iterative process, you just need to treat it like a conversation.
It's also a decent writer's room brainstorm kind of tool, although it can't really get beyond the initial pitch as it's pretty terrible at staying consistent when trying to clean up ideas.
I find it incredibly helpful for breaking into new things.
I want to learn Terraform today; no guide/video/docs site can do it as well as having a teacher available at any time for Q&A.
Aside from that, it's pretty good for general Q&A on documented topics, and great when provided context (i.e. a full 200MB export of documentation from a tool or system).
But the moment I try and dig deeper into something I'm an expert in, it just breaks down.
That's why I've found it somewhat dangerous to use to jump into new things. It doesn't care about best practices and will just help you enough to let you shoot yourself in the foot.
Just wait for MeanGirlsGPT
Good. It's dangerous to view AI as magic. I've had to debate way too many people who think that LLMs are actually intelligent. It's dangerous to overestimate their capabilities, lest we use them for tasks they can't perform safely. It's very powerful, but the fact that it's totally non-deterministic and unpredictable means we need to very carefully design systems that rely on LLMs, with heavy guardrails.
Conversely, there are way too many people who think that humans are magic and that it's impossible for AI to ever do what we do.
I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There are basic inert rocks at one end, and humans at the other, and everything else gets scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and accept the possibility that they're moving in our direction.
It's not linear either. Brains are crazy complex and have sub-cortices that are specialized for specific tasks. I really don't think that LLMs alone can possibly demonstrate advanced intelligence, but I do think they could be a very important cortex for one. There are also different types of intelligence. LLMs are very knowledgeable and have great recall but lack reasoning or a worldview.
Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched, with relevant bits inserted into the context of the LLM when it's answering questions. LLMs have been trained to be able to call external APIs to do the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is routine sorts of computer activities that we've already had a handle on for decades.
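A rough sketch of what that retrieval step looks like in practice (illustrative only; embed, vector_store and llm are stand-ins for whatever embedding model, index and LLM a given system actually uses):

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# embed(), vector_store.search() and llm() are placeholders, not a real API.

def answer_with_rag(question, vector_store, llm, embed, k=3):
    # 1. Embed the question and pull the k most similar stored passages.
    query_vec = embed(question)
    passages = vector_store.search(query_vec, top_k=k)

    # 2. Paste the retrieved passages into the prompt as extra context.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. The LLM is still the "core"; retrieval just feeds its context window.
    return llm(prompt)
```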
IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.
Lol yup, some people think they're real smart for realizing how limited LLMs are, but they don't recognize that the researchers who actually work on this are years ahead in experimentation and theory and have already realized all this stuff and more. They're not just making the specific models better, they're also figuring out how to combine them to make something more generally intelligent instead of super specialized.
I find the people who think they're actually intelligent are generally the people opposed to them.
People who use them as the tools they are know how limited they are.
Not being combative or even disagreeing with you - purely out of curiosity, what do you think are the necessary and sufficient conditions of intelligence?
A worldview simulation it can use as a scratch pad for reasoning. I view reasoning as a set of simulated actions to convert a worldview from state A to state B.
It depends on how you define intelligence, though. Normally people define it as human-like, and I think there are three primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. I think the current gen is figuring out the knowledge type, but it needs to be combined with the other two to be complete.
Thanks! I'm not clear on what you mean by a worldview simulation as a scratch pad for reasoning. What would be an example of that process at work?
For sure, defining intelligence is non-trivial. What clears the bar of intelligence, and what doesn't, is not obvious to me. So that's why I'm engaging here; it sounds like you've put a lot of thought into an answer. But I'm not sure I understand your terms.
A worldview is your current representational model of the world around you. For example, you know you're a human on Earth in a physical universe with a set of rules, you have a mental representation of your body and its capabilities, your location and the physicality of the things in your location. It can also be abstract things, like your personality and your relationships and your understanding of what's possible in the world.
Basically, you live in reality, but you need a way to store a representation of that reality in your mind in order to be able to interact with and understand that reality.
The simulation part is your ability to imagine manipulating that reality to achieve a goal, and if you break that down, you're trying to convert reality from your perceived current real state A to an imagined desired state B. Reasoning is coming up with a plan to convert the worldview from state A to state B step by step. So let's say you want to brush your teeth: you want to convert your worldview of you having dirty teeth to you having clean teeth, and to do that you reason that you need to follow a few steps, like moving your body to the bathroom, retrieving tools (toothbrush and toothpaste), and applying mechanical action to your teeth to clean them. You created a step-by-step plan to change the state of your worldview to a new desired state you came up with. It doesn't need to be physical, either; it could be an abstract goal, like calculating a tip for a bill. It can also be a grand goal, like going to college or creating a mathematical proof.
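Here's a toy way to picture that "state A to state B" idea in code (entirely my own illustration of the concept, not how any real AI system is implemented):

```python
# Toy illustration of "reasoning as converting worldview state A to state B".
# A made-up sketch for intuition only.

from itertools import permutations

# Worldview: a tiny set of facts describing the current state.
state_a = frozenset({"dirty_teeth", "in_bedroom"})
goal    = frozenset({"clean_teeth", "in_bathroom"})

# Actions: name -> (preconditions, facts added, facts removed)
actions = {
    "walk_to_bathroom": ({"in_bedroom"}, {"in_bathroom"}, {"in_bedroom"}),
    "grab_toothbrush":  ({"in_bathroom"}, {"has_toothbrush"}, set()),
    "brush_teeth":      ({"has_toothbrush", "dirty_teeth"}, {"clean_teeth"}, {"dirty_teeth"}),
}

def apply(state, name):
    pre, add, rem = actions[name]
    if not pre <= state:
        return None                      # preconditions not met
    return (state - rem) | add

def plan(start, goal):
    # Brute-force search over action orderings: "simulate" each sequence
    # against the internal worldview until the goal state is reached.
    for order in permutations(actions):
        state, steps = start, []
        for name in order:
            nxt = apply(state, name)
            if nxt is None:
                break
            state, steps = nxt, steps + [name]
            if goal <= state:
                return steps
    return None

print(plan(state_a, goal))   # ['walk_to_bathroom', 'grab_toothbrush', 'brush_teeth']
```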
LLMs don't have a representational model of the world; they don't have a working memory or a world simulation to use as a scratchpad for testing out reasoning. They just take a sequence of words and retrieve the next word that is probabilistically and relationally likely to be a good next word based on their training data.
They could be a really important cortex that assists in developing a worldview model, but in their current granular state of being a single-task AI model, they cannot do reasoning on their own.
Knowledge retrieval is an important component that assists in reasoning, though, so they can still play a very important role.
I think it's a big mistake to think that because the most basic LLMs are just autocompletes, or that because LLMs can hallucinate, that what big LLMs do doesn't constitute "thinking". No, GPT4 isn't conscious, but it very clearly "thinks".
It's started to feel to me like current AIs are reasonable recreations of parts of our minds. It's like they're our ability to visualize, to verbalize, and to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the "rest" of our thought processes.
Depends on how you define thinking. I agree, LLMs could be a component of thinking, specifically knowledge and recall.
Yes, as Linus Torvalds said, humans also think like autocomplete systems.
Those recent failures only come across as cracks for people who see AI as magic in the first place. What they're really cracks in is people's misperceptions about what AI can do.
Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.
Also interesting is that most people don't understand the advances it makes possible, so when they hear people saying it's amazing and then try it, of course they're going to think it hasn't lived up to the hype.
The big things are going to completely change how we use computers, especially being able to describe how you want it to lay out a UI and create custom tools on the fly.
I hope it collapses in a fire and we can just keep our foss local models with incremental improvements, that way both techbros and artbros eat shit
Unfortunately for that outcome, brute forcing with more compute is pretty helpful for now
And even if local small-scale models turn out to be optimal, that wouldn't stop big business from using them. I'm not sure what "it" is being referred to with "I hope it collapses."
I was referring to the hype bubble therefore the money surrounding it all
There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?
Image generation, video generation, self-driving cars (Level 4 so the driver doesn't need to pay attention at all times), capable text comprehension and generation. Whether it is used for translation, help with writing reports or coding. And to top it all off, we have open source models that are at least in a similar ballpark as the closed ones and those models can be run on consumer hardware.
Obviously AI is not a solved problem yet and there are lots of shortcomings (especially with LLMs and logic where they completely fail for even simple problems) but the progress is astonishing.
I think a big obstacle to meaningfully using AI is going to be public perception. Understanding the difference between CHAT-GPT and open source models means that people like us will probably continue to find ways of using AI as it continues to improve, but what I keep seeing is botched applications, where neither the consumers nor the investors who are pushing AI really understand what it is or what it's useful for. It's like trying to dig a grave with a fork - people are going to throw away the fork and say it's useless, not realising that that's not how it's meant to be used.
I'm concerned about the way the hype behaves because I wouldn't be surprised if people got so sick of hearing about AI at all, let alone broken AI nonsense, that it hastens the next AI winter. I worry that legitimate development may be held back by all the nonsense.
I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.
I've seen a weird aspect of it from the science side, where people writing grant applications or writing papers feel compelled to incorporate AI into it, because even if they know that their sub-field has no reliable use-cases for AI yet, they're feeling the pressure of the hype.
Specifically, when I say the pressure of the hype, I mean that some of the best scientists I have known were pretty bad at the academic schmoozing that facilitates better funding and more prestige. In practice, businessmen in boardrooms are often the ones holding the purse strings and sometimes it's easier to try to speak their language than to "translate" one's research to something they'll understand.
Businessmen are just the public but with money.
Fair point. I personally think that AI lives up to enough parts of the hype so that there won't be another AI winter but who knows. Some will obviously get disillusioned but not enough.
Lol. It doesn't do video generation. It just takes existing video and makes it look weird. Image generation is about the same: they just take existing works and smash them together, often in an incoherent way. Half the text generation shit is just done by underpaid people in Kenya and similar places.
There are a few areas where LLMs could be useful, things like trawling large data sets, etc., but every bit of the stuff that is being hyped as "AI" is just spam generators.
That's totally not how it works. Not only does nobody need such tools, but the technology got there well before the current state of AI.
As I often mention when this subject pops up: while the current statistics-based generative models might see some application, I believe that they'll eventually be replaced by better models that are actually aware of what they're generating, instead of simply reproducing patterns. With the current models being seen as "that cute '20s toy".
In text generation (currently dominated by LLMs), for example, this means that the main "bulk" of the model would do three things:
convert input tokens into sememes (units of meaning)
perform logic operations with the sememes
convert sememes back into tokens for the output
Because, as it stands, LLMs are only chaining tokens. They might do this in an incredibly complex way, but that's it. That's obvious when you look at what LLM-fuelled bots output as "hallucination" - they aren't the result of some internal error, they're simply an undesired product of a model that sometimes outputs desirable stuff too.
Sub "tokens" and "sememes" with "pixels" and "objects" and this probably holds true for image generating models, too. Probably.
Now, am I some sort of genius for noticing this? Probably not; I'm just some nobody with a chimp avatar, rambling in the Fediverse. Odds are that people behind those tech giants already noticed the same ages ago, and at least some of them reached the same conclusion - that better gen models need more awareness. If they are not doing this already, it means that this shit would be painfully expensive to implement, so the "better models" that I mentioned at the start will probably not appear too soon.
Most cracks will stay there; Google will hide them with an obnoxious band-aid, OpenAI will leave them in plain daylight, but the magic trick will still not be perfect, at least in the foreseeable future.
And some might say "use MOAR processing power!", or "input MOAR training data!", in the hopes that the current approach will "magically" fix itself. For those, imagine yourself trying to drain the Atlantic with a bucket: does it really matter if you use more buckets, or larger buckets? Brute-forcing problems only go so far.
Just my two cents.
I agree 100%, and I think Zuckerberg's attempt at a massive LLM-based AI running on 340,000 of Nvidia's H100 GPUs, with the aim of creating a general AI, sounds stupid. Unless there's a lot more to their attempt, it's doomed to fail.
I suppose the idea is something about achieving critical mass, but it's pretty obvious that that is far from the only factor missing to achieve general AI.
I still think it's impressive what they can do with LLMs. And it seems to be a pretty huge step forward. But it's taken about 40 years from when we had decent "pattern recognition" to get here; the next step could be another 40 years.
I think that Zuckerberg's attempt is a mix of publicity stunt and "I want [you] to believe!". Trying to reach AGI through a large enough LLM sounds silly, on the same level as "ants build, right? If we gather enough ants, they'll build a skyscraper! Trust me."
In fact I wonder if the opposite direction wouldn't be a bit more feasible - start with some extremely primitive AGI, then "teach" it Language (as a skill) and a language (like Mandarin or English or whatever).
I'm not sure on how many years it'll take for an AGI to pop up. 100 years perhaps, but I'm just guessing.
I don't know much about LLMs, but latent diffusion models already have "meaning" encoded into the model. The whole concept of the u-net is that as it reduces the spatial resolution of the image, it increases the semantic resolution by adding extra dimensions of information. It came from medical image analysis, where the idea of labelling something as a tumor would be really useful.
This is why you get distorted anatomy on earlier (and even current) models. It's identified something as a human limb, but isn't quite sure where the hand is, so it adds one on to what we know is a leg.
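For anyone unfamiliar with that spatial-vs-semantic trade-off, here's a stripped-down sketch of just the downsampling half (PyTorch, with made-up layer sizes; a real u-net also has the upsampling path and skip connections back up):

```python
# Rough sketch of the u-net idea: each downsampling step halves the spatial
# resolution while widening the channel dimension, which is where the extra
# "semantic" information lives. Channel sizes here are arbitrary.
import torch
import torch.nn as nn

class TinyUNetEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # spatial: 64x64 -> 32x32 -> 16x16, channels: 3 -> 32 -> 64
        self.down1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        skip = self.down1(x)     # kept for the skip connection on the way back up
        deep = self.down2(skip)  # lower resolution, more channels ("more meaning")
        return skip, deep

x = torch.randn(1, 3, 64, 64)          # one fake RGB image
skip, deep = TinyUNetEncoder()(x)
print(skip.shape, deep.shape)          # (1, 32, 32, 32) and (1, 64, 16, 16)
```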
There was an interesting paper published just recently titled Generative Models: What do they know? Do they know things? Let's find out! (a lot of fun names and titles in the AI field these days :) ) that does a lot of work in actually analyzing what an AI image generator "knows" about what it's depicting. They seem to have an awareness of three-dimensional space, of light and shadow and reflectivity, lots of things you wouldn't necessarily expect from something trained just on 2-D images tagged with a few short descriptive sentences. This article from a few months ago also delved into this; it showed that when you ask a generative AI to create a picture of a physical object, the first thing the AI does is come up with the three-dimensional shape of the scene before it starts figuring out what it looks like. Quite interesting stuff.
That's perhaps why image generators are comparatively better than text generators. But there's still something off, by your example it seems that the model cannot reliably use clues like position to understand "this is a «leg»". And I don't know much about image generators but I think that they're still statistics- and probability-based.
That's a huge oversimplification of the way LLMs work. They're not statistical in the way a Markov chain is. They use neural networks, which are a decent analogy for the human brain. The way the synapses between neurons are wired is obviously different, and the way the neurons are triggered and the types of signals they can send to other neurons is obviously different. But overall, similar capabilities can in theory be achieved with either method. If you're going to call neural networks statistics based, you might as well call the human brain statistics based as well.
That’s a huge oversimplification of the way LLMs work.
I'm sticking to what matters for the sake of the argument. Anyone who wants to inform themself further has a plethora of online resources to do so.
They’re not statistical in the way a Markov chain is.
Implied: "you're suggesting that they work like Markov chains, they don't."
At no point did I mention or even imply Markov chains. My usage of the verb "to chain" is clearly vaguer within that context; please do not put words in my mouth.
They use neural networks, which are a decent analogy for the human brain. The way the synapses between neurons are wired is obviously different, and the way the neurons are triggered and the types of signals they can send to other neurons is obviously different. But overall, similar capabilities can in theory be achieved with either method.
I don't disagree with the conclusion (i.e. I believe that neural networks can achieve human-like capabilities), but the argument itself is such a fallacious babble (false equivalence) that I'm not bothering further with your comment.
And it's also an "ackshyually" given this context dammit. I'm not talking about the bloody neural network, but how it is used.
No need to get offended. Maybe I misunderstood the intent behind your original message. I think you made a lot of good points.
I brought up the Markov chain because a common misconception I've seen on the Internet and in real life is that LLMs work pretty much the same as Markov chains under the hood. And I saw no mention of neural networks in your original comment.
I found this graph very clear
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022...
Trying to make real and good use of generative AI models is where the cracks in the magic show.
It's pretty useful if you know exactly what you want and how to work within its limitations.
Coworkers around me already use ChatGPT to generate code snippets for Python, Excel VBA, etc. to good success.
Right, it's a tool with quirks, techniques and skills to use just like any other tool. ChatGPT has definitely saved me time and on at least one occasion, kept me from missing a deadline that I probably would have missed if I went about it "the old way" lmao
You mean they're using it to write boilerplate which shouldn't have been written in the first place.
Call it whatever makes you feel happy, it is allowing me to accomplish things much more quickly and easily than working without it does.
Until someone has to maintain it.
That's why I said code "snippets". I don't trust it to give me the entire answer right from the get go, because I acknowledge its limitations and review it before pasting it in. I find it works better if I tell it to generate specific code rather than everything at once.
Plus, we're not working on mission-critical server stuff here. This is code used for data analysis, which probably could also be found on Stack Overflow anyway. If it works, it works.
Why? If you know how to incorporate "boilerplate" and modify it correctly into your own code, what difference does it make if it's from ChatGPT or Stack Overflow?
Compared to copying and pasting from Stack Overflow? Probably not terribly much. The latter is already bad.
It's as if the young'uns heard the term "10x developer" and decided that not understanding what you're doing is the way to get there.
"This post is for paid subscribers"
(Also that page has a script I had to override just to copy and paste that)
There's magic?
Only if you believe in it. Many CEOs do. They're very good in magical thinking.
I have a counter argument. From an evolutionary standpoint, if you keep doubling computer capacity exponentially isn't it extraordinarily arrogant of humans to assume that their evolutionarily stagnant brains will remain relevant for much longer?
You can make the same argument about humans that you do AI, but from a biological and societal standpoint. Barring any jokes about certain political or geographical stereotypes, humans have gotten "smarter" that we used to be. We are very adaptable, and with improvements to diet and education, we have managed to stay ahead of the curve. We didn't peak at hunter-gatherer. We didn't stop at the Renaissance. And we blew right past the industrial revolution. I'm not going to channel my "Humanity, Fuck Yeah" inner wolf howl, but I have to give our biology props. The body is an amazing machine, and even though we can look at things like the current crop of AI and think, "Welp, that's it, humans are done for," I'm sure a lot of people thought the same at other pivotal moments in technological and societal advancement. Here I am, though, farting taco bell into my office chair and typing about it.
You can compare human intelligence to centuries ago on a simple linear scale. Neural density has not increased by any stretch of the imagination in the way that transistor density has. But I'm not just talking density I'm talking about scalability that is infinite. Infinite scale of knowledge and data.
Let's face it people are already not that intelligent, we are smart enough to use the technology of other smarter people. And then there are computers, they are growing intelligently with an artificial evolutionary pressure being exerted on their development, and you're telling me that that's not going to continue to surpass us in every way? There is very little to stop computers from being intelligent on a galactic scale.
Computer power doesn't scale infinitely, unless you mean building a world mind and powering if off of the spinning singularity at the center of the galaxy like a type 3 civilization, and that's sci-fi stuff. We still have to worry about bandwidth, power, cooling, coding and everything else that going into running a computer. It doesn't just "scale". There is a lot that goes into it, and it does have a ceiling. Quantum computing may alleviate some of that, but I'll hold my applause until we see some useful real world applications for it.
Furthermore, we still don't understand how the mind works, yet. There are still secrets to unlock and ways to potentially augment and improve it. AI is great, and I fully support the advancement in technology, but don't count out humans so quickly. We haven't even gotten close to human level intelligence and GOFAI, and maybe we never will.
As I said that answer seems incredibly arrogant in the face of evolutionary pressure and logarithmic growth.
You can believe whatever you want, but I don't think it's arrogant to say what I did. You are basing your view of humanity on what you think humanity has done, and basing your view on AI based on what you think it will do. Those are fundamentally different and not comparable. If you want to talk about the science fiction future of AI, we should talk about the science fiction future of humanity as well. Let's talk about augmenting ourselves, extending lifespans, and all of the good things that people think we'll do in the coming centuries. If you want to look at humans and say that we haven't evolved at all in the last 3000 years, then we should look at computers the same way. Computers haven't "evolved" at all. They still do the same thing they always have. They do a lot more of it, but they don't do anything "new". We have found ways to increase the processing power, and the storage capacity, but a computer today has the same limits that the one that sent us to the moon had. It's a computer, and incapable of original thought. You seem to believe that just because we throw more ram and processors at it that somehow that will change things, but it doesn't. It just means we can do the same things, but faster. Eventually we'll run out of things to process and data to store, but that won't bring AI any closer to reality. We are climbing the mountain, but you speak like we have already crested. We've barely left base camp in the grand scheme of artificial intelligence.
Holy wall of unparagraphed word salad, Again you are not understanding what is and isn't an evolutionary process, a disease can wipe out half a species and that is considered a process of evolution. You don't have to be intelligent about it, all you have to do is continue to increase complexity due to an external force and that is it. That's all that is needed to have an evolutionary force.
With computers we don't have to know what we are doing (to recreate consciousness), we just have to select for better more complex systems (the same way evolution did for humans) which is the inevitable result of progress. Do you think computers are going to stop improving? The road maps for chip architecture for the next ten years doesn't seem to suggest it's slowing down yet.
And like the fractalization of coastlines, facts, knowledge and data are completely unlimited, the deeper you look the more there is.
On top of all of this you have the fact that progress has constantly been accelerating in a way that human intelligence is incapable of percieving accurately.
Therefore computer intelligence is vastly going to outpace or own. And very soon too.
Ahh, we are getting into the insult round of tonight's entertainment. I'll break this reply down for you.
It seems our definitions differ slightly, yes.
That, and the ability to self-actuate your own evolution. You see, that's what we differ on definition of evolutionary force. We didn't have some greater will forcing us down a path of evolution. There was no force. There was trial and error. The "lived long enough to fuck" survived, the rest didn't. Reproduction is a fundamental aspect of evolution. Computers can't reproduce. We have to facilitate that ourselves, though iterating on various aspects of computers. Right now we can fake it with increased processing power, increased memory, more elegant code, but at the end of the day, without some form reproductive system that doesn't rely on us, the computer can't exceed our grasp. If it could, we'd see true exponential growth, not compounding as in Moore's Law. We can't make them do more than what they already do. We can just make them do it faster.
Yeah, sure, and I can cram a hundred monkey's in a room with a hundred typewriters and come up with a better love story than Twilight, but it's gonna take time. Not Shakespeare time, but a few weeks at least. That's the thing, though, the evolution of any system doesn't happen overnight. We didn't wake up one day, walk out of our cave, and create TikTok. Evolution is a long process. You forget all of the things that happened before we figured out that our thumbs weren't solely for sticking up our own asses. There are millions of years that you aren't accounting for. Billions of attempts to create what we take for granted. Consciousness. You say that we don't have to know what we are doing, and you are right, we don't, but it's a crap-shoot with quadrillion to one odds.
Again, we can store as much data as we want, it won't make AI happen. We haven't spontaneously seen life form in libraries, but they have been storing data in them for thousands of years. Consciousness isn't data. If that's all you want, ChatGPT is passing the bar. It still can't tell me it loves me, and mean it.
Funny, you seem to think that you perceive it pretty well...
A well thought out conclusion I'm sure is based on all of the facts you failed to present. Bravo.
The laws of physics still apply. We already have to do all kinds of crazy tricks to make transistors as small as they are and not leak electrons all over the place due to quantum tunneling. The best thing we figured out how to do is just pile on more CPU/GPU cores.
It's also arrogant to assume we will continue on this exponential industrial-revolution growth of the last 300 years and not plateau as a species again for the next thousand. We could be looking at an eon of just burnin' away our oil while we try to cling more and more to whatever other energy impinges on this pitiful little planet, trapped in our local space unable to use our pathetic spacecraft to push us any further.
The laws of physics are no less or more applicable to our own biology in terms of complexity, density, scale, and information capacity and in most ways is far less efficient and accurate than their silicon counterparts.
There is nothing to suggest the growth in computer intelligence is going to stop occurring or it's doing anything but just getting started.
Apart from your use of infinite I agree, there is no reason we shouldn't be able to surpass nature with synthetic intelligence. The time computers have existed is a mere blip on a historic scale, and computers has surpassed us at logic games like Chess and at math already long ago.
Modern LLM models are just the current stage, before that it could be said it was pattern recognition. We had OCR in the 80's as probably the most practical example. It may seem there is long between the breakthroughs, but 40 years is nothing compared to evolution.
I have no doubt strong AI will be achieved eventually, and when we do, I have no doubt AI will surpass our intelligence in every way very quickly.
If you keep doubling the number of fruit flies exponentially, isn't it likely that humanity will find itself outsmarted?
The answer is no, it isn't. Quantity does not quality make and all our current AI tech is about ways to breed fruit flies that fly left or right depending on what they see.
As a counter argument against that, companies are trying to make self driving cars work for 20 years. Processing power has increased by a million and the things still get stuck. Pure processing power isn't everything.
Magic as in street magician, not magic as in wizard. Lots of the things that people claim AI can do are like a magic show, it's amazing if you look at it from the right angle, and with the right skill you can hide the strings holding it up, but if you try to use it in the real world it falls apart.
I wish there was actual magic
It would make science very difficult.
What if it magically made it easier?
Mmm irrational shit makes rationality harder
Look at quantum mechanics
Everything is magic if you don't understand how the thing works.
I wish. I don't understand why my stomach can't handle corn, but it doesn't lead to magic. It leads to pain.
Have you eaten hominy corn? The nixtamalisation process makes it digestible.
I don't have access to that, sadly. I'm pretty sure my body would reject it however. At least from my reading on what it is.
Sam Altman will make a big pile of investor money disappear before your very eyes.
The masses have been treating it like actual magic since the early stages and are only slowly warming up to the idea it‘s calculations. Calculations of things that are often more than the sum of it‘s parts as people start to realize. Well some people anyway.
oh the bubble's gonna burst sooner than some may think
Next week, some say
If you're a thechbro, this is the new magic shit, man! To the moooooon!
If only.
Yea, try talking to chatgpt about things that you really know in detail about. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make lots of stuff up that you would not pick up on otherwise (and once you point it out, the bloody thing will "I knew that" you, sometimes even if you are wrong) and it is very shallow in its details. Sometimes, it just repeats your question back to you as a well-written essay. And that's fine...it is still a miracle that it is able to be as reliable and entertaining as some random bullshitter you talk to in a bar, it's good for brainstorming too.
It's like watching mainstream media news talk about something you know about.
Oh good comparison
Haha, definitely, it's infuriating and scary. But it also depends on what you are watching for. If you are watching TV, you do it for convenience or entertainment. LLMs have the potential to be much more than that, but unless a very open and accessible ecosystem is created for them, they are going to be whatever our tech overlords decide they want them to be in their boardrooms to milk us.
Well, if you read the article, you’ll see that’s exactly what is happening. Every company you can imagine is investing the GDP of smaller nations into AI. Google, Facebook, Microsoft. AI isn’t the future of humanity. It’s the future of capitalist interests. It’s the future of profit chasing. It’s the future of human misery. Tech companies have trampled all over human happiness and sanity to make a buck. And with the way surveillance capitalism is moving—facial recognition being integrated into insane places, like the M&M vending machine, the huge market for our most personal, revealing data—these could literally be two horsemen of the apocalypse.
Advancements in tech haven’t helped us as humans in while. But they sure did streamline profit centers. We have to wrest control of our future back from corporate America because this plutocracy driven by these people is very, very fucking dangerous.
AI is not the future for us. It’s the future for them. Our jobs getting “streamlined” will not mean the end of work and the rise of UBI. It will mean stronger, more invasive corporations wielding more power than ever while more and more people suffer, are cast out and told they’re just not working hard enough.
Sony wants photographs of my ears for "360 reality audio". No. Just no.
Dude! I bought some Bose headphones that were amazing. But I read over the privacy policy and they wanted to “map my head movements” and they wanted permission to passively listen to audio sent through the speakers and any audio around the microphone.
I ran those fuckers back to the store as quickly as possible.
But not before having to duck and dodge agreeing to the privacy policy in their app, so I quickly deleted it. But when I started interacting with their customer service, they tried to get me to sign a different privacy policy that seemed formulated just for the information shared in the chat, but in two separate addenda I had to dig through, I saw they were tryin to get me to sign the original super invasive privacy policy.
Fuck Bose. Fuck all these fake fronts for surveillance capitalism. Fuck capitalism.
Wrong chat dude. What does that have to do with AI anyways?
I think the claim is that they can use AI to improve the sound of their headphones if you supply them with images of your ears. I just dont like them having a database of personally identifying information like that.
How personally identifiable is your ear though? It's not connected to your thoughts, you can't use it to determine your age height and weight, which ad company would need that data? IMO, it's no different than sending a mold of your ear tube to a CIEM company to get your custom molded earphones.
Ears turn out to be a good way to recognize individuals. Ear biometrics is an evolving area.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7594944/
I see. I still don't think it's cause for concern yet, but good to know. Thanks!
Northrup Grumman probably already have a prototype to identify you by your ears from a mile up and kill you with a single bullet the moment you're not inside.
it's bizarre without context, but i recognise what they mean - Sony's headphones app suggests you send them photos of your ears so they can analyse the shape to improve the noise cancellation.
Which I don't think has anything to do with GenAI. Though, I admit I'm not well educated in ear scanning and 3D audio reconstruction, so good sources are appreciated.
What worries me is how much of the AI criticism on Lemmy wants to make everything worse; not share the gains more equally. If that's what passes for left today, well...
I don't have a problem with machine learning. I have a problem with one company getting x trillion dollars investment. Who pays when the investors want their returns? Eventually it's going to be all of us.
I don't think they have that much potential. They are just uncontrollable, it's a neat trick but totally unreliable if there isn't a human in the loop. This approach is missing all the control systems we have in our brains.
I really only use for "oh damn, I known there's a great one-liner to do that in Python" sort of thing. It's usually right and of it isn't it'll be immediacy obvious and you can move on with your day. For anything more complex the gas lighting and subtle errors make it unusable.
Oh yes, it's great for that. My google-fu was never good enough to "find the name of this thing that does this, but only when in this circumstance"
ChatGPT is great for helping with specific problems. Google search for example gives fairly general answers, or may have information that doesn't apply to your specific situation. But if you give ChatGPT a very specific description of the issue you're running into it will generally give some very useful recommendations. And it's an iterative process, you just need to treat it like a conversation.
It's also a decent writer's room brainstorm kind of tool, although it can't really get beyond the initial pitch as it's pretty terrible at staying consistent when trying to clean up ideas.
I find it incredibly helpful for breaking into new things.
I want to learn Terraform today; no guide/video/docs site can do it as well as having a teacher available at any time for Q&A.
Aside from that, it's pretty good for general Q&A on documented topics, and great when provided context (i.e. a full 200MB export of documentation from a tool or system).
But the moment I try and dig deeper into something I'm an expert in, it just breaks down.
That's why I've found it somewhat dangerous to use to jump into new things. It doesn't care about best practices and will just help you enough to let you shoot yourself in the foot.
Just wait for MeanGirlsGPT
Good. It's dangerous to view AI as magic. I've had to debate way too many people who think LLMs are actually intelligent. It's dangerous to overestimate their capabilities lest we use them for tasks they can't perform safely. It's very powerful, but the fact that it's totally non-deterministic and unpredictable means we need to very carefully design systems that rely on LLMs, with heavy guard rails.
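To give a sense of what I mean by guard rails, here's a minimal sketch; `call_llm` is a hypothetical stand-in for whatever model API you're using, and the point is just that nothing downstream trusts the model's raw output:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM API is being used.
    raise NotImplementedError

def classify_review(review: str) -> str:
    """Ask for structured output and refuse anything that doesn't validate."""
    raw = call_llm(
        'Return JSON like {"sentiment": "positive"} or {"sentiment": "negative"} '
        f"for this review: {review}"
    )
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return "unknown"  # never act on unparseable output
    if data.get("sentiment") not in ("positive", "negative"):
        return "unknown"  # never act on out-of-schema output
    return data["sentiment"]
```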
Conversely, there are way too many people who think that humans are magic and that it's impossible for AI to ever do what we do.
I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There's basic inert rocks at one end, and humans at the other, and everything else gets scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and accept the possibility that they're moving in our direction.
It's not linear either. Brains are crazy complex and have sub-cortexes that are specialized for specific tasks. I really don't think that LLMs alone can possibly demonstrate advanced intelligence, but I do think they could be a very important cortex for one. There's also different types of intelligence. LLMs are very knowledgeable and have great recall, but lack reasoning or a worldview.
Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched and bits inserted into the context of the LLM when it's answering questions. LLMs have been trained to be able to call external APIs to do the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is routine sorts of computer activities that we've already had a handle on for decades.
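A minimal sketch of that retrieval-augmented pattern, assuming hypothetical embed() and llm() functions (the names here are made up, not any particular library):

```python
def embed(text: str) -> list[float]:
    raise NotImplementedError  # hypothetical embedding model

def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical LLM call

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def answer(question: str, documents: list[str]) -> str:
    # Search the separate "memory" for the most relevant passages...
    q = embed(question)
    top = sorted(documents, key=lambda d: cosine(embed(d), q), reverse=True)[:3]
    # ...then insert them into the LLM's context before it answers.
    context = "\n".join(top)
    return llm(f"Using only this context:\n{context}\n\nAnswer this question: {question}")
```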
IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.
Lol yup, some people think they're real smart for realizing how limited LLMs are, but they don't recognize that the researchers that actually work on this are years ahead on experimentation and theory already and have already realized all this stuff and more. They're not just making the specific models better, they're also figuring out how to combine them to make something more generally intelligent instead of super specialized.
I find the people who think LLMs are actually an AI are generally the people opposed to them.
People who use them as the tools they are know how limited they are.
Not being combative or even disagreeing with you - purely out of curiosity, what do you think are the necessary and sufficient conditions of intelligence?
A worldview simulation it can use as a scratch pad for reasoning. I view reasoning as a set of simulated actions to convert a worldview from state A to state B.
It depends on how you define intelligence, though. Normally people define it as human-like, and I think there are three primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. I think the current generation is figuring out the knowledge type, but it needs to be combined with the other two to be complete.
Thanks! I'm not clear on what you mean by a worldview simulation as a scratch pad for reasoning. What would be an example of that process at work?
For sure, defining intelligence is non-trivial. What clears the bar of intelligence, and what doesn't, is not obvious to me. That's why I'm engaging here; it sounds like you've put a lot of thought into an answer. But I'm not sure I understand your terms.
A worldview is your current representational model of the world around you. For example, you know you're a human on Earth in a physical universe with a set of rules; you have a mental representation of your body and its capabilities, your location, and the physicality of the things in your location. It can also be abstract things, like your personality, your relationships, and your understanding of what's possible in the world.
Basically, you live in reality, but you need a way to store a representation of that reality in your mind in order to be able to interact with and understand that reality.
The simulation part is your ability to imagine manipulating that reality to achieve a goal. If you break that down, you're trying to convert reality from your perceived current real state A to an imagined desired state B. Reasoning is coming up with a plan to convert the worldview from state A to state B, step by step. Say you want to brush your teeth: you want to convert your worldview of you having dirty teeth to you having clean teeth, and to do that you reason that you need to follow a few steps, like moving your body to the bathroom, retrieving tools (toothbrush and toothpaste), and applying mechanical action to your teeth to clean them. You created a step-by-step plan to change the state of your worldview to a new desired state you came up with. It doesn't need to be physical either; it could be an abstract goal, like calculating a tip for a bill. It can also be a grand goal, like going to college or creating a mathematical proof.
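A toy way to picture it (purely illustrative, with the worldview reduced to a set of facts): reasoning is searching for a sequence of actions that converts state A into state B.

```python
# Toy "worldview": a set of facts. Each action has preconditions, additions, removals.
ACTIONS = {
    "walk_to_bathroom": ({"in_bedroom"}, {"in_bathroom"}, {"in_bedroom"}),
    "grab_toothbrush": ({"in_bathroom"}, {"has_toothbrush"}, set()),
    "brush_teeth": ({"has_toothbrush", "teeth_dirty"}, {"teeth_clean"}, {"teeth_dirty"}),
}

def plan(state, goal, steps=(), limit=5):
    # Depth-limited search over imagined actions: convert state A into state B.
    if goal in state:
        return list(steps)
    if len(steps) >= limit:
        return None
    for name, (needs, adds, removes) in ACTIONS.items():
        if needs <= state and name not in steps:
            result = plan((state - removes) | adds, goal, steps + (name,), limit)
            if result is not None:
                return result
    return None

state_a = {"in_bedroom", "teeth_dirty"}
print(plan(state_a, "teeth_clean"))
# ['walk_to_bathroom', 'grab_toothbrush', 'brush_teeth']
```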
LLMs don't have a representational model of the world, they don't have a working memory or a world simulation to use as a scratchpad for testing out reasoning. They just take a sequence of words and retrieve the next word that is probabilistically and relationally likely to be a good next word based on its training data.
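Mechanically, that boils down to something like this (a toy sketch, nothing like a real implementation): scores for candidate next words get turned into probabilities and one is sampled, with no world state anywhere in the loop.

```python
import math, random

def sample_next_word(scores: dict[str, float]) -> str:
    # Softmax: turn raw scores for candidate next words into probabilities...
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    # ...then sample one. No worldview, no scratchpad: just "what word likely comes next?"
    return random.choices(list(exps), weights=[e / total for e in exps.values()])[0]

# Made-up scores a model might assign after "The cat sat on the":
print(sample_next_word({"mat": 4.0, "sofa": 2.5, "moon": 0.1}))
```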
They could be a really important cortex that can assist in developing a worldview model, but in their current granular state as a single-task AI model, they cannot do reasoning on their own.
Knowledge retrieval is an important component that assists in reasoning though, so it can still play a very important role in reasoning.
I think it's a big mistake to think that because the most basic LLMs are just autocompletes, or that because LLMs can hallucinate, that what big LLMs do doesn't constitute "thinking". No, GPT4 isn't conscious, but it very clearly "thinks".
It's started to feel to me like current AIs are reasonable recreations of parts of our minds. It's like they're our ability to visualize, to verbalize, and to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the "rest" of our thought processes.
Depends on how you define thinking. I agree, LLMs could be a component of thinking, specifically knowledge and recall.
Yes, as Linus Torvalds said, humans also think like autocomplete systems.
Those recent failures only come across as cracks for people who see AI as magic in the first place. What they're really cracks in is people's misperceptions about what AI can do.
Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.
I get strong "everything is amazing and nobody is happy" vibes from this sort of thing.
Also interesting is that most people don't understand the advances it makes possible, so when they hear people saying it's amazing and then try it, of course they're going to think it hasn't lived up to the hype.
The big things are going to completely change how we use computers, especially being able to describe how you want it to lay out a UI and create custom tools on the fly.
I hope it collapses in a fire and we can just keep our FOSS local models with incremental improvements; that way both techbros and artbros eat shit.
Unfortunately for that outcome, brute forcing with more compute is pretty helpful for now
And even if local small-scale models turn out to be optimal, that wouldn't stop big business from using them. I'm not sure what "it" is being referred to with "I hope it collapses."
I was referring to the hype bubble, and therefore the money, surrounding it all.
There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?
Image generation, video generation, self-driving cars (Level 4, so the driver doesn't need to pay attention at all times), capable text comprehension and generation, whether used for translation, help with writing reports, or coding. And to top it all off, we have open source models that are at least in a similar ballpark to the closed ones, and those models can be run on consumer hardware.
Obviously AI is not a solved problem yet and there are lots of shortcomings (especially with LLMs and logic where they completely fail for even simple problems) but the progress is astonishing.
I think a big obstacle to meaningfully using AI is going to be public perception. Understanding the difference between ChatGPT and open source models means that people like us will probably continue to find ways of using AI as it continues to improve, but what I keep seeing is botched applications, where neither the consumers nor the investors who are pushing AI really understand what it is or what it's useful for. It's like trying to dig a grave with a fork - people are going to throw away the fork and say it's useless, not realising that that's not how it's meant to be used.
I'm concerned about the way the hype behaves because I wouldn't be surprised if people got so sick of hearing about AI at all, let alone broken AI nonsense, that it hastens the next AI winter. I worry that legitimate development may be held back by all the nonsense.
I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.
I've seen a weird aspect of it from the science side, where people writing grant applications or writing papers feel compelled to incorporate AI into it, because even if they know that their sub-field has no reliable use-cases for AI yet, they're feeling the pressure of the hype.
Specifically, when I say the pressure of the hype, I mean that some of the best scientists I have known were pretty bad at the academic schmoozing that facilitates better funding and more prestige. In practice, businessmen in boardrooms are often the ones holding the purse strings and sometimes it's easier to try to speak their language than to "translate" one's research to something they'll understand.
Businessmen are just the public but with money.
Fair point. I personally think that AI lives up to enough parts of the hype so that there won't be another AI winter but who knows. Some will obviously get disillusioned but not enough.
Lol. It doesn't do video generation. It just takes existing video and makes it look weird. Image generation is about the same: they just take existing works and smash them together, often in an incoherent way. Half the text generation shit is just done by underpaid people in Kenya and similar places.
There are a few areas where LLMs could be useful, things like trawling large data sets, etc., but every bit of the stuff that is being hyped as "AI" is just spam generators.
That's totally not how it works. Not only does nobody need such tools, but the technology for that existed well before the current state of AI.
Confidently incorrect.
As I often mention when this subject pops up: while the current statistics-based generative models might see some application, I believe that they'll be eventually replaced by better models that are actually aware of what they're generating, instead of simply reproducing patterns. With the current models being seen as "that cute 20s toy".
In text generation (currently dominated by LLMs), for example, this means that the main "bulk" of the model would do three things:
Because, as it stands, LLMs are only chaining tokens. They might do this in an incredibly complex way, but that's it. That's obvious when you look at what LLM-fuelled bots output as "hallucination" - they aren't the result of some internal error, they're simply an undesired product of a model that sometimes outputs desirable stuff too.
Sub "tokens" and "sememes" with "pixels" and "objects" and this probably holds true for image generating models, too. Probably.
Now, am I some sort of genius for noticing this? Probably not; I'm just some nobody with a chimp avatar, rambling in the Fediverse. Odds are that people behind those tech giants already noticed the same ages ago, and at least some of them reached the same conclusion - that better gen models need more awareness. If they are not doing this already, it means that this shit would be painfully expensive to implement, so the "better models" that I mentioned at the start will probably not appear too soon.
Most cracks will stay there; Google will hide them with an obnoxious band-aid, OpenAI will leave them in plain daylight, but the magic trick will still not be perfect, at least in the foreseeable future.
And some might say "use MOAR processing power!", or "input MOAR training data!", in the hopes that the current approach will "magically" fix itself. For those, imagine yourself trying to drain the Atlantic with a bucket: does it really matter if you use more buckets, or larger buckets? Brute-forcing problems only goes so far.
Just my two cents.
I agree 100%, and I think Zuckerberg's attempt to build a general AI based on LLMs out of a massive 340,000 of Nvidia's H100 GPUs sounds stupid. Unless there's a lot more to their attempt, it's doomed to fail.
I suppose the idea is something about achieving critical mass, but it's pretty obvious that critical mass is far from the only factor missing to achieve general AI.
I still think it's impressive what they can do with LLMs. And it seems to be a pretty huge step forward. But it's taken about 40 years from when we had decent "pattern recognition" to get here; the next step could be another 40 years?
I think that Zuckerberg's attempt is a mix of publicity stunt and "I want [you] to believe!". Trying to reach AGI through a large enough LLM sounds silly, on the same level as "ants build, right? If we gather enough ants, they'll build a skyscraper! Trust me."
In fact I wonder if the opposite direction wouldn't be a bit more feasible - start with some extremely primitive AGI, then "teach" it Language (as a skill) and a language (like Mandarin or English or whatever).
I'm not sure on how many years it'll take for an AGI to pop up. 100 years perhaps, but I'm just guessing.
I don't know much about LLMs, but latent diffusion models already have "meaning" encoded into the model. The whole concept of the U-Net is that as it reduces the spatial resolution of the image, it increases the semantic resolution by adding extra dimensions of information. It came from medical image analysis, where the idea of labelling something as a tumor would be really useful.
This is why you get body dysmorphic results on earlier (and even current) models. It's identified something as a human limb, but isn't quite sure on where the hand is, so it adds one on to what we know is a leg.
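Roughly what that trade-off looks like in code (a bare-bones sketch with made-up channel sizes, not any particular model): each encoder step halves the spatial resolution while widening the channel, i.e. "semantic", dimension.

```python
import torch
import torch.nn as nn

def down_block(in_ch: int, out_ch: int) -> nn.Module:
    # One U-Net-style encoder step: convolve, then halve height and width.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

encoder = nn.Sequential(
    down_block(3, 64),     # 256x256x3   -> 128x128x64
    down_block(64, 128),   # 128x128x64  -> 64x64x128
    down_block(128, 256),  # 64x64x128   -> 32x32x256
)

x = torch.randn(1, 3, 256, 256)  # a batch of one RGB image
print(encoder(x).shape)          # torch.Size([1, 256, 32, 32])
```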
There was an interesting paper published just recently titled Generative Models: What do they know? Do they know things? Let's find out! (a lot of fun names and titles in the AI field these days :) ) that does a lot of work in actually analyzing what an AI image generator "knows" about what it's depicting. They seem to have an awareness of three-dimensional space, of light and shadow and reflectivity, lots of things you wouldn't necessarily expect from something trained just on 2-D images tagged with a few short descriptive sentences. This article from a few months ago also delved into this; it showed that when you ask a generative AI to create a picture of a physical object, the first thing the AI does is come up with the three-dimensional shape of the scene before it starts figuring out what it looks like. Quite interesting stuff.
That's perhaps why image generators are comparatively better than text generators. But there's still something off, by your example it seems that the model cannot reliably use clues like position to understand "this is a «leg»". And I don't know much about image generators but I think that they're still statistics- and probability-based.
That's a huge oversimplification of the way LLMs work. They're not statistical in the way a Markov chain is. They use neural networks, which are a decent analogy for the human brain. The way the synapses between neurons are wired is obviously different, and the way the neurons are triggered and the types of signals they can send to other neurons is obviously different. But overall, similar capabilities can in theory be achieved with either method. If you're going to call neural networks statistics based, you might as well call the human brain statistics based as well.
I'm sticking to what matters for the sake of the argument. Anyone who wants to inform themself further has a plethora of online resources to do so.
Implied: "you're suggesting that they work like Markov chains, they don't."
At no point did I mention or even imply Markov chains. My usage of the verb "to chain" is clearly vaguer within that context; please do not put words in my mouth.
I don't disagree with the conclusion (i.e. I believe that neural networks can achieve human-like capabilities), but the argument itself is such a fallacious babble (false equivalence) that I'm not bothering further with your comment.
And it's also an "ackshyually" given this context dammit. I'm not talking about the bloody neural network, but how it is used.
No need to get offended. Maybe I misunderstood the intent behind your original message. I think you made a lot of good points.
I brought up the Markov chain because a common misconception I've seen on the Internet and in real life is that LLMs work pretty much the same as Markov chains under the hood. And I saw no mention of neural networks in your original comment.
I found this graph very clear
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022...
Trying to make real, good use of generative AI models is exactly where the cracks in the magic show.
It's pretty useful if you know exactly what you want and how to work within its limitations.
Coworkers around me already use ChatGPT to generate code snippets for Python, Excel VBA, etc. with good success.
Right, it's a tool with quirks, techniques and skills to use just like any other tool. ChatGPT has definitely saved me time and on at least one occasion, kept me from missing a deadline that I probably would have missed if I went about it "the old way" lmao
You mean they're using it to write boilerplate which shouldn't have been written in the first place.
Call it whatever makes you feel happy, it is allowing me to accomplish things much more quickly and easily than working without it does.
Until someone has to maintain it.
That's why I said code "snippets". I don't trust it to give me the entire answer right from the get go, because I acknowledge its limitations and review it before pasting it in. I find it works better if I tell it to generate specific code rather than everything at once.
Plus, we're not working on mission-critical server stuff here. It's code used for data analysis, which probably could also be found on Stack Overflow anyway. If it works, it works.
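For a sense of what kind of snippet I mean, it's stuff like this (a made-up example; the file and column names are hypothetical), which you could just as easily piece together from Stack Overflow:

```python
import pandas as pd

# Hypothetical CSV of measurements; the sort of throwaway analysis I'm talking about.
df = pd.read_csv("measurements.csv")
summary = (
    df.groupby("sample_id")["value"]
      .agg(["mean", "std", "count"])
      .sort_values("mean", ascending=False)
)
summary.to_csv("summary_by_sample.csv")
print(summary.head())
```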
Why? If you know how to incorporate "boilerplate" and modify it correctly into your own code, what difference does it make if it's from ChatGPT or Stack Overflow?
Compared to copying and pasting from Stack Overflow? Probably not terribly much. The latter is already bad.
It's as if the young'uns heard the term "10x developer" and decided that not understanding what you're doing is the way to get there.
(Also that page has a script I had to override just to copy and paste that)
It's well worth reading the longer newsletter the above link quotes: https://www.wheresyoured.at/sam-altman-fried/
I kinda agree we are probably cresting the peak of the hype cycle right now.