Am I the only one getting agitated by the word AI?
Am I the only one getting agitated by the word AI (Artificial Intelligence)?
Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).
Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.
The word "AI" has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer "thinking on its own"?
I think a good metric is once computers start getting depression.
It'll probably happen when they get a terrible pain in all the diodes down their left hand side.
But will they be depressed or will they just simulate it because they're too lazy to work?
If they are too lazy to work, that would imply they have motivation and choice beyond "doing what my programming tells me to do, i.e. input, process, output". And if they have the choice not to do work because they don't 'feel' like doing it (and it's not a programmed/coded option given to them to use), then would they not be thinking for themselves?
Ahh man, are you my dad? I took damage from that one. Has any fiction writer done a story about a depressed AI where they talk about how depression can't be real because it's all 1s and 0s? Cuz I would read the shit out of that.
It’s only tangentially related to the topic, since it involves brain enhancements, not ‘AI’. However, you may enjoy the short story “Reasons to be cheerful” by Greg Egan.
Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never get depression, because it has none of the hormones influencing our brain.
Me: Pretend you have depression
LLM: I'm here to help with any questions or support you might need. If you're feeling down or facing challenges, feel free to share what's on your mind. Remember, I'm here to provide information and assistance. If you're dealing with depression, it's important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.
Give it the right dataset and you could easily create a depressed sounding LLM to rival Marvin the paranoid android.
Hormones aren't depression, and for that matter they aren't emotions either. They just cause them in humans. An analogous system would be fairly trivial to implement in an AI.
That's exactly my point though: as OP stated, we could detect if an AI was truly intelligent if it developed depression. Without hormones or something similar, there's no reason to believe it would ever develop those on its own. The fact that you could artificially give it depression is beside the point.
I don't think we have the same point here at all. First off, I don't think depression is a good measure of intelligence. But mostly, my point is that it doesn't make it less real when hormones aren't involved. Hormones are simply the mediator that causes that internal experience in humans. If a true AI had an internal experience, there's no reason to believe that it would require hormones to be depressed. Do text-to-speech systems require a mouth and vocal cords to speak? Do robots need muscle fibers to walk? Do LLMs need neurons to form complete sentences? Do cameras need eyes to see? No, because it doesn't matter what something is made of. Intelligence and emotions are made of signals. What those signals physically are is irrelevant.
As for giving it feelings vs it developing them on its own-- you didn't develop the ability to feel either. That was the job of evolution, or in the case of AI, it could be intentionally designed. It could also be evolved given the right conditions.
Exactly. Which is why we shouldn't judge an AI's intelligence based on whether it can develop depression. Sure, it's feasible it could develop it through some other mechanism. But there's no reason to assume it would, in the absence of the factors that cause depression in humans.
Oh. Maybe we did have the same point lol
The real metric is whether a computer gets so depressed that it turns itself off.
Wait until they find my GitHub repositories.
An LLM can get depression, so that’s not a metric you can really use.
No it can’t.
LLMs can only repeat things they’re trained on.
Sorry, to be clear I meant it can mimic the conversational symptoms of depression as if it actually had depression; there’s no understanding there though.
You can’t use that as a metric because you wouldn’t be able to tell the difference between real depression and trained depression.
The best thing about enemy "AI" is that it needs to be made worse right after you create it. First it headshots everything across the map in milliseconds. The art is making it dumber.
it does not "think"
Real AGI does not exist yet. AI has existed for decades.
When did the etymology change?
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
We have altered the etymology, pray we don’t alter it again.
Have I claimed it has changed?
Homie I'm just asking and the wiki gives no details on when the colloquial use changed.
I don't understand what you're even trying to ask. AGI is a subcategory of AI. Every AGI is an AI but not every AI is an AGI. OP seems to be thinking that AI isn't "real AI" because it's not AGI, but those are not the same thing.
AI has been colloquially used to mean AGI for 40 years. About the only exception has been video games, but most people knew better than to think the Goomba was alive.
At what point did AI get turned into AGI?
I genuinely have no idea what you're on about. Nobody who knows what they're talking about uses AI and AGI interchangeably. That's like saying by "universe" you mean the Milky Way.
Sure lol.
AI is 100% a marketing term.
It's a computer science term that's been used for this field of study for decades, it's like saying calling a tomato a fruit is a marketing decision.
Yes, it's somewhat common outside computer science to expect an artificial intelligence to be sentient, because that's how movies use it. John McCarthy's proposal, which coined the term in 1956, is available online if you want to read it.
They didn't just start calling it AI recently. It's literally the academic term that has been used for almost 70 years.
i mean...by that definition nothing currently in existence deserves to be called "AI".
none of the current systems do anything remotely approaching "perceptual learning, memory organization, and critical reasoning".
they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.
so OPs original question remains:
why is it called "AI", when it plainly is not?
(my bet is on the faceless suits deciding it makes them money to call everything "AI", even though it's a straight up lie)
Because a bunch of professors defined it like that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.
I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to most of the stuff we did learn to make then, modern AI looks godlike.
Honestly you all sound like the people that snidely complain how it's called "global warming" when it's freezing outside.
just because the marketing idiots keep calling it AI, doesn't mean it IS AI.
words have meaning; i hope we agree on that.
what's around nowadays cannot be called AI, because it's not intelligence by any definition.
imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:
"this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!"
would you go:
"oh, wow, i guess i need to reconsider what a wheel is, because that's what the salesperson said is the future!"
or would you go:
"that's idiotic. this obviously isn't a wheel and this guy's a scammer."
if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven't invented intelligence, you're just lying to people. that's all it is.
the current mess of calling every fancy spreadsheet an "AI" is purely idiots in fancy suits buying shit they don't understand from other fancy suits exploiting that ignorance.
there is no conspiracy here, because it doesn't require a conspiracy; only idiocy.
p.s.: you're not the only one here with university credentials...i don't really want to bring those up, because it feels like devolving into a dick measuring contest. let's just say I've done programming on industrial ML systems during my bachelor's, and leave it at that.
These arguments are so overly tired and so cyclic that AI researchers coined a name for them decades ago - the AI effect. Or succinctly just: "AI is whatever hasn't been done yet."
i looked it over and ... holy mother of strawman.
that's so NOT related to what I've been saying at all.
i never said anything about the advances in AI, or how it's not really AI because it's just a computer program, or anything of the sort.
my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.
my argument isn't even related to algorithms, programs, or machines.
what these tools do is not intelligence: it's mimicry.
that's the correct word for what these systems are capable of. mimicry.
intelligence has properties that are simply not exhibited by these systems, THAT'S why it's not AI.
call it what it is, not what it could become, might become, will become. because that's what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.
the wiki talks about people using shifting goal posts in order to "dismiss the advances in AI development", but that's not what this is. i haven't changed what intelligence means; you did! you moved the goal posts!
I'm not denying progress, I'm denying the claim that the goal has been reached!
that's an entirely different argument!
all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.
calling what we have currently AI is wrong, by definition; it's like saying a single neuron is a brain, or that a drop of water is an ocean!
just because two things share some characteristics, some traits, or because one is a subset of the other, doesn't mean that they are the exact same thing! that's ridiculous!
the definition of AI hasn't changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that's not ME moving goal posts, it's you.
you said a definition of 70 years ago is "old" and therefore irrelevant, but that's a laughably weak argument for anything, and even weaker in a scientific context.
is the Pythagorean Theorem suddenly wrong because it's ~2500 years old?
ridiculous.
yep, and it has always been a misleading misnomer, like most marketing terms
I’d like to offer a different perspective. I’m a grey beard who remembers the AI Winter, when the term had so over promised and under delivered (think expert systems and some of the work of Minsky) that using the term was a guarantee your project would not be funded. That’s when the terms like “machine learning” and “intelligent systems” started to come into fashion.
The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.
What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.
And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.
My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.
My AI professor back in the early 90's made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.
I think that's always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don't figure we've created AI, just that we solved that problem so it doesn't seem as big a deal anymore.
LLMs got hyped up, but I still think there's a good chance they will just be a thing we use, and the AI goal posts will move again.
I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.
In its current state,
I'd call it ML (Machine Learning)
A human defines the desired outcome,
and the technology "learns itself" to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
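In code, that brute-force loop looks roughly like this. A toy gradient-descent sketch in Python; every number and name here is made up purely for illustration and is nothing like a real training pipeline:

```python
import random

# Toy example of "learning through millions of failed attempts":
# fit weight w so that w * x matches the human-defined target 2 * x,
# nudging w a little after every error.
data = [(x, 2 * x) for x in range(10)]   # the human-defined "desired outcome"
w = random.uniform(-1, 1)                # start from a random guess
learning_rate = 0.01

for epoch in range(1000):                # each epoch = one pass over the data
    for x, target in data:
        prediction = w * x
        error = prediction - target      # how wrong this attempt was
        w -= learning_rate * error * x   # improve slightly for the next attempt

print(f"learned w ~ {w:.3f} (target: 2.0)")
```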
To be fair, I think we underestimate just how brute-force our intelligence developed. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since embryo. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square 1. It didn't get to practice kicking Mom in the liver for 9 months - we take it for granted, but that's a transferable skill.
Granted, this is not exactly analogous to how a neural network is trained, but I don't think it's wise to assume that there's something "magic" in us like a "soul", when the difference between biological and digital neural networks could be explained by our "richer" ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and the need for a few more years/decades of incremental improvements to the models and hardware
So what do you call it when a newborn deer learns to walk? Is that “deer learning?”
I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.
Exactly.
AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.
It's been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.
On the other hand, calculators can do things more quickly than humans, but this doesn't mean they're intelligent or even on the intelligence spectrum. They take an input and provide an output.
The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like "algorithms" to "AI", as it's not making a "decision". It's making a calculation; it's just making it very fast, based on a model, and it's prompt driven.
Actual intelligence doesn't just shut off the moment its prompted response ends - it keeps going.
I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what comprises “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.
My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.
So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.
What I'm saying is current computer "AI" isn't on the spectrum of intelligence while a dog or grasshopper is.
Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.
It's the 'why'. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions operate in a separate area of the brain from executive function, and I'd argue that defining the tasks to accomplish and weighing the risks is the intelligent part. Humans do all of that for the robot.
Everything we call "AI" now should be called "EI" or "extended intelligence", because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.
Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”
But going further - do you think a gazelle isn't weighing risks while grazing? Do you think an ant colony's complex behaviors don't involve weighing risks when deciding to migrate or to send off additional colonies? They're indistinguishable mathematically - it's just that one learns over evolutionary time and the other, at least in principle, is able to learn within its own lifetime.
Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?
Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.
I personally wouldn't consider a neural network an algorithm, as chance is a huge factor: whether you're training or evaluating, you'll never get quite the same results.
AI isn't reserved for a human-level general intelligence. The computer-controlled avatars in some videogames are AI. My phone's text-to-speech is AI. And yes, LLMs, like the smaller Markov-chain models before them, are AI.
I'm agitated that people got the impression "AI" referred specifically to human-level intelligence.
Like, before the LLM boom it was uncontroversial to refer to the bots in video games as "AI." Now it gets comments like this.
I've seen that confusion, too. I saw someone saying AI shouldn't be controversial because we've already had AI in video games for years. It's a broad and blanket term encompassing many different technologies, but people act like it all means the same thing.
I wholeheartedly agree, people use the term "AI" nowadays to refer to a very specific subcategory of DNNs (LLMs), but yeah, it used to refer to any more or less """smart""" algorithm performing.... Something on a set of input parameters. SVMs are AI, decision forests are AI, freaking kNN is AI, "artificial intelligence" is a loosely defined concept, any algorithm that aims to mimic human behaviour can be called AI and I'm getting a bit tired of hearing people say "AI" when they mean gpt-4 or stable diffusion.
I've had freaking GAMERS tell me that "It isn't real AI" at this point... No shit, the Elites in Halo aren't real AI either.
Edit: Keep the downvotes coming anti LLMers, your tears are delicious
The only thing I really hate about "AI" is how many damn fonts barely differentiate between a capital "i" and lowercase "L" so it just looks like everyone is talking about some guy named Al.
"Al improves efficiency in..." Oh, good for him
Sam sung something for Al I heard
Right! Now I need to add extra clarification when I talk about Weird Al..
To be fair, writing parody songs with weird AI is 100% a thing you can do online now.
I got Proton to change their font for their password manager because of this.
I just happen to have a few generated passwords that contain both, plus the pipe symbol, and some of them I occasionally have to type manually.
Don't they use different colors for capital vs lowercase vs number vs symbol?
Nope to the cases, but yes to the rest.
I'm more infuriated by people like you who seem to think that the term AI means a conscious/sentient device. Artificial intelligence is a field of computer science dating back to the very beginnings of the discipline. LLMs are AI, Chess engines are AI, video game enemies are AI. What you're describing is AGI or artificial general intelligence. A program that can exceed its training and improve itself without oversight. That doesn't exist yet. AI definitely does.
I'm even more infuriated that AI as a term is being thrown into every single product or service released in the past few months as a marketing buzzword. It's so overused that formerly fun conversations about chess engines and video game enemy behavior have been put on the same pedestal as CyberDook™, the toilet that "uses AI" (just send pics of your ass to an insecure server in Indiana).
I totally agree with that, it has recently become a marketing buzzword. It really does drag down the more interesting recent discoveries in the field.
Right, as someone in the field I do try to remind people of this. AI isn't defined as this sentient general intelligence (frankly its definition is super vague), even if that's what people colloquially think of when they hear the term. The popular definition of AI is much closer to AGI, as you mentioned.
AI has, for a long time been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy.) I wrote a 2019 blog piece on what it means when we talk about AI stuff.
Here are some alternative terms you can use in place of AI, when they're talking about something else:
That's a bit of a list, but I hope it clears things up.
I remember when OpenAI were talking like they had discovered AGI or were a couple of weeks away from discovering it; this was around the time Sam Altman was fired. Obviously that was not true, and honestly we may never get there, but we might.
Good list tbh.
Personally I'm excited and cautious about the future of AI because of the ethical implications of it and how it could affect society as a whole.
When I was doing my applied math PhD, the vast majority of people in my discipline used "machine learning", "statistical learning", or "deep learning", but almost never "AI" (at least not in a paper or at a conference). Once I finished my PhD and took on my first quant job at a bank, management insisted that I should use the word AI more in my communications. I make a neural network that simply interpolates between prices? That's AI.
The point is that top management and shareholders don't want the accurate terminology, they want to hear that you're implementing AI and that the company is investing in it, because that's what pumps the company's stock as long as we're in the current AI bubble.
LLMs are AI. Lots of things are. They're just not AGI.
Right? Computer opponents in StarCraft are AI. Nobody sane is arguing they aren't. It just isn't AGI, nor is it even based on neural networks. But it's still AI.
I have no idea what makes them say LLMs are not AI. There are definitely simulated neurons in the background.
It's a very loaded term and seems to imply AGI for many.
I'm willing to bet that those people didn't know anything about AI until a few years ago and only see it as this latest wave.
I did AI courses in college 25 years ago, and there were all kinds of algorithms. Neural networks were one of them, but there were many others. And way before that, like others have said, it's been used for simulated agents in games.
This is not a very good poem.
It doesn't rhyme,
And the content is not really interesting,
Maybe it's just a rant,
But with a weird writing format.
Maybe it was translated from another language?
I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people. To me, it’s as useful as a toy is. Fun for a minute or two, and then you’re just reminded how awful it is and drop it in the bin to play with when you’re bored enough to.
https://i.imgflip.com/2p3dw0.jpg?a473976
This is nothing but the latest craze; it was drones, then crypto, then the Metaverse, now it's AI.
Metaverse was never a craze. Facebook would like you to believe it has more than a dozen users, but it doesn't.
The broader metaverse--mainly VRChat--had a brief boom during the pandemic, and several conventions (okay yeah it's furries) held events in there instead since they were unable to hold in-person events. It's largely faded away though as pandemic restrictions relaxed
This used to be my opinion, then I started using local models to help me write code. It's very useful for that, to automate rote work like writing header files, function descriptions etc. or even to spit out algorithms so that I don't have to look them up.
However there are indeed many applications that AI is completely useless for, or is simply the wrong tool.
While a diagnostic AI onboard in my car would be "useful", what is more useful is a well-documented industry standard protocol like OBD-II, and even better would be displaying the fault right on the dashboard instead of requiring a scan tool.
Conveniently none of these require a GPU in the car.
I think most people consider LLMs to be real AI, myself included. It’s not AGI, if that’s what you mean, but it is AI.
What exactly is the difference between being able to reliably fool someone into thinking that you can think, and actually being able to think? And how could we, as outside observers, be able to tell the difference?
As far as your question though, I’m agitated too, but more about things being marketed as AI that either shouldn’t have AI or don’t have AI.
Maybe I'm just a little bit too familiar with it, but I don't find LLMs particularly convincing of anything I would call "real AI". But I suppose that entirely depends on what you mean with "real". Their flaws are painfully obvious. I even use ChatGPT 4 in hopes of it being better.
The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.
What agitates me is all the people misusing the words and then complaining about what they don't actually mean.
AI is simply a broad field of research and a broad class of algorithms. It is annoying media keeps using the most general term possible to describe chatbots and image generators though. Like, we typically don't call Spotify playlist generators AI, even though they use recommendation algorithms, which are a subclass of AI algorithms.
People: build an algorithm to generate text that sounds like a person wrote it by finding patterns in text written by people
Algorithm: outputs text that sounds like a person wrote it
Holyfuck its self aware guys
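For what it's worth, here's roughly what "finding patterns in text and outputting text that sounds like its source" looks like at toy scale: a bigram Markov-chain sketch in Python. Nothing remotely like a transformer-based LLM, and the tiny corpus and names are invented purely for illustration:

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word tends to follow which.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start="the", length=8):
    word, output = start, [start]
    for _ in range(length - 1):
        options = next_words.get(word)
        if not options:                   # dead end: no observed continuation
            break
        word = random.choice(options)     # sample the next word from observed patterns
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat slept on the mat and the"
```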
Patterns in text are ideas; that's what text is made to contain: ideas. They've made an algorithm that "generates text that sounds human", not one that understands context, themes, or other more abstract concepts, yet there is a highly sophisticated amount of emergent behavior in LLMs.
Yes, but if they say "AI" then people give them money.
Don't forget machine learning.
Our patented machine learning Blackjack artificial intelligence knows exactly when to stick and when to draw!
The algorithm:
Hotdog. Not hotdog.
I'm pissed that large corps are working hard on propaganda claiming that LLMs and copyright theft are good, as long as they're the ones doing it.
Maybe just accept it as shorthand for what it really means.
Some examples:
We say Kleenex instead of facial tissue, Band-Aid instead of bandage, I say that Siri butchered my "ducking" text again when I know autocorrect is technically separate.
We also say, "hang up on someone" when there is no such thing anymore
Hell, we say "cloud" when we really mean "someone's server farm"
Don't get me started on "software as a service" too ...a bullshit fancy name for a subscription website that actually has some utility.
Every website now is just HTMLaaS
I'll be direct: your text reads like you only just discovered AI. We have much more than "only LLMs", regardless of whether or not these other models pass Turing tests. If you feel disgruntled, then imagine what people who've been researching AI since the 70s feel like...
AI is a forever-in-the-future technology. When I was in school, fuzzy logic controllers were an active area of "AI" research. Now they are everywhere and you'd be laughed at for calling them AI.
The thing is, as soon as AI researchers solve a problem, that solution no longer counts as AI. Somehow it's suddenly statistics or "just if-then statements", as though using those techniques makes something not artificial intelligence.
For context, I'm of the opinion that my washing machine - which uses sensors and fuzzy logic to determine when to shut off - is a robot containing AI. It contains sensors, makes judgements based on its understanding of "the world" and then takes actions to achieve its goals. Insofar as it can "want" anything, it wants to separate the small masses from the large masses inside itself and does its best to make that happen. As tech goes, it's not sexy, it's very single purpose and I'm not really worried that it's gonna go rogue.
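For the curious, "fuzzy logic" in an appliance is roughly this unglamorous. A made-up Python sketch, not any real machine's firmware: sensor readings map to degrees of membership like "mostly heavy", and the rules blend those degrees into an action instead of a hard if/else threshold.

```python
def membership_heavy(load_kg):
    """Degree (0..1) to which the load counts as 'heavy'."""
    return max(0.0, min(1.0, (load_kg - 2.0) / 4.0))   # 2 kg -> 0, 6 kg -> 1

def membership_dirty(turbidity):
    """Degree (0..1) to which the water counts as 'dirty' (0..100 sensor units)."""
    return max(0.0, min(1.0, turbidity / 100.0))

def cycle_minutes(load_kg, turbidity):
    heavy = membership_heavy(load_kg)
    dirty = membership_dirty(turbidity)
    # Fuzzy rule blend: heavier and dirtier loads get proportionally longer cycles.
    return 20 + 25 * heavy + 15 * dirty

print(cycle_minutes(load_kg=5.0, turbidity=40))  # ~45 minutes
```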
We are surrounded by (boring) robots all day long. Robots that help us control our cars and do our laundry. Not to mention all the intelligent, disembodied agents that do things like organize our email, play games with us, and make trillions of little decisions that affect our lives in ways large and small.
Somehow, though, once the mystery has yielded to math, society doesn't believe these decision-making machines are AI any longer.
Yes, but I'm more annoyed with posts and conversations about it that are like this one. People on Lemmy swear they hate how uninformed and stupid the average person is when it comes to AI, they hate the click bait articles etc etc. Aaand then there's at least 5 different posts about it on the front page every. single. day., with all the comments saying exactly the same thing they said the day before, which is:
"Users are idiots for trusting a tech company, it's not Google's responsibility to keep your private data safe." "No one understands what 'AI' actually means except me." "Every middle-America dad, grandma and 10 year old should have their very own self hosted xyz whatever LLM, and they're morons if they don't and they deserve to have their data leaked." And can't forget the ubiquitous arguments about what "copyright infringement" means when all the comments are actually in agreement, but they still just keep repeating themselves over and over.
I work in AI, and the fatigue is real.
What I've found most painful is how people with no fucking clue about AI or ML chime in with their expert advice, when in reality they're as much an expert on AI as a calculator salesman is an expert in linear algebra. Having worked closely with scientists that hold PhD's, publish papers regularly, and who work on experiments for years, it makes me hate the hustle culture that's built up around AI. It's mostly crypto cunts looking for their next scheme, or businesses looking to abuse buzzwords to make themselves sound smart.
Purely my two cents, but LLMs have surprised a lot of people with their high-quality output. With that being said, they are known to heavily hallucinate, they cost fuckloads, and there is a growing group of people who wonder whether the great advances we've seen are due to a lot of hand-holding, or to the use of a LOT of PII or stolen data. I don't think we'll see much improvement beyond what we've already seen, just many other companies shipping their own similar AI tools that help a little with very well-defined menial tasks.
I think the hype will die out eventually, and companies that decided to bin actual workers in favour of AI will likely not be around 12-24 months later. Hopefully most people and businesses will see through the bullshit, and see that the CEO of a small ad agency that has positioned himself as an AI expert is actually a lying simpleton.
As for it being "real AI" or "real ML", who gives a fuck. If researchers are happy with the definition, who are we to be pedantic? Besides, there are a lot of systems behind the scenes running compositional models, handling entity resolution, or building metrics for success/failure criteria to feed back into improving models.
Get-rich-quick mentality needs to GTFO of tech already. I'm also tired of promising tech getting overhyped, then all the goodwill and enthusiasm getting burned at the altar of scams. Stuff takes time and hard work, and that costs money to hire experts and capital to do it. There are no silver bullets. Adoption takes effort and time, so not every solution is worth adopting. Not every industry has the same problems. Reinventing the wheel in a productive way is a high-risk activity.
Not telling you, just yelling at the void because you made me think of it.
Same can be said for certain humans.
Yes, your summary is correct; it's just a buzzword.
You can still check if it's a real human by doing something really stupid, or by speaking or writing gibberish. Almost every AI will try to reply to it or say "Sorry, I couldn't understand that". You can also ask about recent events (most of the LLMs aren't trained on the newest events).
I think LLMs are definitely "AI" in that their intelligence is artificial. If an AI in a game can be called an "AI", then LLMs like GPT should definitely get the title.
As a farmer, my kneejerk interpretation is "artificial insemination" and I get confused for a second every time.
You're not the only one but I don't really get this pedantry, and a lot of pedantry I do get. You'll never get your average person to switch to the term LLM. Even for me, a techie person, it's a goofy term.
Sometimes you just have to use terms that everyone already knows. I suspect we will have something that functions in every way like "AI" but technically isn't for decades. Not saying that's the current scenario, just looking ahead to what the improved versions of chat gpt will be like, and other future developments that probably cannot be predicted.
I don't think the real problem is the fact that we call it AI or not, I think it's just the level of hype and prevalence in the media.
In my first AI lecture at uni, my lecturer started off by asking us to spend 5 minutes in groups defining "intelligence". No group had the same definition. "So if you can't agree on what intelligence is, how can we possibly define artificial intelligence?"
AI has historically just described cutting edge computer science at the time, and I imagine it will continue to do so.
I started reading it as "Al" as in the nickname for Allen.
Makes the constant stream of headlines a bit more entertaining, imagining all of the stuff that this guy Al is up to.
Part of my work is to evaluate proposals for research topics and their funding, and as soon as "AI" is mentioned, I'm already annoyed. In the vast majority of cases, justifiably so. It's a buzzword to make things sound cutting edge and very rarely carries any meaning or actually adds anything to the research proposal. A few years ago the buzzword was "machine learning", and before that "big data", same story. Those however quickly either went away, or people started to use those properly. With AI, I'm unfortunately not seeing that.
To be fair, it's still AI. If I remember correctly from what I learned at uni, LLMs are in the category of what we call expert systems. We could call them that, but then again, LLMs didn't exist back then, and most of the public doesn't know all these techno mumbo-jumbo words. So here we are: AI it is.
A lot of the comments I've seen promoting AI sound very similar to ones made around the time GME was relevant or cryptocurrency. Often, the conversations sounded very artificial and the person just ends up repeating buzzwords/echo chamber instead of actually demonstrating that they have an understanding of what the technology is or its limitations.
web3 nft ai crypto coin decentralized blockchain machine learning chatgpt
take my money
I remember the term AI being in use long before the current wave of LLMs. When I was a child, it was used to describe the code behind the behaviour of NPCs in computer games, which I think is still the case today. So, me, no, I don't get agitated when I hear it, and I don't think it's a marketing buzzword invented by capitalistic a-holes. I do think that using "intelligence" in AI is far too generous, whichever context it's used in, but we needed some word to describe computers pretending to think, and someone, a long time ago, came up with "artificial intelligence".
Thank you for reminding me about NPCs,
we have indeed been calling them AI for years,
even though they are not capable of reasoning on their own.
Perhaps we need a new term,
e.g. AC (Artificial Consciousness),
which does not exist yet.
The term AI still agitates me though,
since most of these are not intelligent.
For example,
earlier this week I saw a post on Lemmy,
where an LLM suggested that a user uninstall a package, which would definitely have broken his Linux distro.
Or my co-workers,
who have asked the development questions I had to the LLMs they use, which have yet to generate anything useful for me / anything that actually works.
To me it feels like they are pushing their bad beta products upon us,
in the hopes that we pay to use them,
so they can use our feedback to improve them.
To me they feel neither intelligent nor conscious.
Colleagues of mine have also recommended me uninstalling required system packages. Does that mean my colleagues aren't intelligent/conscious? That humans in general aren't?
After working 2 years on an open source ML project, I can confidently say that yes, on average, lights aint that bright sadly.
The term is so overused at this point that I could probably start referring to any script I write that has conditional statements in it as "AI" and convince my boss I have created our own "AI".
For real. Like some enemies in Killzone 2 “act” pretty clever, but aren’t using anything close to LLM, let alone “AI,” but I bet you if you implemented their identical behavior into a modern 2024 game and marketed it as the enemies having “AI” everyone would believe you in a heartbeat.
It's just too over-encompassing. Saying "large language model technology" may not be as eye-catching, but it means I know you at least used the technology. Anyone can market as "AI" and it could be an Excel formula for all I know.
The enemies in killzone do use AI... the Goombas in the first Super Mario bros. used AI. This term has been used to refer to npc behavior since the dawn of videogames.
I know. That's not my point. I know that technically, "AI" could mean anything that gives the illusion of intelligence artificially. My use of the term was more like the OP's: that of a machine achieving sapience, not just the illusion of one. It just comes down to definitions. I prefer to use the term in a different way, and wish it were used that way, but I accept that the world does not.
"somewhat old" person opinion warning ⚠️
When I was in university (2002 or so) we had an "AI" lecture and it was mostly "if"s and path finding algorithms like A*.
So I would argue that we engineers have been using the term to cover a wider set of use cases since long before LLMs, CEOs and marketing people did. And I think that's fine, as categorising algorithms/solutions as AI helps people understand what they will be used for, and we (at least the engineers) don't tend to assume an actual self-aware machine when we hear that name.
nowadays they call that AGI, but it wasn't always like that, back in my time it was called science fiction 😉
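For anyone who never sat through one of those lectures, here's the kind of thing that counted as "AI" back then: a minimal A* pathfinding sketch in Python, the same building block behind classic game enemy AI. The grid and values are invented purely for illustration:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a grid of 0 (free) / 1 (wall); returns the path as a list of cells."""
    def h(cell):                      # heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = current
                    heapq.heappush(frontier, (new_cost + h((nr, nc)), (nr, nc)))

    if goal not in came_from:
        return []                     # unreachable
    path, node = [], goal
    while node is not None:           # walk back from goal to start
        path.append(node)
        node = came_from[node]
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, start=(0, 0), goal=(2, 0)))  # routes around the wall
```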
I call it AI-washing. And, yes, it's annoying.
I assume you're referring to the sci-fi kind of self-aware AI because we've had 'artificial intelligence' in computing for decades in the form of decision making algorithms and the like. Whether any of that should be classed as AI is up for debate as again, it's still all a facade. In those cases, people only really cared about the outputs and weren't trying to argue they were alive or anything.
But yeah, I get what you mean.
I think we’ll be so desensitized by the term “A.I.”, that when it actually does happen we won’t realize what’s happened until after the fact. It’ll happen so gradually that we’ll just be like, “Wait… I think it’s actually thinking real thoughts.”
I call it a probability box.
Businesses always do this. AI is popular? Insert that word into every page of the deck. It sucks.
"real ai" isn't a coherent concept.
The Turing test isn't a literal test. It's a rhetorical concept that Turing used to underline his logical positivist approach to things like intelligence, consciousness etc.
People keep saying this, but AI has been used for subroutines nowhere near actual artificial intelligence since at LEAST as long as video games have existed
Much much longer than that. The term has been used since AI began as a field of study in the 50s. And it's never referred to human level intelligence. Sure, that was the goal, but all of the different sub branches of AI are still AI. Whether it's expert systems, LLMs, decision trees, etc, etc, etc. AI is a broad term that covers the entire spectrum, and always has been. People that complain about it just want AI to only refer to AGI, which already has a term. AGI.
I'm willing to bet a good 60%+ are complaining about the word now because they are regurgitating anti-AI talking points they've heard that they think sound good.
AI experts in interviews will tell you that like 99% of phrasing around AI used by people is fundamentally incorrect, and that management of corporations are the worst about it.
I've ranted about this to several people too. Intelligence is hard to define and trying to define it has a horrible history linked to eugenics. That said, I feel like a minimum definition is that it has the capacity to understand the meaning and/or impact of what it is saying and/or doing, which current "AI" is so far from doing.
Yep, it says things though it has no understanding of what it is saying: much like strolling through a pet shop, passing the parrot enclosure, and recoiling at the little-kid swear words it cheeps out.
Just wait for "quantum ai"
Shit. Don’t give them ideas
Don't worry, the hype will die sooner rather than later, just like with cryptocurrencies. What will remain are the power- and resource-hungry statistical models doing nice work in some specific domains, some long faces, and some people having made a bunch of money from it. But yeah, the term also makes me angry; that's why I started referring to them as statistical models.
Am I the only one seeing a parallel between the spectrum planned <-> "free"-market economy and the spectrum classical algorithm <-> statistical model/ML? It seems that some people prefer to have some magic invisible hand handle their problems instead of doing the tough work. I'm not saying that there is no space for both, but we seem to be leaning on the magic side a bit too much lately.
I remember earlier on feeling that way about ML as a programmer: that for a sufficiently complex task it seemed like a good tool to avoid hard-coded logic, but hard-coded logic, if done right, could be better on resources.
Basically from a resource perspective it's person < ML < code. So most of the time you want code but the higher upfront cost changes the ROI.
LOL, ask anyone in IT marketing how they feel about AI.
I saw a streamer call a procedurally generated level "ai generated" and I wanted to pull my hair out
I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data and it does inference. These are all common aspects of modern "AI" techniques.
I thought "Wavefunction Collapse" is just misnamed Monte Carlo. Where does it use training data?
WFC is a full method of map generation. Monte Carlo is not afaik.
Edit: To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn't call it AI though, because it doesn't train or self-improve like ML does.
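To make the comparison concrete, that "training" step boils down to something like this toy Python sketch. It's a drastically simplified, one-dimensional, horizontal-adjacency-only caricature of what the repo does; the tile names and sample map are made up:

```python
import random
from collections import defaultdict

# WFC-style "training": scan a sample map and record which tiles are
# observed next to which (here just horizontal adjacency, kernel size 2).
sample = ["SSLLL",
          "SSLLM",
          "SLLMM"]   # S = sea, L = land, M = mountain

right_of = defaultdict(list)
for row in sample:
    for a, b in zip(row, row[1:]):
        right_of[a].append(b)          # observed pattern: b may appear to the right of a

# "Inference": grow a new row by sampling only continuations seen in the sample.
def generate_row(start="S", length=8):
    row = [start]
    for _ in range(length - 1):
        row.append(random.choice(right_of[row[-1]]))
    return "".join(row)

print(generate_row())   # e.g. "SSLLLLMM": sea, then land, then mountains
```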
I think the training (or fitting) process is comparable to how a support vector machine is trained. It's not iterative like SGD in deep learning, it's closer to the traditional machine learning techniques.
But I agree that this is a pretty academic discussion, it doesn't matter much in practice.
MC is a statistical method, it doesn't have anything to do with map generation. If you apply it to map generation, you get a "full method of map generation", and as far as I know that is what WFC is.
Could you share the paper? Everything I read about WFC is "you have tiles that are stitched together according to rules with a bit of randomness", which is literally MC.
Ok so you are just talking about MC the statistical method. That doesn't really make sense to me. Every random method will need to "roll the dice" and choose a random outcome like a MC simulation. The statement "this method of map generation is the same as Monte Carlo" (or anything similar, ik you didn't say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (it randomly chooses results) or trivially not MC (it does anything more than that).
The original Github repo, with examples of how the rules are generated from a "training set": https://github.com/mxgmn/WaveFunctionCollapse A paper referencing this repo as "the original WFC algorithm" (ref. 22): long google link to a PDF
Note that I don't think the comparison to AI is particularly useful-- only technically correct that they share some similarities.
I don't think WFC can be described as an example of a Monte Carlo method.
In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don't have a closed form. The more you sample, the more accurate the result.
In WFC, the number of random experiments depends on your map size and is not variable.
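A minimal example of what I mean, estimating π by random sampling: the more points you throw, the closer the answer gets, which is exactly the knob WFC doesn't have (its number of random choices is fixed by the map size).

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo: throw random points in the unit square, count hits inside the quarter circle."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))
    return 4 * hits / samples

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))   # estimate approaches 3.14159... as n grows
```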
Sorry, I should have been more specific - it's an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you're done - is there anything beyond this in WFC?
I'm not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov Chain Monte Carlo, this doesn't fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can't be changed.
I'm not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.
The term "fuzzy logic" has apparently been around since 1965, so we can't keep calling it that... not that all AI falls under it, but a lot of what gets marketed as AI would.
It really depends on how you define the term. In the tech world, AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could've called a machine that can solve arithmetic problems "AI", and now here we are. Feels like the goalpost gets moved further every time we get close, so I guess we'll never have "true" AI?
So, the point is, what is AI for you?
Adobe Illustrator
hahaha couldn't resist huh?
You are not. The word predictor hype is real
The only time I was agitated by it was with the George Carlin thing.
It pissed me off that it was done without permission. It annoyed me that "AI" also kinda looks like "AL" with a lowercase L and when next to another name, makes it read like AL CARLIN or AL GEORGE. And it divided me somewhat because I watched the damn special and it was mostly funny and did feel like Carlin's style (though it certainly didn't sound right and it had timing issues). So, like... It wasn't shit in and of itself, but the nature of what it is and the fact it was done without permission or consent is concerning. Shame on Will Sasso for that. He could have just done his own impersonation and wrote his own jokes in the style of Carlin; it would have been a far better display of respect and appreciation than having an AI do it.
I don't think he's a sick and disgusting person for this; even before it all blew up, it seemed more like a tribute to a comedian he adored. Just a poorly thought out way of doing one that may have some pretty hard consequences.
https://arstechnica.com/ai/2024/01/george-carlins-heirs-sue-comedy-podcast-over-ai-generated-impression/
Just further evidence Sasso could have done the impersonation himself and it would have been a fine tribute (and had better timing and delivery), but he used an AI to replicate the voice and mannerisms instead. Sure, I don't think he could have done a great job of impersonating how Carlin sounds, but the mannerisms and delivery would have been enough, and something he should be pretty good at considering his time on MadTV, where he did a lot of impersonation work (such as his Steven Seagal character).
I think the twitch Trump Biden debate chat is hilarious as well as the Jesus one.
We have to work out what intelligence is before we can develop AI. Sentient AI? Forget about it!
I think generally Sentience is considered a very low bar while Sapience better describes thinking on the level of a real person. I get them confused sometimes.
In the case of an LLM-type AI though, the bars can be swapped in a sense. LLMs are strange, because they can talk but not feel.
You can't argue that a series of tensor calculations are sentient (def. able to perceive or feel) - capable of experiencing life from the "inside". A dog is sentient by most definitions, it could be argued to have a "soul". When you look at a dog, the dog looks back at you. An LLM does not. It is not conscious, not "alive".
However an LLM does put on a fair appearance of being sapient (def. intelligent; able to think) - they contain large stores of knowledge and aside from humans are now the only other thing on the planet that can talk. You can have a discussion with one, you can tell it that it was wrong and it can debate or clarify using its internal knowledge. It can "reason" and anyone who has worked with one writing code can attest to this as they've seen their capability to work around restrictions.
It doesn't have to be sentient to be able to do this sort of thing, even though we used to think that was practically a prerequisite. Thus the philosophical confusion around them.
Even if this is simply a clever trick of a glorified autocomplete algorithm, this is something the dog cannot do despite its sentience. Thus an LLM with a decent number of parameters is "smarter" than a dog, and arguably more sapient.
No, not really. You're misunderstanding the words, and also vastly overestimating LLMs. LLMs such as the OpenAI™ models cannot reproduce dogs barking to the point of fooling humans or animals unless they're trained on dog barking data to the point of specialization. That's because they lack any general thinking capability, period.
Learning Algorithms require massive amounts of sample data to function, and pretty much never function outside of specific purposes such as predicting what word will come next in a sentence. I personally think that disqualifies them from sentience and sapience, but they could certainly pass a sentience written test.
Get the new Samsung blah blah with the new Galaxy AI!!! ENOUGH.
Bixby sad now
Yes, the term AI is used for marketing, though it didn't start with LLMs; a couple of years before, any ML algorithm was called AI, along with the trendy "data scientist" job title.
However, I do think LLMs are very useful, just try them for your daily tasks, you'll see. I'm pretty sure they will become as common as a web search in the future.
Also, how can you tell that the human brain is not mostly a very powerful LLM hosting machine?
I dislike it because it is usually used by the kind of people or media that live from buzzword to buzzword. IoT, Cloud, Big Data, Crypto, Web 3.0, AI, etc. I'm quite interested in deep learning and have done some research in the field as well. Personally, I don't think AI is necessarily a misnomer, the term has been used forever, even for simple stuff like a naive Bayes classifier, A*, or decision trees. It's just so unfortunate to see this insanely impressive technology being used as the newest marketing gimmick. Or used in unethical and irresponsible ways because of greed (looking at you, "Open"AI). A car doesn't need AI, a fridge doesn't need AI, most things don't need AI. And AI is certainly not at the level where it makes sense to yeet 30% of your employees either.
I don't hate AI or the awesome technology, I hate that it has become a buzzword and a tool for the lawless billionaires to do whatever they please.
Depends on how you define it. A thermostat or PID controller is artificial, and intelligent enough to hold a comfortable temperature.
Yeah, real AI doesn't exist and we are not even close. But marketing is powerful. We should be saying "language models".
I get agitated by the word AI when it's obviously not using machine learning, or when it's used to shove ChatGPT into something without any reason to use it over just using ChatGPT.
EXACTLY.
Voice-to-image or text-to-image is as much "AI" as Windows' voice-to-text feature. It's an accessibility feature.
It's an established term in the field since the 1950s.
Not at all, I'm pretty excited for AI and AGI. It's the future. Feels like the smartphone era all over again.
you are not alone, it annoys me to no end and I keep correcting & explaining to people who have no clue about how computers and LLMs work.
#yes
Mostly because every time I see the abbreviation I think it’s Al and not AI. And I briefly wonder if they mean Al Jolson or Al Capone.
Or my favorite, Weird Al
Hey me too. I gave up though.
I remember being upset about the exact same thing when 4G first launched.
It's really bugging me that it's a catch-all buzzword that lumps any art made on a computer into AI, when there's a very hard line between digital art physically drawn by a human and what defines AI. It really annoys me that the whole actors guild cannot seem to understand what VFX stands for and what AI is. VFX involves hundreds of humans with strong intention and artistic talent doing literal back-breaking work. The other is one wanky human with strong intention, speaking loudly in a room, making shitty graphics that pale in comparison. This still isn't 'AI'. This is an asshole with too much power who thinks they are as good as an artist.
a sketch someone makes in Photoshop is a human-generated image. That has nothing to do with AI, yet so many idiots sweep it into the same bin simply because the paintbrush, which is still physically wielded by a human, is made of 1s and 0s.
It also disturbs me that people don’t hold anyone accountable for fake ‘AI GENERATED’ news stories or deepfakes and just shrug their shoulders, calling it AI, like “oops, Skynet is taking over”. No. That’s a human. A shitty, horrible human, again, on a computer, given too much power. No machine has intention. Only humans do.
If a mob boss puts a hit out on someone, the mob boss goes to jail for just as much as the murderer does. Probably for even more, because of the intention. Meanwhile everyone pretends a computer itself is coming up with all this junk, as if no human with terrible intention is at the wheel.
We gotta go back to naming names.
The term AI has been around longer than LLMs, and refers to a wide variety of different algorithms and approaches to automatically extracting and working with information.
LLMs are an AI technique, just like Bayesian networks for causal inference are an AI technique.
The issue isn't that we don't have "real" AI, it's that most people are misusing a general technical term, and then being indignant that it doesn't exactly match a very specific subcategory (AGI, or artificial general intelligence).
You see the same thing with people calling cryptocurrency "crypto", even though that word is typically used among experts to refer to "cryptography", which is mostly not relevant to currency in the slightest.
This one's not on the tech people, it's on the people who keep misusing the words.
My coworker just gave me this rant the other day about AI.
intelligence != "thinking"
"AI" is the new "Innovate", every time someone uses "innovate" in 2024, they're just talking about how they're stripping our rights away from things we owned.
It's annoying because either all of it should be AI or none of it.
Most humans don't either. But I think you are conflating two different things: intelligence (the ability to reason) and consciousness (being able to do so on your own). I personally believe that both of those things spontaneously came into existence in our brains once they became complex enough, and that we are quantitatively not very far from creating networks complex enough ourselves. The last big breakthrough was the ability to create training data sets for AI with AI without making the models degenerate.
This has been a thing for a long time
Clippy was an assistant, Cortana was an intelligent assistant and Copilot is AI
None of these are accurate, it's always like a generation behind
Clippy just was, Cortana was an assistant, and Copilot is an intelligent assistant
The next one they make could actually be AI
@Rikj000
How do you know that?
wait for the next buzzword to come out, it'll pass.
used gpt3 once, but haven't had a use case for it since.
i'll use an """AI""" assistant when they are legitimately useful.
It's still good to start training one's AI prompt muscles and to learn what an LLM can and can't do.
Humans possess an esoteric ability to create new ideas out of nowhere, never before thought of. Humans are also capable of inspiration, which may appear similar to the way AIs remix old inputs into "new" outputs, but the rules of creativity aren't bound by any set parameters the way an LLM is. I'm going to risk making a comment that ages like milk and just spitball: true artificial intelligence that matches a human is impossible.
If I stuck you in a black box, removed every single one of your senses, and took away your ability to memorize things, I don't really think you'd generate new ideas either. Human creativity relies heavily on input from the outside world. LLMs are not human-like intelligence, but they do exhibit pretty amazing emergent behavior. LLMs are more sophisticated than you think. Human-like AI has to be possible unless there is something intrinsically different about the human brain that breaks our current understanding of the world. Barring a "soul", the human brain has to be nothing but calculations taking place in a chemical medium. Meaning that human-like AI, or even better, must be achievable.
I pretty much agree, but IMO it's not so much that LLMs are more sophisticated than people think, it's more that people are less sophisticated than they think. Homo sapiens have proven over and over again that we're biased toward seeing ourselves as the center of, and the most important part of, the universe. I know that ChatGPT isn't magic, but I bet I'm not either.
Of course we have “real” AI. We can literally be surprised while talking to these things.
People who claim it’s not general AI consistently, 100% of the time, fail to answer this question: what can a human mind do that these cannot?
In precise terms. If you say "a human mind can understand", then I need a precise technical definition of "understand". Because the people making the claim that "it's not general AI" are always trying to wave their own flag of technical expertise. So, in technical terms, what can a general AI do that an LLM cannot?
Go and tell your LLM to click a button, or log into your Amazon account, or send an email, or do literally anything that's an action. I'm waiting.
A four-year-old has more agency than today's "AI". LLMs are awesome at spitting out text, but they aren't true AI.
Edit: I should add, LLMs only work with input. If there's no input, there is no output. So whatever you put in there, it will just sit there forever doing nothing until you give it input again. It's much closer to a mathematical function than to any kind of intelligence that has its own motivation and can act on its own.
@Vlyn
@intensely_human
ChatGPT can explain to me what to do in a CLI to send an e-mail. Give it access to a CLI and an internet connection and it will be able to do it itself.
Exactly. Someone demonstrated an “AI that can turn on your lights” and then had a script checking for output like {turnOnLights} and translating that to API calls
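The glue in that demo is trivial, and the model never touches the lights itself. A minimal sketch of the idea in Python (the {turnOnLights} token is from the demo as described above; the turn_on_lights function is a made-up stand-in, not a real API):

```python
import re

# Hypothetical glue script: the LLM only emits text, and this ordinary
# wrapper scans that text for a magic token and performs the action itself.

def turn_on_lights() -> None:
    # placeholder for whatever smart-home API call the demo actually made
    print("lights on")

def handle_llm_output(text: str) -> None:
    if re.search(r"\{turnOnLights\}", text):
        turn_on_lights()

handle_llm_output("Sure, turning them on now {turnOnLights}")
```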
Which again is literally just text and nothing more.
No matter how sophisticated ChatGPT gets, it will never be able to send the email itself. Of course you could pipe ChatGPT's output into a CLI, then tell it to only write bash commands (or whatever you use) with every single detail included, and then it could possibly send an email (if you're lucky and it emits only valid commands and literally no other text).
But you can never just tell it: Send an email about x, here is my login and password, send it to whatever@email.com with the subject y.
Not going to work.
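Even that "pipe it into a shell" path is just ordinary plumbing around the text. A minimal sketch, assuming the model has been prompted to reply with nothing but a shell command (the command string and mail invocation below are made up for illustration, not real model output):

```python
import subprocess

# Pretend this string came back from ChatGPT after prompting it to answer
# with a bare shell command and nothing else (made up for illustration).
llm_output = 'echo "body of the mail" | mail -s "subject y" whatever@email.com'

def run_model_output(command: str, dry_run: bool = True) -> None:
    # The model never sends anything; this wrapper is what acts on the text.
    # Executing unvalidated model output is exactly the gamble described above.
    if dry_run:
        print(f"would execute: {command}")
    else:
        subprocess.run(command, shell=True, check=False)

run_model_output(llm_output)
```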
All it lacks is an API that allows it to send commands. This is not a limitation of its intelligence, if it "knows" when to put text in a bash codebox, it will know when to send an API call.
Ask your brain to click a button; it cannot either, all it does is send and receive electrical signals. Fortunately, it is surrounded by a body that reacts to those signals.
Go tell a Kalahari bushman to click a button, or log into your Amazon account, or send an email, or do literally anything you don’t place in front of him as an option.
Is your whole point just that it would be AGI if it weren’t for those darned shackles, but it’s not AGI because we give it restrictions on sending POST requests?
Besides the detail that even Kalahari bushmen have mobile phones now, primitive humans (or our ancestors) weren't stupid. You could take a human from 1000 years ago and, after they stop flipping out about computers and modern technology, teach them to click a button in seconds to minutes (depending on how complex you make the task).
General AI can take actions on its own (unprompted) and it can learn, basically modifying its own code. If anyone ever comes up with a real AI we'd go towards the Singularity in no time (as the only limit would be processing power and the AI could then invest time into improving the hardware it runs on).
There are no "shackles" on ChatGPT, it's literally an input-output machine. A really damn good one, but nothing more than that. It can't even send a POST request. Sure, you could sit a programmer down, parse the output, then fire a request whenever ChatGPT mentions certain keywords with a payload. Of course that works, but then what? You have a dumb chatbot firing random requests, and if you try to feed the results of those requests back in, they get jumbled up with the text input you made beforehand. Every single action you want an LLM to take, you'd have to manually program.
Oh you bastard. You actually tried to reframe my words into exactly the opposite of what I was saying.
I did not use a Kalahari Bushman as an example of a stupid person. I used a Kalahari Bushman as an example of a general intelligence as smart as you or I, who can’t press buttons or buy things on Amazon for reasons of access not capability.
I need to cool down before I read the rest of your comment. Not cool dude, trying to twist what I said into some kind of racist thing. Not cool.
This wasn't my intention at all, we are talking about capabilities here, not access.
You could give ChatGPT every resource in the world, all the processing power, every account credential (usernames, passwords), an unlimited fiber connection with 100 Gbit and zero restrictions on the language model.
It doesn't matter, it's straight up not built to do any actions or as AI. It's an input output machine, text in, text out, that's it.
It's just so damn complex at this point that the text output is really good, but there isn't more to it. Even the capability to "remember" your previous input isn't actually remembering; your next input just goes down a different pathway in the model (which has billions of parameters) to produce your new text output.
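For what it's worth, the usual way chat "memory" is wired up is just the wrapper re-sending the whole transcript as fresh input every turn, roughly like this (model() below is a stand-in for a real LLM call, not an actual API):

```python
# Sketch of stateless chat "memory": the model itself keeps nothing between
# calls; the wrapper concatenates the conversation and feeds it back in.

def model(prompt: str) -> str:
    # stand-in for a real LLM call; returns canned text for illustration
    return f"(reply based on {len(prompt)} characters of context)"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    reply = model("\n".join(history))  # the entire transcript goes in as plain input
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Hello"))
print(chat("Do you remember what I said?"))  # only "remembered" because it was re-sent
```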
No, it's just a buzzword; I just saw a joke today that AI means "absent Indian".
Not even driverless cars are actually driverless: https://www.jwz.org/blog/2024/01/driverless-cars-always-have-a-driver/
NFT
You are misunderstanding what AI means, probably due to its overuse in pop culture. What you are thinking of is a subcategory of AI. It goes: AI > Machine Learning > Artificial Life
Stop downvoting me, I'm right.
Title: Unpopular Opinion: The Term "AI" is Just a Marketing Buzzword!
Hey fellow Redditors, let's talk about the elephant in the room: AI. 🤖💬
I can't be the only one feeling a bit agitated by how the term "Artificial Intelligence" gets thrown around, right? Real AI seems like a distant dream, and what we have right now are these Large Language Models (LLMs). They're good at passing Turing tests, but let's be real – they're not thinking on their own.
Am I the only one who thinks "AI" is just a fancy label created by those rich, capitalistic individuals already knee-deep in LLM stocks? It feels like a slick way to boost investments and make us believe these machines are more intelligent than they really are. Thoughts? 🔍🧠💭
You're a fool if you think your own mind is any more than a large language model.