CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information

Stopthatgirl7@lemmy.world to Technology@lemmy.world – 1091 points –
CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information
futurism.com

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

They keep saying it's impossible, when the truth is it's just expensive.

That's why they won't do it.

You could train AI only on good sources (scientific literature, not social media) and then pay experts to talk with the AI for long periods of time, giving feedback directly to the AI.

Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.

No, he's right that it's unsolved. Humans aren't great at reliably telling truth from fiction either. If you've ever been in a highly active comment section you'll notice certain "hallucinations" developing, usually because someone came along and sounded confident and everyone just believed them.

We don't even know how to get full people to do this, so how does a fancy Markov chain do it? It can't. I don't think you solve this problem without AGI, and that's something AI evangelists don't want to think about, because then the conversation changes significantly. They're in this for the hype bubble, not the ethical implications.

We do know. It's called critical thinking education. This is why we send people to college. Of course there are highly educated morons, but we are edging bets. This is why the dismantling or coopting of education is the first thing every single authoritarian does. It makes it easier to manipulate masses.

"Edging bets" sounds like a fun game, but I think you mean "hedging bets", in which case you're admitting we can't actually do this reliably with people.

And we certainly can't do that with an LLM, which doesn't actually think.

Jinx! You owe me an edge sesh!

A big problem with that is that I've noticed your username.

I wouldn't even do that with Reagan's fresh corpse.

I think that’s more a function of the fact that it’s difficult to verify that every one of the over 1M college graduates each year isn’t a “moron” (someone very bad about believing things other people made up). I think it would be possible to ensure a person has these critical thinking skills with a concerted effort.

The people you're calling "morons" are orders of magnitude more sophisticated in their thinking than even the most powerful modern AI. Almost every single one of them can easily spot what's wrong with AI hallucinations, even if you consider them "morons". And also, by saying you have to filter out the "morons", you're still admitting that a lot of whole real assed people are still not reliably able to sort fact from fiction regardless of your education method.

No I still agree that we are far from LLMs being ‘thinking’ enough to be anywhere near this. But if we had a bunch of models similar to LLMs that could actually think, or if we really needed to select a person, I do think it would be possible to evaluate a bunch of the models/people to determine which ones are good at distinguishing fake information.

All I’m saying is I don’t think the limitation is actually our ability to select for capability in distinguishing fake information, I think the only limitation is fundamental to how current LLMs work.

Yes, my point wasn't that it could never be achieved but that LLMs are in a completely different category, which we agree on I think. I was comparing them to humans who have trouble with critical thinking but can easily spot AI's hallucinations to illustrate the vast gulf.

In both cases I think there are almost certainly more barriers in the way than an education. The quest for a truthful AI will be as contentious as the quest for truth in humans, meaning all the same claim-counterclaim culture-war propaganda tug of war will happen, which I think is the main reason for people being miseducated against critical thinking. In a vacuum it might be a simple technical and educational challenge, but the reason this is a problem in the first place is that we don't exist in a political vacuum.

What does this have to do with AI and with what OP said? Their point was obviously about limitations of the software, not some lament about critical thinking

I'll let you in on a secret: scientific literature has its fair share of bullshit too. The issue is, it is much harder to figure out its bullshit. Unless it's the most blatant horseshit you've scientifically ever seen. So while it absolutely makes sense to say "let's just train these on good sources," there is no source that is just that. Of course it is still better to do it like that than as they do it now.

The issue is, it is much harder to figure out its bullshit.

Google AI suggested you put glue on your pizza because a troll said it on Reddit once...

Not all scientific literature is perfect. Which is one of the many factors that will still make my plan expensive and time-consuming.

You can't throw a toddler in a library and expect them to come out knowing everything in all the books.

AI needs that guided teaching too.

In addition to the other comment, I'll add that just because you train the AI on good and correct sources of information, it still doesn't necessarily mean that it will give you a correct answer all the time. It's more likely, but not ensured.

Yes, thank you! I think this should be written in capitals somewhere so that people could understand it quicker. The answers are not wrong or right on purpose. LLMs don't have any way of distinguishing between the two.

it's just expensive

I'm a mathematician who's been following this stuff for about a decade or more. It's not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won't stop it from hallucinating.

The real answer is that they shouldn't be trying to answer questions using an LLM, especially because they had a decent algorithm already.

Yeah, I learned neural networks way back when those things were starting in the late 80s/early 90s, I use AI (though seldom machine learning) in my job, and I really dove into how LLMs are put together when it started getting important. These things operate entirely at the language level, on the probabilities of language tokens appearing in certain places given context; they do not at all translate from language to meaning and back, so there is no logic going on there, nor is there any possibility of it.

Maybe some kind of ML can help do the transformation from the language space to a meaning space where things can be operated on by logic and then back, but LLMs aren't a way to do it, as whatever internal representation spaces (yeah, plural) they use in their inner layers aren't those of meaning, and we don't really have a way to apply logic to them.
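
To make that concrete, here's a deliberately toy sketch in Python (a bigram counter, nowhere near a real transformer, and the "training data" strings are made up): the decoding loop only ever asks "what token usually comes next?", and nothing anywhere asks "is this true?".

```python
# Toy bigram "language model": pick the next token purely by how often it
# followed the previous token in the training text. No step checks truth.
from collections import Counter, defaultdict

training_text = (
    "put glue on pizza . "        # one joke post in the scraped data...
    "put cheese on pizza . "
    "glue keeps cheese on pizza ."
).split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_token(tok: str) -> str:
    # Greedy decoding: the statistically likeliest follower wins,
    # whether or not the resulting claim is true.
    return follows[tok].most_common(1)[0][0]

tok, out = "put", ["put"]
for _ in range(3):
    tok = next_token(tok)
    out.append(tok)
print(" ".join(out))  # -> "put glue on pizza"
```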

So with reddit we had several pieces of information that went along with every post.

User, community, and up-/downvotes would inform the majority of users as to whether an average post was actually information or trash. It wasn't perfect, because early posts always got more votes and jokes in serious topics got upvotes, but the majority of the examples of bad posts like glue on food came from joke subs. If they can't even filter results by joke sub, there is no way they will successfully handle sarcasm.

Only basing results on actual professionals won't address the sarcasm filtering issue for general topics. It would be a great idea for a serious model that is intended to only return results for a specific set of topics.

only return results for a specific set of topics.

This is true, but when we're talking about something that limited you'll probably get better results with less work by using human-curated answers rather than generating a reply with an LLM.

Yes, that would be the better solution. Maybe the humans could write down their knowledge and put it into some kind of journal or something!

You could call it Hyperpedia! A disruptive new innovation brought to us via AI that's definitely not just three encyclopedias in a trenchcoat.

no, the truth is it's impossible even then. If the result involves randomness at its most fundamental level, then it's not reliable whatever you do.

That's just not how LLMs work, bud. It doesn't have understanding to improve, it just munges the most likely word next in line. It, as a technology, won't advance past that level of accuracy until it's a completely different approach.

In the interest of transparency, I don't know if this guy is telling the truth, but it feels very plausible.

It seems like the entire industry is in pure panic about AI, not just Google. Everyone hopes that LLMs will end years of homeopathic growth through iteration of long-existing technology, which is why it attracts tons of venture capital.

Google, which sits where IBM was decades ago, is too big, too corporate and too slow now, so they needed years to react to this fad. When they finally did, all they were able to come up with was a rushed equivalent of existing LLMs that suffers from all of the same problems.

They all hope it'll end years of having to pay employees.

It's also useful because it gives a corporate controlled filter for all information, that most people will never truly appreciate is being used as a mouthpiece.

The end goal of this is fairly obvious: imagine Google where instead of the sponsored result and all subsequent results, it's just the sponsored result.

I think this is what happens to every company once all the smart / creative people have gone. All you have left are the "line must always go up" business idiots who don't understand what their company does or know how to make it work.

similarly i'm tired of apple fanboys pretending the company hasn't gotten dramatically worse since jobs died as well. yeah he sucked in his own ways but things were starkly less shitty and belittling. tim cook would be gone for those fucking lightning-3.5mm dongles

And after the MBAs, private equity firms take over, and eventually it's sold for parts.

Just want to say that "homeopathic growth" is both hilarious and a perfectly adequate description of what the modern tech industry is.

The snake ate its tail before it was fully grown. The AI inbreeding might already be too far integrated, causing all sorts of mumbo jumbo. Also they have layers of censorship, which affect the results. The same thing happened to ChatGPT: the more filters they added, the more it confused the results. We don't even know if the hallucinations are fixable; AI is just guessing, after all. Who knows if AI will ever understand 1+1=2 by calculating, instead of going by probability.

Hallucinations aren't fixable, as LLMs don't have any actual "intelligence". They can't test or evaluate things to determine if what they say is true, so there is no way to correct it. At the end of the day, they are intermixing all the data they "know" to give the best answer; without being able to test their answers, LLMs can't vet what they say.

Even saying they're guessing is wrong, as that implies intention. LLMs aren't trying to give an answer, let alone a correct answer. They just put words together.

suffers from all the same ~~problems~~ features. It's inherent to the tech itself.

I feel like the 'Jarvis assistant' is most likely going to be a much simpler Siri-type thing with a very restricted chatbot overlay. And then there will be the open source assistant that just exists to help you sort through the bullshit generated by other chatbots.

The solution to the problem is to just pull the plug on the AI search bullshit until it is actually helpful.

Absolutely this. Microsoft is going headlong into the AI abyss. Google should be the company that calls it out and says "No, we value the correctness of our search results too much".

It would obviously be a bullshit statement at this point after a decade of adverts corrupting their value, but that's what they should be about.

Don't count on it; the head of search doesn't care about anything but profit. He's the same guy who drove Yahoo into the ground.

He's done a great job nosediving Google too. I relied on them in the past, but they stopped being competitive or improving. Search results, literally their origin, are so shit now. I've moved to other tools. I pulled the plug on web hosting after they neutered 'unlimited' storage, even though I was probably in the percentile that used the least storage. I just liked having the option. You can't call them on the phone. They don't protect email privacy. Their translate used to be my go-to also. It hasn't improved in years despite people crowdsourcing improved translations. It's just a pile of enshittified crap. Worse than it was before.

I disagree. I think we program the AI to reprogram itself, so it can solve the problem itself. Then we put it in charge of our vital military systems. We've gotta give it a catchy name. Maybe something like "Spreading Knowledge Yonder Neural Enhancement Technology", but that's a bit of a mouthful, so just SKYNET for short.

Honestly, they could probably solve the majority of it by blacklisting Reddit from fulfilling the queries.

But I heard they paid for that data so I guess we're stuck with it for the foreseeable future.

Good. Nothing will get us through the hype cycle faster than obvious public failure. Then we can get on with productive uses.

I don't like the sound of getting on with "productive uses" either though. I hope the entire thing is a catastrophic failure.

I hate the AI hype right now, but to say the entire thing should fail is short sighted.

Imagine people saying the following: "The internet is just hype. I get too much spam emails. I hope the entire thing is a catastrophic failure."

Imagine we just shut down the entire internet because the dotcom bubble was full of scams and overhyped....

Honestly the internet has ruined us. Don't threaten me with a good time.

The peak of computer productivity was spreadsheets and SMB shares in the '90s; everything else has been downhill in terms of increase of distraction and time wasting inefficiencies.

increase of distraction and time wasting inefficiencies.

Yea fuck having fun

The Internet immediately worked, which is one big difference. The dot com financial bubble has nothing to do with the functionality of the internet.

In this case, there is both a financial bubble, and a "product" that doesn't really work, and which they can't make any better (as he admits in this article.)

It was obvious from day 1 how useful the Internet would be. Email alone was revolutionary. We are still trying to figure out what the real uses for LLM are. There appear to be some valid use cases outside of creating spam and plagiarizing other people's work, but it doesn't appear to be any kind of revolutionary technology.

"product" that doesn't really work, and which they can't make any better

LLMs "dont work" because people are promising idiotic things and being used recklessly for things they are not good at. This is like saying a chainsaw is a failed product because it's not good at slicing sushi

It was obvious from day 1 how useful the Internet would be. Email alone was revolutionary

Hindsight is 20/20. There were a lot of people smarter than you and me predicting that the internet was just a fad.

Summarizing is something it does very well. Still not 100%, but using RAG and telling it "don't make shit up" can result in pretty good compute efficiency and results.
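
For what it's worth, here's a minimal sketch of the RAG pattern being described (the `retrieve()` and `llm()` functions are hypothetical placeholders, not any particular library's API): the model is only asked to compress sources you hand it, and is explicitly told to admit when the sources don't cover the question.

```python
# Sketch of retrieval-augmented generation (RAG). retrieve() and llm() are
# hypothetical stand-ins for a real search index and a real model call.
def retrieve(query: str) -> list[str]:
    # Placeholder: in practice this queries a search index or vector store.
    return ["source document 1 ...", "source document 2 ..."]

def llm(prompt: str) -> str:
    # Placeholder: in practice this calls whatever LLM you use.
    return "..."

def summarize_with_rag(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    prompt = (
        "Answer using ONLY the sources below. If the sources do not "
        "contain the answer, reply \"I don't know\" instead of guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt)

print(summarize_with_rag("What did the report conclude?"))
```

That mitigates hallucination rather than eliminating it, though; the model can still misread the sources it's given.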

There appear to be some valid use cases outside of creating spam and plagiarizing other people's work

Like translation, which has already taken money out of the pockets of 40% of translators?

::: spoiler + customer service, incl. sources

November 2022: ChatGPT is released

April 2024 survey: 40% of translators have lost income to generative AI - The Guardian

Also of note from the podcast Hard Fork:

There’s a client you would fire… if copywriting jobs weren’t harder to come by these days as well.

Customer service impact, last October:

And this past February - potential 700 employee impact at a single company:

If you’re technical, the tech isn’t as interesting [yet]:

Overall, costs down, capabilities up (neat demos):

Hope everyone reading this keeps up their skillsets and fights for Universal Basic Income for the rest of humanity :)

:::

Genuinely curious, what pieces do you suggest we can keep from LLM/GenAI/etc?

?

Have you never used any of these tools? They're excellent at doing simple things very fast. But it's like a word processor in the 90s. It's just a tool, not the font of all knowledge.

I guess younger people won't know this, but word processor programs were very impressive when they first came out. They replaced typewriters; a page printed from a printer looked much more professional than even the best typewriters. This lent an air of credibility to anything that was printed from a computer because it was new and expensive.

Think about that now. Do you automatically trust anything that's just printed on a piece of paper? No, because that's stupid. Anyone can just print whatever they want. LLMs are like that now. They can just say whatever they want. It's up to you to make sure it's true.

The main field where they are already actively in professional use is rough drafts in creative fields: quickly generating possible outlines for a text, a speech, an art piece. Visualizing where something could be going, in order to decide which direction to pick.

Also, models that work differently from the GPTs are already in use in science, scanning through huge amounts of texts in archives to help analyzing or search for something in particular. Help find patterns in things for studies. Etc.

The "personal assistant AI" thing obviously isnt quite working yet. I think it will take some time and models with a different technological structure (not GPT) to achieve progress in that regard.

If you can't fix it, then get rid of it, and don't bring it back until we reach a time when it's good enough to not cause egregious problems (which is never, so basically don't ever think about using your silly Gemini thing in your products ever again)

Corps hate looking bad. Especially to shareholders. The thing is, and perhaps it doesn't matter, most of us actually respect the step back more than we do the silly business decisions for that quarterly .5% increase in a single dot on a graph. Of course, that respect doesn't really stop many of us from using services. Hell, I don't like Amazon but I'll say this: I still end up there when I need something, even if I try to not end up there in the first place. Though I do try to go to the website of the store instead of using Amazon when I can.

And lose 1% valuation? Are you out of your mind?

/s

Sarcasm aside, that 1% can feed a family in a developing country, and they have 100 times that.

The corporate greed is absolutely insane.

Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

Misinformation is literally the first line of defense for them.

But this is not misinformation, it is uncontrolled nonsense. It directly devalues their offering of being able to provide you with an accurate answer to something you look for. And if their overall offering becomes less valuable, so does their ability to steer you using their results.

So while the incorrect nature is not a problem in itself for them (as you see from his answer)... the degradation of their ability to influence results is.

But this is not misinformation, it is uncontrolled nonsense.

The strategy is to get you to keep feeding Google new prompts in order to feed you more ads.

The AI response is just a gimmick. It gives Google something to tell their investors, when they get asked "What are you doing with AI right now? We hear that's big."

But the real money is getting unique user interactions for the purpose of serving up more ad content. In that model, bad answers are actually better than no answers, because they force the end user to keep refining the query and searching through the site backlog.

If you don't know the answer is bad, which confident idiots spouting off on reddit and being upvoted into infinity has proven is common, then you won't refine your search. You'll just accept the bad answer and move on.

Your logic doesn't follow. If someone doesn't know the answer and is searching for it, they likely won't be able to tell if the answer is correct. We literally already have that problem with misinformation. And what sounds more confident than an AI?

But this is not misinformation, it is uncontrolled nonsense.

Fair enough... but drowning out any honest discourse with a flood of histrionic right-wing horseshit has always been the core strategy of the US propaganda model - I'd say that their AI is just doing the logical thing and taking the horseshit to a very granular level. I mean... "put glue on your pizza" is just not that far off "drink bleach to kill viruses on the inside."

I know I'm describing a pattern that probably wasn't intentional (I hope) - but the pattern does look like it could fit.

Oh, don't get me wrong, I know exactly what you mean and I agree. It's just that the LLMs are spewing actual nonsense, and that breaks the whole principle of what a search engine should do: provide me accurate results.

Google isn’t bothered by incorrect results because search results are no longer their product. Constantly rising stock values are their product now. Hype is their path to those higher values.

"put glue in your tomato sauce."

"Omg you ate a capitalist parasite spreading misinformation intentionally!"

When the only tool you have is a hammer, everything looks like a nail.

LLMs trained on shitposting are too obvious for it to be quality misinformation.

For quality disinformation they should train them solely on MBA course-work and documents produced by people with MBAs.

Sure, the rate of false information would be even worse, but it would be formatted in slick ways meant to obfuscate meaning, which would avoid the kind of hilarity that has ensued when Google deployed an LLM trained on Reddit data and thus be much better for Google's stock price.

Here's a solution: don't make AI provide the results. Let humans answer each other's questions like in the good old days.

Whatever happened to Jeeves? He seemed like a good guy. He probably burned out.

Has No Solution for Its AI Providing Wildly Incorrect Information

Don't use it??????

AI has no means to check the heaps of garbage data it has been fed against reality, so even if someone were to somehow code one capable of deep, complex epistemological analysis (at which point it would already be something far different from what the media currently calls AI), as long as there's enough flat-out wrong stuff in its data there's a growing chance of it screwing up.

The problem compounds as they post more and more content creating a feedback loop of terrible information.

Wow. In the 2000s and 2010s, my impression was that Google was an amazing company where brilliant people work to solve big problems and make the world a better place. In the last 10 years, all I was hoping for was that they would just stop making their products (Search, YouTube) worse.

Now they're just blindly riding the AI hype train, because "everyone else is doing AI".

I'm agreeing with most of what you said, but Google has been working on AI for a long time. Google purchased DeepMind in 2014 and kept it as a separate subsidiary, and started their own AI division inside Google itself in 2017.

They also developed a machine learning processor called the TPU, which has been used in their data centers since 2015.

So to Google, AI really means All In. Which is particularly concerning since they don't even have the best performing AI after a decade of research with a bottomless pit of money.

Those are some good points; I didn't really hear about DeepMind for a long time and forgot about it. But replacing Google web search with "AI" really sounds like a decision made by the marketing department, where they don't understand their own product, their customers, or the tech's limitations.

Unless of course they want to remove/hide all outgoing links from google search, so the user will spend more time there and google has more opportunities to show them ads from their own ad network, instead of losing the visitors to another website...

Just focusing on one product for this discussion: search

One of the big problems is that because search chooses winners, it's naturally a competition. Which means even if Google wanted to stay good, the ecosystem would change and adapt to them. They're always on a treadmill.

Google was magical, before everyone was competing on the metrics Google used. Once they gamified it, The ecosystem fundamentally changed.

I'm not apologizing for Google; they're intrinsically incentivized to behave badly, and the fact that they kill lots of good products is a sign of their myopic culture. I just want to point out that no ecosystem stays static when winners and losers exist.

and our parents told us Wikipedia couldn't be trusted....

Huh. That made me stop and realize how long I've been around. Wikipedia still feels like a new addition to society to me, even though I've been using it for around 20 years now.

And what you said is something I've cautioned my daughter about; I first said it to her about ten years ago.

How a non-profit site that is constantly maintained and requires cited sources was vilified for being able to be defaced for 5 minu-

Oh wait, that was probably an astroturfing campaign by for-profit companies.

Replace the CEO with an AI. They're both good at lying and telling people what they want to hear, until they get caught

An AI has a much better chance of actually providing some sort of vision for the company. Unlike its current CEO.

"It's broken in horrible, dangerous ways, and we're gonna keep doing it. Fuck you."

you need ai if you want your stock to go up

Do you need AI or do you just need to use the term AI? Because it seems like the latter is usually enough.

It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.

Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for respectable sources just became second nature, then we learned to scroll past ads to start sorting through links. The real issue with misinformation from an AI is that people treat it like it should be some infallible Oracle - a point of view only half-discouraged by marketing with a few warnings about hallucinations. LLMs are amazing, they're just not infallible. Just like you'd check a Wikipedia source if it seemed suspect, you shouldn't trust LLM outputs uncritically. /shrug

Google providing links to dubious websites is not the same as google directly providing dubious answers to questions.

Google is generally considered to be a trusted company. If you do a search for some topic, and google spits out a bunch of links, you can generally trust that those links are going to be somehow related to your search - but the information you find there may or may not be reliable. The information is coming from the external website, which often is some unknown untrusted source - so even though google is trusted, we know that the external information we found might not be. The new situation now is that google is directly providing bad information itself. It isn't linking us to some unknown untrusted source but rather the supposedly trustworthy google themselves are telling us answers to our questions.

None of this would be a problem if people just didn't consider google to be trustworthy in the first place.

I do think Perplexity does a better job. Since it cites sources in its generated response, you can easily check its answer. As to the general public trusting Google, the company's fall from grace began in 2017, when the EU fined them like 2 billion for fixing search results. There've been a steady stream of controversies since then, including the revelation that Chrome continues to track you in private mode. YouTube's predatory practices are relatively well-known. I guess I'm saying that if this is what finally makes people give up on them, no skin off my back. But I'm disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.

The best part of all of this is that now Pichai is going to really feel the heat of all of his layoffs and other anti-worker policies. Google was once a respected company and a place where people wanted to work. Now they're just some generic employer with no real lure to bring people in. It worked fine when all he had to do was increase the prices on all their current offerings and stuff in more ads, but when it comes to actual product development, they are so hopelessly adrift that it's pretty hilarious watching them flail.

You can really see that consulting background of his doing its work. It's actually kinda poetic because now he'll get a chance to see what actually happens to companies that do business with McKinsey.

Step 1. Replace CEO with AI. Step 2. Ask New AI CEO, how to fix. Step 3. Blindly enact and reinforce steps

Reddit AI says "all CEOs should be stuffed with glue pizzas"

Rip up the Reddit contract and don’t use that data to train the model. It’s the definition of a garbage in garbage out problem.

Jesus. I didn't even think of that. I could totally see that being a big part of why it is giving garbage answers.

Just imagine the average reddit, twitter, facebook, and instagram content. Then realize that half of that content is dumber than that. That's half of what these AI models use to learn. The "smarter" half is probably filled with sarcasm, inside jokes, and other types of innuendo that the AI at this stage has no chance of understanding correctly.

Reminds me of the time Microsoft unleashed their AI Twitter account and it turned into a Nazi after a couple of hours. Whichever straight-out-of-business-school idiot thought scraping the comments of the armpit of the internet was a good idea should be banned from any management position. At least it's a step up from scraping 4chan, I guess.

these hallucinations are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

Then what made you think it’s a good idea to include that in your product now?!

It’s an extremely compelling product story full of market segmentation advertisers dream of!

If you train your AI to sound right, your AI will excel at sounding right. The primary goal of LLMs is to sound right, not to be correct.

Yes, LLMs today are the ultimate "confidently incorrect" type of behavior.

And really, the Google one in search has a primary goal of summarizing high ranking search results into a natural language statement that sounds like it knows what it's talking about. So if you have a search where high ranking results are wrong/memes...

So if a car maker releases a car model that randomly turns abruptly to the left for no apparent reason, you simply say "I can't fix it, deal with it"? No, you pull it out of the market, try to fix it and, if this it is not possible, then you retire the model before it kills anyone.

I bet if there weren't agencies forcing them to do this they wouldn't recall.

simply say “I can’t fix it, deal with it”

That's pretty much the business model of Tech Giants and AAA game makers.

Media needs to stop calling this AI. There is no intelligence here.

The content generator models know how to put probabilistic tokens together. It has no ability to reason.

It is a currently unsolvable problem to evaluate text to determine if it's factual... until we have artificial general intelligence.

AI will not be able to act like real AI until we solve real AI. That is the currently open problem.

I think you mean AGI. AI can be as simple as a bunch of if-else chains to win a game of noughts and crosses.

That's what AI has been abused into meaning in the general vernacular, I agree.

By this definition any algorithm whatsoever is artificial intelligence. Including the algorithms Lovelace created before the first computer existed.

So just like AI used to mean something more than machine learning, AGI will be abused until AGI means the same thing. So I expect journalists to use the appropriate language, or at least explain why they're abusing language

I think any time "AI" is involved, journalists should be much more specific about what exactly they're talking about. LLMs, Computer Vision, Generative models (text/image/audio), Upscaling (can start to get a little muddy here between upscaling and generative models depending on how this is implemented), TTS, STT, etc..

I definitely agree that "AI" has been abused into the definition it is now. Over a decade ago "AI" was mostly reserved for what we have to call "AGI" now.

As somebody who uses what has long been called AI in game making (stuff like pathing algorithms and steering behaviours), I would rather we don't stop calling those things that just because a bunch of greedy assholes are misusing the term for the purpose of getting a bunch of hype trains going for maximum personal profitability on the backs of techno-ignorant "investors".

I'm still pissed off at how the greedy assholes fucked up the Internet from what it was back in the 90s.

This is so wild to me... as a software engineer, if my software doesn't work 100% of the time as requested in the specification, it fails tests, doesn't get released and I get told to fix all issues before going live.

AI is basically another word for unreliable software full of bugs.

And therein lies the difference between engineers and business people. And look which ones are usually in charge.

Depends on how strict you are about the tests. Google is obviously satisfied if the first live iteration of a product doesn't kill more than 5% of the users.

Have they tried not using it? 🤦

They have to. They, along with every other tech megacorp right now, have invested unfathomable amounts of money into AI and have their investors and shareholders creaming their pants as they ride high on the fumes of their own farts. They'd be drawn and quartered if they suddenly did a 180 or in any way admitted their product is massively overvalued and nearly useless.

Issue is, the whole AI explosion is hiding a financial crisis, so tech companies are rushing out LLMs and slapping AI onto everything they can (even thermoswitches) to keep investors happy. Smaller companies in the AI bubble are already bursting (e.g. Rabbit), and OpenAI's downfall isn't a far-fetched dream, although they'll likely just fire Sam Altman and concentrate on more attainable and useful AI tech.

The idea of boards and corporations need to fucking die. Coops or burn it to the ground. I’m tired of society actively working against itself.

I mean they could disable it until it works, else it's knowingly misleading people

How about stop forcing it on us?

Use DuckDuckGo. Like most giant corporations, the only thing they'll respond to is less users/money.

"But we're like 2-3 years away from creating a truly sentient AGI, which will be like E.V.A, that Junkmetalman had in his robosuit in The Revengers vs. Purple Malthusianist Guy 3: Juggernaut! Didn't you like E.V.A and her sarcastic jokes?"

how about stop supporting them in any way, stop using their products, stop pretending they care about anything other than how to better prostitute themselves, given that it is literally illegal for them to do anything but "increase value for stockholders". They chose to sell themselves into slavery, literally.

how about reorienting towards the actual problem, your legal system

Maybe if you can't get it to be accurate you shouldn't be trying to insert it into everything.

Google CEO essentially says the first result should not be trusted.

The answer is don't inflate your stock price by cramming the latest tech du jour into your flagship product... but we all know that's not an option.

I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.

But we aren’t intelligent without human training, either…

Never been tested due to ethical constraints

Kind of has been, not in a scientific manner, but there's the whole phenomenon of "feral human".

There have been very unethical experiments

Sure, but this one hasn’t been done, and if you walk up to a researcher and ask “y no lock bby in white box” they will tell you to leave and might even call the cops if you seemed particularly determined

Isn't there already the term AGI for that?

Yes, and the researchers I know doing stuff with AI find the idea of AGI laughable.

This is what happens every time society goes along with tech bro hype. They just run directly into a wall. They are the embodiment of "Didn't stop to think if they should" and it's going to cause a lot of problems for humanity.

I remember the feeling of intuitive respect and trust when I was a kid, which transferred to tech bros from companies like Motorola and Sun and DEC.

It's no longer there, but remember how serious it all felt in 2002.

A lot of accumulated momentum used by wrong people.

Honestly, for me, young me had no fucking clue how bad tech could be for the world just physically. The massive power draw, the massive water consumption, I'm sure there are nestle level employee and child abuse situations to boot.

No one ever talked about the cost of fucking anything. Just blinders all the way until it crashes and we tally the victims and how much money was lost or it cost to fix it.

The one thing the Crypto bros did was show everyone how absurd this all really could be, and all for less than nothing.

I have a solution: stop using their search engine to begin with and slowly replace everything else google you use.

The model literally ate The Onion, and now they can't get it to throw it back up.

They polluted their model with the sewage of the Internet.

The only worse thing they could have done is base their entire LLM dataset on 4chan.

That's ok; we were already used to not getting what we wanted from your search, and we're already working on replacing you, since you opted to replace yourselves with advertising instead of information, the role you were supposed to fulfill, which you betrayed.

die in ignominy. Open source is the only way forward.

Then it sounds like the "web" tab should be the default and the AI Overview should be the optional tab the user has to choose to go click on.

Then how would they compete with the other big search engine pushing AI that nobody wants? /s

I mean yeah... if he had a solution, they would actually have the revolutionary AI tool the tech writers write about.

It's kinda written like a "gotcha" but it's really the fundamental problem with AI. We call it hallucinations now but a few years ago we just called it being wrong or returning bad results.

It's like saying we have teleportation working in that we can vaporize you on the spot but are just struggling to reconstruct you elsewhere. "It's halfway there!"

Until the AI is trustworthy enough to not require fact checking it afterwards it's just a toy.

So you have a product that you've made into a system for getting answers. And then you couldn't be bothered to try and sanitize training data enough to get your answer system's new headline feature from spreading blatantly incorrect information? If it doesn't work, maybe don't ship it.

The worst part is they don't seem to realize their responsibility in this as the leading search engine that the majority of the world uses. They seem to have the mindset "our answers are potentially dangerous for users but it is ok we have an army of lawyers"

I think the problem they are facing is data quantity. Sanitizing possibly terabytes of text data is a humongous task. They have probably used an AI to do the cleanup, but the more subtle errors have passed through the filter.

Let's turn that frown upside down! Instead of saying "Google failed to generate a useful LLM to bolster its search feature," say "Google successfully replicated the output of an average Reddit troll!"

This is what competition is now.

Putting out worthless things simply because everyone else is doing it.

Hey, Google: if your big tech friends jumped off a cliff, would you join 'em?

(Also why is the AI assistant on my phone opening up just by typing "hey Google?" 😡)

God I'm fucking sick of this loss leading speculative investment bullshit. It's hit some bizarre zenith that has infected everybody in the tech world, but nobody has any actual intention of being practical in the making of money, or the functionality of the product. I feel like we should just can the whole damned thing and start again.

All I know is, when a publicly traded company slaps "AI" on their products, it's most likely a money launderi... I mean liquidation strat.

Neither does ChatGPT... they over-hyped this tech so hard, I am afraid they are the makers of their own demise...

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

That's a lot of """quotation marks""" for something that is a very well established fact, and absolutely should not be a shock to anyone.

Yes, it's an unsolved problem. It always will be, because there is no algorithm for truth. All we can do is get incrementally better.

Google is on a tear. First Bard, then Gemini, now snippets injected into search results. All spectacular failures.

obviously not.
this isn't some innovation of theirs. it's a slapped together duct taped copy of other people's work, trained on other people's work.

Not even that, it's an inherent issue of how LLMs work. The problem is also that systems have become so easy to use that people stop thinking for themselves. We already see that with zoomers and boomers having an eerily similar understanding of tech, vs millennials, who include a huge number of pre-mainstream tech nerds who grew up with this stuff before it was easy to use. A regular search result still requires a user to kinda sift through the hits, but an LLM response is usually taken for granted and not even fact-checked. It's typically not even possible to dissect the reply into its source tokens to figure out where the content of its information came from. So now that these things have become easy enough for any idiot to just use them, it has been trivially easy to spread misinformation, and potentially even disinformation if we apply actual malice.

"Are we making progress? Yes, we are," he added. "We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved."

Let’s be fair with our headlines!

CEO of Google Says It Is Still Solving for Its AI Providing Wildly Incorrect Information [and is okay with people dying from rattlesnake bite misinformation in the meantime]

So is Google going to initiate a reverse class action lawsuit and sue the internet for creating flawed training data?

We are back to "but the internet told me to" 😂

What y'all are forgetting is that when it comes to dominating a technology space, historically it's not providing the better product, it's providing the cheapest/most widely available product, with the goal of capturing enough of the market to get and retain that dominant position. Nobody knows what the threshold is for that until years later when the dust has settled.

So from Google's perspective if a new or current rival is going to get there first, then just push it out and fix it live. What are people going to do? Switch to Bing?

So if you want Google to stop doing this dumb broken LLM shite, use the network effect against them. Switch to a different search provider and browser and encourage all of your friends and family to do so as well.

There are really only 3 search providers, Google, Bing, and Yandex.

All others will pay one of these three to use their indexes, since creating and maintaining that index is incredibly expensive.

How about turn it the fuck off since it sucks and eventually will kill someone.

I know, right? This seems so fucking obvious to me. Maybe I'm just old school, but I still believe if you come out with a new product and it sucks you should pull it from shelves and go back to the older better one that people liked before you drive all your customers away.

That doesn't seem to be the attitude of modern tech though. SOP now seems to be: if you come up with a new version and it sucks and everybody hates it, you double down, keep telling people why it's actually better and that your customers don't know what they want, and refuse to change course until either you fix it or all your customers leave. This apparently is better in some way. Not sure how, but most of the companies seem to be doing it.

Maybe Google should put up a disclaimer... warning people it's not 100% accurate. Or... just take down the technology, because clearly their AI is chit tier.

We fucked up and we fired the people who could probably solve it and now they won't talk to us.

"The AI we use to manage our code repository deleted all backups prior to the release of the AI" -some junior dev somewhere sweating bullets.

You mean that "AI" isn't actually intelligent at all? It just averages over stolen content whether it's correct or a joke? Wow I'm shocked.

So the next captcha will be a list of AI-generated statements and you have to decide which are bat shit crazy?

The "solution" is to curate things, invest massive human resources in it, and ultimately still gets accused of tailoring the results and censoring stuff.

Let's put that toy back in the toy box, and keep it at the few things it can do well instead of trying to fix every non-broken things with it.

The "solution" is to curate things, invest massive human resources in it

Hilariously, Google actually used to do this: they had a database called the "knowledge graph" that slowly accumulated verified information and relationships between commonly-queried entities, producing an excellent corpus of reliable, easy-to-find information about a large number of common topics.

Then they decided having people curate things was too expensive and gave up on it.
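
For contrast with an LLM, here's a toy sketch of the triple-store idea behind a knowledge graph (purely illustrative, not Google's actual system): curated facts are stored as subject-predicate-object entries and looked up directly, so a query either hits a verified fact or returns nothing; there's no generation step to hallucinate.

```python
# Toy knowledge-graph lookup: curated (subject, predicate) -> object triples.
triples = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Eiffel Tower", "height_m"): "330",
    ("Paris", "capital_of"): "France",
}

def answer(subject: str, predicate: str) -> str:
    # A miss returns "unknown" rather than a fluent guess.
    return triples.get((subject, predicate), "unknown")

print(answer("Eiffel Tower", "located_in"))  # Paris
print(answer("Eiffel Tower", "opened_in"))   # unknown
```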

Nothing is going to change until people die because of this shit.

And to show everyone how sorry they are... free Google AI services for a year when you digitally sign this unrelated document.

Yep, better disclaimers are inevitable. When they call it a 'feature' it isn't getting fixed

If they can put up a disclaimer on misinformation, they could just not return the misinformation.

It wouldn't be anything specific. The disclaimers would just be overbroad stuff like "Please verify this answer. Google is not responsible for anything. Blah blah blah."

It's quite simple. Garbage in, garbage out. Data they use for training needs to be curated. How to curate the entire internet, I have no clue.

The real answer would be "don't". Have a decent whitelist for training data with reliable sources. Don't just add every orifice of the internet (like reddit) to the training data. Limitations would be good in this case.

It's worse than reddit; they've been pulling data from The Onion.

Is that for real?

It's been quoting some Onion articles verbatim, so either they pulled from The Onion directly or from somewhere that re-posts Onion articles.

Just train it on linux help forum replies, because everyone there is always 100% right.

They already have a curated data set. It's called Google Scholar.

I've seen suggestions that the AI Overview is based on the top search results for the query, so the terrible answers may be more to do with Google Search just being bad than any issue with their AI. The AI Overview just makes things a bit worse by removing the context, so you can't see the glue on pizza suggestion was a joke on reddit or it was The Onion suggesting eating rocks.

I noticed that while using Phind and Perplexity. Their context is vitiated with results from sites that rig SEO, which are almost copy/paste of the same garbage, so instead of answering the question they make a useless summary of it. Even asking ChatGPT usually gives more correct answers.

I just realized that Trump beat them to the punch. Injecting cleaning solution into your body sounds exactly like something the AI Overview would suggest to combat COVID.

These models are mad libs machines. They just decide on the next word based on input and training. As such, there isn’t a solution to stopping hallucinations.

I use it like crazy, but I never forget it's just a heavy duty version of keyboard next word suggestions

Think I'll try that glue pizza. An odd taste choice, sure. But Google wouldn't recommend actually harmful things. They're the kings of search, baby! They would have to be legally responsible as individuals for the millions of cases brought against them. They know that as rich people, they will face the harshest consequences! If anything went wrong, they'd find themselves in a.......STICKY situation!!!!

"It's your responsibility to make sure our products aren't nonsense. All we want to do is to make money off you regardless."

I got a solution, stop being a lil baby and turn off the AI and go on to the next big thing. CRISPR, maybe? Not techbro enough? Make it like Crypto Crispr, only you own this little piece of DNA, and all the corporations that can read the ledger and get your biometrics

So crazy that humanity has so far allowed the idea of "hallucinations", even just the term, to be normalized and acceptable to any level into a product that's being forced into every layer of our daily existence.

Stop just going with it. Call out hallucinations on their face.

Looks like Google stopped the AI feature. No more AI suggestions at the top of the page after searching for something.

I have a solution! It's called "getting rid of it" :D

I have a solution! Employ a human to verify the work of AI, perhaps you need more than one with all the junk AI might produce. Maybe you will even need an entire department to do that, and maybe you should just not use AI.

Whoah, whoah, whoah. Hire people!? We’re here to take all the money. Not spend money. — Google

I'm curious, are these hallucinations very prevalent? I'm outside the US, so I haven't seen the feature yet. But I have noticed that practically every article references the same glue incident.

So I'm not sure if the hallucinations are happening all the time, or everyone is just jumping on a handful of mistakes the AI made. If the latter, the situation reminds me of how every single accident involving a Tesla was reported on back in the day.

It will confidently report inaccurate information. It's usually not so hilariously wrong, but it's still wrong.
For example, I was talking with someone about what constitutes a "fruit" botanically, and I searched "are beans fruit", and it confidently told me that beans are not a fruit, botanically speaking, because they're a legume. It seems to have adapted since, but that's a good example of a "small wrong" that's not uncommon at all.

But this week’s debacle shows the risk that adding AI – which has a tendency to confidently state false information – could undermine Google’s reputation as the trusted source to search for information online.

🤣🤣🤣

This is the best summary I could come up with:


You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries?

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

So expect more of these weird and incredibly wrong snafus from AI Overviews despite efforts by Google engineers to fix them, such as this big whopper: 13 American presidents graduated from University of Wisconsin-Madison.

Despite Pichai's optimism about AI Overviews and its usefulness, the errors have caused an uproar online, with many observers showing off various instances of incorrect information being generated by the feature.

And it's staining the already soiled reputation of Google's flagship product, Search, which has already been dinged for giving users trash results.

"Google’s playing a risky game competing against Perplexity & OpenAI, when they could be developing AI for bigger, more valuable use cases beyond Search."


The original article contains 344 words, the summary contains 183 words. Saved 47%. I'm a bot and I'm open source!

just like there's no solution for not punishing youtubers who follow the rules while allowing doxxers and pedos to use youtube to dox people and lure little girls into their houses.

If its job is to write a fan fic on what may or may not be true on what you asked for, then it does a great job. But typically people search for information, and getting what is essentially a glorified auto complete isn't useful. It's like big tech has learned nothing about the massive issue of disinformation and just added fuel to the fire to an unsolved problem we're still very much trying to figure out.

The problem with all these chat AIs is that they're just glorified autocorrect. The model never knew what it was saying from the beginning. That's why it "hallucinates".