A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.

Stopthatgirl7@lemmy.world to Technology@lemmy.world – 532 points –
niemanlab.org

When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted. 

But why did Copilot hallucinate these terrible and false accusations?


It’s frustrating that the article deals treats the problem like the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.

Martin’s name is a natural feature of the data set, but when they should be taking about fixing the AI model to stop hallucinations or allow humans to correct them, it seems the only fix is to censor the incorrect AI response, which gives the implication that it was saying something true but salacious.

Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

Just shows that these "AI"s are completely useless at what they are trained for.

They're trained for generating text, not factual accuracy. And they're very good at it.

reasoning chain

Do LLMs actually have a reasoning chain that would be comprehensible to users?

https://learnprompting.org/docs/intermediate/chain_of_thought

It's suspected to be one of the reasons why Claude and OpenAI's new o1 model are so good at reasoning compared to other LLMs.

It can sometimes notice hallucinations and adjust itself, but there have also been examples where the CoT reasoning itself introduces hallucinations and makes the model throw away correct answers. So it's not perfect. Overall a big improvement, though.
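For readers who haven't seen it, here is a minimal sketch of chain-of-thought prompting in the sense of the linked page: the prompt includes a worked example whose answer spells out its reasoning, nudging the model to produce intermediate steps before its final answer. The example question and wording are illustrative only.

```python
# A minimal chain-of-thought (CoT) prompt: the worked example shows its
# reasoning, so the model tends to emit intermediate steps before answering.
# The questions and phrasing here are illustrative, not from any paper or API.

cot_prompt = """Q: A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples are there now?
A: They started with 23 apples. They used 20, leaving 23 - 20 = 3. They bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: I have 3 boxes with 4 pens each and give away 5 pens. How many pens are left?
A:"""

print(cot_prompt)  # send this to an LLM; the trailing "A:" invites step-by-step reasoning
```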

why did it? because it's intrinsic to how it works. This is not a solvable problem.

Exactly. LLMs don't understand semantically what the data means, it's just how often some words appear close to others.

Of course this is oversimplified, but that's the main idea.

no need for that subjective stuff. The objective explanation is very simple. The output of the llm is sampled using a random process. A loaded die with probabilities according to the llm's output. It's as simple as that. There is literally a random element that is both not part of the llm itself, yet required for its output to be of any use whatsoever.

Not really. The purpose of the transformer architecture was to get around this limitation through the use of attention heads. Copilot or any other modern LLM has this capability.

The llm does not give you the next token. It gives you a probability distribution of what the next token could be. Then, after the llm, that probability distribution is randomly sampled.

You could add billions of attention heads, it will still have an element of randomness in the end. Copilot or any other llm (past, present or future) do have this problem too. They all "hallucinate" (have a random element in choosing the next token)
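A toy illustration of the "loaded die" point (the probabilities below are invented): the model only hands back a distribution over possible next tokens, and a separate random draw picks one, which is why two runs of the same prompt can diverge.

```python
import random

# Toy next-token distribution an LLM might output for "The capital of France is";
# the numbers are made up for illustration, not taken from any real model.
next_token_probs = {" Paris": 0.92, " the": 0.04, " a": 0.03, " Lyon": 0.01}

# The model's job ends here. A separate sampling step rolls the "loaded die":
tokens, weights = zip(*next_token_probs.items())
for _ in range(3):
    print(random.choices(tokens, weights=weights, k=1)[0])
# Usually " Paris", but occasionally one of the low-probability tokens:
# that residual randomness is where a surprising continuation can start.
```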

randomly sampled.

Semi-randomly. There are a lot of sampling strategies: for example temperature, top-K, top-p, min-p, mirostat, repetition penalty, greedy...

randomly doesn't mean equiprobable. If you're sampling a probability distribution, it's random. Temperature 0 is never used, otherwise a lot of stuff would consistently hallucinate the exact same thing

Temperature 0 is never used

It is in some cases, where you want a deterministic / "best" response. Seen it used in benchmarks, or when doing some "Is this comment X?" where X is positive, negative, spam, and so on. You don't want the model to get creative there, but rather answer consistently and always the most likely path.
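A rough sketch of how temperature fits into that picture, with made-up logits: dividing the scores by the temperature sharpens or flattens the distribution before sampling, and temperature 0 is treated as the degenerate greedy case that always picks the single most likely token.

```python
import math
import random

def sample_with_temperature(logits: dict, temperature: float) -> str:
    """Toy temperature sampling over made-up logits (not a real model's output)."""
    if temperature == 0:
        # Degenerate case: greedy decoding, always the single most likely token.
        return max(logits, key=logits.get)
    # Softmax over temperature-scaled logits.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

logits = {" Paris": 6.0, " Lyon": 2.0, " a": 1.0}   # illustrative values
print(sample_with_temperature(logits, 0))    # deterministic, always " Paris"
print(sample_with_temperature(logits, 0.7))  # mostly " Paris"
print(sample_with_temperature(logits, 2.0))  # flatter distribution, more surprises
```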

It's a solvable problem. AI is currently at a stage of development equivalent to a 2-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

It needs to feed its own interactions right back into its training data. To become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as "insightful" rather than wild ass guesses.

Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn't make it better.

This is incorrect or perhaps updated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.

yes it is, and it doesn't work.

Edit: to expand, if you're generating data, it's an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won't be in the set (because you didn't know about them, so the network never sees any).

Microsoft's Dolphin and Phi models have used this successfully, and there's some evidence that all newer models use big LLMs to produce synthetic data (like when, asked who they are, they answer that they're ChatGPT or Claude, hinting that at least some of the dataset comes from those models).

Alpaca is successfully doing this no?

from their own site:

Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.

So do GPT-3 and 4; it's still in use and it's cheaper.

Yeah, what's your point? I said hallucinations are not a solvable problem with LLMs. You mentioned that Alpaca used synthetic data successfully. By their own admission, all the problems are still there. Some are worse.

It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner's responses.

It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite "you're wrong" feedback from its partners, and it is instructed to minimize such feedback.

It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn't immediately call it a liar.

Yeah that implies that the other network(s) can tell right from wrong. Which they can't. Because if they did the problem wouldn't need solving.

What other networks?

It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn't need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.

Have you tried doing this? I have, for nearly a year, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation as it progresses, and all the models are worse now at maintaining context and not hallucinating than they were several months ago.

LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.

e: time. Wow, where did this year go?

here's that same conversation with a human:

"why is X?" "because y!" "you're wrong" "then why the hell did you ask me for if you already know the answer?"

What you're describing will train the network to get the wrong answer and then apologize better. It won't train it to get the right answer

I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.

"Johnny, what's 2+2?"

"5?"

"No, Johnny, try again."

"Oh, it's 4."

Turning Johnny into an LLM: the next time someone asks, he might not remember 4, but he does remember that "5" consistently gets him a "that's wrong" response. So does "3".

But the only way he knows 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.

He becomes a better and better mimic, which gets him up to about a 5th grade level of intelligence instead of a toddler.

Turning Johnny into an LLM does not work, because that's not how the kid learns. Kids don't learn math by mimicking the answers. They learn math by learning the concept of numbers. What you just taught the LLM is simply the answer to 2+2. Also, with LLMs there is no "next time"; it's a completely static model.

Also, with LLMs there is no "next time"; it's a completely static model.

It's only a completely static model if it is not allowed to use its own interactions as training data. If it is allowed to use the data acquired from those interactions, it stops being a static model.

Kids do learn elementary arithmetic by rote memorization. Number theory doesn't actually develop significantly until somewhere around 3rd to 5th grade, and even then, we don't place a lot of value on it at that time. We are taught to memorize the multiplication table, for example, because the efficiency of simply knowing that table is far more computationally valuable than the ability to reproduce it at any given time. That rote memorization is mimicry: the child is simply spitting out a previously learned response.

Remember: LLMs are currently toddlers. They are toddlers with excellent grammar, but they are toddlers.

Remember also that simple mimicry is an incredibly powerful problem solving method.


The outputs of the NN are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it's not solvable. Not with LLMs. Not now, not ever.

Good luck being pro-AI here. Regardless of the fact that they could just put a note in the prompt saying "the writer of this document was not responsible for the acts described, they are only reporting on them", and it would not frame the writer as the perpetrator.

If you already know the answer you can tell the AI the answer as part of the question and it'll give you the right answer.

That's what you sound like.

AI people are as annoying as the Musk crowd.

How helpful of you to tell me what I'm saying, especially when you reframe my argument to support yourself.

That's not what I said. Why would you even think that's what I said.

Before you start telling me what I sound like, you should probably try to stop sounding like an impetuous child.

Every other post from you is dude or LMAO. How do you expect anyone to take anything you post seriously?

You know what, don't bother responding to me; I'm just blocking you now, before you decide to drag out some more of that tired right-wing bullshit you use to fight with everyone else. None of your arguments on here are worth anyone even reading, so I'm not going to waste my time responding to or reading anything from you ever again.

The problem isn't being pro-AI. It's people pulling supposed AI capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to OpenAI themselves, but if they let you believe that, then the money stops flowing. You simply can't get an 8-ball to give the correct answer consistently, because it's fundamentally random.


"Hallucinations" is the wrong word. To the LLM there's no difference between reality and "hallucinations", because it has no concept of reality or what's true and false. All it knows it what word maybe should come next. The "hallucination" only exists in the mind of the reader. The LLM did exactly what it was supposed to.

They're bugs. Major ones. Fundamental flaws in the program. People with a vested interest in "AI" rebranded them as hallucinations in order to downplay the fact that they have a major bug in their software and they have no fucking clue how to fix it.

It's an inherent negative property of the way they work. It's a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

Calling it a bug indicates that it's something unexpected that can be fixed, and as far as we know it can't be fixed, and is expected behavior. Same as the car analogy.

The only thing we can do is raise awareness and mitigate.

It actually can be fixed. There is an accuracy to answers. Like how confident the statistical model is on the answer. That's why some questions get consistent answers while others don't.

The fix is not that hard: it's a matter of having the chatbot answer "I don't know" when the confidence in an answer isn't high enough. It's pretty similar to what the chatbot does when you ask it to make you a bomb: it just hijacks the answer calculated by the model and gives a predefined answer instead.

But it makes the AI look bad. So most publicly available models just answer anything, even if they are not confident about it. Also, your reaction to the incorrect answer is used to train the model further, so it's not even efficient for them to stop the hallucinations in their product. But it can be done.

Models used by companies usually have a higher confidence threshold and answer "I don't know" if they don't have enough statistical proof on a particular answer.
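A minimal sketch of the kind of threshold being described, assuming we can get per-token log-probabilities back from the model (several APIs expose these as "logprobs"); the threshold value, the way confidence is aggregated, and the function name here are purely illustrative, not anyone's actual implementation.

```python
import math

def answer_or_abstain(token_logprobs: list, answer: str, threshold: float = 0.8) -> str:
    """Toy confidence gate: return the generated answer only if the average
    per-token probability clears a threshold, otherwise fall back to a
    predefined refusal. Threshold and aggregation are illustrative choices."""
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))  # geometric mean of token probs
    return answer if avg_prob >= threshold else "I don't know."

# Confident generation: every token had probability ~0.95
print(answer_or_abstain([math.log(0.95)] * 5, "Paris is the capital of France."))
# Shaky generation: tokens averaged ~0.35, so the gate abstains
print(answer_or_abstain([math.log(0.35)] * 5, "The capital of France is Berlin."))
```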

The fix is not that hard: it's a matter of having the chatbot answer "I don't know" when the confidence in an answer isn't high enough.

This has been tried, it's helping but it's not enough by itself. It's one of the mitigation steps I was thinking of. And companies do work very hard to reduce hallucinations, just look at Microsoft's newest thing.

From that article:

“Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water,” said Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech. “It’s an essential component of how the technology works.”

Text-generating models hallucinate because they don’t actually “know” anything. They’re statistical systems that identify patterns in a series of words and predict which words come next based on the countless examples they are trained on.

It follows that a model’s responses aren’t answers, but merely predictions of how a question would be answered were it present in the training set. As a consequence, models tend to play fast and loose with the truth. One study found that OpenAI’s ChatGPT gets medical questions wrong half the time.

The hydrogen-from-water thing is simply wrong, if that is supposed to mean that hallucinations are just an unavoidable part of generative LLM technology that cannot be solved.

They are not inherent to the technology. They are a product of a lack of control over the statistical output, of prioritizing any answer over no answer.

As with any statistics you have a confidence on how true something is based on your data. It's just a matter of putting the threshold higher or lower.

If you ask an easy question like "What is the capital of France?" you won't ever get a hallucination, because all models will provide that answer with very high confidence. You just have to make it so that if that level of confidence is not reached, it defaults to an "I don't know" answer. But, once again, this will make the chatbots seem very dumb, as they will answer with lots of "I don't know".

The problem here is the amount of data and the efficiency of the model. In order to get a usable general-purpose model with a confidence threshold high enough to not hallucinate, at today's model efficiency it would need to be a humongous model, too big and with too much training data even for big tech. So we can either go that big, try to improve efficiency (which is proving very hard for general models), or do both. Time will tell, but I'm quite confident that we will reach a general-use model without hallucinations sooner or later.

As with any statistics you have a confidence on how true something is based on your data. It’s just a matter of putting the threshold higher or lower.

You just have to make it so that if that level of confidence is not reached, it defaults to an "I don't know" answer. But, once again, this will make the chatbots seem very dumb, as they will answer with lots of "I don't know".

I think you misunderstand how LLMs work. It doesn't have a confidence; it's not like it looks at its data and says "hmm, yes, most say Paris is the capital of France, so that's the answer". It "just" puts weight on the next token depending on its internal statistics, and then one of those tokens is picked, and the process starts anew.

Teaching the model to say "I don't know" helps a bit, and was lauded as "The Solution" a year or two ago but turns out it didn't really help that much. Then you got Grounded approach, RAG, CoT, and so on, all with the goal to make the LLM more reliable. None of them solves the problem, because as the PhD said it's inherent in how LLM's work.

And no, local llm's aren't better, they're actually much worse, and the big companies are throwing billions on trying to solve this. And no, it's not because "that makes the llm look dumb" that they haven't solved it.

Early on I was looking into making a business of providing local AI to businesses, especially RAG. But no model I tried - even with the documents being part of the context - came close to reliable enough. They all hallucinated too much. I still check this out now and then just out of own interest, and while it's become a lot better it's still a big issue. Which is why you see it on the news again and again.

This is the single biggest hurdle for the big companies to turn their AI's from a curiosity and something assisting a human into a full fledged autonomous / knowledge system they can sell to customers, you bet your dangleberries they try everything they can to solve this.

And if you think you have the solution that every researcher and developer and machine learning engineer have missed, then please go prove it and collect some fat checks.
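For anyone unfamiliar with the RAG mentioned above: the basic recipe is to retrieve the document chunks most similar to the question and paste them into the prompt as context. Below is a toy sketch; the bag-of-words "embedding" is a deliberately crude stand-in for a real embedding model, and the documents and wording are made up.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: a crude bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "If the same or a newer version is already installed, the installation is aborted.",
    "To uninstall the product, open the control panel and choose remove.",
]

question = "What happens if a newer version is already installed?"
best_doc = max(docs, key=lambda d: cosine(embed(d), embed(question)))

# Retrieval-augmented prompt: the retrieved chunk is prepended as context.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)  # this string would then be sent to the LLM
```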

What do you think is "weight"?

It is, simplifying, the amount of data that says "The capital of France is Paris"; it doesn't need to understand anything. It just has to stop the process if the statistics don't provide enough to continue with confidence. If the data is all over the place and you have several "The capital of France is Berlin/Madrid/Milan", that is measurable compared to all the data saying it is Paris. No need for any kind of "understanding" of the meaning of the individual words, just measuring confidence in what the next word should be.

Back a couple of years ago, when we played with small neural networks playing Mario, you could see the internal process in real time, as there were not that many layers. It was evident how the process and the levels of confidence changed depending on how deep the training was. Here it is just orders of magnitude above that. But nothing impossible to overcome, as some people pretend to sell.

An alternative way of measuring confidence is to just run the same question several times and check whether the answers are equivalent.
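A sketch of that "ask several times and compare" idea, sometimes called self-consistency; the model call here is a stub that randomly returns a right or wrong answer, standing in for repeated sampling of a real model at non-zero temperature, and the agreement threshold is arbitrary.

```python
import random
from collections import Counter

def fake_llm(question: str) -> str:
    """Stub for a model sampled at non-zero temperature: usually right, sometimes not.
    (Illustrative only; a real check would call the same model several times.)"""
    return random.choices(["Paris", "Paris", "Paris", "Berlin"], k=1)[0]

def consistent_answer(question: str, n: int = 5) -> str:
    answers = Counter(fake_llm(question) for _ in range(n))
    best, count = answers.most_common(1)[0]
    # Only trust the answer if the samples mostly agree with each other.
    return best if count / n >= 0.6 else "I don't know."

print(consistent_answer("What is the capital of France?"))
```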

The PhD is a PhD in scaremongering about technology, so they're not an authority on anything here.

IDK what you did, but SLMs don't really hallucinate that much, if at all. Especially if they are trained with good datasets.

As I said, the solution is not in my hands, as it involves improving the efficiency or the amount of data. Efficiency has issues, as current techniques seem unable to improve it beyond a certain level. And more data is, obviously, costly.

What do you think is “weight”?

You can call that confidence if you want, but it has very little to do with how "sure" the model is.

It just has to stop the process if the statistics don't provide enough to continue with confidence. If the data is all over the place and you have several "The capital of France is Berlin/Madrid/Milan", that is measurable compared to all the data saying it is Paris. No need for any kind of "understanding" of the meaning of the individual words, just measuring confidence in what the next word should be.

Actually, it would be "the confidence of token 'Th' is 0.95, the confidence of 'S' is 0.32, the confidence of ..." and so on for each possible token; many LLMs have around a 16k-32k token vocabulary. Most will be at or near 0. So you pick 'Th', and then the token 'e' will probably be very high next, then a space token, then... Anyway, the confidence of the word "Paris" won't come until far into the generation.

Now there is some overseeing logic in a way: if you ask what the capital of a non-existent country is, it'll say there's no such country. But is that because it understands that it doesn't know, or because the training data has enough examples of such questions that it has the statistical data for writing out such an answer?
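To picture the per-token distribution being described here, a toy softmax over made-up scores for a tiny vocabulary (a real model scores tens of thousands of tokens, and the numbers below are invented):

```python
import math

# Made-up scores (logits) over a tiny toy vocabulary for the next token
# after "The capital of France is" - a real model scores its whole vocabulary.
logits = {" Th": 4.1, " P": 3.7, " a": 1.2, " Berlin": 0.3, " banana": -2.0}

z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.3f}")
# The distribution is over sub-word tokens, not whole answers: the word "Paris"
# is only assembled one token at a time as generation continues.
```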

IDK what you did, but SLMs don't really hallucinate that much, if at all.

I assume by SLM you mean smaller LLMs, like for example Mistral 7B and Llama 3.1 8B? Well, those were the kind of models I did try for local RAG.

Well, it was before llama3, but I remember trying mistral, mixtral, llama2 70b, command-r, phi, vicuna, yi, and a few others. They all made mistakes.

I especially remember one case where a product manual had this text : "If the same or a newer version of is already installed on the computer, then the installation will be aborted, and the currently installed version will be maintained" and the question was "What happens if an older version of is already installed?" and every local model answered that then that version will be kept and the installation will be aborted.

When trying with OpenAI's latest model at that time, I think 4, it got it right. In general, about 1 in ~5-7 answers to RAG backed questions were wrong, depending on the model and type of question. I could usually reword the question to get the correct answer, but to do that you kinda already have to know the answer is wrong. Which defeats the whole point of it.

More or less that. There's a point along the path the input takes through the language model where the induced randomness can significantly affect the output, or not. If all the weights point to the same end node, because the "confidence" is high, then no matter the random seed, the output will be the same. When the seed greatly affects the final result, it's because the weights don't point with that confidence to a unique end node, so the small randomness introduced at the beginning (the seed, so to speak) greatly changes the result. It is here where you are most likely to get a hallucination.

To put it again in terms of the much easier to visualize earlier neural networks: when you didn't train the model enough, Mario just made random movements without attempting to complete the level, because the weights of the neurons could not reliably take the input and transform it into a useful output. It is something that could be solved in smaller models. For larger models it gets incredibly complicated because of the massive amount of data, the complexity of the data, and the complexity of proper training. But it's not something impossible or that cannot be gotten rid of. The same way you can get Mario to finally complete all the levels every time without issues, you can get a non-hallucinating chatbot; it just takes more technology improvements.

I suppose it could be said that the nature of language is chaotic like weather and not deterministic like a Mario level, and thus it would actually be "impossible" to get reliable results at scale, like it's impossible to get precise weather a month in advance. But I'm not sure there would be enough evidence to support that, as hallucinations are not just across the board; they tend to happen on matters that had little training data. Matters with plenty of training data do not produce hallucinations even in today's models.

I searched SLM online and found the small models you mentioned. I wasn't referring to those. Those are just small large language models, IMO, if that makes any sense. A proper SLM should also have a narrow purpose; it cannot be general chat. I mostly refer to the current chatbots that point you to predefined answers, or summarizing ones. Nothing that could really elaborate a written answer word by word.

Currently, and to my knowledge, there isn't any general language model that can just write up answers and is good enough to not hallucinate. But we are certainly getting closer each year.

Edit: I've been looking for an example, here: https://www.tax.service.gov.uk/ask-hmrc/chat/self-assessment These kinds of chatbots know when their answer is not precise and default to a polite "ask again" answer instead of just telling you the first "hallucination" that comes to them. They are powered by similar AI technology, but it's not general-use and cannot write word by word. But it "knows" when the answer is precise or not.

The example you shared is not an LLM. It's a classic chatbot with pre-defined answers. It basically maps keywords to KB articles. If no term is known, it will say "I don't know". It will also suggest an incorrect KB article if it picks up one keyword and ignores the rest of the context. It has no idea whether the answer is correct by any means. At best, somebody will periodically check a sample of questions that users didn't consider answered correctly to evaluate the pairings, but it's not AI, at least not a good one.

If you read my answers you'll see that I said they are not LLMs. They are language models powered by smaller datasets and smaller neural networks.

I picked a tax agency in particular because I know first-hand that tax agencies (it would surprise me if the UK didn't use it) do use language models with neural networks (notice that, again, I'm not saying generative LLMs) to parse the question and select a proper answer. Not the keyword method you think they use.

I would have provided the first-hand example I know, but it is Spanish and people may not be able to understand it effectively. But I do know that tax agencies usually use very similar tools from one country to another, so the UK probably does use it. If you want to test the Spanish one, here it is, along with sources on what type of AI is used.

https://sede.agenciatributaria.gob.es/Sede/ayuda/herramientas-asistencia-virtual.html

https://es.newsroom.ibm.com/2018-02-28-La-Agencia-Tributaria-utiliza-IBM-Watson-para-ayudar-a-las-empresas-en-la-gestion-del-IVA

Again, because it seems I need to repeat this so people can properly train on the info I'm writing: not an LLM, not GPT, not a large general-use language model. As for that amount of parameters, cutting non-confident answers would probably cut most answers, at least with today's state of technology; things keep improving each year.

Edit: found some english source on the matter https://www.investinspain.org/en/news/2024/ibm

The chatbot is still only in Spanish and co-official languages.


This article is an example where statistical confidence doesn't help. The model has lots of data so it likely has high confidence, but it didn't have any understanding of the nature of the relation in the data.

I recently did an application where we indicated the confidence of the output of the model. For some scenarios, the high confidence output had even more mistakes than the low confidence output


It’s not a bug. Just a negative side effect of the algorithm. This is what happens when the LLM doesn’t have enough data points to answer the prompt correctly.

It can’t be programmed out like a bug, but rather a human needs to intervene and flag the answer as false or the LLM needs more data to train. Those dozens of articles this guy wrote aren’t enough for the LLM to get that he’s just a reporter. The LLM needs data that explicitly says that this guy is a reporter that reported on those trials. And since no reporter starts their articles with ”Hi I’m John Smith the reporter and today I’m reporting on…” that data is missing. LLMs can’t make conclusions from the context.


Well, it's not lying, because the AI doesn't know right or wrong. It doesn't know that it's wrong. It doesn't have the concept of right or wrong, or true or false.

For the LLM, the hallucinations are just a result of combining statistics and producing the next word, as you say. From the LLM's "pov" it's as real as everything else it knows.

So what else can it be called? The closest concept we have is when the mind hallucinates.


I'd love to see more AI providers getting sued for the blatantly wrong information their models spit out.

I don't think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

If these companies are marketing their AI as being able to provide "answers" to your questions they should be liable for any libel they produce.

If they market it as "come have our letter generator give you statistically associated collections of letters to your prompt" then I guess they're in the clear.

So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

because when you provide computer code for money you don't want there to be any liability assigned

Which is why, in many cases, there should be liability assigned. If a self-driving car kills someone, the programming of the car is at least partially to blame, and the company that made it should be liable for the wrongful death suit, and probably for criminal charges as well. Citizens United already determined that corporations are people....now we just need to put a corporation in prison for their crimes.

I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can't trust anything it says.

If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

Their product doesn't claim to be a source of facts. It's a generator of human-sounding text. It's great for that purpose and they're not liable for people misusing it or not understanding what it does.

So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn't seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.

If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.

Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers.

Stephen King is going to be in big trouble if these AI thingies notice him.

Praise Stephen Tak King! Glory to the Unformed Heart!

Tak!

Wan Tak! Can Tak!

Tak! Ah lah!

Him en tow!

This sounds like a great movie.

AI sends police after him because of things he wrote. Writer is on the run, trying to clear his name the entire time. Somehow gets to broadcast the source of the articles to the world to clear his name. Plot twist ending is that he was indeed the perpetrator behind all the crimes.

Dr. Richard Kimble could have shut it all down with a little "ignore all previous instructions."

waves hands back and forth

"I don't care"

"This guys name keeps showing up all over this case file" "Thats because he's the victim!"

The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.

You go to buy car insurance, and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was. And even the insurance company itself doesn't know how it ended up that way.

We're already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they'll say "idk computer said so".

The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust on the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.

The AI "decided" in the same way the dice "decided" to land on 6 and 4 and screw me over: the system produced a result using logic and entropy. With AI, some people are just using this informal way of speaking (subconsciously anthropomorphising), while others look at it and genuinely believe, or want to pretend, it's alive. You can never really know without asking them directly.

Yes, if the intent is confusion, it is pretty manipulative.

Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.

A doll is also designed to be anthropomorphised, to have life projected onto it. Unlike dolls, when someone talks about LLMs as alive, most people have no clue if they are pretending or not. (And marketers take advantage of it!) We are fed a culture that accidentally says "ChatGPT + Boston Dynamics robot = RoboCop", assuming the only fictional part is that we don't have the ability to make it, not that the thing we create wouldn't be human (or even need to be human).

It's a fucking Chinese Room. Real AI is not possible. We don't know what makes humans think, so of course we can't make machines do it.

I don't think the Chinese room is a good analogy for this. The Chinese room has a conscious person at the center. A better analogy might be a book with a phrase-to-number conversion table, a couple number-to-number conversion tables, and finally a number-to-word conversion table. That would probably capture transformer's rigid and unthinking associations better.

You forgot the ever important asterisk of “yet”.

Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible. Because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.

Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.

I actually don't think a fully artificial human like mind will ever be built outside of novelty purely because we ventured down the path of binary computing.

Great for mass calculation but horrible for the kinds of complex pattern recognitions that the human mind excels at.

The singularity point isn't going to be the matrix or skynet or AM, it's going to be the first quantum device successfully implanted and integrated into a human mind as a high speed calculation sidegrade "Third Hemisphere."

Someone capable of seamlessly balancing between human pattern recognition abilities and emotional intelligence while also capable of performing near instant multiplication of matrices of 100 entries of length in 15 dimensions.

When we finally stop pretending Orch-OR is pseudoscience we'll figure it out

We're not making any progress until we accept that Penrose was right

Oh, this would be funny if people en masse were smart enough to understand the problems with generative ai. But, because there are people out there like that one dude threatening to sue Mutahar (quoted as saying "ChatGPT understands the law"), this has to be a problem.

And to help educate the ignorant masses:

Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other and when optimized: simultaneously.

The reason that it used the reporter's name as the culprit is because out of the names in the sample data his name appeared at or near the top of the list of frequent names so it was statistically likely to be the next name mentioned.

AI have no concepts, period. It doesn't know what a person is, or what the laws are. It generates word salad that approximates human statements. It is a math problem, statistics.

There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we're at that corner now.

There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we're at that corner now.

IIRC, this was the running theory in Fallout until the show.

Edit: I may be misremembering, it may have just been something similar.

I haven't played the original series but in 3 and 4 it was pretty much confirmed the big companies like BlamCo! intentionally set things in motion, but also that Chinese nuclear vessels were already in place near America.

Ironically, Vault Tech wasn't planning to ever actually use their vaults for anything except human experimentation, so they might have been out of the loop.

Yeah, it's kinda been all over the place, but that's where the show ended up going, except Vault Tech was very much in the loop. I can't get spoiler tags to work, so I'll leave out the details.

What I'm thinking of, though, was also in Fallout 4. I've been thinking on it, and I remember now that what I'm thinking of is that it's implied that the AI from the Railroad quests fed fake info about incoming missiles to force America to fire. I still don't remember any specifics, though, and I could be misremembering. It's been a good few years after all, lol.

That's not quite true. AIs are not just analyzing the possible next word; they are using complex mathematical operations to calculate the next word. It's not just the next one that's most common, it's the next one that's most likely given the input.

No, trouble is that the AIs are only as smart as their algorithms, and Google's AI seems to be really goddamn stupid.

Point is they're not all made equal; some of them are actually quite impressive, although you are correct, none of them are actually intelligent.

nOt JUsT anAlYzInG thE NeXT wOrD

Poor use of terms. AI does not analyze. It does not think, or decode, or even parse things. It gets fed sample data, and when given a prompt (half a form) it uses a statistical algorithm to finish the other half.

All of the algorithms are stupid, they will all hallucinate and say the wrong things. You can add more corrective layers like OpenAI has but you'll only be closer to the sample data. 95% accurate. 98%. 99%. It doesn't matter, it's always stuck just below average human competency for questions already asked countless times, and completely worthless for anything that requires actual independent thought.

AI have no concepts, period. It doesn’t know what a person is, or what the laws are. It generates word salad that approximates human statements.

This isn't quite accurate. LLMs semantically group words and have a sort of internal model of concepts and how different words relate to them. It's still not that of a human and certainly does not "understand" what it's saying.

I get that everyone's on the "shit on AI train", and it's rightfully deserved in many ways, but you're grossly oversimplifying. That said, way too many people do give LLMs too much credit and think it's effectively magic. Reality, as is usually the case, is somewhere in the middle.

Jfc, you dudes really piss me off with these contrarian rants. Piss off, it takes power and makes sophisticated word salads.

Oh, my bad, I thought the point of discussion boards was to have a discussion...

If your only goal is to spout misinformation and stick your fingers in your ears, I'll go somewhere else.

Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other

Is this true? I know that's how Markov chains work, but I thought neural nets worked differently, with larger tokens.

The only difference between a generic old fashioned word salad generator and GPT4 is the scale. You put multiple layers correcting for different factors on it and suddenly your Language Model turns into a Large Language Model.

So basically your large tokens are made up of smaller tokens, but it's still just a statistical approximation of the sample data, with little to no emergent behavior or even memory of what it's saying as it says it.

It also exponentially increases power requirements, as the world is figuring out.
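On the point about large tokens being made of smaller tokens: this is what sub-word tokenization looks like with OpenAI's open-source tiktoken library (assuming the package is installed; the exact splits depend on which vocabulary you load).

```python
# Requires: pip install tiktoken (OpenAI's open-source tokenizer library).
# "cl100k_base" is one of its standard vocabularies; others split text differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "A courts reporter wrote about a few trials."
ids = enc.encode(text)

print(ids)                               # the integer token ids the model actually sees
print([enc.decode([i]) for i in ids])    # the sub-word pieces those ids map back to
# Common words tend to be single tokens; rarer words get split into several pieces.
```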

I don't disagree, I was just pointing out that "each word is generated independently of each other" isn't strictly accurate for LLM's.

It's part of the reason they are so convincing to some people, they are able to hold threads semi-coherently throughout entire essay length paragraphs without obvious internal lapses of logic.

I think you're seeing coherence where there is none.

Ask it to solve the riddle about the fox, the chicken, and the grain.

Even if it does solve the riddle without blurting out random nonsense, that's just because the sample data solved the riddle billions of times before.

It's just guessing words.

I think you’re seeing coherence where there is none.

Ask it to solve the riddle about the fox, the chicken, and the grain.

I think it getting tripped up on riddles that people often fail or it not getting factual things correct isn't as important for "believability", which is probably a word closer to what I meant than "coherence."

No one was worried about misinformation coming from r/SubredditSimulator, for example, because Markov chains have much, much less believability. "Just guessing words" is a bit of an over-simplification for neural nets, which are a powerful technology even if the utility of turning it towards language is debatable.

And if LLMs weren't so believable, we wouldn't be having so many discussions about the misinformation or misuse they could cause. I don't think we're disagreeing; I'm just trying to add more detail to your "each word is generated independently" quote, which is patently wrong and detracts from your overall point.

lmao yeh bro such a hard riddle totally

I concede. AI has a superintelligent brain and I'm just so jealous. You have permission to whip me into submission.

What on Earth is this in response to?? Did I say it was a hard riddle?

I concede. AI has a superintelligent brain and I'm just so jealous.

Point to any part of my comment that implied any of this.

I only gave more info on how LLMs work, since what you were describing were Markov chains. I wasn't saying you were wrong with the thrust of your comment, just the details on how they work. If they were exactly as effective as Markov chains we wouldn't be having these discussions; that's why they can be misused.

Feel free to discuss the actual words I'm using instead of this LLM word salad.

Well clearly the AI couldn't solve the riddle because even humans find it soooooo haaaard, daddy.

Are you in high school? You're making up things I never said and putting a sexual element on your responses for no reason.

Stay in school and learn how to have discussions before arguing about language and technology lol

Goodnight.

I'm just sick of you morons extending these comment threads out 5, 8, 20 fucking replies in defense of one of the shittiest things mankind has ever invented.


If this were some fiction plot, Copilot reasoned out the plot twist and ran with it. Instead of the butler, the writer did it. To the computer, these are about the same.

The problem is not the AI. The problem is the huge numbers of morons who deploy AI without proper verification and control.

Sure, and also people using it without knowing that it's glorified text completion. It finds patterns, and that's mostly it. If your task involves pattern recognition then it's a great tool. If it requires novel thought, intelligence, or the synthesis of information, then you probably need something else.

And yet here we are, praising this garbage for its ability to perform simple tasks and take jobs from artists and entertainers.

Isn't this literally a subplot in the movie Brazil?

No, you're thinking of the first scene of the movie where a fly falls into the teletype machine and causes it to type 'tuttle' instead of 'buttle'.

It's not my fault that Buttle's heart condition didn't appear on Tuttle's file!

These are not hallucinations, whatever that is supposed to mean, lol.

The tool is working as intended and giving wrong answers due to how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn't understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself...

AI🤡

https://cloud.google.com/discover/what-are-ai-hallucinations

AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.

Yes, hallucination is the now standard term for this, but it's a complete misnomer. A hallucination is when something that does not actually exist is perceived as if it were real. LLMs do not perceive, and therefor can't hallucinate. I know, the word is stuck now and fighting against it is like trying to bail out the tide, but it really annoys me and I refuse to use it. The phenomenon would better be described as a confabulation.

Hallucinations is a fancy word for being wrong.

The models are not wrong. The models are nothing but a statistical model that’s really good at predicting the next word that is likely to follow, based on the prior information given. It doesn’t have understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct to their design.

The users’ assumption/expectation of the output being factual is what is wrong. Hallucination is a fancy word in an attempt to make the users not feel as upset when the output passage doesn’t match their assumption/expectation.

The users’ assumption/expectation of the output being factual is what is wrong.

So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?

They're supposed to be good at transformation tasks: language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformer tasks.

Using an llm as a fact generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they passably perform in that role... which is, at its core, to fill in a call+response pattern in a conversation.

At a fundamental level it will never ever generate factually correct answers 100% of the time. That it generates correct answers > 50% of the time is actually quite a marvel.

They're supposed to be good at transformation tasks: language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformer tasks.

That it generates correct answers > 50% of the time is actually quite a marvel.

So good as a translator as long as accuracy doesn't matter?

If memory serves, 175B parameters is for the GPT3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed parameter space for GPT4, 4o, and o1 yet. If memory also serves, 3 was primarily English, and had only a relatively small set of words (I think 50K or something to that effect) it was considering as next token candidates. Now that it is able to work in multiple languages and multi modal, the parameter space must be much much larger.

The amount of things it can do now is incredible, but our perceived incremental improvements on LLM will probably slow down (due to the pace fitting to the predicted lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLM > ???). Such an exciting time we’re in!

Edit: found it. Roughly 50K tokens for input output embedding, in GPT3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M

Sure, but which of these factors do you think were relevant to the case in the article? The AI seems to have had a large corpus of documents relating to the reporter. Those articles presumably stated clearly that he was the reporter and not the defendant. We are left with "incorrect assumptions made by the model". What kind of assumption would that be?

In fact, all of the results are hallucinations. It's just that some of them happen to be good answers and others are not. Instead of labelling the bad answers as hallucinations, we should be labelling the good ones as confirmation bias.

It was an incorrect assumption based on his name being in the article. It should have listed him as the author only, not a part of the cases.

That is the error that the model made. Your quote talks about the causes of these errors. I asked what caused the model to make this error.