Fluffles

@Fluffles@pawb.social

I believe this phenomenon is called "hallucination". It's when a language model goes beyond its training data and makes up information out of thin air. All language models have this flaw, not just ChatGPT.
