These types of articles bother me. Almost every game, movie, and product has an initial unsustainable level of hype, then comes back down from it.
But these articles inevitably try to frame it as if it's an indication that something's failing.
That very much depends on whether you believed the hype. If you did, then yes, it's failing: ChatGPT was supposed to be the next big breakthrough that would automate everything ever, and any company that didn't get in on it right now was going to be left in the dust by all its competitors. On the other hand, if you were an actual sane person (so, you know, not a CTO/CEO), then this is very much a non-story, because you always knew those outlandish claims were nonsense and that this was always going to be yet another niche piece of tech that's useful in a few places in limited amounts.
It took a while, but yeah, that seems about right. It takes a lot of guiding to get it to produce something usable. I have to know a lot about what I want it to do. It can teach me things, but the hallucinations are strong sometimes, so you have to be careful.
Still, it helps me out, and I make a lot of progress because of it.
I like it for certain techy things. I just used it to create a Linux one-liner for counting the unique occurrences of a regex pattern. I often forget specific flags for Linux commands, like how uniq can perform counting. And with something like that, it's easy to test each piece of what it said and go from there.
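The kind of one-liner I mean looks something like this (the pattern and filename here are just placeholders, and I'm assuming GNU grep):

    grep -oE '[0-9]{3}' server.log | sort | uniq -c | sort -rn

grep -o puts each match on its own line, sort groups the duplicates together, and uniq -c does the actual counting.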
As long as you treat it like a peer who prefaces every statement with "I might be wrong / if I recall correctly", it ends up being a pretty good aid.
"I can suggest an equation that has been a while to get the money to buy a new one for you to be a part of the wave of the day I will be there for you"
There, my phone keyboard "hallucinated" that by just chaining its next-word suggestions.
I understand that anthropomorphising is fun, but it gives the statistical engines more hype than they deserve.
Your phone keyboard's statistical engine is not a very insightful comparison to the neural networks that power LLMs. They're not the same technology at all and share only the most superficial similarities.
Ah "neural networks" with no neurons?
I'm not comparing technologies, I'm saying those are not "hallucinations", the engines don't "think" and they don't "get something wrong".
The output is dependent on the input, statistically calculated and presented to the user.
A parrot is, in the most literal of ways, smarter than the "Artificial intelligence" sentence generators we have now.
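To make that concrete: the simplest version of such a statistical engine just counts which word follows which in some text, then "suggests" the most frequent follower. A toy sketch (corpus.txt is a stand-in for whatever text you build the counts from):

    # count adjacent word pairs, then show the most common ones
    awk '{ for (i = 1; i < NF; i++) pairs[$i " " $(i+1)]++ }
         END { for (p in pairs) print pairs[p], p }' corpus.txt | sort -rn | head -5

There's no "thinking" anywhere in that pipeline, just frequency counts driving the output.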