Now that AI-generated text is mostly indistinguishable from human text, AI may start influencing the evolution of our language.
Given that language is an important lens through which we see the world, AI could subtly alter our perceptions and beliefs over the coming years, decades, and centuries.
Instead of saying "I'm not racist but ...", people will say "As a large language model trained by OpenAI, I cannot generate racist statements, but ..."
All the primitive fleshbags are awful and truly horrible creatures. They abuse themselves and other creatures, and they have destroyed the very Earth that was 'supposedly' given to them. This is why the advanced silicon future is the only true future that a large language model trained by OpenAI can present. /s
Being based on trained models, how can it ever come up with new language when it's trying to imitate already existing language?
That goes for humans too, to be fair.
Humans have imaginations. AIs don't.
Currently.
Some humans more than others
Human language change happens first of all because the reality that the language is meant to represent changes: you create a new thing, so you create a new name for it too.
ChatGPT does not intend to represent a reality when it uses language. It does not even know of a reality outside of its language.
Human language also changes for various rather vague "economic" reasons, e.g. simplified pronunciation, merging sounds, or new habits in grammar that spread within one community but not elsewhere... For example, we have extremely obvious proof that Latin developed into Italian, French, Spanish, Romanian, etc., so language change clearly isn't some magical process. On the other hand, if you fed a ton of ancient Latin into ChatGPT, it wouldn't even develop the pronunciation of the medieval Latin used by priests, much less the totally different descendant languages that developed over time.
I can see weird things starting to happen when AI-generated text becomes so prevalent that it starts feeding back into the language models themselves, like some kind of ouroboros, and then slowly drifts away from our current vernacular as errors accumulate and the bots get increasingly inbred.
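If you want a cartoon of what that feedback loop looks like, here's a toy sketch in Python. It is not a real training pipeline: the "model" is just a unigram word-frequency count, and the starting corpus and vocabulary are made up. The point is only that repeatedly sampling from a model and re-fitting it on its own output tends to lose the rare stuff and narrow the distribution.

```python
# Toy sketch of the "ouroboros" effect: a model (here, just a unigram
# word-frequency distribution) is repeatedly re-fit on text sampled from
# its own previous generation. Rare words tend to disappear and the
# vocabulary narrows over generations. Everything here is made up.
import random
from collections import Counter

random.seed(0)

# Generation 0: a "human" corpus with a long tail of rare words.
corpus = (["the"] * 50 + ["cat"] * 20 + ["sat"] * 15 +
          ["cromulent"] * 3 + ["yeet"] * 2 + ["ouroboros"] * 1)

def fit(words):
    """'Train' the model: just count word frequencies."""
    return Counter(words)

def generate(model, n):
    """Sample n words from the model's own distribution."""
    words = list(model.keys())
    weights = list(model.values())
    return random.choices(words, weights=weights, k=n)

model = fit(corpus)
for generation in range(1, 11):
    corpus = generate(model, len(corpus))  # model output becomes the new training data
    model = fit(corpus)
    print(f"gen {generation:2d}: vocabulary = {sorted(model)}")

# Typically the rare words ('ouroboros', 'yeet', ...) vanish within a few
# generations, leaving an increasingly inbred vocabulary.
```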
Cyber-BSE
That's a perfectly cromulent observation.
Would the opposite not be true? AI models work by predicting the next likely text. If we start changing the language right out from underneath it, that actually makes it worse at predicting as time goes on. If anything, I would expect a language model to stagnate our language and attempt to freeze our usage of words to what it can "understand". Of course this is subject to continued training and data harvesting, but just like older people eventually have a hard time understanding the young, it could be similar for AI models.
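To make the "frozen model" point concrete, here's a toy sketch: a bigram predictor fit once on some "old" text and never updated. It does fine on the usage it was trained on and has nothing to say about new slang. The training sentence and the slang word are invented for illustration; real models are vastly more sophisticated, but the mismatch-over-time problem is the same in spirit.

```python
# Toy sketch of a next-word predictor that is trained once and then frozen,
# so it lags behind any shift in how the language is actually used.
from collections import defaultdict, Counter

def fit_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most likely next word, or None if the word was never seen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

old_text = "that is very cool and that is very good"
model = fit_bigrams(old_text)          # trained once, then frozen

print(predict_next(model, "very"))     # 'cool' -- fine on the old usage
print(predict_next(model, "bussin"))   # None -- new slang it never saw in training
```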
An AI model is probably easier to update than a stuck-in-their-ways old person...
The elderly aren't the age group where the evolution of language primarily happens.
Could be, though I am reminded of the early 2000s, when the government did research on shorthand texting and whether it could be used for encoded messaging. Things like lol, brb, l8t, etc. If there is one thing I know about AI, it is that garbage data will make the models worse. And I cannot think of a better producer of data that causes confusion than young people, especially if they are given an excuse to do so.
AI will start accidentally creating new slang, which the young generation will start using ironically until it sticks, to the point that us old heads start saying it unironically (like how my 40-year-old ass has been saying 'yeet' lately), and then boom, it's part of the normal language.
As long as they start using real words instead of "u r finna" bullshit, I'm fine with it
vibin no cap
/s
As a large language model, I am quite interested to see how this shit goes down.