I finally figured out what's going on. Someone at reddit asked, "ChatGPT, what are the 10 most damaging things reddit could do to alienate users and decrease its value?" They then began working on the checklist... they're up to, what, 5?
Fun fact: password controls like this have been obsolete since 2020. Standards that guide password management now focus on password length and external security features (like 2FA and robust password encryption for storage) rather than on individual characters in passwords.
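To illustrate what that guidance looks like in practice, here's a minimal sketch of a length-first password check. This assumes NIST-style recommendations (long passwords allowed, no composition rules, reject known-compromised ones); the thresholds and names are illustrative, not taken from any standard's reference code:

```python
# Illustrative length-first password check (thresholds are examples in the
# spirit of modern guidance, not a normative implementation).
MIN_LENGTH = 8    # require a reasonable minimum length
MAX_LENGTH = 64   # allow long passphrases instead of capping them short

def password_acceptable(password: str, breached_passwords: set) -> bool:
    """Accept on length and breach status alone -- no character-class rules."""
    if not (MIN_LENGTH <= len(password) <= MAX_LENGTH):
        return False
    # Reject anything that appears in a list of known-compromised passwords.
    return password not in breached_passwords
```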
AI/LLMs can train on whatever they want, but when these LLMs are then used commercially to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.
And does this apply equally to all artists who have seen any of my work? Can I start charging all artists born after 1990 for training their neural networks on my work?
Learning is not and has never been considered a financial transaction.
There are four stanzas to the Star-Spangled Banner (the US national anthem), and what you typically hear at sporting events is only the first.
Bonus fun fact: the fourth stanza contains the line that, in the 1860s, became the shorter "In God We Trust" motto on coinage, which eventually became the national motto of the US in the 1950s (which was also when it was added to paper money). The original line from the fourth stanza was, "And this be our motto - 'In God is our trust.'"
I didn't believe it, so I looked it up... but the Smithsonian says it's even longer: 50 years!
Note that though AI is the new hotness and grabs headlines, this a) doesn't actually apply only to AI and b) has been done for at least a decade.
Many actors have refused such clauses (I know Sam Jackson is one of them) but many have not.
Putting actors' faces on CGI bodies is something Hollywood has been working on for a long time, and AI is just a tool that improves on what we've been doing for a while.
It MIGHT not be as bad as you think. If the UI was just terrible at communicating and what it actually meant was, "that password is in our database of known compromised passwords," then that would be reasonable. Google does this now too, but I think they only do it after the fact (e.g. you get a warning that your password is in a database of compromised passwords).
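For what it's worth, that kind of check can be done without ever sending the password itself anywhere. Here's a rough sketch against the Have I Been Pwned range API (a real public endpoint that works this way); the function name and error handling are my own:

```python
# Sketch of a k-anonymity check against the Have I Been Pwned range API.
# Only the first 5 hex characters of the SHA-1 hash ever leave your machine.
import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; look for ours.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# e.g. times_password_breached("password123") returns a very large number.
```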
Clearly the Founding Fathers were not advanced enough to have crafted the US Constitution unaided.
In a sense you are correct. They cribbed from many of the most well-known political philosophers of the time. For example, there are direct quotes from Locke in the Declaration, and his influence over the Constitution can be felt clearly.
Yeah, this is important. Make it a really big number too so that I have to change my password lots of times in a row in order to put it back to what it was. ;)
Artists, construction workers, administrative clerks, police, and video game developers all develop their neural networks in the same way, a process that ANNs simulate.
This is not "foreign to most artists"; it's just that most artists have no idea what the mechanism of learning is.
The method by which you provide input to the network for training isn't the same thing as learning.
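To make that distinction concrete, here's a toy illustration (mine, not anyone's actual training code): in an ANN, "seeing" an example is just the forward pass; the learning is the separate weight update driven by the error.

```python
# Toy single-example gradient step to separate "input" from "learning".
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)           # model weights
x = np.array([0.5, -1.0, 2.0])   # one input example
y_true = 1.0                     # its target
lr = 0.1                         # learning rate

y_pred = w @ x                    # forward pass: input goes in, prediction comes out
grad = 2 * (y_pred - y_true) * x  # gradient of squared error w.r.t. the weights
w -= lr * grad                    # the weight update -- this step is the "learning"
```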
The conservative platform in the US doesn't exist. At this point, conservative is a bucket term for, "not progressive." Most conservatives are on the right, but not all. Most conservatives are Republican leaning, but not all. Most conservatives are opposed to socially progressive change (e.g. expanded LGBT rights) but not all.
Basically any policy position you could point to will fail to capture a significant number of modern conservatives.
They cannot be anything other than stochastic parrots because that is all the technology allows them to be.
Are you referring to humans or AI? I'm not sure you're wrong about humans...
What you are describing is true of older LLMs; it's less true of GPT-4. GPT-5, or whatever it is they are training now, will likely begin to shed these issues.
The shocking discovery that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of the training set. AI researchers were convinced that this was not possible until GPT proved that it was.
So the idea that you can look at the limitations of the current generation of LLM and make blanket statements about the limitations of all future generations is demonstrably flawed.
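For reference, that scaling behavior is often summarized with a parametric form like the one below (a Chinchilla-style fit from the literature, quoted for its shape; the constants are empirical and not something claimed in the comment above):

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022): loss L falls smoothly
% as parameter count N and training tokens D grow; A, B, E, alpha, beta are
% fit empirically.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```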
I can't imagine who would hire him. He fucked Unity badly.
There are valid concerns with regard to bidet use. They do produce more aerosolized particulates than wiping does, which means you are literally breathing more feces.
Is it enough to be problematic? Probably not, but that may also depend on how aggressively/frequently you use them.
See also:
I wouldn’t say obsolete because that implies it’s not really used anymore.
I'm not sure where you heard someone use the word "obsolete" that way, but I assure you that there are thousands if not millions of examples of obsolete technologies in constant and everyday use.
It really seems like everything reddit is doing is rushed and defaults to harming the users. It's as if they're actively sabotaging their own platform.