I like to compare AI language-model training to the early days of music sampling in hip-hop. If they can prove their works were used without approval, I'm guessing the same result will occur.
Might as well sue god at this point. At least it would be a cheaper failure.
We're all influenced by the things we've experienced. Unless it's quoting things verbatim as its own content, I don't see the issue.
I mean, if I watch something and profit off it, or even build my own business from it, that's not something you can sue over.
Dunno why these folks think they can sue a model trainer.
She claims it regurgitated passages from her book word-for-word. If she has proof of this, it sounds like infringement to me.
Because it's their work being used algorithmically to support someone else's.
Regardless of how you feel about AI, training sets have to exclude copyrighted works to keep this from happening, because it is absolutely true that the AI keeps a record of everything fed into it, and if you don't have the rights to what was fed in, then there's a copyright issue. Even if the material is being reworked and influenced by other works, it is still using other people's stuff to do it. It is, in many ways, an overgrown randomization and automation tool.
The problem is that people don't see AIs as a tool that companies are using; they see them almost like a person learning. It's not like a person learning, and it can't be treated the same as, say, a consumer reading the book in question for enjoyment.
If I went to an acting class to be trained to act like Robert De Niro, and they used multiple facets of his work over the years to train me, is that infringement? If I go to an art class to learn how to paint like Picasso, and they use his work as reference, is that infringement? In these examples I'm the AI and the class is essentially the trainer. I get that the company is setting up the AI to be a product, but in these examples I too would be setting myself up to be a product if I used my new skills to profit.
All of the litigation isn't necessarily wrong, so far, but AI is moving much too quickly for it to matter. And what's more human, the very thing the companies creating new AI are going for, than learning from our arts, languages, culture, etc.?
it is absolutely true that the AI keeps a record of everything fed into it
No it isn't.
A properly trained deep-learning system will ultimately be far smaller than all of the data it's been trained on. It's simply impossible for it to have retained a record of much of it at all.
When everything is working correctly it shouldn't have any of the actual text stored at all. Certainly every single piece of training data will have left some impression on the model, but that's a very long way from actually storing the training data. The model consists of statistical relationships, not a copy-paste of the inputs.
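To make the size argument concrete, here's a back-of-envelope sketch in Python. All the numbers are illustrative assumptions (a hypothetical 7-billion-parameter model stored at 2 bytes per weight, trained on 2 trillion tokens at roughly 4 bytes of text each), not figures for any specific model:

```python
# Back-of-envelope: could a model literally store its training data?
# Every number below is a hypothetical, order-of-magnitude assumption.

params = 7e9              # assumed 7B-parameter model
bytes_per_param = 2       # 16-bit weights
model_bytes = params * bytes_per_param          # ~14 GB of weights

tokens_seen = 2e12        # assumed 2 trillion training tokens
bytes_per_token = 4       # rough average for English text
data_bytes = tokens_seen * bytes_per_token      # ~8 TB of text

ratio = data_bytes / model_bytes
print(f"training data is roughly {ratio:.0f}x larger than the model")
```

Even if every byte of the weights were somehow devoted to verbatim storage, the model could hold only a small fraction of what it saw; the weights encode statistical relationships, not an archive.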
Strictly speaking there is something resembling text in the model, but it's made up of the smallest possible units of language (unless there's been overfitting, in which case the training has gone wrong and there probably would be a case to answer).
The model builds sentences from a list of "phrases" (tokens) which don't even need to line up with word boundaries. Something like "is a" might be treated as a single "word", as might "ing", if the model finds that to be a useful snippet.
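As a toy illustration of that idea, here's a greedy longest-match tokenizer with a made-up vocabulary. Real tokenizers learn their vocabularies from data (e.g. via byte-pair encoding); this sketch only shows how "pieces" can cross word boundaries:

```python
# Toy greedy longest-match tokenizer. The vocabulary is hypothetical;
# real systems learn theirs (e.g. byte-pair encoding), but the point
# stands: pieces need not respect word boundaries.

VOCAB = {"this", " is a", " th", "ing"}  # made-up "learned" pieces

def tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        # try the longest possible piece starting at position i
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # no piece matched: emit one character
            i += 1
    return tokens

print(tokenize("this is a thing", VOCAB))
# ['this', ' is a', ' th', 'ing']
```

Here " is a" and "ing" each come out as a single vocabulary entry, exactly the kind of snippet described above; none of the original sentences from any training text need to be stored anywhere.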
Must be nice being rich and only spending money to make yourself richer.
What a scumbag.
You need to work on your trolling.
If you actually want people to get upset at you and keep replying, you have to make your contrarian or low-brow opinions believable. You also need to start deleting your comments, since a single glance at your comment history reveals you as a troll instantly.
Ok bud. Just because I say things you don't like doesn't mean I'm trolling.
You seriously need to grow up with that nonsense.
So you can do it when you try.
Gonna have to block you now. Good bye.