It's been around for a while. It's the fluff and the parlor tricks that need to die. AI has never been magic and it's still a long way off before it's actually intelligent.
The other thing that needs to die is hoovering up all data to train AIs without the consent of, and compensation to, the owners of that data. Most of the more frivolous uses of AI would disappear at that point, because they would be financially non-viable.
Cory Doctorow wrote a good article about this a little while back.
I remember reading that a little while back. I definitely agree that the solution isn't extending copyright, but extending labour laws on a sector-wide basis. Because this is the ultimate problem with AI: the economic benefits are only going to a small handful, while everybody else loses out because of increased financial and employment insecurity.
So the question that comes to mind is exactly how, on a practical level, it would work to make sure that when a company scrapes data, trains an AI, and then makes billions of dollars, the thousands or millions of people who created the data all get a cut after the fact. Because particularly in the creative sector, a lot of people are freelancers who don't have a specific employer they can go after. From a purely practical perspective, paying artists before the data is used makes sure all those freelancers get paid. Waiting until the company makes a profit, taxing it out of them, and then distributing it to artists doesn't seem practical to me.
The point is that it's not an activity you can force someone to pay for. Everyone who can run models on their own can benefit, and that group can expand over time as research makes it feasible on more devices. But that can never come to pass if we destroy the rights that allow us to make observations and analyze data.
Counting words and measuring pixels are not activities that you should need permission to perform, with or without a computer, even if the person whose words or pixels you're counting doesn't want you to. You should be able to look as hard as you want at the pixels in Kate Middleton's family photos, or track the rise and fall of the Oxford comma, and you shouldn't need anyone's permission to do so.
Creating an individual bargainable copyright over training will not improve the material conditions of artists' lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.
Creating same-y pieces with AI will not improve the material conditions of artists' lives, either. All that does is drag everyone down in a race to the bottom on who can churn out the most dreck the most quickly. "If we advance the technology enough, everybody can have it on their device and make as much AI-generated crap as they like" does not secure stable futures for artists.
If you're worried about labor issues, use labor law to improve your conditions. Don't deny regular people access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility for your monetary gain.
Art ain't just a good; it's self-expression, communication, inspiration, joy – rights that belong to every human being. The kind of people wanting to relegate such a significant part of the human experience to a domain where only the few can benefit aren't the kind of people that want things to get better. They want to become the proverbial boot. The more people can participate in these conversations, the more we can all learn.
I understand that you are passionate about this topic and that you have strong opinions. However, insults and derisive language aren't helping this discussion. They only create hostility and resentment, and undermine your credibility. If you're interested, we can continue our discussion in good faith, but if your next comment is like this one, I won't be replying.
I did actually specify that I think the solution is extending labour laws to cover the entire sector, although it seems you missed that in your enthusiasm to insist that the solution is having AI on more devices. However, so far I haven't seen any practical proposal for how to extend labour laws to protect freelancers who will lose business to AI but don't have a specific employer that the labour laws would apply to. Retroactively assigning profits from AI to freelancers who have lost out along the way doesn't seem practical.
This isn't labor law.
Labour law alone, which governs the terms under which people are employed and how they are paid, does not protect freelancers from the scenario that you, and so many others, advocate for: a multitude of individuals all training their own AIs. No AI advocate has ever proposed a viable, practical solution for the large number of artists who aren't directly employed by a company but are still exposed to all the downsides of unregulated AI.
The reality is that artists need to be paid for their work. That needs to happen at some point in the process. If AI companies (or individuals setting up their own customised AIs) don't want to pay in advance to obtain the training data, then they're going to have to pay from the profits generated by the AI. Continuing the status quo, where AIs can use artists' labour without paying them at all, is not an acceptable or viable long-term plan.
I don't think they have to; the point is to fight against the regression of public rights for the benefit of the few.
It could be regulated into oblivion, to the point that any commercial use of it (and even non-commercial publication of AI-generated material) becomes a massive legal liability, despite the fact that AI tools like Stable Diffusion cannot be taken away. It's not entirely unlikely that some countries will try to do this in the future, especially places with strong privacy and IP laws as well as equally strong laws protecting workers. Germany and France come to mind, which together could push the EU to come down hard on large AI services in particular. This could make the recently adopted EU AI Act look harmless by comparison.
I’m so tired of AI.
Too bad, it's here forever...
AI will remember that.