AI learned from their work. Now they want compensation.

Peaces@infosec.pub to Technology@lemmy.world – 178 points –
wapo.st

A rising movement of artists and authors is suing tech companies for training AI on their work without credit or payment


I never understand this argument. If I go to an art museum, look at all the works, and create an art piece inspired by what I saw, no problem. If I go to an art museum, look at the works, and create a computer program that can create an art piece based on what I saw, that is somehow different? Because of the single step of abstraction that was taken by making some software to do it?

I think the difference is in the actual implementation. A lot of the time, AI has been caught outputting near-direct copies of other people's work, or output similar enough that if a human had produced it, it would be plagiarism, which is the crux of the problem. If I read someone's book and then write a slightly altered version and try to pass it off as my own work, that's plagiarism; if I feed someone's book into a language model and have it write a slightly altered version, that's somehow different and allowed.

Okay, I can understand that. But why is that being turned into “the creator of any work an AI looks at needs to be compensated” instead of holding AI companies accountable for plagiarized works?

I totally understand fining an AI company that produces a copy of Starry Night. But if it makes a painting similar in style to Starry Night that wouldn’t normally be considered a plagiarized work if a human did it, do we still consider that an issue?

From an existing legal perspective (giving some reddit-tier legal advice here), I'm pretty sure there's nothing legally wrong with AI art so long as it's not straight-up plagiarism. However, there is another argument that's likely going to need settling at some point, and I'll do my best to summarise it.

Humans learn from other people's work, but eventually develop their own style and become net producers of 'data' (data being pictures, books, whatever we're training the AI on). Current AI never does this: it can effectively only remix other people's work, and thus needs to constantly scrape other people's work in order to expand its repertoire. It is never a net producer of 'data'. This is effectively demonstrated (for current AI) by the fact that using AI output as training data can actually make the AI worse, because it compounds existing flaws and 'AI hallucinations'.

This means human artists initially rely on others but ultimately create value from their own effort, while AI (for now) must continuously rely on the work of others in order to produce value. Or to put it even more simply: the AI industry is entirely reliant on the work of human artists but gives them no credit or remuneration.
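The "training on AI output makes the AI worse" claim above is the phenomenon usually called model collapse. A toy sketch of one mechanism behind it, rare patterns vanishing under repeated resampling: the "model" here is just a token-frequency table, and the vocabulary size, sample size, and generation count are arbitrary illustrative choices, not anything from a real training pipeline.

```python
import numpy as np

# Toy model collapse: a "model" that only learns token frequencies,
# retrained on its own samples, generation after generation.
rng = np.random.default_rng(0)

vocab_size = 100
sample_size = 100             # each generation is "trained" on this many samples
counts = np.ones(vocab_size)  # generation 0: every token equally likely

support_sizes = []
for generation in range(200):
    probs = counts / counts.sum()
    # Draw a finite training set from the current model ...
    draws = rng.multinomial(sample_size, probs)
    # ... and "retrain" by re-estimating frequencies from that sample.
    counts = draws.astype(float)
    support_sizes.append(int((counts > 0).sum()))

# A token that is never sampled gets probability zero and can never
# come back, so the model's diversity only ratchets downward.
print(support_sizes[0], support_sizes[-1])
```

The direction is the point, not the exact numbers: each generation can only lose tokens, never regain them, which mirrors the claim that a model fed its own output compounds its gaps instead of producing new 'data'.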


While I recognize that AI art is quite obviously derivative, and that since ML pattern matching requires far more input there's an argument it's even more derivative, I really struggle to grasp how humans learning to be creative aren't doing exactly the same thing, and what makes that OK (except, of course, that we've simply decided it's OK).

Maybe it's just less obvious and auditable?

I believe that with humans, the limits on our capacity to know, create, and learn, and the limited contexts in which we apply such knowledge and skills, may actually be better for creativity and relatability: knowing everything may not always be optimal, especially when the subject is subjective experience. Plus, such limitations may also protect creators from certain copyright claims; one idea can come from many independent creators, and can be implemented in broadly similar or vastly different ways. And usually we, as humans, develop a sense of work ethics and attribute the inspirations for our work. There are others who steal ideas without attribution as well, but that's where laws come in to settle it.

On the side of tech companies using their work for training: generative tech is learning at a vastly different scale, slurping up their work without attributing them. If we're talking about the mechanics of creativity, generative tech seems to have been handed a huge advantage already. Plus, artists and creators learn and create their work, usually with some context, sometimes with meaning. Excluding commercial works, I'm not entirely sure the products generative tech creates carry such specificity. Maybe they do, with some interpretation?

Anyway, I think the larger debate here is about compensation and attribution. How is it fair for big companies with a lot of money to take creators' work, with little or no payment or attribution, while those companies then use these technologies to make more money?

EDIT: replace AI with gen(erative) tech

How is it fair for big companies with a lot of money to take creators' work, with little or no payment or attribution, while those companies then use these technologies to make more money?

Because those works were put online, at a publicly accessible location, and not behind a paywall or subscription. If literally anyone on the planet can see your work just by typing a URL into their browser, then you have essentially allowed them to learn from it. Also, it's not like there are copies of those works stored away in some database somewhere, they were merely looked at for a few seconds each while a bunch of numbers went up and down in a neural network. There is absolutely not enough data kept to reproduce the original work.
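The "not enough data kept to reproduce the original work" point above can be made with back-of-envelope arithmetic. The figures below are hypothetical round numbers (a billion-parameter image model, a two-billion-image training set), stand-ins for illustration rather than the specs of any particular model:

```python
# Capacity per training image, under assumed round numbers.
params = 1_000_000_000        # hypothetical ~1e9-parameter image model
bytes_per_param = 2           # fp16 weight storage
train_images = 2_000_000_000  # hypothetical ~2e9-image training set

model_bytes = params * bytes_per_param
bytes_per_image = model_bytes / train_images
print(bytes_per_image)  # 1.0 -> about one byte of capacity per image
```

Since a typical source image is megabytes, the weights cannot, on average, hold copies of the training set. (Memorization of individual works can still happen for heavily duplicated training examples, which is a separate issue.)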

Besides, if OpenAI (or other companies in the same business) had to pay a million people for the rights to use their work to train an AI model, how much do you think each of them would get? A few dollars? Why bother seeking that kind of compensation at all?

It's not being creative. It's generating a statistically likely facsimile from a separate set of input parameters. It's sampling, but keeping the same pattern of beats even if the order of the notes changes.

Because I never think I can post it enough, Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI).

So much confusion and prejudice is thrown into this discussion by the mere fact that they're called AIs. I don't believe they are intelligent any more than I believe a calculator is.

And even if they are, the AIs don't have the needs the humans do. So we still must value the work of the humans more highly than the work of the AIs.

I agree with you on both points. I fixed the text in my comment from AI to generative tech, mostly because I honestly don't fully have a good grasp on what exactly can be considered intelligence.

But your second point, I think, is more important, at least to me. We can have debates on what AI/AGI or whatever is; the thing that matters right now, and in the years (even months) to come, is that we as humans have multiple needs.

We need to work, and some of our work requires generating something (code, art, blueprints, writing) that may be replaceable by these technologies really soon. Such work takes years, even decades, of training and experience, especially the domain knowledge that is invaluable for things like necessary human interaction, communication, and bias detection and resolution. Yet within a couple of years, if all of that effort gets replaced by a bot (one that may have more unintended consequences but cuts costs) instead of augmented or assisted, many of us would struggle to make a living while the companies that build these tools profit and benefit.

Though, at what point does sampling become coherentism, in the philosophical sense? In the end, whether an AI performs "coherently" is all that matters. I think we are amazed now at ChatGPT because of the quality of the 2021-era LLM behind it, but that value will degrade, or become less "coherent", over time, i.e. model collapse.

I agree with you; the only caveat here is that the artists mentioned say their books were illegally obtained, which is a valid argument. I don't see how training an AI on publicly available information is any different from a human reading/seeing said information and learning from it. But that same human pirating a book is illegal.

The additional complexity here is that the laws were written, and are enforced, by people who don't fully grasp this technology. If this were traditional code then yes, it could be a copyright issue, but the models should be trained on enough data to create genuinely derivative works.

I don’t see how training an AI on publicly available information is any different from a human reading/seeing said information and learning from it.

Well, the difference is that humans are a quite well self-regulated system: as new artists are created by learning from the old ones, the old ones die, so the total number stays about the same. The new artists also have to eat, so they won't undercut others in the industry (at least not beyond a certain point), and they cannot scale their services to the point where one artist serves all the customers while every other artist starves. That's how human civilization has worked since its dawn.

I hope I don't need to describe how AI is different.

I'm not sure this argument really addresses the point. If some human artist did become so phenomenally efficient at creating art that they could match the output of the likes of Midjourney as it is today, I don't think anybody would be complaining that they learned their craft by looking at other artists' work. If they wouldn't, it's clearly not the scale of the output alone that's the issue here.

It's also not reasonable to describe the art market as an infinitely and inherently self-regulating one just because artists die. Technology has severely disrupted it before. The demand for calligraphers certainly took quite a hit when the printing press was invented. The camera presumably displaced a substantial amount of the portrait market. Modern digital art tools like Photoshop facilitate an enormously increased output from a given number of artists.


AI is an existential threat to so many. I see it similarly to how an established worker may sabotage a talented up-and-comer to protect their position.

While I do think it's technically possible, and the right thing to do, to determine which original works were used in a derivative piece and pay royalties, the reality is that those payments would be a small fraction of what artists make now, since the whole point of generative art is to reproduce derivative works at a fraction of the cost. Unless the demand for art increases proportionately with the decreased cost, which it can't, compensation will decrease; and as more art enters the public domain, compensation will decrease further.

This will not save artists, and they need a backup plan. I think the future of art is probably that artists become more like art directors or designers, responsible for having good taste rather than producing the original work, but even that role will see greatly diminished demand as generative AI handles most basic needs.

If I look at art and then try to make art...

Do I also have to pay the artist?

I mean, museums do have free days, but you have to pay for regular visits.

Same with art galleries.

Current artists learned their art from the previous generation's artists. Are they also supposed to pay?