Opinion: The Copyright Office is making a mistake on AI-generated art
arstechnica.com
I've generally been against giving AI works copyright, but this article presented what I felt were compelling arguments for why I might be wrong. What do you think?
The strongest argument against AI art is that it is derivative of the copyrighted art it is based on. A photo of a copyrighted artwork would be similarly difficult to copyright. In this sense, AI art is more akin to music sampling in that it uses original material to make something new -- and to sample music you must ask permission.
You can't copyright AI-generated art even if it was only trained with images in the public domain.
In fact, you can't copyright AI-generated art even if it was only trained with images that you made.
Which works were sampled for this?
Is that a picture of a straw man?
It's characters from a popular TV show as knitted figures.
I think this nails it. It's probably the attack authors will use against OpenAI.
But the copyright office clearly states otherwise, so we're in for a showdown.
Personally, I think the AI stuff seems more akin to writing a book in the style of another author, which is completely legal. And, to be clear, my opinion has no legal effect here whatsoever.
There are two separate issues here. First, can you copyright art that is completely AI-generated? The answer is no. So OpenAI cannot claim a copyright for its output, no matter how it was trained.
The other issue is whether OpenAI violated a copyright. It's true that if you write a book in the style of another author, you aren't violating copyright. And the same is true of OpenAI.
But that's not really what the OpenAI lawsuit alleges. The issue is not what it produces today, but how it was originally trained. The authors point out that in the process of training its models, the developers illegally downloaded their works. You can't illegally download copyrighted material, period. It doesn't matter what you do with it afterwards. And AI developers don't get a free pass.
Illegally downloading copyrighted books for pleasure reading is illegal. Illegally downloading copyrighted books for training an AI is equally illegal.
You actually can. I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.
Here's an excerpt:
The article I linked is about image generation, but this part about scraping applies here as well. Copyright forbids a lot of things, but it also allows much more than people think. Fair use is vital to protecting creativity, innovation, and our freedom of expression. We shouldn't be trying to weaken it.
You should also read this open letter by artists that have been using generative AI for years, some for decades. I'd like to hear your thoughts.
When determining whether something is fair use, the key questions are often whether the use of the work (a) is commercial, or (b) may substitute for the original work. The amount of the work copied is also considered.
Search engine scrapers are fair use, because they only copy a snippet of a work and a search result cannot substitute for the work itself. Likewise if you copy an excerpt of a movie in order to critique it, because consumers don't watch reviews as a substitute for watching movies.
On the other hand, OpenAI is accused of copying entire works, and its products are explicitly intended as a replacement for hiring actual writers. I think it is unlikely to be considered fair use.
And in practice, fair use is not easy to establish.
You should know that the statistical models don't contain copies of their training data. During training, the data is used just to give a bump to the numbers in the model. This is all in service of getting LLMs to generate cohesive text that is original and doesn't occur in their training sets. It's also very hard if not impossible to get them to quote back copyrighted source material to you verbatim. If they're going with the copying angle, this is going to be an uphill battle for them.
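As a toy sketch of what "giving a bump to the numbers" means (plain Python, a one-weight linear model, nothing like a real LLM), here's the idea: each training example nudges the weights and is then discarded, so what survives training is only the adjusted numbers, not the data itself.

```python
# Toy illustration of a gradient-descent training step. Each example
# nudges the model's weights; the example itself is never stored.

def train_step(w, b, x, y, lr=0.1):
    """One gradient step for a 1-D linear model y_hat = w*x + b."""
    y_hat = w * x + b
    err = y_hat - y      # prediction error on this example
    w -= lr * err * x    # bump the weight toward a better fit
    b -= lr * err        # bump the bias too
    return w, b          # the example (x, y) is not retained anywhere

w, b = 0.0, 0.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:  # "training data"
    w, b = train_step(w, b, x, y)

# After training, the model is just two floats; the dataset is gone.
print(w, b)
```

Whether that abstraction holds up legally is a separate question, but mechanically the model ends up holding weights shaped by the data, not the data.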
I know the model doesn't contain a copy of the training data, but it doesn't matter.
If the copyrighted data is downloaded at any point during training, that's an IP violation. Even if it is immediately deleted after being processed by the model.
As an analogy, if you illegally download a Disney movie, watch it, write a movie review, and then delete the file ... then you still violated copyright. The movie review doesn't contain the Disney movie and your computer no longer has a copy of the Disney movie. But at one point it did, and that's all that matters.
Read the article I linked, it goes over this.
No, it doesn't.
It defends web scraping (downloading copyrighted works) as legal if necessary for fair use. But fair use is not a foregone conclusion.
In fact, there was a recent case in which a company was sued for scraping images and text from Facebook users. Their goal was to analyze them and build a database of advertising trackers, in competition with Facebook. The case settled, but not before the judge noted that the scraping was likely not fair use and very likely infringed IP.
The whole thing hinges on if this is fair use or not, so, yes, it does.
Yes, it absolutely hinges on fair use. That's why the very first page of the lawsuit alleges:
"Defendants' LLMs endanger fiction writers' ability to make a living, in that the LLMs allow anyone to generate—automatically and freely (or very cheaply)—texts that they would otherwise pay writers to create"
If the court agrees with that claim, it will basically kill the fair use defense.
First of all, fair use is not as simple or clear-cut a concept as you make it out to be; it can't be applied uniformly to all cases. It's flexible and context-dependent, resting on careful analysis of four factors: the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use upon the potential market. No one factor is more important than the others, and it is possible to have a fair use defense even if you do not meet all the criteria of fair use.
Generative models create new and original works based on their weights, such as poems, stories, code, essays, songs, images, video, celebrity parodies, and more. These works may have their own artistic merit and value, and may be considered transformative uses that add new expression or meaning to the original works. Letting people generate text they would otherwise pay writers to create, so long as it neither makes the original redundant nor reproduces it, is likely fair use. Courts seem unlikely to stop people from cheaply producing non-infringing text just because someone would rather get paid instead.
I think you're being too narrow and rigid with your interpretation of fair use, and I don't think you understand the doctrine that well.
Yes, and I named three of those factors:
And while you don't need to meet all the criteria, the odds are pretty long when you fail three of the four (commercial nature, copying complete work rather than a portion, and negative effect on the market for the original).
Think of it this way: if it were legal to download books in order to train an AI, then it would also be legal to download books in order to train a human student. After all, why would a human have fewer rights than an AI?
Do you really think courts are going to decide that it's ok to download books from The Pirate Bay or Z-Library, provided they are being read by the next generation of writers?
I'm happy with the illegal downloading being illegal. Where things get murky for me is what algorithms you're allowed to use on the data.
I get the impression that if they'd bought all the books legally that the lawsuit would still be happening.
If they bought physical books then the lawsuit might happen, but it would be much harder to win.
If they bought e-books, then it might not have helped the AI developers. When you buy an e-book you are just buying a license, and the license might restrict what you can do with the text. If an e-book license prohibits AI training (and they will in the future, if they don't already) then buying the e-book makes no difference.
Anyway, I expect that in the future publishers will make sets of curated data available for AI developers who are willing to pay. Authors who want to participate will get royalties, and developers will have a clear license to use the data they paid for.
I bet you could more feasibly build a machine that recognizes subject matter from photographs than one that recognizes training data in a model's output.
Derivative doesn't mean what you think it means. I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.
Here's an excerpt:
You should also read this open letter by artists that have been using generative AI for years, some for decades. I'd like to hear your thoughts.