OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series

L4sBot@lemmy.world (mod) to Technology@lemmy.world – 771 points
businessinsider.com

A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.



Yeah, but if you wanna act out the contents of the book and sell it as a movie, you need to buy the rights.

Yes but there's a threshold of how much you need to copy before it's an IP violation.

Copying a single word is usually only enough if it's a neologism.
Two matching words in a row usually isn't enough either.
At some point it is enough though and it's not clear what that point is.

On the other hand it can still be considered an IP violation if there are no exact word matches but it seems sufficiently similar.

Until now we've basically asked courts to step in and decide where the line should be on a case by case basis.

We never set the level of allowable copying to 0, we set it to "reasonable". In theory it's supposed to be at a level that's sufficient to, "promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." (US Constitution, Article I, Section 8, Clause 8).

Why is it that with AI we take the extreme position of thinking that an AI that makes use of any information from humans should automatically be considered to be in violation of IP law?

Making use of the information is not a violation; making use of that information to turn a profit is. AI software that is completely free for the masses, without any paid upgrades, can look at whatever it wants. As soon as a corporation is making money on it, though, it's in violation and needs to pay up.

Is that intended as a legal or moral position?

As far as I know, the law doesn't care much if you make money off of IP violations. There are many cases of individuals getting hefty fines for both the personal use and free distribution of IP. I think if there is commercial use of IP the profits are forfeit to the IP holder. I'm not a lawyer though, so don't bank on that.

There's still the initial question too. At present, we let the courts decide if the usage, whether profitable or not, meets the standard of IP violation. Artists routinely take inspiration from one another and sometimes they take it too far. Why should we assume that AI automatically takes it too far and always meets the standard of IP violation?

It's also just not fair, unless you're going to rule that nothing an AI produces can be copyrighted. Otherwise some billionaire could just flood the copyright office with requests and copyright everything... hell, if they really did that, they would probably convince the government to let them hire outside contractors for free to speed up the process...

Idk, that feels like saying that as soon as you sell the skills you learned on YouTube, you should have to start paying the people you learned from, since you're "using" their copyrighted material to turn a profit.

I don't agree whatsoever that copyright extends to inspiration of other artists/data models. Unless they recreate what you've made in a sufficiently similar manner, they haven't copied you.

Why is it that with AI we take the extreme position of thinking that an AI that makes use of any information from humans should automatically be considered to be in violation of IP law?

Luddites throwing their sabots into the machinery.

Not if your stories are transformative of the original work.

AI works are not transformative. No new content is added.

I would disagree, they are transformative.

I don’t see how it can be transformative if no new content is added.

Also, transformative use is only one of the four factors of fair use, and it isn’t enough on its own to justify it.

You're just repeating yourself.

New content is created, that's the whole point. The AI takes the original content and transforms it and creates new content.

I don’t believe that AI creates new content. Fundamentally it just statistically repeats what it’s seen before. That’s not new content, it’s a mad lib.

Current AI is great at pattern recognition, and that’s it. It recognizes patterns in words and outputs the next one it guesses is right.
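
For a concrete sense of what "statistically repeats what it's seen before" means, here is a toy sketch: a bigram counter in Python that predicts the next word purely from counts of what it has already seen. The corpus and function names are made up for illustration, and a real LLM is far more sophisticated than this, but the underlying idea of predicting the next token from observed patterns is the same.

```python
# Toy next-word predictor: count which word follows which in a tiny "training"
# text, then predict by picking the most frequent follower. (Illustrative only.)
from collections import Counter, defaultdict

corpus = "the boy who lived the boy who waited the girl who lived".split()

# Record how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("who"))  # 'lived' -- the most common pattern in the corpus
print(predict_next("boy"))  # 'who'
```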

In that sense, the dictionary needs to lawyer up and go after all these artists.

So you've decided on your own definition of "new content" and are now trying to bend what AI does to make it seem like what they've made isn't new content.

Tell me where I can find all this AI art, then, if it's not new.

It isn’t new. It’s just a rehash of what it’s already seen.

AI and LLMs don’t create anything new. They just rearrange what has already been given to them based on an arbitrary set of input parameters. There’s no creativity and there’s nothing new about anything they create.

Yes, but that's a different situation. With the LLM, the issue is that the text from copyrighted books is influencing the way it speaks. This is the same with humans.
