Reddit Will License Its Data to Train LLMs, So We Made a Firefox Extension That Lets You Replace Your Comments

Blaze@lemmy.blahaj.zone to Reddit@lemmy.world – 321 points
theluddite.org

cross-posted from: https://lemmy.ca/post/19946388

An anticapitalist tech blog. Embrace the technology that liberates us. Smash that which does not.



That’s probably not going to be useful. Reddit keeps your original comment text.

I think you missed the part where you were strongly advised "not" to use copyrighted text.

The point is not to get rid of the original text. It's to "poison" the training data.

If the AI trainers have the original text, then "poisoning" the live site's content isn't going to do anything at all.

You can't touch the original text. It's already been archived.

If they scrape the updated comments again and ingest copyrighted text, you are poisoning the data.

That's my point. They won't.

And even if they did, it's unclear that copyright has anything to say about AI training anyway.

The NYT is currently suing over copyright infringement.

https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

it’s unclear that copyright has anything to say about AI training anyway

Although lawmakers worldwide slept while AI advanced, and therefore failed to pass some important laws in time, they are catching up. Europe recently passed its first AI Act. As far as I've seen, it also states that companies must disclose a detailed summary of their training data.

https://www.ml6.eu/blogpost/ai-models-compliance-eu-ai-act

You can sue over anything you want in the United States; it remains to be seen whether the courts will side with them. I think it's unlikely they'll get much of a win out of it.

A law that requires disclosing a summary of training data isn't going to stop anyone from using that training data.


Yeah, this is what I was thinking. We all heard about people being unable to delete comments, or Reddit keeping comments even after account deletion, back during the first migration. So what stops them from holding onto comment history, and what stops them from using that to teach LLMs to discern poisoned data from real data, as @pixxelkick said?

Yeah, in fact you're giving the LLM additional data to train on what poisoned data looks like, so it can avoid it better, since they can clearly see the before vs. after.
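To illustrate that point: a trainer who keeps pre-edit snapshots wouldn't even need a model to spot mass overwrites. Here is a minimal, hypothetical sketch in Python (the `archived_body` / `live_body` field names and the similarity threshold are made up for illustration, not any actual pipeline) of how edited comments might be filtered back to their archived versions:

```python
# Hypothetical sketch: discard "poisoned" edits by comparing each live comment
# against its archived pre-edit snapshot. Field names and threshold are assumptions.
from difflib import SequenceMatcher


def looks_poisoned(archived_text: str, live_text: str, threshold: float = 0.5) -> bool:
    """Flag a comment whose live version diverges sharply from the archived one."""
    similarity = SequenceMatcher(None, archived_text, live_text).ratio()
    return similarity < threshold


def filter_comments(comments):
    """Yield the archived text whenever the live edit looks like an overwrite."""
    for c in comments:
        if looks_poisoned(c["archived_body"], c["live_body"]):
            yield c["archived_body"]  # fall back to the pre-edit snapshot
        else:
            yield c["live_body"]


if __name__ == "__main__":
    sample = [
        {"archived_body": "Great write-up, thanks!",
         "live_body": "This comment has been replaced in protest."},
        {"archived_body": "Great write-up, thanks!",
         "live_body": "Great write-up, thanks! (edited for a typo)"},
    ]
    print(list(filter_comments(sample)))
```

In other words, a crude string-similarity check already recovers the original text for wholesale replacements while keeping ordinary edits, which is why the before/after history undercuts the poisoning strategy.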

It is necessary to employ a method which enables the training procedure to distinguish copyrighted material. In the "dumbest" case, some humans will have to label it.

Just because you've edited a comment doesn't mean it can be flagged as "oh, this is under copyright now".

I'm not saying it's technically impossible. On the contrary, it very much is possible; it's just more work. That drives development costs up and can give some form of satisfaction to angered ex-Reddit users like me. However, those costs will be peanuts for giants like Google / Alphabet.
