OpenAI and Anthropic are ignoring an established rule that prevents bots scraping online content

IndustryStandard@lemmy.world to Technology@lemmy.world – 340 points –
archive.ph

The world's top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.

OpenAI and Anthropic have been found to be either ignoring or circumventing an established web rule, called robots.txt, that is meant to prevent automated scraping of websites.

TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.

OpenAI and Anthropic have stated publicly that they respect robots.txt and blocks to their specific web crawlers, GPTBot and ClaudeBot.

However, according to TollBit's findings, such blocks are not being respected, as claimed. AI companies, including OpenAI and Anthropic, are simply choosing to "bypass" robots.txt in order to retrieve or scrape all of the content from a given website or page.

A spokeswoman for OpenAI declined to comment beyond pointing BI to a corporate blog post from May, in which the company says it takes web crawler permissions "into account each time we train a new model." A spokesperson for Anthropic did not respond to emails seeking comment.

Robots.txt is a simple text file that's been used since the late 1990s as a way for websites to tell web crawlers they don't want their data scraped and collected. It was widely accepted as one of the unofficial rules supporting the web.
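For those who haven't seen one: a robots.txt file is just a plain-text list of per-crawler rules served at the root of a site. A publisher wanting to block the two crawlers named above would serve something like this (the rules here are illustrative):

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

A well-behaved crawler fetches this file first and skips anything disallowed; Python's standard library even ships a parser for it. A minimal sketch of what compliance looks like (example.com is a placeholder):

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A compliant crawler checks before every fetch; the allegation in the
# article is that some crawlers simply skip this check entirely.
if rp.can_fetch("GPTBot", "https://example.com/some-article"):
    print("allowed to crawl")
else:
    print("disallowed - a compliant bot stops here")
```

The catch, and the whole point of the story: nothing enforces any of this. It's a request, not a technical barrier.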


The game plan is to scrape, store, and utilise as much data as possible regardless of conventions, best practices, license agreements, etc., until specifically regulated to stop.

At that point, a few early companies will have used vast swathes of data that any newly established company is banned from also using.

And they will be “unable” to purge it.

Hoping the EU drops GDPR 2 requiring them to delete the entire model if it infringes or something.

Expecting the US to meaningfully regulate US companies is like expecting.........

You know what, even including physical impossibilities, I'm struggling to think of anything less likely

I've yet to understand how the hell they get away with "I don't know how it works". Either figure out how it works or stop using it, shithead. It's software not magic beans.

There are lots of complicated fields out there, and none of them gets a pass for "I don't know how my drugs work" or "I don't know how my rockets work". That's absolutely ridiculous.

Uh, we don't really know how our drugs work (especially the older ones). We have a vague understanding of their mechanisms, but we really don't know how they work. We don't even have a clear idea of what the structures of most drugs look like, and how they interact with their binding sites.

Luckily, we don't actually have to know how they work, to know that they work. Instead we use clinical trials and real world evidence to support their use.

(Fun fact: there's actually a branch of drug development called phenotypic drug discovery, which does away with understanding the mechanisms altogether.)

It’s just how machine learning has always been.

We only know a model’s behavior through testing, so we more or less know its behavior only in proportion to how much testing was done. The model internals have always been a black box of numbers that individually mean nothing, and if you tracked which neurons fire here and there, it would appear just random, because it probably is.

Remember, machine learning models aren’t carefully designed; they’re just brute-force trained for a long time, with the numbers adjusted again and again whenever the results look closer to or further from the desired output.
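For anyone curious what "adjusting the numbers again and again" actually means, here's a minimal sketch of that loop with a single made-up weight, using plain gradient descent (the data and names are illustrative, not any real model):

```python
# Fit y = w * x to toy data by repeatedly nudging w against the error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs
w = 0.0    # the one "number" being trained
lr = 0.01  # how hard each nudge is

for step in range(1000):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # move w toward lower error

print(f"learned w = {w:.3f}")  # lands near 2, but nobody "designed" that
```

Scale that one weight up to billions, all adjusted the same blind way, and you get the black box described above: no individual number means anything, only the aggregate behavior under testing does.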

If the models are random then we shouldn't be trusting them to do anything, let alone serious applications. If any other type of software told us that it's based on partially random results we'd say "get that shit out of here, I want my software to work first time, every time".

"Statistically good enough" works for some applications but not for others. If a LLM finds a formula that has an 80% chance to be the cure for cancer or a new magical fuel or some amazing new material that's cool, we're not going to look the gift horse in the mouth.

But using LLMs to pollute the web with advertising texts that are barely intelligible, and using that as a pretext to break copyright in the process, who does that help? So far the only readily available commercial application for LLMs has been to spit out semi-nonsense so that a bunch of bottom-crawling parasitic industries can keep on pinching pennies and shitting up everything they touch.

Which, ironically, will help them hit bottom all the faster, so in a strange way it's a positive return, but the problem is they're going to take down a lot of useful things with them.

I’m in the US so yeah…. Even if the current or a future GDPR requires deletion, I guarantee it’ll still be used in the US. I have no faith that any US company will follow rules like that. Any fines are just looked at as the cost of doing business.

Or they'll "purge" it and somehow the canaries will end up in the model anyway

It's like weapons testing. You only move to ban testing after you've developed it yourself.

Whatever happened to those "nightshade" images that poison the model?

You mean the project that took open source software, closed-sourced it, and refused to release the source code, where the poisoning only worked against one specific open source model (Stable Diffusion)? I don't think that's going to come riding to anyone's rescue.

They only kinda work, but more importantly they need mass adoption to actually poison training data. Most people aren't going to add another step to their posts, so probably the only way to get mass adoption is to have platforms automatically poison uploaded images. I wonder if reposts on a platform like that would start to show noticeable artifacts in the images, like JPEG compression but different.
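For illustration, that "another step" would look something like this. This is not Nightshade's actual algorithm (which, as noted above, is closed source); the random noise is just a placeholder for the model-targeted perturbation a real attack computes, and the function name is made up:

```python
import numpy as np
from PIL import Image

def poison_before_upload(path: str, out_path: str, eps: int = 2) -> None:
    """Perturb every pixel slightly before posting. A real poisoning
    attack would optimize this perturbation against a specific model,
    which is why these schemes tend to transfer poorly between models."""
    img = np.asarray(Image.open(path).convert("RGB")).astype(np.int16)
    noise = np.random.randint(-eps, eps + 1, img.shape, dtype=np.int16)
    out = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_path)
```

Platform-side re-encoding cuts both ways, too: if every upload gets recompressed, a carefully computed perturbation may not even survive, artifacts or not.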

Same approach as all the other 'disruptive' new companies that ignore industry standards, rules, and laws.

I'd say they are pushing for regulations behind the scenes because they know it gives them an instant monopoly.

They're already past the door; they can afford to shut it behind them and own the room. Having to send checks to websites like Reddit and Getty in the future is a small price to pay.