AI companies are violating a basic social contract of the web and ignoring robots.txt

Andy Reid@lemmy.world to Technology@lemmy.world – 933 points –
The rise and fall of robots.txt
theverge.com

hmm, i thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.

You cannot simply block crawlers lol

hide a link no one would ever click. if an ip requests the link, it's a ban
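The honeypot idea is simple enough to sketch. This is a minimal, hypothetical version (the trap path and the plain-dict ban list are made up for illustration, not any real site's setup):

```python
# Honeypot sketch: a hidden link points at TRAP_PATH; no human should
# ever request it, so any IP that does is treated as a crawler and banned.
TRAP_PATH = "/secret-admin-page"  # hypothetical hidden-link target

banned_ips = set()

def handle_request(ip, path):
    """Return an HTTP status code: 403 for banned IPs, ban on trap hits."""
    if ip in banned_ips:
        return 403
    if path == TRAP_PATH:
        banned_ips.add(ip)  # followed the hidden link -> permanent ban
        return 403
    return 200
```

In real HTML you'd also want the trap anchor marked `rel="nofollow"` and hidden from assistive tech (`aria-hidden="true"`), for exactly the accessibility reason raised below.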

Except that it'd also catch people who use accessibility devices and might see the link anyway, or who use the keyboard to navigate a site instead of a mouse.

i don't know, maybe there's a canvas trick. i'm not a webdev so i am a bit out of my depth and mostly guessing and remembering 20-year-old technology

If it didn't require so much effort, I'd rather have clicking the link cause the server to switch to serving up poisoned data -- stuff that will ruin an LLM.

Visiting /enter_spoopmode.html will choose a theme and mangle the text of any page you go to next accordingly (think search&replace with swear words or Santa Claus).

It will also show a banner letting the user know they are in spoop mode, with a JavaScript button to exit the mode, where the AJAX request URL is obfuscated (think base64). The banner is at the bottom of the HTML document (not necessarily the screen itself) and/or inside unusual/normally ignored tags.
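The mangling part of that is just word-level search&replace; something like this sketch (the replacement table and exit path are invented for the example):

```python
import base64
import re

# Hypothetical "spoop mode" theme: swap common words for nonsense.
SWAPS = {"the": "santa", "and": "ho ho ho"}

def spoopify(text):
    """Naive word-level search & replace over the page text."""
    def repl(match):
        word = match.group(0)
        return SWAPS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", repl, text)

# The exit button's AJAX URL, obfuscated with base64 as the comment suggests.
EXIT_URL = base64.b64encode(b"/exit_spoopmode").decode()
```

A real version would only rewrite text nodes, not tags or attributes, or it would break the page for humans too.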

Would that be effective? A lot of poisoning seems targeted to a specific version of an LLM, rather than being general.

Like how the image poisoning programs only work for some image generators and not others.

Well you can if you know the IPs they come in from, but that's of course the trick.

last i checked, humans don't access every page on a website nearly simultaneously...
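That intuition is basically sliding-window rate limiting. A minimal sketch, with made-up window and threshold values:

```python
import time
from collections import defaultdict, deque

WINDOW = 10.0    # seconds of history to keep (assumed value)
THRESHOLD = 50   # requests per window before we call it a crawler (assumed)

hits = defaultdict(deque)  # ip -> timestamps of recent requests

def looks_like_crawler(ip, now=None):
    """True if this IP made more than THRESHOLD requests inside WINDOW."""
    now = time.monotonic() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # drop timestamps outside the window
        q.popleft()
    return len(q) > THRESHOLD
```

No human browses 50 pages in 10 seconds; a crawler hitting every page nearly simultaneously trips this immediately.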

And if you imitate a human then honestly who cares.


Detecting crawlers can be easier said than done 🙁

i mean yeah, but at a certain point you just have to accept that it's going to be crawled. The obviously negligent ones are easy to block.

There are more crawlers than I have fucks to give; you'll be in a pissing match forever. robots.txt was supposed to be the norm for telling crawlers what they can and cannot access. It's not on you to block them. It's on them, and it's sadly a legislative issue at this point.
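For reference, that norm is just a plain-text file at the site root. A robots.txt that tells OpenAI's GPTBot to stay out while leaving everyone else alone looks like:

```
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```

The whole point of the thread is that this is purely advisory: nothing enforces it except the crawler's own good behavior.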

I wish it wasn't, but legislative fixes are always the most robust and best complied with.

yes but also there's a point where it's blatantly obvious. And I can't imagine it's hard to get rid of the obviously offending ones. Respectful crawlers are going to be imitating humans, so who cares; disrespectful crawlers will effectively DDoS your site, and detecting that can't be that hard to implement.

Though if we're talking "hey, please don't scrape this particular data", yeah, nobody was ever respecting that lol.
