Put something in robots.txt that isn't supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but can't think of a better solution.
Good old honeytrap. I'm not sure, but I think that it's doable.
Have a honeytrap page somewhere in your website. Make sure that legit users won't access it. Disallow crawling the honeytrap page through robots.txt.
Then if some crawler still accesses it, you could record+ban it as you said... or you could be even nastier and let it do so. Fill the honeytrap page with poison - nonsensical text that would look like something that humans would write.
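In server terms, the trap described above boils down to: publish a Disallow rule, then ban any client that requests the disallowed path anyway. A minimal sketch in Python; the trap path, the in-memory ban set, and `handle_request` are made-up illustrations, not any real server's API:

```python
# Hypothetical honeytrap: TRAP_PATH is listed under "Disallow:" in
# robots.txt, so no well-behaved crawler (and no ordinary user) should hit it.
TRAP_PATH = "/honeytrap.html"
banned_ips = set()

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code; ban any client that requests the trap."""
    if ip in banned_ips:
        return 403  # previously caught in the trap
    if path == TRAP_PATH:
        banned_ips.add(ip)  # robots.txt said not to come here
        return 403
    return 200
```

In a real deployment this logic would live in middleware or a fail2ban-style log watcher rather than in the application itself.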
I think I used to do something similar with email spam traps. Not sure if it's still around but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not visible to normal users, like in a div that was way on the left side of the screen.
Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
I'd love to see something similar with robots.
Yup, it's the same approach as email spam traps, minus the naughty list... but holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web-crawling businesses.
But with all of the cloud resources now, you can switch through IP addresses without any trouble. Hell, you could just cycle through IPv6 addresses and not even worry, with how cheap those are!
Yeah, that throws a monkey wrench into the idea. That's a shame, because "either respect robots.txt or you're denied access to a lot of websites!" is appealing.
That's when Google's browser DRM thing starts sounding like a good idea
Even better. Build a WordPress plugin to do this.
I'm the idiot human that digs through robots.txt and the site map to see things that aren't normally accessible by an end user.
Yeah, this is a pretty classic honeypot method. Basically make something available but inaccessible to the normal user. Then you know anyone who accesses it is not a normal user.
I've even seen this done with Steam achievements before: there was a hidden game achievement which was only obtainable via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.
That's a bit annoying, as it means you can't 100% the game; there will always be one achievement you can't get.
There are tools that just flag you as having gotten an achievement on Steam, you don't even have to have the game open to do it. I'd hardly call that 'hacking'.
Better yet, point the crawler to a massive text file of almost but not quite grammatically correct garbage to poison the model. Something it will recognize as language and internalize, but severely degrade the quality of its output.
Maybe one of the lorem ipsum generators could help.
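A lorem-ipsum-style generator along those lines is easy to sketch. Here's a hedged toy version in Python that shuffles real words into sentences that look vaguely like language but carry no meaning; a serious poisoning attempt would need far more statistically plausible output (the seed text and function name are made up for illustration):

```python
import random

# A small vocabulary of ordinary words to recombine into nonsense.
SEED_WORDS = ("the quick brown fox jumps over a lazy dog while "
              "nine green ideas sleep furiously near the harbor").split()

def nonsense(n_sentences: int = 3, seed: int = 0) -> str:
    """Produce sentences of real words in nearly-but-not-quite valid order."""
    rng = random.Random(seed)
    sentences = []
    for _ in range(n_sentences):
        words = rng.sample(SEED_WORDS, k=8)
        sentences.append(" ".join(words).capitalize() + ".")
    return " ".join(sentences)
```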
a bad-bot .htaccess trap.
robots.txt is purely textual; you can't run JavaScript or log anything from it. Besides, a crawler that doesn't intend to follow robots.txt won't query it in the first place.
If it doesn't get queried, that's the fault of the web scraper. You don't need JS built into the robots.txt file either. Just add a line like:
Disallow: /here-there-be-dragons.html
Any client that hits that page (and maybe doesn't pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
server {
    server_name herebedragons.example.com;
    root /dev/random;
}
Nice idea! Better to use /dev/urandom though, as that is non-blocking. See here.
That was really interesting. I always used urandom by practice and wondered what the difference was.
I wonder if Nginx would just load random into memory until the kernel OOM kills it.
I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.
Your second point is a good one, but you absolutely can log the IP that requested robots.txt. That's just a standard part of any HTTP server ever, no JavaScript needed.
You'd probably have to go out of your way to avoid logging this. I've always seen such logs enabled by default when setting up web servers.
People not intending to follow it is the real reason not to bother, but it's trivial to track who downloaded the file and then hit something they were asked not to.
Like, 10 minutes work to do right. You don't need js to do it at all.
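That ten-minute version can be sketched as one pass over the access log: note who fetched robots.txt, then flag anyone who went on to request a disallowed path. The log format here is a simplified list of (ip, path) pairs and the function name is made up; real logs would need parsing first:

```python
def robots_violators(log, disallowed):
    """Return IPs that downloaded robots.txt and then hit a disallowed path.

    log        -- iterable of (ip, path) tuples, in chronological order
    disallowed -- set of paths listed under Disallow: in robots.txt
    """
    fetched_robots = set()
    violators = set()
    for ip, path in log:
        if path == "/robots.txt":
            fetched_robots.add(ip)
        elif path in disallowed and ip in fetched_robots:
            violators.add(ip)  # they read the rules, then broke them
    return violators
```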
As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.
Honestly it seems like in all aspects of society the social contract is being ignored these days, that's why things seem so much worse now.
It's abuse, plain and simple.
Governments could do something about it, if they weren't overwhelmed by bullshit from bullshit generators instead and led by people driven by their personal wealth.
these days
When, at any point in history, have people acknowledged that there was no social change or disruption and everyone was happy?
Well, the Trump era has shown that ignoring social contracts and straight-up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y'know.
Only if you're already rich or in the right social circles though. Everyone else gets fined/jail time of course.
Meh maybe. I know plenty of people who get away with all kinds of crap without money or connections.
The open and free web is long dead.
just thinking about robots.txt as a working solution to people that literally broker in people's entire digital lives for hundreds of billions of dollars is so ... quaint.
It's up there with Do-Not-Track.
Completely pointless because it's not enforced
Do-Not-Track, AKA, "I've made my browser fingerprint more unique for you, please sell my data"
I bet at least one site I've visited in my lifetime has enforced it
you're jaded. me, too. but you're jaded.
i prefer "antiqued" but yes
I would be shocked if any big corpo actually gave a shit about it, AI or no AI.
if exists("/robots.txt"):
no it fucking doesn't
Robots.txt is in theory meant to be there so that web crawlers don't waste their time traversing a website in an inefficient way. It's there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.
Yeah, I always found it surprising that everyone just agreed to follow a text file on a website telling them how to act. It's one of the most significant poorly-thought-out parts of browsing still around from pretty much the beginning.
Alternative title: Capitalism doesn't care about morals and contracts. It wants to make more money.
Exactly. Capitalism spits in the face of the concept of a social contract, especially if companies themselves didn't write it.
Capitalism, at least in a laissez-faire marketplace, operates on a social contract; fiat money is an example of this. The market decides, the people decide. Are there ways to amass enough money to make people turn a blind eye? For sure, but all systems have their ways to amass power, no matter what.
I'd say that historical evidence directly contradicts your thesis. Were it factual, times of minimal regulation would be times of universal prosperity. Instead, they are the time of robber-barons, company scrip that must be spent in company stores, workers being massacred by hired thugs, and extremely disparate distribution of wealth.
No. Laissez-faire capitalism has only ever consistently benefitted the already wealthy and sociopaths happy to ignore the social contract for their own benefit.
You said "a social contract". Capitalism operates on one. "The social contract" as you presumably intend to use it here is different. Yes, capitalism allows those with money to generate money, but a disproportionate distribution of wealth is not a violation of a social contract. I'm not arguing for deregulation, FAR from it, but the social contract is there. If a corporation is doing something too unpopular then people don't work for them and they cease to exist.
If a corporation is doing something too unpopular then people don't work for them and they cease to exist.
Unfortunately, this is not generally the case. In the US, for example, the corporation merely engages in legalized bribery to ensure that people are dependent upon it (ex. limiting healthcare access, erosion of social safety nets) and don't have a choice but to work for them or die. Disproportionate distribution of wealth may not by itself be a violation of the social contract, but it gives the wealthy extreme leverage to use in coercing those who are not wealthy and further eroding protections against bad actors. This has been shown historically to be a self-reinforcing cycle that requires that the wealthy be forced to stop.
Yes, regulations should be in place, but the "legalized bribery" isn't forcing people; it's just easier to stick with the status quo than to change it. They aren't forced to die, it's just a lot of work not to. The social contract is there, it's just one we don't like.
Capitalism is a concept, it can't care if it wanted and it even can't want to begin with. It's the humans. You will find greedy, immoral ones in every system and they will make it miserable for everyone else.
Capitalism is the widely accepted self-serving justification those people give for their acts.
The real problem is in the "widely accepted" part: a sociopath killing an old lady and justifying it because "she looked at me funny" wouldn't be widely accepted, and society would react in a suitable way. But if said sociopath scammed the old lady's pension fund because (and this is a typical justification in investment banking) "the opportunity was there, and if I didn't do it somebody else would've, so better it be me and get the profit", it's deemed "acceptable" and society does not react in a suitable way.
Mind you, Society (as in, most people) might actually want to react in a suitable way, but the structures in our society are such that the Official Power Of Force in our countries is controlled by a handful of people who got there with crafty marketing and backroom plays, and those deem it "acceptable".
People will always find justifications to be assholes. Capitalism harvested that energy and unleashed its full potential, with rather devastating consequences.
Sure, but think-structures matter. We could have a system that doesn't reward psychopathic business choices (as much), while still improving our lives bit by bit. If the system helps a bit with making the right choices, that would matter a lot.
That's basically what I wrote: (free) market economy, especially in combination with credit-based capitalism, gives those people a perfect system to thrive in. This seems to result in very fast progress and immense wealth, which is not distributed very equally. Then again, I prefer Bezos and Zuckerberg as CEOs rather than as politicians or warlords. Dudes with big egos and ambitions need something productive to work on.
It's deemed "acceptable"? A sociopath scamming an old lady's pension is basically the "John Wick's dog" moment that leads to the insane death-filled warpath in recent movie The Beekeeper.
This is the kind of edgelord take that routinely expects worse than the worst of society with no proof to their claims.
This is the kind of shit I saw from the inside in Investment Banking before and after the 2008 Crash.
None of those assholes ever gets prison time for the various ways in which they abuse markets and even insider info for swindling, among others, pension funds; so de facto the society we have, with the power structures it has, accepts it.
Most every other social contract has been violated already. If they don't ignore robots.txt, what is left to violate?? Hmm??
It's almost as if leaving things to social contracts instead of regulating them is bad for the layperson...
Nah fuck it. The market will regulate itself! Tax is theft and I don't want that raise or I'll get in a higher tax bracket and make less!
This can actually be an issue for poor people, not because of tax brackets but because of income-based assistance cutoffs. If a $1/hr raise throws you above those cutoffs, that extra $160 could cost you $500 in food assistance, $5-$10/day for school lunch, or get you kicked out of government-subsidized housing.
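The benefits-cliff arithmetic in that comment can be made concrete. A quick sketch using the commenter's rough figures (all numbers are their examples, not actual program rules):

```python
# A $1/hr raise over a ~160-hour work month, vs. income-based
# assistance that cuts off entirely above the new income level.
extra_income = 1 * 160            # +$160/month from the raise
lost_food_aid = 500               # food assistance cut off
lost_school_lunch = 7.50 * 20     # ~$5-$10/day over ~20 school days
net_change = extra_income - lost_food_aid - lost_school_lunch
# The "raise" leaves the family several hundred dollars worse off per month.
```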
Yet another form of persecution that the poor actually suffer and the rich pretend to.
God the number of people Iāve heard say this over the years is nuts.
And then the companies hit the "trust thermocline", customers leave them in droves and companies wonder how this could've happened.
I got it was sarcasm, but it's always good to add a /s just in case
Yea, because authoritarianism is well known to be sooooo good for the layperson.
Ah yes, equal protection under the law... the true hallmark of an authoritarian regime.
We need laws mandating respect for robots.txt. This is what happens when you don't codify stuff.
AI companies will probably get a free pass to ignore robots.txt even if it were enforced by law. That's what they're trying to do with copyright and it looks likely that they'll get away with it.
It's a bad solution to a problem anyway. If we are going to legally mandate a solution I want to take the opportunity to come up with an actually better fix than the hacky solution that is robots.txt
Turning that into a law is ridiculous; you really can't consider it more than advisory unless you enforce it with technical means. For example, put it behind a login or captcha if you want only humans to see it.
Are you aware of what "unlisted" means?
Yes, and there's also no law against calling an unlisted phone number
Also, we already had this battle with robots.txt. In the beginning, search engines wouldn't honor it either, because they wanted the competitive advantage of more info, and websites trusted it too much and tried to wall off too much info that way.
There were complaints, bad pr, lawsuits, call for a law
It's no longer the Wild West:
- search engines are mature and generally honor robots.txt
- websites use rate limiting to conserve resources, and user logins to fence off data there's a reason to fence off
- truce: neither side is as greedy
there is no such law nor is that reasonable
There's also no law against visiting an unlisted webpage? What?
Sounds like the type of thing that would either be unenforceable or profitable to violate compared to the fines.
Why? What would you like to achieve and how would that help?
I hope not; laws tend to get outdated really fast. Who knows, robots.txt might not even be used in the future, and it'd just be there taking up space for legal reasons.
robots.txt is a 30 year old standard. If we can write common sense laws around things like email and VoIP, we can do it for web standards too.
You can describe the law in a similar way to a specification, and you can make it as broad as needed. Something like the file name shouldn't ever come up as an issue.
The law can be broad with allowances to define specifics by decree, executive order or the equivalent.
robots.txt has been an unofficial standard for 30 years, and it's augmented with sitemap.xml to help index uncrawlable pages and Schema.org to expose contents for the Semantic Web. I'm not saying it shouldn't be a law, but suggesting that norms might change is a pretty weak counterargument, man.
We don't need new laws we just need enforcement of existing laws. It is already illegal to copy copyrighted content, it's just that the AI companies do it anyway and no one does anything about it.
Enforcing respect for robots.txt doesn't matter because the AI companies are already breaking the law.
I think the issue is that existing laws don't clearly draw a line that AI can cross. New laws may very well be necessary if you want any chance at enforcement.
And without a law that defines documents like robots.txt as binding, enforcing respect for it isn't "unnecessary", it is impossible.
I see no logic in complaining about lack of enforcement while actively opposing the ability to meaningfully enforce.
Copyright law in general needs changing though that's the real problem. I don't see the advantage of legally mandating that a hacky workaround solution becomes a legally mandated requirement.
Especially because there are many legitimate reasons to ignore robots.txt, including it being misconfigured, or its having been set up only for search engines when your bot isn't a search-engine crawler.
All my scraping scripts go to shit... please no, I need automation to live...
They didn't violate the social contract, they disrupted it.
True innovation. So brave.
I explicitly have my robots.txt set to block AI crawlers, but I don't know if anyone else will observe the protocol. They should offer tools I can submit a sitemap.xml to, so I can find out whether I've been parsed. Until they bother to address this, I can only assume their intent is hostile, and unless someone serious builds a honeypot and exposes the tooling for us to deploy at large, my options are limited.
The funny (in a "wtf", not "haha", sense) thing is, individuals such as security researchers have been charged under digital trespassing laws for things like accessing publicly available systems and changing a number in the URL to reach data that normally wouldn't be exposed, even after doing responsible disclosure.
Meanwhile, companies completely ignore the standards that say "you are not allowed to scrape this data" and then use OUR content/data to build up THEIR datasets, including for AI etc.
That's not a "violation of a social contract" in my book, that's violating the terms of service for the site and essentially infringement on copyright etc.
No consequences for them though. Shit is fucked.
Remember Aaron Swartz
Corporations are people except when it comes to liability. Compare the consequences of stealing several thousand dollars from someone by fraud vs. stealing several thousand dollars from someone by fraud as an LLC.
Just thought of a nasty hack the browser makers (or hackers) could use to scrape unlisted sites - by surreptitiously logging user browser history for a crawl list
Perhaps some web extensions already do this and phone home about it.
While there are some extensions that do this, last I saw Google didn't use Chrome for populating Search:
Strong "the constitution is a piece of paper" energy right there
Hmm, I thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.
You cannot simply block crawlers lol
hide a link no one would ever click. if an ip requests the link, it's a ban
Except that it'd also catch people who use accessibility devices and might see the link anyway, or who use the keyboard to navigate a site instead of a mouse.
I don't know, maybe there's a canvas trick. I'm not a webdev, so I'm a bit out of my depth, mostly guessing and remembering 20-year-old technology.
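No canvas needed; the classic version is plain HTML and CSS. A hedged sketch of a trap link that tries to stay invisible to sighted users, keyboard navigation, and screen readers alike (the path is made up, and aria-hidden/tabindex only mitigate, not eliminate, the accessibility concern raised above):

```html
<!-- Trap link: off-screen for sighted users, skipped by Tab order
     (tabindex="-1"), hidden from screen readers (aria-hidden="true"),
     and flagged rel="nofollow" so polite crawlers get extra warning. -->
<a href="/honeytrap.html"
   style="position:absolute; left:-9999px;"
   tabindex="-1"
   aria-hidden="true"
   rel="nofollow">do not follow</a>
```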
If it weren't so difficult and didn't require so much effort, I'd rather have clicking the link cause the server to switch to serving up poisoned data -- stuff that will ruin an LLM.
Visiting /enter_spoopmode.html would choose a theme and mangle the text of any page you next go to accordingly (think search-and-replace with swear words or Santa Claus).
It would also show a banner letting the user know they are in spoop mode, with a JavaScript button to exit the mode, where the AJAX request URL is obfuscated (think base64).
The banner sits at the bottom of the HTML document (not necessarily the screen itself) and/or inside unusual/normally ignored tags.
Would that be effective? A lot of poisoning seems targeted to a specific version of an LLM, rather than being general.
Like how the image poisoning programs only work for some image generators and not others.
Detecting crawlers can be easier said than done.
i mean yeah, but at a certain point you just have to accept that it's going to be crawled. The obviously negligent ones are easy to block.
There are more crawlers than I have fucks to give; you'll be in a pissing match forever. robots.txt was supposed to be the norm for telling crawlers what they can and cannot access. It's not on you to block them. It's on them, and it's sadly a legislative issue at this point.
I wish it weren't, but legislative fixes are always the most robust and the most complied with.
Yes, but there's a point where it's blatantly obvious, and I can't imagine it's hard to get rid of the obviously offending ones. Respectful crawlers are going to be imitating humans, so who cares; disrespectful crawlers will DDoS your site, and blocking that can't be that hard to implement.
Though if we're talking "hey, please don't scrape this particular data", yeah, nobody was ever respecting that lol.
No Idea why you're getting downvotes, in my opinion it was very eloquently said
Why not blame the companies? After all, they are the ones doing it, not the boomer politicians.
And in the long term they are the ones that risk being "punished"; just imagine people getting tired of this shit and starting to block them at the firewall level...
Because the politicians also created the precedent that anything you can get away with goes. They made the game, defined the objective, and then didn't adapt quickly so that they and their friends would have a shot at cheating.
There is absolutely no narrative of "what can you do for your country" anymore. It's been replaced by the mottos of "every man for himself" and "get while the getting's good".
What social contract? When sites regularly have a robots.txt that says "only Google may crawl", and are effectively helping enforce a monolopy, that's not a social contract I'd ever agree to.
I had a one-eared rabbit. He was a monolopy.
Sounds like a Pal name lol
Only if its model is a Lopunny missing an ear
I was thinking of a short lil bunny wearing a top hat and monocle with one ear sticking out of the center of the top hat but that works too
When sites regularly have a robots.txt that says "only Google may crawl"
Is that actually true?
If so, why would they do that?
This is the best summary I could come up with:
If you hosted your website on your computer, as many people did, or on hastily constructed server software run through your home internet connection, all it took was a few robots overzealously downloading your pages for things to break and the phone bill to spike.
AI companies like OpenAI are crawling the web in order to train large language models that could once again fundamentally change the way we access and share information.
In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, has made high-quality training data one of the internet's most valuable commodities.
You might build a totally innocent one to crawl around and make sure all your on-page links still lead to other live pages; you might send a much sketchier one around the web harvesting every email address or phone number you can find.
The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.
"We recognize that existing web publisher controls were developed before new AI and research use cases," Google's VP of trust Danielle Romain wrote last year.
The original article contains 2,912 words, the summary contains 239 words. Saved 92%. I'm a bot and I'm open source!
This is a very interesting read. It's very rare for people on the internet to agree to follow one thing without being forced.
Loads of crawlers don't follow it; I'm not quite sure why AI companies not following it is anything special. Really, it's just there to stop Google from indexing random internal pages that mess with your SEO.
It barely even works for all search providers.
The Internet Archive does not make a useful villain and it doesn't have money, anyway. There's no reason to fight that battle and it's harder to win.
sigh. Of course they are ...
Wow, I'm shocked! Just like how OpenAI preached "privacy and ethics" and then went deathly silent on data hoarding and scraping, then privatized their stolen scraped data. If they insist their data collection is private, then it needs regular external audits by strict data-privacy firms, just like they do with security.
Also, by the way, they're violating the basic social contract of not working towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who's counting? :)
I don't think glorified predictive text is posing any real danger to all life on Earth.
Until we weave consciousness with machines we should be good.
If it makes you feel any better, my bet is still on nuclear holocaust or complete ecological collapse resulting from global warming to be our undoing. Given a choice, I'd prefer nuclear holocaust. Feels less protracted. Worst option is weaponized microbes or antibiotic-resistant bacteria. That'll take foreeeever.
100%. Autopoietic computronium would be a "best case" outcome, if Earth is lucky! More likely we don't even get that before something fizzles. "The Vulnerable World Hypothesis" is a good paper to read.
That would be a danger if real AI existed. We are very far away from that and what is being called "AI" today (which is advanced ML) is not the path to actual AI. So don't worry, we're not heading for the singularity.
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, examples of strong AI exist only in sci-fi movies.
Weak AI is easily identified by its limitations, but strong AI remains theoretical, since it should have few (if any) limitations.
AGI represents a level of power that remains firmly in the realm of speculative fiction to date.
Ah, I understand you now. You don't believe we're close to AGI. I don't know what to tell you. We're moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You've seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
See the sources above and many more. We don't need one or two breakthroughs, we need a complete paradigm shift. We don't even know where to start with for AGI. There's a bunch of research, but nothing really came out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can't get AGI with just code. We'll need a completely new type of hardware for it.
Before deep learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, would you like to see the addition of continuous learning and agentification? (Or are you looking for "consciousness"?)
That said, I'm all for a new paradigm, and favor Russell's "provably beneficial AI" approach!
Deep learning did not shift any paradigm. It's just more advanced programming. But gen AI is not intelligence. It's just really well trained ML. ChatGPT can generate text that looks true and relevant. And that's its goal. It doesn't have to be true or relevant, it just has to look convincing. And it does. But there's no form of intelligence at play there. It's just advanced ML models taking an input and guessing the most likely output.
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don't actually understand the output they are providing, that's why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won't change the core of what gen AI really is. You can't teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that's not how intelligence works.
Hi! Thanks for the conversation. I'm aware of the 2022 survey referenced in the article. Notably, in only one year's time, expected timelines have advanced significantly. Here is that survey author's latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar)
I consider deep learning to be new and a paradigm shift because only recently have we had the compute to prove its effectiveness. And the Transformer paradigm enabling LLMs is from 2017. I don't know what counts as new for you. (Also, I wouldn't myself call it "programming" in the traditional sense; with neural nets we're more "growing" AI, but you probably know this.)
If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you and think Hinton and others are correct where they show there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don't doubt that additional systems will be developed to improve or add reasoning and planning to AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don't know when the breakthroughs will come. Maybe it's "Tree of Thoughts", maybe it's something else. Things are moving fast. (And we're already at the point where AI is used to improve next-gen AI.)
At any rate, I believe my initial point stands regardless of one's timelines: it is the goal of the top AI labs to create AGI. To me, this is fundamentally a dangerous mission because of concerns raised in papers such as Natural Selection Favors AIs over Humans. (Not to mention the concerns raised in An Overview of Catastrophic AI Risks, many of which apply even to today's systems.)
Cheers and wish us luck!
There are two dangers in the current race to get to AGI and in developing the inevitable ANI products along the way. One is that advancement and profit are the goals while the concern for AI safety and alignment in case of success has taken a back seat (if it's even considered anymore). Then there is number two - we don't even have to succeed in AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it's still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any media well enough to be believable even with other AI testing tools. We're just getting to that point - the latest AI Explained video discussed Gemini and Sora and one of them (I think Sora) fooled some text generation testers into thinking its stories were 100% human created. In short, we don't need full general AI to end up with catastrophe, we'll easily use the "lesser" ones ourselves. Which will really fuel things if AGI comes along and sees what we've done.
This is like saying putting logs on a fire is "one or two breakthroughs away" from nuclear fusion.
LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It's a dead end, and a bad one.
I remember early Zuckerberg comments that put me onto just how douchey corporations could be about exploiting a new resource.
Ah, AI doesn't pose a danger in that way. Its danger is in replacing jobs, people getting fired because of AI, etc.
Those are dangers of capitalism, not AI.
Fair point, but AI is part of it; it exists within the capitalist system. This AI singularity apocalypse is 99% not going to happen, but AI within capitalism will affect us badly.
All progress comes with old jobs becoming obsolete and new jobs being created. It's just natural.
But AI is not going to replace any skilled professionals soon. It's a great tool to add to a professional's arsenal, but non-professionals who use it to completely replace hiring a professional will get what they pay for (and those people would never have actually paid for a skilled professional in the first place; they'd have hired the cheapest outsourced wannabe they could find, after first trying to convince a professional that exposure is worth more than money).
It replaced content writers, and is replacing digital artists and programmers. In a sense they fire inexperienced ones because AI speeds up those with more experience.
Any type of content generated by AI should be reviewed and polished by a professional. If you're putting raw AI output out there directly then you don't care enough about the quality of your product.
For example, there are tons of nonsensical articles on the internet that were obviously generated by AI and their sole purpose is to crowd search results and generate traffic. The content writers those replaced were paid $1/article or less (I work in the freelancing business and I know these types of jobs). Not people with any actual training in content writing.
But besides the tons of prompt crafting and other similar AI support jobs now flooding the market, there's also huge investment in hiring highly skilled engineers to launch various AI related product while the hype is high.
So overall a ton of badly paid jobs were lost and a lot of better paid jobs were created.
The worst part will be when the hype dies and the new trend comes along. Entire AI teams will be laid off to make room for others.
Your worry at least has possible solutions, such as a global VAT funding UBI.
Yeah, I'm not that much for UBI, and I don't see anyone working towards a global VAT. My point was that the worry about AI destroying humanity isn't realistic; it's just sci-fi.
Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would have most every AI researcher. The deep learning revolution came as a shock to most. We don't know when the next breakthrough towards agentification will be, but given the funding now, we should expect it soon. Anyway, if you're ever interested to learn more about unsolved fundamental AI safety problems, the book "Human Compatible" by Stuart Russell is excellent. Also "Uncontrollable" by Darren McKee just came out (I haven't read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; just saying I wouldn't be quick to dismiss it. Cheers.
Like so many terrible ideas, it worked flawlessly for generations
Why should I care about a text file lol
All laws are just words on pieces of paper. Why should you care?
This seems to interestingly prove the point made by the person this is in reply to. Breaking laws comes with consequences. Not caring about a robots.txt file doesn't. But maybe it should.
My angle was more about all rules being social constructs, and said rules being important for the continued operation of society, but that's a good angle too.
Lots of laws don't come with real punishments either, especially if you have money. We can change this too.
Because this tiny text file gives web admins the ability to consent, or not consent, to having their website crawled and potentially saved by bots.
I disallow all in my robots.txt files, and I would like for AI companies to respect this basic social contract and fuck off along with Googlebot.
A config* file
🤣🤣🤣🤣🤣🤣🤣 "robots.txt is a social contract" 🤣🤣🤣🤣🤣🤣🤣
🤡
If you have something to say, actually explain it instead of the obnoxious emoji spam.
It's completely off-topic, but you know 4chan filters? Like, replacing "fam" with "senpai" and stuff like this?
So. It would be damn great if Lemmy had something similar. Except that it would replace emojis, "lol" and "lmao" with "I'm braindead."
That extension is fun, but it doesn't "gently shame" the emoji spammer by replacing their emojis with "I'm braindead." in a way that they themselves would see.
How do I edit someone else's post
Contrary to your blatant assumption, I'm not proposing a system where users can edit each other's posts. I'm just toying with the idea of word filters, not too different from the ones that already exist for slurs in Lemmy.
For example. If you write [insert slur here], it gets replaced with removed. What if it replaced emojis with "I'm braindead."? That's it.
(Before yet another assumer starts doing its shit: the idea is not too serious.)
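The filter idea is a few lines to sketch. This is a hypothetical version of such a Lemmy-style filter; the Unicode ranges below only cover the main emoji blocks and would need extending for a real deployment:

```python
import re

# Hypothetical word filter: replace runs of emojis with a phrase, the same
# way the existing slur filter replaces slurs with "removed".
# These ranges cover the main emoji/symbol blocks only (illustrative).
EMOJI_RUN = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]+")

def filter_comment(text: str) -> str:
    """Replace each contiguous run of emojis with a single phrase."""
    return EMOJI_RUN.sub("I'm braindead.", text)
```

A run of several emojis collapses into one replacement, which keeps the shaming concise.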
Aren't they effective when used sparingly? 😉
They would be less obnoxious if used sparingly, but they wouldn't be effective unless the reason why they're used changed, from graphical echo ("I saw a cat today 🐱") and mood/attitude particles (like you did) to ideographic usage (e.g. "I saw a 🐱 today"). Plus they're still colourful and attention-grabbing drawings within text; they detract attention from the text itself.
Can't distractions from the text sometimes be exactly what you want?
And have you seen an emoji perfectly complete a meme when being used for mood? How about convey lighthearted intent when discussing a serious subject?
They can but most of the time they aren't. That's the key here: most of the time emojis only add noise, to the point that the shreds of legitimate usage (that can be conveyed through other means) don't really justify keeping the cons of the noise.
It isn't like anyone would implement my idea though. I'm mostly acting like that old man screaming at the sky, or something like that.
A lot of post-September 1993 internet users wouldn't understand, I get it.
post-September 1993
you're talking nonsense, for all I know today is Wed 11124 set 1993
I've just converted to polytheism and have begun praying to the Emoji God asking them to use 1,000 origami cry laughing Emojis to smite you down, so that you may die how you lived.
I hope it won't be quick, or painless, but that's up to the Gods now.
I hope it won't be quick, or painless, but that's up to the Gods now.
Considering that we're talking about emojis, it'll definitely be silent.
Put something in robots.txt that isn't supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but can't think of a better solution.
Good old honeytrap. I'm not sure, but I think that it's doable.
Have a honeytrap page somewhere in your website. Make sure that legit users won't access it. Disallow crawling the honeytrap page through robots.txt.
Then if some crawler still accesses it, you could record+ban it as you said... or you could be even nastier and let it do so. Fill the honeytrap page with poison - nonsensical text that would look like something that humans would write.
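The robots.txt side of that honeytrap is tiny. A minimal sketch (the path name is made up; the page itself is linked nowhere a human would find it):

```text
User-agent: *
Disallow: /honeytrap/

# Any client that requests /honeytrap/ either ignored this file
# or parsed it specifically to find "hidden" paths.
```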
I think I used to do something similar with email spam traps. Not sure if it's still around but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not visible to normal users, like in a div that was way on the left side of the screen.
Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
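The off-screen trick described above looks roughly like this (the address and offset are illustrative; note that scrapers which render CSS won't be fooled, so this mostly catches naive regex harvesters):

```html
<!-- Visible to anyone reading the source, pushed off-screen for humans -->
<div style="position:absolute; left:-9999px">
  contact: spamtrap@example.com
</div>
```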
I'd love to see something similar with robots.
Yup, it's the same approach as email spam traps, except for the naughty-list part. But... holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web-crawling businesses.
But with all of the cloud resources now, you can switch through IP addresses without any trouble. Hell, you could just browse by IPv6 and not even worry, with how cheap those are!
Yeah, that throws a monkey wrench into the idea. That's a shame, because "either respect robots.txt or you're denied access to a lot of websites!" is appealing.
That's when Google's browser DRM thing starts sounding like a good idea
Even better. Build a WordPress plugin to do this.
I'm the idiot human that digs through robots.txt and the site map to see things that aren't normally accessible by an end user.
"Help, my website no longer shows up in Google!"
Yeah, this is a pretty classic honeypot method. Basically make something available but inaccessible to the normal user. Then you know anyone who accesses it is not a normal user.
I've even seen this done with Steam achievements before; there was a hidden game achievement which was only available via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.
That's a bit annoying, as it means you can't 100% the game; there will always be one achievement you can't get.
There are tools that just flag you as having gotten an achievement on Steam, you don't even have to have the game open to do it. I'd hardly call that 'hacking'.
Better yet, point the crawler to a massive text file of almost but not quite grammatically correct garbage to poison the model. Something it will recognize as language and internalize, but severely degrade the quality of its output.
Maybe one of the lorem ipsum generators could help.
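A lorem-ipsum-style poison generator is only a few lines. This sketch recombines a tiny hand-written vocabulary into fluent-looking nonsense; a real deployment would draw from a large corpus or a Markov chain instead:

```python
import random

# Sketch of a "poison page" generator: recombines real words into
# grammatical-looking nonsense. Vocabulary here is deliberately tiny.
SUBJECTS = ["The server", "A quiet robot", "Every contract", "This page"]
VERBS = ["negotiates", "harvests", "rewrites", "dissolves"]
OBJECTS = ["the morning protocol", "an honest crawler",
           "seventeen headers", "its own sitemap"]

def poison_sentence(rng: random.Random) -> str:
    """One plausible-looking but meaningless sentence."""
    return f"{rng.choice(SUBJECTS)} {rng.choice(VERBS)} {rng.choice(OBJECTS)}."

def poison_page(n_sentences: int = 200, seed: int = 42) -> str:
    """A deterministic page of junk (same seed -> same page, cache-friendly)."""
    rng = random.Random(seed)
    return " ".join(poison_sentence(rng) for _ in range(n_sentences))

if __name__ == "__main__":
    print(poison_page(5))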
a bad-bot .htaccess trap.
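Such a trap might look roughly like this (Apache 2.4 syntax; the path, script name, and IP are made up, and it assumes AllowOverride permits both mod_rewrite and authorization directives in .htaccess):

```apache
RewriteEngine On
# /honeytrap/ is disallowed in robots.txt and linked nowhere visible;
# route hits to a script that logs the IP and serves junk.
RewriteRule ^honeytrap/ /cgi-bin/log-and-poison.cgi [L]

# IPs the script has already caught:
<RequireAll>
    Require all granted
    Require not ip 203.0.113.7
</RequireAll>
```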
robots.txt is purely textual; you can't run JavaScript or log anything. Plus, anyone who doesn't intend to follow robots.txt wouldn't query it.
If it doesn't get queried that's the fault of the webscraper. You don't need JS built into the robots.txt file either. Just add some line like:
Any client that hits that page (and maybe doesn't pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
server {
    server_name herebedragons.example.com;
    root /dev/random;
}
Nice idea! Better use /dev/urandom though, as that is non-blocking. See here.
That was really interesting. I always used urandom out of habit and wondered what the difference was.
I wonder if Nginx would just load random into memory until the kernel OOM kills it.
I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.
Your second point is a good one, but you absolutely can log the IP that requested robots.txt. That's just a standard part of any HTTP server ever; no JavaScript needed.
You'd probably have to go out of your way to avoid logging this. I've always seen such logs enabled by default when setting up web servers.
People not intending to follow it is the real reason not to bother, but it's trivial to track who downloaded the file and then hit something they were asked not to.
Like, 10 minutes work to do right. You don't need js to do it at all.
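That ten-minute version can be sketched from a standard access log. The log format assumed below is the common log format, and the trap prefix is made up:

```python
import re

# Find IPs that fetched robots.txt and then requested a path it disallows.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+)')

def unruly_ips(log_lines, disallowed_prefix="/honeytrap/"):
    """Return IPs that read robots.txt, then hit a disallowed path anyway."""
    fetched_robots = set()
    offenders = set()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        ip, path = m.group(1), m.group(2)
        if path == "/robots.txt":
            fetched_robots.add(ip)
        elif path.startswith(disallowed_prefix) and ip in fetched_robots:
            # This client read the rules, then broke them.
            offenders.add(ip)
    return offenders
```

The offender set can then feed a firewall blocklist or the poison-serving config.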
Honestly it seems like in all aspects of society the social contract is being ignored these days, that's why things seem so much worse now.
It's abuse, plain and simple.
Governments could do something about it, if they weren't overwhelmed by bullshit from bullshit generators instead and led by people driven by their personal wealth.
When, at any point in history, have people acknowledged that there was no social change or disruption and everyone was happy?
Well, the Trump era has shown that ignoring social contracts and straight-up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y'know.
Only if you're already rich or in the right social circles though. Everyone else gets fined/jail time of course.
Meh maybe. I know plenty of people who get away with all kinds of crap without money or connections.
The open and free web is long dead.
just thinking about robots.txt as a working solution to people that literally broker in people's entire digital lives for hundreds of billions of dollars is so ... quaint.
It's up there with Do-Not-Track.
Completely pointless because it's not enforced
Do-Not-Track, AKA, "I've made my browser fingerprint more unique for you, please sell my data"
I bet at least one site I've visited in my lifetime has enforced it
you're jaded. me, too. but you're jaded.
i prefer "antiqued" but yes
I would be shocked if any big corpo actually gave a shit about it, AI or no AI.
Robots.txt is in theory meant to be there so that web crawlers don't waste their time traversing a website in an inefficient way. It's there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.
Yeah, I always found it surprising that everyone just agreed to follow a text file on a website telling them how to act. It's one of the worst-thought-out yet most significant conventions of browsing, still out there from pretty much the beginning.
Alternative title: Capitalism doesn't care about morals and contracts. It wants to make more money.
Exactly. Capitalism spits in the face of the concept of a social contract, especially if companies themselves didn't write it.
Capitalism, at least in a laissez-faire marketplace, operates on a social contract; fiat money is an example of this. The market decides, the people decide. Are there ways to amass a certain amount of money to make people turn blind eyes? For sure, but all systems have their ways to amass power, no matter what.
I'd say that historical evidence directly contradicts your thesis. Were it factual, times of minimal regulation would be times of universal prosperity. Instead, they are the time of robber-barons, company scrip that must be spent in company stores, workers being massacred by hired thugs, and extremely disparate distribution of wealth.
No. Laissez-faire capitalism has only ever consistently benefitted the already wealthy and sociopaths happy to ignore the social contract for their own benefit.
You said "a social contract". Capitalism operates on one. "The social contract", as you presumably intend to use it here, is different. Yes, capitalism allows those with money to generate money, but a disproportionate distribution of wealth is not a violation of a social contract. I'm not arguing for deregulation, FAR from it, but the social contract is there. If a corporation is doing something too unpopular then people don't work for them and they cease to exist.
Unfortunately, this is not generally the case. In the US, for example, the corporation merely engages in legalized bribery to ensure that people are dependent upon it (ex. limiting healthcare access, erosion of social safety nets) and don't have a choice but to work for them or die. Disproportionate distribution of wealth may not by itself be a violation of the social contract, but it gives the wealthy extreme leverage to use in coercing those who are not wealthy and further eroding protections against bad actors. This has been shown historically to be a self-reinforcing cycle that requires that the wealthy be forced to stop.
Yes, regulations should be in place, but the "legalized bribery" isn't forcing people; it's just easier to stick with the status quo than to change it. They aren't forced to die, it's just a lot of work not to. The social contract is there, it's just one we don't like.
Capitalism is a concept; it couldn't care even if it wanted to, and it can't want to begin with. It's the humans. You will find greedy, immoral ones in every system, and they will make it miserable for everyone else.
Capitalism is the widely accepted self-serving justification of those people for their acts.
The real problem is in the "widely accepted" part: a sociopath killing an old lady and justifying it because "she looked at me funny" wouldn't be "widely accepted", and Society would react in a suitable way. But if said sociopath scammed the old lady's pension fund because (and this is a typical justification in Investment Banking) "the opportunity was there and if I didn't do it somebody else would've, so better it be me and get the profit", it's deemed "acceptable" and Society does not react in a suitable way.
Mind you, Society (as in, most people) might actually want to react in a suitable way, but the structures in our society are such that the Official Power Of Force in our countries is controlled by a handful of people who got there with crafty marketing and backroom plays, and those deem it "acceptable".
People will always find justification to be assholes. Capitalism harvested that energy and unleashed its full potential, with rather devastating consequences.
Sure, but think-structures matter. We could have a system that doesn't reward psychopathic business choices (as much), while still improving our lives bit by bit. If the system helps a bit with making the right choices, that would matter a lot.
That's basically what I wrote: (free) market economy, especially in combination with credit-based capitalism, gives those people a perfect system to thrive in. This seems to result in very fast progress and immense wealth, which is not distributed very equally. Then again, I prefer Bezos and Zuckerberg as CEOs rather than politicians or warlords. Dudes with big egos and ambitions need something productive to work on.
It's deemed "acceptable"? A sociopath scamming an old lady's pension is basically the "John Wick's dog" moment that leads to the insane death-filled warpath in recent movie The Beekeeper.
This is the kind of edgelord take that routinely expects worse than the worst of society with no proof to their claims.
This is the kind of shit I saw from the inside in Investment Banking before and after the 2008 Crash.
None of those assholes ever gets prison time for the various ways in which they abuse markets and even insider info for swindling, amongst others, Pension Funds, so de facto the Society we have, with the power structures it has, accepts it.
Most every other social contract has been violated already. If they don't ignore robots.txt, what is left to violate?? Hmm??
It's almost as if leaving things to social contracts vs regulating them is bad for the layperson... 🤔
Nah fuck it. The market will regulate itself! Tax is theft and I don't want that raise or I'll get in a higher tax bracket and make less!
This can actually be an issue for poor people, not because of tax brackets but because of income-based assistance cutoffs. If $1/hr raise throws you above those cutoffs, that extra $160 could cost you $500 in food assistance, $5-$10/day for school lunch, or get you kicked out of government subsidied housing.
Yet another form of persecution that the poor actually suffer and the rich pretend to.
God the number of people Iāve heard say this over the years is nuts.
And then the companies hit the "trust thermocline", customers leave them in droves and companies wonder how this could've happened.
I got it was sarcasm, but it's always good to add a /s just in case
Yea, because authoritarianism is well known to be sooooo good for the layperson.
Ah yes, equal protection under the law... the true hallmark of an authoritarian regime.
Harrison Bergeron
Fiction can be fun!
We need laws mandating respect of robots.txt. This is what happens when you don't codify stuff.
AI companies will probably get a free pass to ignore robots.txt even if it were enforced by law. That's what they're trying to do with copyright, and it looks likely that they'll get away with it.
It's a bad solution to a problem anyway. If we are going to legally mandate a solution I want to take the opportunity to come up with an actually better fix than the hacky solution that is robots.txt
Turning that into a law is ridiculous: you really can't consider that more than advisory unless you enforce it with technical means. For example, maybe put it behind a login or captcha if you want only humans to see it.
Are you aware of what "unlisted" means?
Yes, and there's also no law against calling an unlisted phone number.
Also, we already had this battle with robots.txt. In the beginning, search engines wouldn't honor it either, because they wanted the competitive advantage of more info, and websites trusted it too much and tried to wall off too much info that way.
There were complaints, bad PR, lawsuits, calls for a law.
It's no longer the Wild West.
There's also no law against visiting an unlisted webpage? What?
Sounds like the type of thing that would either be unenforceable or profitable to violate compared to the fines.
Why? What would you like to achieve and how would that help?
I hope not; laws tend to get outdated real fast. Who knows, robots.txt might not even be used in the future, and it'd just be there taking up space because of law reasons.
robots.txt is a 30 year old standard. If we can write common sense laws around things like email and VoIP, we can do it for web standards too.
You can describe the law in a similar way to a specification, and you can make it as broad as needed. Something like the file name shouldn't ever come up as an issue.
The law can be broad with allowances to define specifics by decree, executive order or the equivalent.
robots.txt has been an unofficial standard for 30 years, and it's augmented with sitemap.xml to help index uncrawlable pages and Schema.org to expose contents for the Semantic Web. I'm not stating it must be a law, but to suggest changing norms as a reason is a pretty weak counterargument, man.
We don't need new laws we just need enforcement of existing laws. It is already illegal to copy copyrighted content, it's just that the AI companies do it anyway and no one does anything about it.
Enforcing respect for robots.txt doesn't matter because the AI companies are already breaking the law.
I think the issue is that existing laws don't clearly draw a line that AI can cross. New laws may very well be necessary if you want any chance at enforcement.
And without a law that defines documents like robots.txt as binding, enforcing respect for it isn't "unnecessary", it is impossible.
I see no logic in complaining about lack of enforcement while actively opposing the ability to meaningfully enforce.
Copyright law in general needs changing though that's the real problem. I don't see the advantage of legally mandating that a hacky workaround solution becomes a legally mandated requirement.
Especially because there are many, many legitimate reasons to ignore robots.txt, including it being misconfigured or it just being set up for search engines when your bot isn't a search engine crawler.
All my scraping scripts will go to shit... please no, I need automation to live...
They didn't violate the social contract, they disrupted it.
True innovation. So brave.
I explicitly have my robots.txt set to block AI crawlers, but I don't know if anyone else will observe the protocol. They should have tools I can submit a sitemap.xml to, to know if I've been parsed. Until they bother to address this, I can only assume their intent is hostile, and if anyone is serious about building a honeypot and exposing the tooling for us to deploy at large, my options are limited.
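For reference, blocking the major AI crawlers looks like this. GPTBot, Google-Extended, and CCBot are the documented user-agent tokens for OpenAI, Google's AI training, and Common Crawl respectively; compliance is entirely voluntary on their side:

```text
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```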
The funny (in a "wtf", not "haha" sense) thing is, individuals such as security researchers have been charged under digital trespassing laws for stuff like accessing publicly available systems and changing a number in the URL in order to get access to data that normally wouldn't be exposed, even after doing responsible disclosure.
Meanwhile, companies completely ignore the standard ways to say "you are not allowed to scrape this data" and then use OUR content/data to build up THEIR datasets, including for AI etc.
That's not a "violation of a social contract" in my book; that's violating the terms of service for the site and essentially infringement of copyright etc.
No consequences for them though. Shit is fucked.
Remember Aaron Swartz
Corporations are people except when it comes to liability. Compare the consequences of stealing several thousand dollars from someone by fraud vs. stealing several thousand dollars from someone by fraud as an LLC.
Just thought of a nasty hack the browser makers (or hackers) could use to scrape unlisted sites: surreptitiously logging user browser history to build a crawl list.
Perhaps some web extensions already do this and phone home about it.
While there are some extensions that do this, last I saw Google didn't use Chrome for populating Search:
https://blogs.perficient.com/2017/03/15/does-google-use-chrome-to-discover-new-urls-for-crawling/
Strong "the constitution is a piece of paper" energy right there
Hmm, I thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.
You cannot simply block crawlers lol
hide a link no one would ever click. if an ip requests the link, it's a ban
Except that would also catch people who use accessibility devices and might see the link anyway, or use the keyboard to navigate a site instead of a mouse.
I don't know, maybe there's a canvas trick. I'm not a webdev, so I'm a bit out of my depth and mostly guessing and remembering 20-year-old technology.
If it weren't so difficult and didn't require so much effort, I'd rather clicking the link cause the server to switch to serving up poisoned data: stuff that will ruin an LLM.
Visiting /enter_spoopmode.html will choose a theme and mangle the text of any page you next go to accordingly (think search-and-replace with swear words or Santa Claus). It will also show a banner letting the user know they are in spoop mode, with a JavaScript button to exit the mode, where the AJAX request URL is obfuscated (think base64). The banner is at the bottom of the HTML document (not necessarily the screen itself) and/or inside unusual/normally ignored tags.
Would that be effective? A lot of poisoning seems targeted to a specific version of an LLM, rather than being general.
Like how the image poisoning programs only work for some image generators and not others.
Well, you can if you know the IPs they come in from, but that's of course the trick.
last i checked, humans don't access every page on a website nearly simultaneously...
And if you imitate a human then honestly who cares.
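That heuristic (too many pages too fast) is easy to sketch; the thresholds below are illustrative and would need tuning per site:

```python
from collections import defaultdict, deque

# Flag clients whose request rate no human would produce.
class RateFlagger:
    def __init__(self, max_requests=30, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent hits

    def observe(self, ip, timestamp):
        """Record one request; return True if the client now looks like a bot."""
        q = self.hits[ip]
        q.append(timestamp)
        # Drop hits that fell out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A crawler that politely spaces its requests will slip under any such threshold, which is the "imitate a human" point above.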
Detecting crawlers can be easier said than done.
i mean yeah, but at a certain point you just have to accept that it's going to be crawled. The obviously negligent ones are easy to block.
There are more crawlers than I have fucks to give; you'll be in a pissing match forever. robots.txt was supposed to be the norm to tell crawlers what they can and cannot access. It's not on you to block them. It's on them, and it's sadly a legislative issue at this point.
I wish it wasn't, but legislative fixes are always the most robust and best complied with.
Yes, but also there's a point where it's blatantly obvious, and I can't imagine it's hard to get rid of the obviously offending ones. Respectful crawlers are going to be imitating humans, so who cares; disrespectful crawlers will DDoS your site, and blocking that can't be that hard to implement.
Though if we're talking "hey, please don't scrape this particular data", yeah, nobody was ever respecting that lol.
Both paragraphs demonstrate gross ignorance
No laws to govern them, so they can do anything they want. Blame boomer politicians, not the companies.
¿Por qué no los dos? (Why not both?)
Fhdj glgllf d''''''ĆĆ·Ļā¢=|Ā¶ fkssb
No Idea why you're getting downvotes, in my opinion it was very eloquently said
Why not blame the companies ? After all they are the ones that are doing it, not the boomer politicians.
And in the long term they are the ones that risk being "punished"; just imagine people getting tired of this shit and starting to block them at a firewall level...
Because the politicians also created the precedent that anything you can get away with goes. They made the game, defined the objective, and then didn't adapt quickly, so that they and their friends would have a shot at cheating.
There is absolutely no narrative of "what can you do for your country" anymore. It's been replaced by the mottos of "every man for himself" and "get while the getting's good".
What social contract? When sites regularly have a robots.txt that says "only Google may crawl", and are effectively helping enforce a monopoly, that's not a social contract I'd ever agree to.
I had a one-eared rabbit. He was a monolopy.
Sounds like a Pal name lol
Only if its model is a Lopunny missing an ear
I was thinking of a short lil bunny wearing a top hat and monocle with one ear sticking out of the center of the top hat but that works too
Is that actually true?
If so, why would they do that?
This is the best summary I could come up with:
If you hosted your website on your computer, as many people did, or on hastily constructed server software run through your home internet connection, all it took was a few robots overzealously downloading your pages for things to break and the phone bill to spike.
AI companies like OpenAI are crawling the web in order to train large language models that could once again fundamentally change the way we access and share information.
In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, have made high-quality training data one of the internet's most valuable commodities.
You might build a totally innocent one to crawl around and make sure all your on-page links still lead to other live pages; you might send a much sketchier one around the web harvesting every email address or phone number you can find.
The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.
"We recognize that existing web publisher controls were developed before new AI and research use cases," Google's VP of trust Danielle Romain wrote last year.
The original article contains 2,912 words, the summary contains 239 words. Saved 92%. I'm a bot and I'm open source!
This is a very interesting read. It is very rare that people on the internet agree to follow one thing without being forced.
Loads of crawlers don't follow it; I'm not quite sure why AI companies not following it is anything special. Really it's just to stop Google indexing random internal pages that mess with your SEO.
It barely even works for all search providers.
The Internet Archive does not make a useful villain and it doesn't have money, anyway. There's no reason to fight that battle and it's harder to win.
sigh. Of course they are ...
Wow, I'm shocked! Just like how OpenAI preached "privacy and ethics" and went dead silent on data hoarding and scraping, then privatized their stolen scraped data. If they insist their data collection is private, then it needs regular external audits by strict data-privacy firms, just like they do with security.
Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who's counting? :)
I don't think glorified predictive text is posing any real danger to all life on Earth.
Until we weave consciousness with machines we should be good.
If it makes you feel any better, my bet is still on nuclear holocaust or complete ecological collapse resulting from global warming to be our undoing. Given a choice, I'd prefer nuclear holocaust. Feels less protracted. Worst option is weaponized microbes or antibiotic-resistant bacteria. That'll take foreeeever.
100%. Autopoietic computronium would be a "best case" outcome, if Earth is lucky! More likely we don't even get that before something fizzles. "The Vulnerable World Hypothesis" is a good paper to read.
That would be a danger if real AI existed. We are very far away from that and what is being called "AI" today (which is advanced ML) is not the path to actual AI. So don't worry, we're not heading for the singularity.
I request sources :)
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
Boucher, Philip (March 2019). How artificial intelligence works
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
Ah, I understand you now. You don't believe we're close to AGI. I don't know what to tell you. We're moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You've seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
See the sources above and many more. We don't need one or two breakthroughs, we need a complete paradigm shift. We don't even know where to start for AGI. There's a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can't get AGI with just code. We'll need a completely new type of hardware for it.
Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, you would like to see the addition of continuous learning and agentification? (Or are you looking for "consciousness"?)
That said, I'm all for a new paradigm, and favor Russell's "provably beneficial AI" approach!
Deep learning did not shift any paradigm. It's just more advanced programming. But gen AI is not intelligence. It's just really well trained ML. ChatGPT can generate text that looks true and relevant. And that's its goal. It doesn't have to be true or relevant, it just has to look convincing. And it does. But there's no form of intelligence at play there. It's just advanced ML models taking an input and guessing the most likely output.
Here's another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don't actually understand the output they are providing, that's why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won't change the core of what gen AI really is. You can't teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that's not how intelligence works.
Hi! Thanks for the conversation. I'm aware of the 2022 survey referenced in the article. Notably, in only one year's time, expected timelines have advanced significantly. Here is that survey author's latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar)
I consider Deep Learning to be new and a paradigm shift because only recently have we had the compute to prove its effectiveness. And the Transformer paradigm enabling LLMs is from 2017. I don't know what counts as new for you. (Also, I wouldn't myself call it "programming" in the traditional sense; with neural nets we're more "growing" AI, but you probably know this.)
If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you and think Hinton and others are correct where they show there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don't doubt that additional systems will be developed to improve or add reasoning and planning to AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don't know when the breakthroughs will come. Maybe it's "Tree of Thoughts", maybe it's something else. Things are moving fast. (And we're already at the point where AI is used to improve next-gen AI.)
At any rate, I believe my initial point remains regardless of one's timelines: it is the goal of the top AI labs to create AGI. To me, this is fundamentally a dangerous mission because of concerns raised in papers such as Natural Selection Favors AIs over Humans. (Not to mention the concerns raised in An Overview of Catastrophic AI Risks, many of which apply to even today's systems.)
Cheers and wish us luck!
There are two dangers in the current race to get to AGI and in developing the inevitable ANI products along the way. One is that advancement and profit are the goals while the concern for AI safety and alignment in case of success has taken a back seat (if it's even considered anymore). Then there is number two - we don't even have to succeed in AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it's still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any media well enough to be believable even with other AI testing tools. We're just getting to that point - the latest AI Explained video discussed Gemini and Sora and one of them (I think Sora) fooled some text generation testers into thinking its stories were 100% human created. In short, we don't need full general AI to end up with catastrophe, we'll easily use the "lesser" ones ourselves. Which will really fuel things if AGI comes along and sees what we've done.
This is like saying putting logs on a fire is "one or two breakthroughs away" from nuclear fusion.
LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It's a dead end, and a bad one.
I remember early Zuckerberg comments that put me onto just how douchey corporations could be about exploiting a new resource.
Ah, AI doesn't pose a danger in that way. Its danger is in replacing jobs, people getting fired because of AI, etc.
Those are dangers of capitalism, not AI.
Fair point, but AI is part of it; I mean, it exists in a capitalist system. This AI Singularity apocalypse is almost certainly not going to happen, but AI within capitalism will affect us badly.
All progress comes with old jobs becoming obsolete and new jobs being created. It's just natural.
But AI is not going to replace any skilled professionals soon. It's a great tool to add to professionals' arsenal, but non-professionals who use it to completely replace hiring a professional will get what they pay for (and those people would have never actually paid for a skilled professional in the first place; they'd have hired the cheapest outsourced wannabe they could find; after first trying to convince a professional that exposure is worth more than money)
It replaced content writers, and it's replacing digital artists and programmers. In a sense, companies fire inexperienced ones because AI speeds up those with more experience.
Any type of content generated by AI should be reviewed and polished by a professional. If you're putting raw AI output out there directly then you don't care enough about the quality of your product.
For example, there are tons of nonsensical articles on the internet that were obviously generated by AI and their sole purpose is to crowd search results and generate traffic. The content writers those replaced were paid $1/article or less (I work in the freelancing business and I know these types of jobs). Not people with any actual training in content writing.
But besides the tons of prompt crafting and other similar AI support jobs now flooding the market, there's also huge investment in hiring highly skilled engineers to launch various AI related product while the hype is high.
So overall a ton of badly paid jobs were lost and a lot of better paid jobs were created.
The worst part will be when the hype dies and the new trend comes along. Entire AI teams will be laid off to make room for others.
Your worry at least has possible solutions, such as a global VAT funding UBI.
Yeah, I'm not that much for UBI, and I don't see anyone working towards a global VAT. My point was that the worry about AI destroying humanity isn't realistic; it's just sci-fi.
Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would have most every AI researcher. The deep learning revolution came as a shock to most. We don't know when the next breakthrough towards agentification will come, but given the funding now, we should expect it soon. Anyway, if you're ever interested to learn more about unsolved fundamental AI safety problems, the book "Human Compatible" by Stuart Russell is excellent. Also, "Uncontrollable" by Darren McKee just came out (I haven't read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; I just wouldn't be quick to dismiss it. Cheers.
Seems relevant.
https://www.notebookcheck.net/UPS-lays-off-12-000-managers-as-AI-replaces-jobs.802229.0.html
good. robots.txt was always a bad idea
Like so many terrible ideas, it worked flawlessly for generations
Why should I care about a text file lol
All laws are just words on pieces of paper. Why should you care?
This interestingly seems to prove the point made by the person you're replying to. Breaking laws comes with consequences. Not caring about a robots.txt file doesn't. But maybe it should.
My angle was more that all rules are social constructs, and that said rules are important for the continued operation of society, but that's a good angle too.
Lots of laws don't come with real punishments either, especially if you have money. We can change this too.
Because this tiny text file gives web admins the ability to consent or not consent to having their website crawled and potentially saved by bots.
I disallow all in my robots.txt files, and I would like for AI companies to respect this basic social contract and fuck off along with Googlebot.
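For anyone curious, the "disallow all" setup mentioned above is just two lines. robots.txt is a plain-text convention (standardized as RFC 9309), so honoring it is entirely voluntary on the crawler's part:

```text
User-agent: *
Disallow: /
```

A well-behaved crawler fetches /robots.txt before anything else and skips everything this file disallows; the entire point of this thread is that nothing technically forces it to.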
A config* file 😁
🤣🤣🤣🤣🤣🤣🤣 "robots.txt is a social contract" 🤣🤣🤣🤣🤣🤣🤣 🤡
If you have something to say, actually explain it instead of the obnoxious emoji spam.
It's completely off-topic, but you know 4chan filters? Like, replacing "fam" with "senpai" and stuff like this?
So. It would be damn great if Lemmy had something similar. Except that it would replace emojis, "lol" and "lmao" with "I'm braindead."
https://addons.mozilla.org/en-US/firefox/addon/word-replacer-max/
That extension is fun, but it doesn't "gently shame" the person spamming emojis by replacing their emojis with "I'm braindead" in a way that they themselves would see.
Contrary to your blatant assumption, I'm not proposing a system where users can edit each other's posts. I'm just toying with the idea of word filters, not too different from the ones that already exist for slurs in Lemmy.
For example. If you write [insert slur here], it gets replaced with removed. What if it replaced emojis with "I'm braindead."? That's it.
(Before yet another assumer starts doing its shit: the idea is not too serious.)
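(Also not too serious: the filter being toyed with above is only a few lines of regex. This is just a sketch, assuming a simple substitution pass; the emoji ranges are rough and not exhaustive, and Lemmy's actual slur filter works differently.)

```python
import re

# Rough emoji ranges; real emoji detection needs the full Unicode emoji data (UTS #51).
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]+")
FILLER = re.compile(r"\b(?:lol|lmao)\b", re.IGNORECASE)

def word_filter(text: str, replacement: str = "I'm braindead.") -> str:
    """Replace emoji runs and 'lol'/'lmao' with a fixed phrase, 4chan-filter style."""
    return FILLER.sub(replacement, EMOJI.sub(replacement, text))
```

So `word_filter("I saw a cat today 🐱")` would come back as `I saw a cat today I'm braindead.`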
Aren't they effective when used sparingly 😊
They would be less obnoxious if used sparingly, but they wouldn't be effective unless the reason why they're used changed, from graphical echo ("I saw a cat today 🐱") and mood/attitude particles (like you did) to ideographic usage (e.g. "I saw a 🐱 today"). Plus they're still colourful and attention-grabbing drawings within text; they detract attention from the text itself.
Can't distractions from the text sometimes be exactly what you want?
And have you seen an emoji perfectly complete a meme when being used for mood? How about convey lighthearted intent when discussing a serious subject?
They can be, but most of the time they aren't. That's the key here: most of the time emojis only add noise, to the point that the shreds of legitimate usage (which can be conveyed through other means) don't really justify keeping the cons of the noise.
It isn't like anyone would implement my idea, though. I'm mostly acting like that old man screaming at the sky, or something like that.
š“š£ļøš£š
That would be amazing.
A lot of post-September 1993 internet users wouldn't understand, I get it.
you're talking nonsense, for all I know today is Wed 11124 Sep 1993
I've just converted to polytheism and have begun praying to the Emoji God asking them to use 1,000 origami cry laughing Emojis to smite you down, so that you may die how you lived.
I hope it won't be quick, or painless, but that's up to the Gods now.
Considering that we're talking about emojis, it'll definitely be silent.
Silent, but deadly.