AI Is Poisoning Reddit to Promote Products and Game Google With 'Parasite SEO'

ugjka@lemmy.world to Technology@lemmy.world – 769 points –
404media.co


Generative AI has really become a poison. It'll be worse once the generative AI is trained on its own output.

Here's my prediction. Over the next couple of decades the internet is going to be so saturated with fake shit and fake people that it'll become impossible to use effectively, like cable television. After this goes on for a while, someone is going to create a fast private internet, like a whole new protocol, and it's going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.

The new 'humans only' internet will be the new streaming and eventually it'll take over the web (until they eventually figure out how to ruin that too). In the meantime, they'll continue to exploit the infested hellscape internet because everybody's grandma and grampa are still on it.

I would rather wade through bots than exist on a fully doxxed Internet.

Yup. I have my own prediction: that humanity will finally understand the wisdom of the PGP web of trust, and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes; it's very intuitive now.
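The web-of-trust idea above can be sketched in a few lines. This is a toy model, not real PGP: I'm assuming each user signs the keys of friends they've verified in person (say, by scanning a QR code of the key fingerprint), and a key counts as trusted if it's reachable from your own key through a short chain of signatures. The names and the `max_hops` cutoff are made up for illustration.

```python
# Toy web-of-trust check: breadth-first search over a signature graph.
from collections import deque

def is_trusted(signatures, me, target, max_hops=3):
    """Return True if `target` is reachable from `me` within `max_hops` signatures.

    signatures: dict mapping a key ID to the set of key IDs it has signed.
    """
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        key, hops = queue.popleft()
        if key == target:
            return True
        if hops == max_hops:
            continue  # don't extend chains beyond the trust horizon
        for signed in signatures.get(key, set()):
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, hops + 1))
    return False

# Example: alice signed bob, bob signed carol, nobody vouches for mallory.
web = {"alice": {"bob"}, "bob": {"carol"}}
print(is_trusted(web, "alice", "carol"))    # True: trusted via a two-hop chain
print(is_trusted(web, "alice", "mallory"))  # False: no signature path
```

Real implementations (GnuPG's trust model, for instance) are much more nuanced, weighting how much you trust each signer rather than treating every signature equally, but the reachability idea is the core of it.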

That would be cool. No bots. Unfortunately, corps, govs and other such mythical demons really want to be able to automate influencing public opinion. So this won't happen until the potential of the Web for such influence is sucked dry. That is, until nobody in their right mind would use it.

That sounds very reasonable as a prediction. I could see it being a pretty interesting black mirror episode. I would love it to stay as fiction though.


You’re two years late.

Maybe not for the reputable ones, that's 2026, but these shysters have been digging out the bottom of the swimming pool for years.

https://theconversation.com/researchers-warn-we-could-run-out-of-data-to-train-ai-by-2026-what-then-216741

New models already train on synthetic data. It's already a solved problem.

Is it really a solution, though, or is it just GIGO?

For example, GPT-4 is about as biased as the medical literature it was trained on, no less biased than its training input, and thereby more inaccurate than humans:

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00225-X/fulltext

All the latest models are trained on synthetic data generated by GPT-4, even the newer versions of GPT-4 itself. OpenAI realized it too late and had to edit their license after Claude was launched. Human-generated data could only get us so far; the recent Phi-3 models, which manage to perform very well for their size (3B parameters), can only achieve this because of synthetic data generated by AI.

I didn't read the paper you mentioned, but recent LLMs have progressed a lot, not just on benchmarks but also when evaluated by real humans.
