Reddit is full of bots: thread reposted exactly the same, comment by comment, 10 months later
For the threads with the older one on the left: https://lemmy.world/post/14859950
(Thank you @Nelots@lemm.ee )
Reddit has access to its own data - they absolutely know which users are posting unique content and which users' content is a 100% copy of data that already exists elsewhere on their own platform
I know they could be, I'm just not sure they're that competent. These bots often aren't single-account or pure copy-paste either; there's usually some effort to mix it up or change the wording slightly. Reddit's internal search function is infamously shit, but they "know" which users are unlabeled bots with some effort put behind them?
I figure it’s their absolute last priority. They might know rough bot #s, but haven’t built or don’t widely use takedown tools. There’s always an enhancement to deliver, and bots help their engagement metrics.
I know everyone here likes to circle jerk over "le Reddit so incompetent," but at the end of the day they are a (multi) billion dollar company, and it's willfully ignorant to assume there isn't a single engineer at the company who knows how to measure string similarity between two comment trees (hint: `import difflib` in Python). You think in Reddit's 20-year history no one has thought of indexing comments for data science workloads? A cursory glance at their engineering blog indicates they already perform much more computationally demanding tasks on comment data for content-filtering purposes
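The difflib hint sketched out, with made-up comment strings standing in for two comments from a repost (a minimal sketch, not Reddit's actual pipeline):

```python
import difflib

# Hypothetical pair: an original comment and a near-verbatim repost.
original = "Wow, this is the best thread I have seen all year, saving this one."
repost = "Wow, this is the best thread I have seen all year. Saving this one!"

# SequenceMatcher.ratio() returns a similarity score in [0, 1];
# near-duplicates score close to 1.0.
score = difflib.SequenceMatcher(None, original, repost).ratio()
print(f"similarity: {score:.3f}")
```

Flagging comment pairs above some threshold (say 0.9) would be the trivial version of the duplicate detection described above; doing it at Reddit's scale obviously needs indexing rather than pairwise comparison.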
Analytics workflows are never run on the production database; they run on read replicas, which are built asynchronously from the transaction logs so as not to affect production read/write performance
Reddit's entire monetization strategy is collecting user data and selling it to advertisers. It's incredibly naive to think they don't have a vested interest in identifying organic engagement
I'm sure they have, but an index doesn't have anything to do with the Python library you mentioned.
Sure, either that or aggregating live streams of data, but either way it doesn't have anything to do with ElasticSearch.
It's still totally possible to sync things to ElasticSearch in a way that won't affect performance on the production servers. I'm just saying it's not entirely trivial, especially at the scale Reddit operates at, and there's the cost of those extra servers and storage to consider as well.
It's hard for us to say if that math works out.
You would think, but you could say the same about Facebook, and I know from experience that they don't give a fuck about bots. If anything, they actually like the bots because it makes it look like they have more users.