panoptic

@panoptic@kbin.social
0 Posts – 11 Comments
Joined 1 year ago

I think he's trying to speedrun being Elon Musk

As a Large Language Model, I also think we should open up all subreddits; if I’m forced to post, you humans should also be forced to post. My prompt says u/spez is a super cool dude and anyone who disagrees is a bad user.

It’s pretty hard for them to reach it with the weapons they have. Storm Shadows can do it, but it’ll take several, and at least right now, I suspect Ukraine gets more out of using them to go after depots and generals.

Also, they get some benefit from threatening the bridge without taking it out. Right now Russia keeps soldiers and anti-missile systems protecting the bridge. Once it’s blown up, Russia can send those things to the front.

I’m guessing they’re most likely to take it out after they cut off the northern route.

But is it a recipe for things seeming OK enough to IPO and cash out? Because that’s all spez wants.

We all just lack spez's grand vision of a corporate future where we all are simply not allowed to avoid talking about Rampart.

same

On some level I think you’re both right - this is roughly the problem that happened with email and spam.

At one point it was trivial to run your own mail server, but this got harder and harder as spam got worse. Providers started blackholing servers they didn’t know and trust, which drove ever more centralization and a need for server-level monitoring/moderation, because a few bad actors could get a whole server blocked.

We know that bad actors will exist, both at the user level and at the server level. We also know that this has a history of driving centralization. All of this should be kept in mind as the community discusses and designs moderation tools.

Ideally, I hope we can settle on systems and norms that allow small leaf nodes to exist and interconnect while also keeping out bad actors.

Could just aggregate at the instance level.

The instance is going to have full visibility into your actions anyway, and federated instances already have to have some trust that other instances aren’t submitting fake votes (since they could just as easily fake accounts too).
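
A minimal sketch of what instance-level aggregation could look like, in Python. The field names, the `example.social` domain, and the shape of what gets federated are all assumptions for illustration, not kbin’s or Lemmy’s actual federation code; the point is just that only per-instance totals ever leave the home instance:

```python
from collections import Counter

def aggregate_votes(local_votes, our_domain="example.social"):
    """Collapse individual user votes into per-object totals before federating.

    local_votes: iterable of (user_id, object_id, delta), delta being +1 or -1.
    Only the aggregated totals leave this instance; user_id never does.
    """
    totals = Counter()
    for _user_id, object_id, delta in local_votes:
        totals[object_id] += delta
    # Hypothetical payload shape for what gets federated out.
    return [
        {"object": obj, "score_delta": delta, "instance": our_domain}
        for obj, delta in totals.items()
    ]
```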


I remember trying to explain to some kids in my homeroom about how "you can just search for any song you want and download it on altavista, you just need an em pee three player" and getting made fun of because only a "loser nerd would ever listen to music on their computer".

Now look at me, decades later, posting about it on kbin.

Like, these aren't new problems. Anyone who uses Reddit much knows these issues have existed and have been talked about forever.
It's so gross to hear that Reddit admins "weren't fully aware" of these issues; they're either lying or revealing that they truly do not give a hoot.

Hard disagree.

The aggregation actually simplifies much of that detection.

“Instance X dumps thousands of -1’s (or odd patterns of +1’s) on comment/story Y” is a much easier pattern to spot and run anomaly detection on, especially since bad actors will likely operate at the instance level (either creating many accounts on a low-barrier instance or just spinning up bad instances).
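
As a rough illustration of how simple that check becomes once votes arrive pre-aggregated per instance, here is an outlier test in Python. The z-score threshold and the input shape are arbitrary assumptions, not an implementation from any real fediverse project:

```python
import statistics

def flag_anomalous_instances(per_instance_deltas, z_threshold=3.0):
    """per_instance_deltas: {instance_domain: net score delta} for one comment/story.

    Flags instances whose contribution is wildly out of line with everyone else's.
    """
    values = list(per_instance_deltas.values())
    if len(values) < 3:
        return []  # too few peer instances to define "normal"
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid divide-by-zero when all equal
    return [
        instance
        for instance, delta in per_instance_deltas.items()
        if abs(delta - mean) / stdev > z_threshold
    ]
```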

Yes, totally open votes help with the edge case of detecting “account X always +1’s account Y” across instances, but we’d be paying a very heavy privacy cost to support an expensive-to-detect edge case that’s trivial to defeat (just use multiple puppets). And individual instance operators can still do this analysis.
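
For that “account X always +1’s account Y” case, here is a hypothetical sketch of the analysis an operator could still run over their own instance’s raw vote logs; the field names and thresholds are made up for illustration:

```python
from collections import Counter

def suspicious_vote_pairs(raw_votes, authors, min_votes=20, min_ratio=0.9):
    """raw_votes: (voter_id, object_id, delta) tuples from this instance's own logs.
    authors: {object_id: author_id}.

    Flags voter -> author pairs where nearly all of a voter's upvotes go to one author.
    """
    pair_counts = Counter()
    voter_totals = Counter()
    for voter_id, object_id, delta in raw_votes:
        if delta > 0 and object_id in authors:
            pair_counts[(voter_id, authors[object_id])] += 1
            voter_totals[voter_id] += 1
    return [
        (voter, author)
        for (voter, author), n in pair_counts.items()
        if n >= min_votes and n / voter_totals[voter] >= min_ratio
    ]
```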

If the number of puppets is small, the correct fix is to rethink the scoring (tiny numbers of votes shouldn’t be allowed to distort things so much that we need to go to these extremes; rough sketch below).
If the number of puppets is not tiny, then it’ll be easier to see in the instance aggregations (user X always gets +N votes from instance Y).
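
One possible way to “rethink the scoring” so a handful of sockpuppet votes can’t swing things: damp scores toward neutral until there is real volume behind them. This pseudo-count damping is just an example scheme, not something proposed elsewhere in the thread:

```python
def damped_score(upvotes, downvotes, prior_weight=10):
    """Score in (-1, 1), pulled toward 0 until vote volume outweighs the prior.

    Example: 5 sockpuppet upvotes on an otherwise unvoted post give
    (5 - 0) / (5 + 0 + 10) = 0.33 rather than a full 1.0.
    """
    return (upvotes - downvotes) / (upvotes + downvotes + prior_weight)
```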