The (annotated) case for a "big fedi"

thenexusofprivacy@lemmy.world to Fediverse@lemmy.world – 55 points –
privacy.thenexus.today

A response to Evan Prodromou's "Big Fedi, Small Fedi"



I have seen people say the Fedi is fine as it is, so he's not mistaken about that. The annotations are misleading and make it seem like he just wants the Fedi to be unsafe.

I have to preface my comment with the fact that I'm not someone who would be targeted for harassment because of my identity, so I can't fully appreciate the distress a lot of people who are part of minority groups have to go through on platforms like X, Reddit, or Meta's.

But I hate the argument that we can’t/shouldn’t federate with them because they are nazis, transphobes, etc. 99.9% of people on those platforms are not that.

As the author said, there are people like that already on the fediverse. We have tools to block them at the user or moderator level. We have the ability to migrate to instances with stricter or more lax moderation.

It's the nazi bar analogy. The moment you allow nazis in, they invite their friends, and then you can't kick them out without causing big trouble. The solution is not to allow nazis in the first place.

And the issue with the "99% of people" argument is that it doesn't matter what percentage of the population are decent people. Once there are enough of them, the network effect takes over and the platform becomes too big to fail.

Do you know how many people are dictators in dictatorships? One. The leader. The rest are subjects. But that single leader is enough to make everyone's lives shitty.

So it doesn't matter what percentage of Threads/Facebook users are nazis; by being there they support the nazis they can't kick out.

The decentralized fediverse model isn't perfect, but defederation has worked so far at keeping nazis at bay. They have their little corner where they can talk all the racist shit they want, but they can't harm users of instances that have blocked them.

The real question is how much money Zuck plans to invest in moderating hate speech. I can't be sure how much, but based on precedent, I can say it's a pretty small figure. That IS a problem.

Indeed, there are lots of people like that already on the fediverse, and blocking entire instances is a blunt but powerful tool that well-moderated fediverse instances currently rely on for protection. Today, people on instances whose admins and moderators don't block instances with multiple badly-behaving people have to deal with a lot more harassment and hate speech than people on instances that do. So we'll certainly see a situation where some instances block Threads and others don't. The open question, though, is how many instances will decide to also block instances that federate with Threads -- just as many instances decided to block instances that federated with Gab.

It's not that he wants the fediverse to be unsafe. It's more that the Big Fedi beliefs he describes for the fediverse -- everybody having an account there (which by definition includes Nazis, anti-trans hate groups, etc.), relying on the same kinds of automated moderation tools that haven't led to safety on other platforms -- lead to a fediverse that's unsafe for many.

And sure there are some people who say Fedi is fine as it is. But that's not the norm for people who disagree with the "Big Fedi" view he sketches. It's like if somebody said "People who want to federate with Threads are all transphobic." There are indeed some transphobic people who want to federate with Threads -- We Distribute just reported on one -- but claiming that's the typical view of people who want to federate with Threads would be a mischaracterization.

Yes, but your response is misleading: it insinuates that we have ways to keep those people off of the Fediverse, and we don't. It's not possible; all of the kinds of people you're saying he wants on the Fediverse are already on the Fediverse. Also, small doesn't equal safer. The Fediverse was small until last year, and for years minorities have dealt with racism and harassment. What he said is ultimately right: we need to make the tools. With better tools people can largely have their preferred experiences; it doesn't have to be an either/or situation.

I agree that small doesn't equal safer; in other articles I've quoted Mekka as saying that for many Black Twitter users there's more racism and more Nazis on the fediverse than on Twitter. And I agree that better tools will be good. The question is whether, with current tools, growth under the principles of Big Fedi leads to more or less safety. Evan assumes that safety can be maintained: "There may be some bad people too, but we'll manage them." Given that the tools aren't sufficient to manage the bad people today, that seems like an unrealistic assumption to me.

And yes, there are ways to keep these people off the fediverse (although they're not perfect). Gab isn't on the fediverse today because everybody defederated it. OANN isn't on the fediverse today because everybody threatened to defederate the instance that (briefly) hosted them, and as a result the instance decided to enforce its terms of service. There's a difference between Evan's position that he wants them to have accounts on the fediverse, and the alternate view that we don't want them to have accounts on the fediverse (although we may not always be able to prevent it).

OANN and Gab are just two examples of backing down. What about the child porn instances? They are still on the Fediverse; they're just blocked by lots of instances. Using Gab as the example gives people a false sense of safety.

Or, using Gab provides a sense of what's possible.

And child porn is a great example -- and CSAM more generally. Today's fediverse would have less CSAM if the CSAM instances weren't on it. Why hasn't that happened? The reason that many instances give for not blocking the instances that are well-known sources of CSAM is that CSAM isn't the only thing on those instances. And it's true: these instances have lots of people talking about all kinds of things, and only a relatively small number of people spreading CSAM. So not blocking them is completely in alignment with the Big Fedi views Evan articulates: everybody (even CSAM-spreaders) should have an account, and it's more important to have the good (non-CSAM) people on the fediverse than to keep the bad (CSAM-spreading) people off.

A different view is that whoa, even a relatively small number of people spreading CSAM is way too many; today's fediverse would be better if they weren't on it, and if the instances that allow CSAM are providing a haven for them, then those instances shouldn't be on the fediverse either. It seems to me that view would result in less CSAM on the fediverse, which I see as a good thing.