"A Billion Nazis at the Table" - The Fediverse model proves contextual moderation by real humans is both easy and affordable. The presence of Nazis on corporate social media implies at least a tacit acceptance of them.

JustinHanagan@kbin.social to Technology@lemmy.world – 590 points –
A Billion Nazis at the Table
staygrounded.online

"If you’ve ever hosted a potluck and none of the guests were spouting antisemitic and/or authoritarian talking points, congratulations! You’ve achieved what some of the most valuable companies in the world claim is impossible."


Just want to take a moment to say…

Thank you, moderators. Sincerely.

There is one complaint that I have about the mods across Lemmy: they seem hesitant to crack down on trolling. This has, in turn, made trolling easy, thanks to the audience Lemmy attracted. Love the mods here, but when someone calls out a troll, maybe don't remove the comment calling out the troll while leaving the troll alone to continue trolling. Your contribution is actively negative if you do this.

Damn, I thought I was the only person who noticed this. It's just like being in elementary school all over again. The bullies run rampant, but any time anyone stands up to the bullies, they get in trouble while the adults (mods) ignore the various abuses the bully gets away with regularly.

Life never really changes, does it?

I've seen very little trolling, definitely not rampant in what I'm subbed to. I have seen a fair amount of people who become offended by silly stuff.

I've seen the main two types of trolls: people who try to cause problems in a subtle way for the sake of entertainment (the good trolls), and the ones that attract downvotes by saying the dumbest shit you've ever read (the bad trolls). I actually enjoy the good ones, it seems like they don't tend to show up unless there's already a flame war going on, and they can be pretty funny. The bad trolls are just assholes in disguise. Sadly the bad trolls tend to get ignored.

The problem is that people don't agree on what counts as dumb shit to say.

It's more that they say it in what appears to be an intentionally inflammatory way. Like, sometimes there are really bad takes that seem trollish on their own, but when someone says it in an inflammatory way, it's hard not to see them as a troll.

Honestly, I say shit that people disagree with and are pissed off about pretty often.

But personally, I only really run into what I'd actually consider trolls on occasion.

More often than I'd like, sure, but I guarantee you there have been people who think I'm a troll just because I have and voice a strong opinion that they disagree with in a way that they can't comprehend or deal with.

Is false consensus somehow related to subjective opinions?

Eh, sometimes. Sometimes there are really bad takes that feel trollish on their own, but most of the time it's a bad opinion stated in a seemingly intentionally inflammatory way.

I've seen people who had profiles that literally professed that they only existed to troll, and their accounts lasted longer than people who said something wrong politically regarding Israel/Palestine.

lol that's wild, trolling on here seems pointless with the low user base.

Not gonna lie I filtered out Israel, Palestine, and hamas when that started, because of the differing political groups being so hard line/obnoxious in the comments.

trolling on here seems pointless with the low user base.

You would think that, right? Some people legit have nothing better to do.

I mean, I know I've got nothing better to do than dick around online, but like... I still want to have real conversations and not just try to upset people. It's really weird, unhealthy, antisocial behavior.

It’s mostly in the politics communities, for all the good that information does you. It’s incredibly blatant, though that doesn’t stop folks from biting the hook given the spectrum of people here.


It's the same groupthink that happened on reddit. Someone says something incendiary, another responds, and depending on the slant of the community, people dog pile. Same shit happens, as you said, in life.

Yeah, I think it's important to keep in mind that the Fediverse doesn't solve any of the problems that come up when a bunch of people talk about stuff they're passionate about. The problem federation solves is the incentivizing and spotlighting of the sorts of toxic behavior we see on corporate social media.

Yup, that's what happens whenever "civility" is the primary metric used for moderation.

Trolls post heinous nonsense, and respond to people in the most insufferable rage-bait-y manner. But if anyone so much as calls them an asshole, they get their comments removed for saying a no-no word.


Defining "trolling" is complex.

People have unpopular opinions that they vehemently defend.. it's not always bait or just being an asshole for the fun of it.

The difference between “notice me, I’m an attention-seeking asshole” and “notice me, my opinion is unpopular” can be difficult to discern, but becomes obvious with context. Most especially when you both point them out and provide the context. As an attention-seeking asshole, it’s a bit frustrating to be told that I can’t recognize someone doing what I do, but far less subtly.

I think just the nature of social media in and of itself creates "trolls"...

Everyone seems to think WE all know everything about everything and we just can't resist the compulsion to tell and "educate" others.

That’s precisely why it can be frustrating to attempt to save someone the headache of engaging with a person speaking in bad faith, only to have the warning removed and to be told “don’t do that again, you’re being rude.” There are quite a few naturally occurring issues with social media, not least of which are trolls, and to hand wave them does little to improve the situation.

Look, all I’m saying is that if I see a streak of people writing dissertations in reply to a visibly disingenuous commenter, my warning might be worth keeping.

Lol... I don't even know what to say to you anymore...

It basically goes back to what I said earlier... People define things differently and believe and are passionate about different things.

"Bad faith" is something that people may not always agree on and could be considered subjective, too.

Cool though... I guess it kind of proves my point actually.

Dude, it’s accounts named “boomer opinions” or “communist git” roleplaying as racists and extermination enthusiasts. I understand your point, you just lack context and a relevant point. Had you asked rather than affirmed, you might have had both.


Rebecca Watson had a nice breakdown of how Wikipedia avoided this:

https://www.youtube.com/watch?v=U9lGAf91HNA

Basically they nipped that shit in the bud and didn't allow it to take root and the Nazis gave up. The ol' anecdote of kicking the polite Nazi out of the bar so it does not become a Nazi bar holds up.

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=U9lGAf91HNA

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

Anyone know if it is possible to get the Piped link in the NewPipe app? I only get the YouTube link when I hit Share.

Disable/delete YouTube in your settings, then go to Settings - Apps - Default apps - Opening links, press on the NewPipe app, and enable opening links. You may have to add supported links manually.

They claim it's impossible because they don't want to lose market share.

They didn't start treating women, black people, LGBT people, the disabled, and countless other minorities as human beings because they thought all human life has intrinsic value, they started treating them like humans because they realized they were leaving money on the table. They realized their profits could be even bigger if they hired people from these groups and aimed advertising at them, they could have everyone's money, not just white people's money.

Now that the real "silent majority" aren't a bunch of backwards fucking racists, companies try to act like they give a shit about various minority groups while really only caring to get the profits they can extract from those communities.

They understand that when they lose customers, those customers turn to other services to spend their money; for right wingers, white supremacists, and authoritarians, that means running off to places like TruthSocial and Xitter.

This is the same thing, they don't value the lives of white supremacists, they value the money in their pockets, and as long as those people have money to spend, they will find excuses to keep taking their money.

The Fediverse easily sidesteps this problem by being volunteer and donation-based, meaning nobody is currently using it to sell to the biggest markets available.

The CEO of the company I used to work for, every time they talked about the inclusivity initiatives, used to say that they were not doing them because it was the moral thing to do, but because it was the thing that brings more returns to the company. I always found it strange that he was so honest about that.

In general they are the same thing. In the broadest terms, what's seen as moral is what society as a whole approves. By definition, some are early adopters and some are late adopters.

By definition, some are early adopters and some are late adopters.

Sociologists even have a term "moral entrepreneur" which means a person or group that leads the adoption of a new moral norm in society.

Yep, it's really as simple as "Why have a handful of markets, when we can have all the markets." It's so odd how it's a combination of abject greed and total disdain for things like inclusivity, but they back it because money.

Partially agree

It really is about the will to take a hard stance. Just look at Twitch. You have hundred-ish concurrent channels with two or three volunteer mods who can't handle "the memes". And then you have stuff like (in their prime) Geek and Sundry, where you had very attractive hosts outright making sex and bondage jokes and chat was pleasant, because the automod settings would nuke any comment that included key words and mods would add the cheeky misspellings as they showed up.
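For the curious, the keyword-automod-plus-mod-maintained-misspellings approach described above can be sketched in a few lines. Everything here (the blocklist, the normalization rules, the class and method names) is a hypothetical illustration, not Twitch's actual AutoMod:

```python
import re

class AutoMod:
    """Toy keyword automod: blocks messages containing normalized blocklist words."""

    def __init__(self, blocked_words):
        # Mods seed the blocklist; entries are normalized so variants match.
        self.blocked = {self._normalize(w) for w in blocked_words}

    @staticmethod
    def _normalize(text):
        # Fold common letter substitutions ("l33t" spellings) to plain letters.
        subs = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                              "5": "s", "7": "t", "@": "a", "$": "s"})
        text = text.lower().translate(subs)
        # Collapse repeated letters ("heeey" -> "hey"), then drop punctuation.
        text = re.sub(r"(.)\1+", r"\1", text)
        return re.sub(r"[^a-z ]", "", text)

    def allows(self, message):
        # True if no word in the normalized message is on the blocklist.
        words = self._normalize(message).split()
        return not any(w in self.blocked for w in words)

    def add_word(self, word):
        # Called by a mod when a new cheeky misspelling shows up in chat.
        self.blocked.add(self._normalize(word))
```

The normalization only catches mechanical variants; genuinely novel misspellings still slip through, which is exactly why the human-in-the-loop `add_word` step matters.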

It really does boil down to wanting the audience. Numbers mean money. Money is good. If you get rid of the chuds, your numbers go down. So you try to "manage" them and only remove the "problematic" users, until the Overton window shifts and you try to ignore all the dog whistles.

Where I disagree is that the fediverse is any better. Lemmy.world is already an example of one instance getting big enough that it has a LOT of influence. And they (as well as other instances) have already had their bursts of mods (and admins...) going batshit insane like it is a vbulletin board in the 00s.

But also? We are seeing the same bullshit we see everywhere. A "good" example is Naomi Wu. She has been in the news cycles periodically for all the "best" reasons (she is clearly being silenced by the CCP, a lot of "maker youtube" is shouting her out as an OG as a result, she had the audacity to speak out about a prominent youtuber who recently imploded making her uncomfortable for the exact reasons said youtuber imploded, she has boobs and doesn't wear a burkha, etc). And I can attest to three of the prominent boards having moderators who think they are doing a good job by removing any mention of that because it "makes people angry" or "doesn't lead to good discussion". Can't acknowledge someone who actually influenced a lot of the design philosophies in the 3d printers we all use on a 3d printing board, because the chuds will get mad. And so forth.

And THAT is the problem. Mods and Admins will decide they want to be influential and want those giant audiences.

You make a fair point, but the difference here is that we can always just go to a new instance that is more desirable. Mods and admins can power trip, but only within their own domain.

And defederation is always an option

Yeah. People should have a right to speak their mind, but on the Fediverse nobody is forced to listen and therein lies the difference, IMO.

Most instances aren't going to defederate from the vast majority of the userbase. The various "we aren't even going to pretend she is a 9000 year old vampire" instances are easy to defederate from because it's almost nobody. But if 80% of the userbase would be lost to get rid of all the dog whistling? I mean... look at Twitter. People will spin their refusal to walk away as an act of defiance and heroism.

And you get that massive userbase by not being strict regarding hate and bigotry.


The only time that a corporation will take action is when it impacts profits.

If the Nazis drive more profits than they lose, they will stay. It's as simple as that.

Any corporate social conscience is just a show. Post some rainbows and say that shutting down Nazis is too difficult, but don't do anything that might reduce the profits that the hate controversies create.

Non-profit platforms like Lemmy can do what is right vs. what is profitable.

Most advertisers don't really want their ads to be shown alongside Nazi content. One thing users can do is to send the advertisers' PR contacts a screenshot of their ad beside someone calling for racial holy war. "Hey, I'm not buying your beans any more because you advertise on Nazi shit" is a pretty clear message.

Anyone that spent a lot of time on the better subreddits knew that already.

edit: I forgot to add - this is probably why Huffman did what he did on reddit - they know it's perfectly possible to properly mod online communities... they don't want them properly modded.

Any community that welcomes bigots is truly welcoming only to bigots.

Any civility rule that is enforced with greater priority than (or in the absence of) a "no bigotry" rule serves only to protect bigots from decent people.

Bigots already have too many places where they are welcome and protected. I'm glad that lemmy (with the exception of certain instances that are largely defederated) has not fallen into the trap that defines too much of social media.

Any civility rule that is enforced with greater priority than (or in the absence of) a “no bigotry” rule serves only to protect bigots from decent people.

There's a saying I think about a lot that goes "The problem with rules is that good people don't need 'em, and bad people will find a way around 'em".

The best thing about human volunteer mods vs automated tools or paid "trust and safety" teams, IMO, is that volunteer humans can better identify when someone is participating in the spirit of a community, because the mods themselves are usually members of the community too.

Easy my ass. It takes an insane amount of man hours to moderate large platforms.

The key word here is "large". From the article:

"[Fediverse] instances don’t generally have any unwanted guests because there’s zero incentive to grow beyond an ability to self-moderate. If an instance were to become known for hosting Nazis —either via malice or an incompetent owner— other more responsible instances would simply de-federate (cut themselves off) from the Nazi instance until they got their shit together. Problem solved, no 'trust and safety' required"

We have more mods per capita than the corpo hellsites, and we don't even have venture capital money funding us.

Corporations don’t aggressively moderate and ban Nazis on their platforms because it would measurably negatively affect their MAU stats, one of the primary metrics social media corps use to report how “good” (read: profitable) their social network platform is.

Meta et al. will NEVER intentionally remove users that push engagement numbers up (regardless of how or what topics are being engaged) unless:

  • they determine it’s more profitable/less harmful to their business to do so
  • they are forced to by a court order

Which is another way the fediverse is better: The success metric is a vibrant, happy community, not MAUs or engagement numbers, so they make decisions accordingly.

Not to mention that because the fediverse doesn't require the collection of analytics it is less expensive to run. Most of the servers at Facebook are used to gather, sift, and deliver usage metrics. Actually serving content is a cheap and largely solved problem.

The success metric is a vibrant, happy community, not MAUs or engagement numbers, so they make decisions accordingly.

YES, well said. An instance is measured by its quality, not its profitability.

Twitter has always encouraged gawking at horrible behavior, and its culture has norms like "ratio" which promote "bad examples" so that they can be publicly shamed.

Let's not be like Twitter.

Well, an instance can choose to behave like twitter, but everyone else can federate with them or not at their discretion.

There's plenty of Nazis in the Fediverse, just not on any instances your instances are federated with.

Isn't that kind of the point?

On the one hand yes, on the other hand that means the Fediverse is involuntarily providing those freaks their own Fedi-Truth-Social.

You'll never be able to keep these people from talking to each other, but you can quarantine them in their own little circles where they cause as little damage as possible to the outside world.

Hopefully... Recent developments look more like they are using unmoderated social media to radicalize themselves and plan real shit.

Yes, even in small groups they can do absolutely horrible things, as they have done in the past. But that doesn't really change if we allow them to have a bigger audience. And in the end, it's also a numbers game. In a group of 100 fascists, the chance of encountering someone who is both motivated and capable of causing major harm to society is smaller than in a group of 10,000.


They could go set up email lists on Google Groups and nobody would ever know.


In my potlucks’ favor though basically everyone who attends them is a member of a group targeted by Nazis.

I think it's a numbers game. If the Fediverse had the numbers, it would be plagued with all the same issues. But it's a little fish in a big pond.

If a Fediverse instance grew so big that it couldn't moderate itself and had a lot of spam/Nazis, presumably other instances would just defederate, yeah? Unless an instance is ad-supported, what's the incentive to grow beyond one's ability to stay under control?


questionable pictures

We need to keep distinguishing "actual, real-life child-abuse material" from "weird/icky porn". Fediverse services have been used to distribute both, but they represent really different classes of problem.

Real-life CSAM is illegal to possess. If someone posts it on an instance you own, you have a legal problem. It is an actual real-life threat to your freedom and the freedom of your other users.

Weird/icky porn is not typically illegal, but it's something many people don't want to support or be associated with. Instance owners have a right to say "I don't want my instance used to host weird/icky porn." Other instance owners can say "I quite like the porn that you find weird/icky, please post it over here!"

Real-life CSAM is not just extremely weird/icky porn. It is a whole different level of problem, because it is a live threat to anyone who gets it on their computer.


Those already in economic power have gained enough means to manipulate the rules, and fascism is more profitable for people already in power than even 'normal' capitalism is. This was basically preordained for as long as it's been profit über alles.

If you've ever hosted a potluck and none of the guests were shilling junk products, congratulations! You've achieved what some of the most valuable companies in the world claim is impossible.

Nobody thinks big tech companies are OK with spammers just because their moderation of spam is imperfect. At the very least, they want people shilling junk on their platforms to pay for ads, yet none of the big platforms are spam-free. Federated systems aren't inherently immune to abuse; email spam is the original spam. Similarly, the presence of Nazis on the biggest platforms doesn't imply that the owners of those platforms are happy to have them.

Everybody with some crap to push, whether it's commercial spam or Nazi ideology, has reason to look for the biggest audience with the least effort. Most of them aren't going to waste their efforts targeting Mastodon, Lemmy, Matrix, or the like right now. I fear that if these federated systems do grow popular enough, the existing moderation tools will be woefully inadequate and most servers will switch to a whitelist model.

It's not like they're like, "yay Nazis!", but their business model necessitates being extremely permissive. More eyeballs means more money.

Fediverse admins and mods don't have any goal beyond protecting their communities. It's why corporate social media, even before Elon took over Twitter, was crawling with suit-and-tie Nazi alt-right accounts dog whistling their Nazi bullshit. Also why Eugen Rochko could famously say "that bullshit doesn't work on me man" when offered a typical argument justifying the existence of those accounts.

More people here may draw more attention from Nazis, but it wouldn't change the interests of admins/mods. More eyeballs doesn't mean more money here. And shitty users means both unhappy communities and much more work for mods. There's just no incentive to put up with it.

Very well said all around (and in many fewer words than it took me). I may actually quote you in the future! Hadn't seen that 2018(!) Esquire article before today either. Kind of sad "Twitter without Nazis" wasn't a more compelling selling point. Just speaks to the power of network effects, I suppose.

I fear if these federated systems do grow popular enough

If an instance did grow "too big to moderate", it would surely be defederated from, yeah? I'm struggling to think of a situation where responsible admins from well-moderated instances would willingly subject their users to spammers from an instance (no matter how big) that can't control itself.

email spam is the original spam

While it's true that there was occasional commercial misuse of email in the ARPANET days (when commercial use was against the rules of the military-funded research network), it wasn't called "spam" then.

Until the mid-1990s, "spamming" typically meant sending repetitious messages rather than inappropriate commercial messages. It wasn't about what you said, but about how many times you said it. The transition from one meaning to the other mostly happened on Usenet, as commercial abusers took advantage of typically-lax moderation policies to repeatedly post unsolicited advertisements. Major commercial email spam was a branch off of Usenet spam operations.

  • 1985: "spamming" on MUDs meant sending junk messages to disrupt a roleplaying session, originally from a player doing this with the text of the Monty Python "Spam" sketch.
  • 1991: When a Usenet modbot had a bug that caused it to repeatedly post the same message, a Usenet admin who was also a MUD player referred to this as "spamming" Usenet. The term caught on to mean "excessive multiple posting", regardless of content; most early Usenet spams were religious proselytization or political kookery.
  • 1994: Lawyers Canter & Siegel post the first major commercial Usenet spam. They go on to write a book promoting Usenet and email spamming as a business tactic. At this point, "spamming" starts to be used to refer to inappropriate commercial posting, regardless of volume.

(who’s account isn’t banned within a few hours),

Whose

And that's where I stopped.

@JustinHanagan A Nazi under every rock.

When all you have is a hammer, everything looks like a heil.

So ya thought ya might like to go to the show
To feel the warm thrill of confusion, that space cadet glow!
I got some bad news for you, sunshine
Pink isn't well, he stayed back at the hotel
And they sent us along as a surrogate band
We're gonna find out where you fans really stand!