Lemmyshitpost community closed until further notice

lwadmin@lemmy.world (mod) to Lemmy.World Announcements@lemmy.world – 1736 points

Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance now that we have changed our registration policy.

We keep working on a solution; we have a few things in the works, but they won't help us right now.

Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.

Edit: @Striker@lemmy.world, the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. And if it wasn't his community, it would have been another one. And it is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what's next very soon.

Edit 2: removed that bit about the moderator tools. That came out a bit harsher than how we meant it. It's been a long day and having to deal with this kind of stuff got some of us a bit salty to say the least. Remember we also had to deal with people posting scat not too long ago so this isn't the first time we felt helpless. Anyway, I hope we can announce something more positive soon.


Fuck these trolls

Troll is too mild of a word for these people.

How about "pedophile"? I mean, they had to have the images to post them.

"Terrorist". Having the images doesn't mean they liked them, they used them to terrorize a whole community though.

"Petophilile enabled Terrorist" or "petophilic terrorist" depending on the person

It still means they can tolerate CSAM, or are normalized to it enough that they can feel anything other than disgust during "shipping and handling".

6 more...
6 more...

Yeah, this isn’t just joking or shitposting. This is the kind of shit that gets people locked up in federal pound-you-in-the-ass prison for decades. The feds don’t care if you sought out the CSAM, because it still exists on your device regardless of intent.

The laws about possessing CSAM are written in a way that any plausible deniability is removed, specifically to prevent pedophiles from being able to go “oh lol a buddy sent that to me as a joke” and getting acquitted. The courts don’t care why you have CSAM on your server. All they care about is the fact that you do. And since you own the server, you own the CSAM and they’ll prosecute you for it.

And it's not just the instance admins who would be at risk. Any time you view an image, your device makes a local copy of it, meaning every person who viewed the image, even accidentally, is at risk as well.

Yeah honestly report all of those accounts to law enforcement. It’s unlikely they’d be able to do much, I assume, but these people are literally distributing CSAM.

6 more...

That's not a troll; CSAM goes well beyond trolling. "Pedophile" would be a more accurate term for them.

Yeah. A troll might post something like a ton of oversized images of pig buttholes. Who the fuck even has access to CSAM to post? That's something you only have on hand if you're a predator already. Nor is it something you can shrug off like "lol I was only trolling". It's a crime that will send you to jail for years. It's a major crime that gets entire police units dedicated to it. It's a huuuuge deal and I cannot even fathom what kind of person would risk years in prison to sabotage an internet forum.

1 more...
40 more...
50 more...

I would like to extend my sincerest apologies to all of the users here who liked lemmy shit posting. I feel like I let the situation grow too out of control before getting help. Don't worry, I am not quitting; I fully intend on staying around. The other two deserted the community, but I won't. DM me if you wish to apply for mod.

Sincerest thanks to the admin team for dealing with this situation. I wish I had linked up with you all earlier.

@Striker@lemmy.world this is not your fault. You stepped up when we asked you to and actively reached out for help getting the community moderated. But even with extra moderators this can not be stopped. Lemmy needs better moderation tools.

Hopefully the devs will take the lesson from this incident and put some better tools together.

There's a Matrix room for building mod tools here; maybe we should bring this issue up there, just in case they aren't already aware.

3 more...
3 more...

Please, please, please do not blame yourself for this. This is not your fault. You did what you were supposed to do as a mod and stepped up and asked for help when you needed to; Lemmy just needs better tools. Please take care of yourself.

This isn't your fault. Thank you for all you have done in regards to this situation thus far.

It's not your fault, these people attacked and we don't have the proper moderation tools to defend ourselves yet. Hopefully in the future this will change though. As it stands you did the best that you could.

Definitely not your fault mate, you did what anyone would do, it's a new community and shit happens

I love your community and I know it is hard for you to handle this but it isn't your fault! I hope no one here blames you because it's 100% the fault of these sick freaks posting CSAM.

You don't have to apologize for having done your job. You did everything right and we appreciate it a lot. I've spent the whole day trying to remove this shit from my own instance and understanding how purges, removals and pictrs work. I feel you, my man. The only ones at fault here are the sickos who shared that stuff, you keep holding on.

You didn't do anything wrong, this isn't your fault and we're grateful for the effort. These monsters will be slain, and we will get our community back.

7 more...

Contact the FBI

This isn't as crazy as it may sound either. I saw a similar situation, contacted them with the information I had, and the field agent was super nice/helpful and followed up multiple times with calls/updates.

This doesn't sound crazy in the least. It sounds like exactly what should be done.

Yeah, what do people think the FBI is for? This isn't crazy. They can get access to ISP logs, VPN provider logs, etc.

I think what they're saying is that contacting the FBI may seem daunting to someone who has never dealt with something like this before, but that they don't need to worry about it. Just contact them.

1 more...
1 more...
1 more...

This is good advice; I suspect they're outside of the FBI's jurisdiction, but they could also be random idiots, in which case they're random idiots who are about to become registered sex offenders.

They might be, but I'd imagine most countries have laws on the books about this sort of stuff too.

And it's something that the nations usually have no issues cooperating with.

The FBI has assisted in a lot of global raids related to CSAM.

1 more...
1 more...

I have to wonder if Interpol could help with issues like this. I know there are agencies that work together globally to help protect missing and exploited children.

'Criminal activity should be reported to your local or national police. INTERPOL does not carry out investigations or arrest people; this is the responsibility of national police.'

From their website.

3 more...

The FBI reports it to Interpol, I believe; Interpol is more like an international warrant system built from treaties.

FBI would be great in this case tbh. They have the resources.

3 more...
17 more...
18 more...

It is seriously sad and awful that people would go this far to derail a community. It makes me concerned for other communities as well. Since they have succeeded in having Shitpost closed, does this mean they will just move on to the next community? That being said, here is some very useful information on the subject and what can be done to help curb CSAM.

The National Center for Missing & Exploited Children (NCMEC) CyberTipline: You can report CSAM to the CyberTipline online or by calling 1-800-843-5678. Your report will be forwarded to a law enforcement agency for investigation.

The National Sexual Assault Hotline: If you or someone you know has been sexually assaulted, you can call the National Sexual Assault Hotline at 800-656-HOPE (4673) or chat online. The hotline is available 24/7 and provides free, confidential support.

The National Child Abuse Hotline: If you suspect child abuse, you can call the National Child Abuse Hotline at 800-4-A-CHILD (422-4453). The hotline is available 24/7 and provides free, confidential support.

Thorn: Thorn is a non-profit organization that works to fight child sexual abuse. They provide resources on how to prevent CSAM and how to report it.

Stop It Now!: Stop It Now! is an organization that works to prevent child sexual abuse. They provide resources on how to talk to children about sexual abuse and how to report it.

Childhelp USA: Childhelp USA is a non-profit organization that provides crisis intervention and prevention services to children and families. They have a 24/7 hotline at 1-800-422-4453.

Here are some tips to prevent CSAM:

Talk to your children about online safety and the dangers of CSAM.

Teach your children about the importance of keeping their personal information private.

Monitor your children's online activity.

Be aware of the signs of CSAM, such as children being secretive or withdrawn, or having changes in their behavior.

Report any suspected CSAM to the authorities immediately.

So far I have not seen such disgusting material, but I'm saving this comment in case I ever need the information.

Are there any other numbers or sites people can contact in countries other than the USA?

2 more...
2 more...

Not that I'm familiar with Rust at all, but... perhaps we need to talk about this.

The only thing that could have prevented this is better moderation tools. And while a lot of the instance admins have been asking for this, it doesn't seem to be on the developers' roadmap for the time being. There are just two full-time developers on this project and they seem to have other priorities. No offense to them, but it doesn't inspire much faith for the future of Lemmy.

Let's be productive. What exactly are the moderation features needed, and what would be easiest to implement into the Lemmy source code? Are you talking about a mass ban of users from specific instances? A ban of new accounts from instances? Like, what moderation tool exactly is needed here?

Speculating:

Restricting posting from accounts that don't meet some adjustable criteria. Like account age, comment count, prior moderation action, average comment length (upvote quota maybe not, because not all instances use it)

Automatic hash comparison of uploaded images with database of registered illegal content.
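As a rough illustration of the first idea, here's a minimal sketch in Python of an adjustable posting gate; the field names and thresholds are invented for the example and are not actual Lemmy settings. The hash-comparison idea is picked up further down the thread.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds -- adjustable per instance, values invented for illustration.
MIN_ACCOUNT_AGE = timedelta(days=7)
MIN_COMMENT_COUNT = 20
MAX_PRIOR_REMOVALS = 2

def may_upload_media(created_at: datetime, comment_count: int, removed_posts: int) -> bool:
    """Only accounts meeting all the adjustable criteria get to upload media."""
    old_enough = datetime.now(timezone.utc) - created_at >= MIN_ACCOUNT_AGE
    active_enough = comment_count >= MIN_COMMENT_COUNT
    clean_enough = removed_posts <= MAX_PRIOR_REMOVALS
    return old_enough and active_enough and clean_enough
```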

On various old-school forums, there's a simple (and automated) system of trust that progresses from new users (who might be spam)... where every new user might need a manual "approve post" before it shows up. (And this existed in Reddit in some communities too).

And then full powers granted to the user eventually (or in the case of Stack Overflow, automated access to the moderator queue).

What are the chances of a hash collision in this instance? I know accidental hash collisions are usually super rare, but with enough people it'd probably still happen every now and then, especially if the system is designed to detect images similar to the original illegal image (to catch any minor edits).

Is there a way to use multiple hashes from different sources to help reduce collisions? For example, checking both the MD5 and SHA256 hashes instead of just one or the other, and then only flagging it if both match within a certain degree.

Traditional hashes like MD5 and SHA256 are not locality-sensitive, so they can't be used to detect a match within a certain degree. Otherwise, yes, you are correct. Perceptual hashes can create false positives. Very unlikely, but yes, it is possible. This is not a problem with a perfect solution. Extraordinary edge cases must be resolved on a case-by-case basis.

And yes, the simplest solutions must always be implemented first: tracking post reputation, captchas before posting, waiting for an account to mature before it can post, etc. The problem is that right now the only defense we have access to is mods. Mods are people, usually with eyeballs. Eyeballs which will be poisoned by CSAM so we can post memes and funnies without issues. This is not fair to them. We must do all we can, and if all we can do includes perceptual hashing, we have a moral obligation to do so.
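To make the difference concrete, here's a small sketch using the open-source Pillow and imagehash libraries (not PhotoDNA; the hash value below is a placeholder, since real CSAM hash lists are only shared with vetted organizations). A perceptual hash survives minor edits and recompression, which is what makes the "percentage of match" idea possible:

```python
from PIL import Image  # pip install pillow imagehash
import imagehash

# Placeholder entries; a real deployment would pull these from a vetted hash list.
BANNED_HASHES = [imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")]
MAX_DISTANCE = 5  # Hamming distance threshold: lower = stricter, fewer false positives

def looks_like_banned(path: str) -> bool:
    """Flag an image if its perceptual hash is close to any banned hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - banned <= MAX_DISTANCE for banned in BANNED_HASHES)
```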

2 more...
4 more...
5 more...

I guess it'd be a matter of incorporating something that hashes whatever is being uploaded. One takes that hash and checks it against a database of known CSAM. If it matches, stop the upload, ban the user, and report it to the nearest officer of the law. Reddit uses PhotoDNA and CSAI Match. This is not a simple task.
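As a sketch of where such a check would slot into an upload handler (PhotoDNA and CSAI Match are proprietary services, so this only shows the shape of the flow; the helper functions are placeholders, not real Lemmy code):

```python
import hashlib

def quarantine_for_report(raw: bytes, uploader_id: int) -> None:
    """Placeholder: preserve the file out of public view and notify admins/authorities."""

def ban_user(uploader_id: int) -> None:
    """Placeholder: whatever admin action the instance takes against the uploader."""

def handle_upload(raw: bytes, uploader_id: int, blocklist: set[str]) -> bool:
    """Refuse any upload whose exact SHA-256 already appears on a blocklist."""
    if hashlib.sha256(raw).hexdigest() in blocklist:
        quarantine_for_report(raw, uploader_id)
        ban_user(uploader_id)
        return False  # stop the upload
    return True       # continue normal processing
```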

None of that really works anymore in the age of AI inpainting. Hashes and perceptual matching worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content. They don't need it to be authentic to do that.

It's a problem that requires AI on the defensive side, but even that is just going to be an eternal arms race. This problem cannot be solved with technology, only mitigated.

The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything, basically bringing us back to the early days of Usenet, the Usenet Death Penalty, etc.

Not true.

A simple CAPTCHA got rid of a huge set of idiotic script kiddies. CSAM being what it is, it could (and should) result in an immediate IP ban. So if you're "dumb" enough to try to upload a well-known CSAM hash, then you absolutely deserve the harshest immediate ban, automatically.


You're pretty much like the economist in the story who refuses to believe there's a $20 bill lying on the sidewalk: "Oh, but if that $20 really existed on the sidewalk there, then it would have been arbitraged away already." Well, guess what? Human nature ain't economic theory. Human nature ain't cybersecurity.

Idiots will do dumb, easy attacks because they're dumb and easy. We need to defend against the dumb-and-easy attacks, before spending more time working on the harder, rarer attacks.

3 more...
3 more...

Couldn't one small change in the picture change the whole hash?

Good question. Yes. Also, artefacts from compression can fuck it up. However, perceptual hash comparison returns a percentage match; if the match is good enough, it is CSAM. Davai, ban. There is a bigger issue, however, for the developers of Lemmy, I assume. It is a philosophical pizdec: if we elect to use PhotoDNA and CSAI Match, Lemmy is now at the whims of Microsoft and Google respectively.

Honestly I'd rather that than see shit like this any day.

The bigger thing is that hash detection tools don't want to give access to just anyone, and just anyone can run a Lemmy instance. The concern is that you're effectively giving the CSAM people a way to know if they'll be detected.

Perhaps they can allow some of the biggest Lemmy instances to use the tech, but I wouldn't expect it to be available to everyone.

1 more...
1 more...

Mod tools don't have to be part of Lemmy itself. Give admins and mods an option. Even a paid one. Hell, admins of Lemmy.world could have us donate extra to cover the costs of API services.

I agree. Perhaps what the Lemmy developers can do is put a slot for generic middleware in front of whatever handles the POST request for uploading content in the Lemmy API. This way, the owner of an instance can choose to plug in whatever CSAM-checking middleware they want, and we are not dependent on the developers of Lemmy for a solution to the pedo problem.
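To illustrate the idea (not actual Lemmy code; Lemmy's backend is Rust, and every name here is invented), a pluggable upload-middleware slot could be as simple as a list of checks the instance owner configures:

```python
from typing import Callable, Iterable

# Each check receives the raw upload bytes and returns True to allow it.
UploadCheck = Callable[[bytes], bool]

def run_upload_checks(raw: bytes, checks: Iterable[UploadCheck]) -> bool:
    """Run every instance-configured check; any single rejection blocks the upload."""
    return all(check(raw) for check in checks)

# An instance owner could then register whatever they trust, e.g. (names hypothetical):
# run_upload_checks(data, [cloudflare_csam_scan, perceptual_hash_check, file_size_limit])
```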

1 more...

If they hash the file binary data, like CRC32 or SHA, yes. But there are other hash types out there, which are more like "fingerprints" of an image. Think of how Shazam or Sound Hound can recognize a song playing, despite the extra wind, static, etc that's present. There are similar algorithms for images/videos.

No idea how difficult those are to implement, though.

There are FOSS applications that can do that (czkawka, for example). What I'm not sure of is whether the specific algorithm used is available and, more importantly, whether the CSAM hashes are available to general audiences. I would assume that if they are, any attacker could check first and make just the right amount of changes.

2 more...
5 more...

The best feature the current Lemmy devs could work on is making the process to onboard new devs smoother. We shouldn't expect anything more than that for the near future.

I haven't actually tried cloning and compiling, so if anyone has comments here they're more than welcome.

I think having a means of viewing uploaded images as an admin would be helpful, as well as disabling external image caching. Like an "uploaded" gallery for admins to view that can potentially hook into PhotoDNA/CSAI Match or whatever.

I think it would be an AI autoscan that flags some posts for mod approval before they show up to the public, and perhaps more fine-grained controls for how media is posted, like only allowing certain image hosting sites and no directly uploaded images.

24 more...

The amount of people in these comments asking the mods not to cave is bonkers.

This isn’t Reddit. These are hobbyists without legal teams to a) fend off false allegations or b) comply with laws that they don’t have any deep understanding of.

Yeah, you've got to think of this place like the big forums of 20 years ago, they're just run by a tiny handful of regular people, and having to deal with solicitors and other such stuff is entirely out of the question.

If something's bad, you lock it down and purge it until it's not bad any more. Unfortunately that's the best you can do with such minimal resources as a regular member of the public, and for those that don't like it, there's other forums out there.

This isn't one single huge monopoly like Reddit, where you either stay or leave forever. If you don't like how one instance is being run, just sign up on a different one. Takes the stress out of it :-)

Lemmy instances are also international, which would cause more problems.

This instance is Finnish, and lemmy.ml is Malaysian(?), each with their own separate administration teams. Whereas Reddit knows that all of the Reddit servers are in a few known countries, and they have the capacity to make big changes across all of them as needed.

2 more...
2 more...
2 more...

This is flat-out disgusting. It's extremely questionable that someone has an arsenal of this crap to spread to begin with. I hope they catch charges.

See that's the part of this that bothers me most.... Why do they have so much of it? Why do they feel comfortable letting others know they have so much of it? Why are they posting it on an open forum?

The worst part is, there is not a single god damn answer to ANY of those that wouldn't keep a sane person up at night.... shudder

2 more...
4 more...

There are just two full-time developers on this project and they seem to have other priorities. No offense to them but it doesn’t inspire much faith for the future of Lemmy.

This doesn't seem like a respectful comment to make. People have responsibilities; they aren't paid for this. It doesn't seem fair to criticize something when we aren't doing anything to provide a solution. A better comment would be "there are just 2 full-time developers on this project and they have other priorities; we are working on increasing the number of full-time developers."

Imagine if you were the owner of a really large computer with CSAM in it. And there is in fact no good way to prevent creeps from putting more into it. And when police come to have a look at your CSAM, you are liable for legal bullshit. Now imagine you had dependents. You would also be well past the point of being respectful.

On that note, the captain db0 has raised an issue on the github repository of LemmyNet, requesting essentially the ability to add middleware that checks the nature of uploaded images (issue #3920 if anyone wants to check). Point being, the ball is squarely in their court now.

I think the FBI or its equivalent keeps a record of hashes for known CSAM, and middleware should be able to compare against that. Hopefully, if a match is found, kill the post and forward all the info on to LE.

Interesting. But aren’t hashes unique to a specific photo? Just a single change to the photo would inevitably change its hash.

I think Apple was going to implement a similar system and deploy it to all iPhones/Macs in some iOS/macOS update. However, it was eventually 86'd due to privacy concerns from many people and the potential for abuse and/or false positives.

A system like this might work on a small scale though as part of moderating tools. Not sure where you would get a constantly updated database of CSAM hashes though.

Interesting. But aren’t hashes unique to a specific photo? Just a single change to the photo would inevitably change its hash.

Most people are lazy and stupid, so maybe hash checking is enough to catch a huge portion (probably more than 50%, maybe even 80% or 90%?) of the CSAM posters who don't bother (or know how) to do that?

2 more...
5 more...
5 more...

You can already protect your instance using Cloudflare's CSAM protection, and sorry to say it, but I would not use db0's solution. It is more likely to get you in trouble than to help you out. I posted about it in their initial thread, but they are not warning people about actual legal requirements that exist in many places, and their script can get you put in jail (yes, put in jail for deleting CSAM).

3 more...
8 more...

I agree with you, I'd just gently suggest that it's borne of what is probably significant upset at having to deal with what they're having to deal with.

we are working on increasing the number of full time developers.

I see where you are coming from, but who is supposed to make this statement, LW admins? Because it's not their role. And if it's Lemmy devs, then it shouldn't be we.

3 more...

I mean, the "other priorities" comment does seem to be in bad taste. But as for the comment on the future of Lemmy, I dunno. I feel like they're just being realistic. I think the majority of us understand the devs have lives but if things don't get sorted out soon enough it could impact the future of Lemmy.

Thing is, if this continues to be a problem and if the userbase/admins of instances are organised, we can shift those priorities. They may not have envisioned this being a problem with the work they decided to work on for the next several months. Truly, the solution is to get more developers involved so that more can happen at once.

Seriously. We need to cut them some slack because nobody expected Reddit to go full Elon in May.

2 more...
2 more...

No one is paid for this, but moderation is going to become a problem for Lemmy and the volunteers who are admins are going to need support.

No one is paid for this, but moderation is going to become a problem for Lemmy and the volunteers who are admins are going to need support.

yes, that's what i'm saying. We should acknowledge that we are fortunate to have dedicated volunteer devs and work on helping/supporting them.

We definitely should acknowledge the volunteer devs supporting the platform, but we also need to address that there may be issues with the mod tools as they are, and we need the paid devs to pull back from only coding and do more design of the architecture, which can then be filled in by volunteer devs.

4 more...
4 more...
4 more...
37 more...

Fucking bastards. I don't even know what beef they have with the community and why, but using THAT method to get them to shut down is nothing short of despicable. What absolute scum.

1 more...

I hope the devs take this seriously as an existential threat to the fediverse. Lemmyshitpost was one of the largest communities on the network both in AUPH and subscribers. If taking the community down is the only option here, that's extremely insufficient and bodes death for the platform at the hands of uncontrolled spam.

Lemmy is new for all of us. I don't see any other solution right now. You got some ideas how to handle it better?

I think better mod tools are needed but it will take time. Doesn't mean the platform will die, but means we may have to deal with some stuff like this.

It's a hard problem, but it absolutely is an existential risk. Spam is an existential risk. A platform that collapses under spam will either remain too small to be relevant or collapse from unusability. I'm sorry, but I don't think your response completely grasps the number of forums, social media sites, wikis, etc. that have been completely crushed by spam.

1 more...
1 more...
73 more...

Genuine question: won't they just move to spamming CSAM in other communities?

With how slow Lemmy moves anyways, it wouldn't be hard to make everything "mod approved" if it's a picture/video.

This, or blocking self-hosted pictures.

This seems like the better approach. Let other sites who theoretically have image detection in place sort this out. We can just link to images hosted elsewhere

1 more...

Yes, and only whitelist trusted image hosting services (that is, ones that have the resources to deal with any illegal material).

1 more...
3 more...

[This comment has been deleted by an automated system]

Not-so-fun fact: the FBI has a hard limit on how long an individual agent can spend on CSAM-related work. Any agent who does so is mandated to go to therapy afterwards.

It's not an easy task at all and does emotionally destroy you. There's a reason why you can find dozens of different tools to automate the detection and reporting.

1 more...
1 more...
7 more...
7 more...

We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance now that we have changed our registration policy.

It's likely that we'll be seeing a large number of instances switch to whitelist-based federation instead of the current blacklist-based approach, especially for niche instances that do not want to deal with this at all (and I don't blame them).

This basically ruins the fediverse concept, as it will ultimately lock out all small, self-hosted instances, which really is a shame.

I mean, you could have some sort of application system and allow instances to pull from you by default. That way you're "read only" until approved by other instances

7 more...
7 more...

Sounds like the 4chan raids of old.

Batten down, report the offenders to the authorities, and then clean up the mess!

Good job so far ^_^

Yeah, in good ways and in bad, this place is a lot like "the old days of the internet", but thankfully more in the good way than the bad. People keep complaining about shit I haven't seen for a second; constantly there are actions from the mod/admin side about shit I've never even seen, etc. Even without proper mod tools and such, it seems like everything is working out quite well. For which I, and all of us, owe a huge thank you to the people working on this stuff.

Thank you people! Thank you for making this place feel like home and something greater than what we've had for a long ass time.

Mods, keep yourselves safe, be open with your support groups (don't isolate), and clear/shred your cache often.

We will get thru this together

1 more...

How does closing lemmyshitpost do anything to solve the issue? Isn't it a foregone conclusion that the offenders would just start targeting other communities or was there something unique about lemmyshitpost that made it more susceptible?

It stops their instance from hosting CSAM and removes the legal liability of dealing with something they don't have the capacity to handle at this point in time.

How would you respond to having someone else forcibly load up your pc with child porn over the Internet? Would you take it offline?

How would you respond to having someone else forcibly load up your pc with child porn over the Internet? Would you take it offline?

But that's not what happened. They didn't take the server offline. They banned a community. If some remote person had access to my pc and they were loading it up with child porn, I would not expect that deleting the folder would fix the problem. So I don't understand what your analogy is trying to accomplish because it's faulty.

Also, I think you are confusing my question as some kind of disapproval. It isn't. If closing a community solves the problem then I fully support the admin team actions.

I'm just questioning whether that really solves the problem or not. It was a community created on Lemmy.world, not some other instance. So if the perpetrators were capable of posting to it, they are capable of posting to any community on lemmy.world. You get that, yeah?

My question is just a request for clarification. How does shutting down 1 community stop the perpetrators from posting the same stuff to other communities?

Fact of the matter is that these mods are not lawyers, and even if they were not liable, they would not have the means to fight this in court if someone falsely, or legitimately, claimed they were liable. They're hobbyists with day jobs.

I also mod a few large communities here, and if I’m ever in that boat, I would also jump. I have other shit to do, and I don’t have the time or energy to fight trolls like that.

If this was Reddit, I’d let all the paid admins, legal, PR, SysOps, engineers and UX folks figure it out. But this isn’t Reddit. It’s all on the hobbyist mods to figure it out. Many are not going to have the energy to put up with it.

It's not meant to solve the problem, it's meant to limit liability.

4 more...
4 more...
4 more...

They also changed the account sign ups to be application only so people can't create accounts without being approved.

It doesn't solve the bigger moderation problem, but it solves the immediate issue for the mods who don't want to go to jail for modding a community hosting CSAM.

Doesn't that send a clear message to the perpetrators that they can cause any community to be shut down and killed, and all they have to do is post CSAM to it? What makes you or anyone else think that, upon seeing that lemmyshitpost is gone, the perpetrators will all just quit? Was lemmyshitpost the only community they were able to post in?

Yup. The perpetrators win.

If you were in their shoes, would you want to risk going to jail for kiddy porn, risk having your name associated with CSAM online, or drain your personal savings account to fight these folks?

These mods are not protected by a well funded private legal team. This isn’t Reddit.

3 more...
5 more...
5 more...
9 more...

Is it possible to (at least temporarily):

  1. Turn off instance image hosting (disable pictrs)
  2. Disallow image and video posts across all communities
  3. As in Firefish, turn off caching of remote images from other instances.

whilst longer-term solutions are sought? This would at least ensure poor mods aren't exposed to this shit, and an instance could be more confident it's not inadvertently hosting CSAM.

Good thing you did it the way you did; nobody should have to look at awful stuff like this. Keep your mind healthy; nobody should have to deal with that.

Thank you so much for all of the effort and time all of you are putting into this situation. Having to deal with bad actors is one thing, but you are now dealing with images that are traumatizing to view.

Please, for your sanity and overall well being, PLEASE take care of yourself. Yes, it sucks about having to close !lemmyshitpost, but self-care and support are of the utmost importance.

1 more...

I assume you've contacted the FBI, but if not PLEASE DO.

lemmy.world is based in Finland.

In that case probably Europol or the Finnish police/cybersecurity agency or something?

Most people in the world don't live in that one very specific country, it's somewhat presumptuous to assume they do, and I'm a bit tired of those assumptions only ever coming from people from a very specific place.

I put up with it for a long while without saying anything, but it's been 15 years and they're doing it more now than ever. I'm kinda... worn down now.

Can't we all just accept that the world is vast and amazing, and exists beyond our own borders? :-(

13 more...
22 more...

I'm afraid the fediverse will need a CrowdSec-like decentralized banning platform. Get banned on one platform for this shit, get banned everywhere.

I'm willing to participate in fleshing that out.

Edit: it's just an idea, I do not have all the answers, otherwise I'd be building it.

What you're basically talking about is centralization. And, as much as it has tremendous benefits of convenience, I think a lot of people here can cite their own feelings as to why that's generally bad. It's a hard call to make.

They didn't say anything about implementation. Why couldn't you build tooling to keep it decentralized? Servers or even communities could choose to ban from their own communities using a heuristic based on the moderation actions published by other communities. At the end of the day it is still individual communities making their own decisions.

I just wouldn't be so quick to shoot this down.
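One way to keep it decentralized, sketched below: each instance publishes its own ban actions, every other instance weighs those reports by how much it trusts the source, and an actor only gets banned locally past a locally chosen threshold. All names and numbers here are invented to show the shape of the idea, not a real protocol.

```python
# Local policy: how much this instance trusts each peer's moderation actions (0..1).
PEER_TRUST = {"trusted.example": 1.0, "newish.example": 0.4}
BAN_THRESHOLD = 1.2  # purely local knob; no central authority decides this

def should_ban_locally(actor: str, peer_bans: dict[str, set[str]]) -> bool:
    """Ban an actor here only if enough *trusted* peers have already banned them."""
    score = sum(weight for peer, weight in PEER_TRUST.items()
                if actor in peer_bans.get(peer, set()))
    return score >= BAN_THRESHOLD
```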

1 more...
1 more...

There is no way that could get abused... Like say, by hosting your own instance and banning anyone you want.

2 more...
14 more...

Thank you for your work to keep that despicable trash out of our feeds. Sorry you have to deal with it. Fuck those losers.

Thank you for all your work. It sucks that there are people who would do shit like this. Please don't forget to take care of yourselves as well.

Who the fuck has such a problem with this instance?

It's a great question. What's the motivation? Who loses if Lemmy succeeds?

Spez loses. Maybe he's the one behind it cause he's salty that people called him out for being a pedophile.

Yeah it's definitely a targeted campaign of sabotage. I'm glad people have found a way to stem the flow, but we'll need a better solution long term.

4 more...

Rivals such as Reddit, and the groups who benefit from the ease with which they can manipulate what people see on Reddit to further their agendas (companies, governments, groups).

And fascists in general, who hate open global community platforms that are hard to control, and hate things that bring people together, things that give people strength to fight together against things bigger than them.

All it takes is one of those interested parties to fund a couple of low rent hackers to poke at Lemmy until it's so unstable and untrustworthy that people stop using it.

Cheap and effective if done right, a good investment in the long run for any discerning fascist.

I doubt it would be company rivals for the time being. Lemmy is nowhere big enough, and even the largest instances are dwarfed by either other social networks, or other projects like Mastodon.

Reddit is much more concerned about Facebook or Twitter trying to co-opt its model than about some small open-source project; look at how Meta released Threads during the recent Twitter controversy.

The motivation could also be just trying to get the instance shut down, or otherwise break it, like the user that was spamming communities not that long ago.

Nothing too conspiratorial.

6 more...

The odds are it's no one in particular. To the trolls this isn't meant to be a personal attack against a server they don't like. They just want to wreak havoc and cause shock anywhere they can. It just so happens they discovered Lemmy Shitpost.

2 more...

... I can think of one possibility: a certain instance that lemmy.world recently defederated from.

11 more...

I'm gone for a few days and some assholes are trying to fuck up this instance. Smh.

Correction, fkd the instance

Saw that one post (unfortunately). How come people who spread content like that in the "open" internet (not darkweb) don't get arrested?

I hardly know anything about the subject of the dark web, but my take is, they operate from the dark web. They open temporary email accounts, then free computing or free hosting accounts, or are part of a bot net. The rest is relatively easy.

Law enforcement monitors the dark web, though, so if they make a stupid move (and they will), they will eventually get caught.

2 more...

They would need to be reported with some credible and actionable evidence to the appropriate authorities. Just having the material they are posting isn't enough. If they are taking precautions, it's gonna be hard to figure out who and where they are from what the site would be able to log while they use it.

The legal system also takes time. The material could be disseminated fairly widely before an investigation would be started and completed. Especially with bots. Hundreds of bots are a lot faster than a team of humans.

1 more...

I'm in the US and grew up here. My dad is a piece of shit pedophile who exploited me for several websites. None on darkweb but at the time, they didn't really need to be.

Word got out and cops came to interrogate ME the person who was the victim in this situation. They also blamed me for what was going on (how? I don't know, I was a teen who was being exploited underage but cops gonna cop) and they basically intimidated me into dropping out of school and taking the blame for my dad because the other option was that I be sent to a home as an orphan.

My dad got away with a slap on the wrist basically because cops in the US don't do their job. They even cover for people. I ended up literally running away from home.

I think the issue is that people see this topic as all outrage - I mean, look at the comments here. Everyone is so mad they can't even think straight. And I'm personally noticing that some of the outrage doesn't even seem to be directed at the right people. Like everyone is super willing to shit on pedos left and right but I wonder if they would be willing to listen to someone who has been knee deep in this shit before and were innocent because they were being exploited by a pedophile.

Like a lot of comments say "disgusting" but then don't say anything about how the person that it happened to must feel. Everyone's upset they saw something but they don't seem to be upset about who they saw it happening to.

Like I wonder who they think hurts more in this situation. The person who was made into a victim or the people who just saw it happen.

3 more...

Germany just had like 90 (probably unrelated) arrests in one state. There are just way too many to arrest them all.

8 more...

These bad actors are weird. Most who spread CSAM don't want to be known about by the masses, but these posts seem intended to be visible, since they're posting in a well-known community. It feels sus.

Keep up the good work keeping us safe, keep your therapist and/or support network in the loop about this so they can catch you if you fall.

I mean, this is just pulling out the old Reddit playbook.

AHS was known for mass posting CSAM on subreddits that they wanted to be banned by the admins.

Bit different here, but the principle of killing the place is there. CSAM spammed on L.W -> CSAM spreads to other instances because federation and sync -> other instances obviously don't want CSAM on them and defederate from L.W -> L.W dies because nobody is federated with them

This was my first thought as well. I hate that being so cynical is my reaction now, but this is just DDOS by another method. The point is to make the platform unusable so people will go somewhere else. CSAM is just the weapon. It's doubly vile because it's just going to become ammunition in the war against federation altogether. The stakes are high enough that the institutional players have plenty of cash, no ethics, and no accountability, and I wouldn't at all put it past any social media alternative to employ these kinds of tactics to kill the fediverse before it can gain a foothold. And (at the risk of getting too conspiratorial) that's not even considering governments and ordinary black hats.

2 more...
3 more...

This sucks, but we were probably due to have a Grand Registration Security Hash-Out at some point; going forward, any instance that wants others to federate with it is probably going to have to have a system in place to make it impossible for jackasses like this to create endless spam accounts.

This is one of the few situations Reddit handles better, being a centralized entity with a dedicated workforce filtering out this content. It's a shame it has to be this way, but I understand why it has to be done.

So, Mastodon has this same problem?

Pretty much. I recently had my mastodon feed spammed with racist, homophobic, and gore-filled posts just because they would post with a list of unrelated hashtags. You could keep blocking the poster or the instance but they would pop back up from another instance or with another account. It eventually stopped but I'm sure it'll happen again. You're apparently able to filter out certain offensive terms with a filter but I think you have to manually enter the terms yourself.

1 more...

There have been issues in the larger instances with slow or unresponsive moderation, leading to occasional bursts of bot activity.

3 more...
5 more...

This is seriously fucked up, but won't closing lemmyshitpost just lead them to target other LW communities?

Probably unrelated but I once saw that community get attacked with scat porn, it could be the same person.

There's also another issue. If you upload an image to Lemmy but then cancel, the image is still hosted on the Lemmy instance and one can still access it if one copies the image's URL before canceling. This basically means that there might be other illegal stuff that's being hosted on Lemmy instances without anyone noticing.

Yeah.. There has to be a system in place for this. Jesus.

That should probably be fixed at the Lemmy level.

I also found the opposite: I had a post, and the image disappeared with an "error in image" message, for a jpeg. I wonder if whoever did this was aiming for exactly that outcome.

Wow, this is awful. Huge kudos to y'all for holding on through this. It's obviously a deliberate attack on the fediverse by malicious actors.

Aren't there semi-automated tools that can detect CP?

Those might be an automated solution to at least cut down on the volume.

The same can go for banned images. These can automatically be identified with perceptual hashing, and automatically be denied when uploading.

We should just track these people down and send the police after them directly; that's way more efficient than symptom control.

There are. They are easy to abuse too, and cause a lot of other issues, along with not being 100% effective. Apple tried that and was met with appropriate backlash. There are many issues: training data is one, but it is also impossible to rule out false positives/negatives automatically, and it is relatively easy (for now) to doctor pictures to pass through. A nefarious actor could easily bypass these. There's also the case of a false positive that can very quickly throw some bystander under the bus, and knowing how moderate and understanding the internet is… yup.

It is also risky, as this would effectively be a censorship tool; such tools are often set up under the pretense of "helping", but once they're up, it all depends on who steers them. Such responsibility can hardly fall on the moderators/admins of Lemmy, and it would also be problematic to handle it at a nationwide (or wider) level, since it would give incredible censorship power to authorities.

And the bigger bottom line is that working hard to prevent this kind of content from reaching us, while it has an obvious upside, does nothing with respect to the actual issue of the content existing and being created.

tl;dr: there is no easy solution that doesn't come with one hell of a string attached to it.

On the other hand, it is quite hard to hide yourself from authorities online, and this kind of behavior (I hope, somehow, that the people that posted these only do so to be toxic to lemmy and not to actually disseminate content) should lead to some action from authorities, getting to actual people and subsequently moving upward the stream to actually act on it. Hopefully.

The one thing AI would be good for... Humans shouldn't have to see that shit.

Humans have to see that shit to train ai. That is why it is so difficult to find a model for that

2 more...

You'd need data to train an AI... Yeah that won't happen

IIRC there is a database that law enforcement uses during investigations to obtain access to these groups (they obtain consent from the victims to use the material).

2 more...
4 more...
8 more...

Just saw this: That is fucking awful. Thank you for your service to the community and for protecting us from seeing that shit.

This is so fucked, but this brings up a question -- is this something to be concerned about for instances federated with lemmy.world? As in, if something like this is uploaded to a community that the instance federates with, their instance will now have a copy as well?

yes

Many instances deleted all pictures for the last 24-48h as a precaution or shut down entirely.

2 more...

It definitely is I think.

Imagine being raided for CP and all you did was federate your hobby self-hosted instance to an instance it was posted in.

3 more...

It is absolutely ghastly to realise that something like this was inevitable.

Thank you for being willing to deal with this. Godspeed.

Y'all are going through one of the worst situations I could imagine, but I'm confident you will figure it out and come out better for it. Keep your heads up. (P.S. - Sorry about the scat, lol)

1 more...

Lads, as a casual Lemmy user, just how much danger am I in of having my mind permanently incinerated by seeing images of children being sexually tortured? I've been using the net since the mid-90s and I have never seen a single piece of CSAM in that time, and I now realise that I've been insanely lucky in that regard. My mind is already host to all manner of unspeakable internet shit (looking at you, cartels), but I don't think I could endure seeing anything like the stuff those evil fucking degenerate nihilist cunts have on their hard drives. I would want to commit murder.

So, stay the hell off Lemmy or... ?

Look I unfortunately ran into one of these pieces of content, and I think it will stay with me forever. I think it's because I sort by "New" in order to try and help promote the good undiscovered content. As long as you focus on Hot/Active, I think you'll be fine.

From my experience, using "new" is dangerous on any platform. I've seen so many cursed thumbnails on youtube...

I'd avoid hot. Unlike Reddit's sort of the same name, Lemmy's hot gives a lot of weight to brand new posts. I regularly saw lots of posts with no votes when I used it. Active or top is probably safer. Though admittedly, if someone is using bots to post content, they could use bots to upvote, too. Lemmy has pretty much nothing to prevent even basic botting. The way federation works is actually way worse for the ability to prevent bots, because bots just need any insecure instance and can spin up their own instance in minutes if they can't find an existing insecure one (at the cost of burning a domain).

1 more...

I just read another comment saying that posts were showing up in hot / active somehow despite being heavily downvoted. It wouldn't surprise me TBH. Hot / active always seems to be buggy.

1 more...
2 more...

If you’re just browsing the front page/hot you should be fine, nothing awful will make it there

7 more...
12 more...

Anyone got any idea why they are doing this? Seems a bit extreme.

This is a very popular Lemmy.World community. This instance has been dealing with regular ddos attacks for quite a while now. Either some person or some group has an issue with this specific instance, or they picked it because it's the biggest and they're trying to take Lemmy down generally. The ddos attacks have chased away a number of people, but lots of us have dug in our heels. This is probably the newest strategy.

The admins have said, based on the ddos strategies, whoever is doing it is familiar with underlying Lemmy implementation, so my guess is it's someone from one of the instances that we defederated from, but it's just a guess.

Potential sabotage. There’s definitely a corporate interest in killing the Fediverse and getting people like you and me back to Reddit and Twitter.

Plus, for outside groups with potential interest who aren't informed of the situation, Lemmy is now branded as "that CSAM site", so people are less likely to come here and/or leave their established mainstream social platforms.

This is sabotage on another level. The people posting this face serious time in jail just for having it on their PCs.

I think it's been the same people throughout, starting with the defacing, followed by the DDOS'es, and now this. Judging from the content of the defacing, it's bigots who don't like the idea of someplace they're not welcome.

Ah so exploding head fucktards.

The ones that care so much about “grooming” and “leave kids alone”

Who would have thought that was all projection!? Wait we all did

5 more...
5 more...
7 more...
8 more...

Would someone be able to ELI5 why lemmyshitpost has this problem but other communities seemingly don't? I would think if they can do this to one lemmy.world community, they'd be able to do it to all of them.

6 more...

I am wondering what kind of moderation tools would be needed.
Off the top of my head, I'd say a trust-level system would be great, both for instances and users. New instances and users start out at a low trust level. Posts and comments federated by them could be set to require approval or get deranked compared to other posts and comments. In time the trust level increases and the content is shown as usual. If an incident occurs and content gets reported, the trust level decreases again and content will eventually have to be approved first again.

You can couple that with a report trust level. If a report is legitimate, future reports will hold more weight, while illegitimate reports will make future reports hold less.
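A toy sketch of how that could look (all numbers invented; the point is just that trust rises slowly, drops sharply on incidents, and report weight tracks report accuracy):

```python
class TrustLevel:
    """Toy trust score for a user or instance, per the idea described above."""
    APPROVAL_THRESHOLD = 10.0  # below this, content goes to a manual approval queue

    def __init__(self) -> None:
        self.level = 0.0
        self.report_weight = 1.0  # how heavily this party's reports count

    def record_clean_post(self) -> None:
        self.level += 0.1                        # trust builds slowly

    def record_upheld_report_against(self) -> None:
        self.level = max(0.0, self.level - 5.0)  # incidents drop you back toward approval

    def record_report_outcome(self, legitimate: bool) -> None:
        self.report_weight *= 1.1 if legitimate else 0.8

    @property
    def needs_approval(self) -> bool:
        return self.level < self.APPROVAL_THRESHOLD
```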

3 more...

Sincerely appreciate your work to better this instance and the fediverse in its entirety.

Why would someone post that crap? If you've been banning and removing posts all day, that seems like someone malicious is trying to do it to get Lemmy in trouble. I don't know, just a guess, but someone needs to go to prison for doing that.

Lemmy devs NEED to get mod tools out asap. Approved posts only, whitelisting users for communities, etc.

1 more...

The only word I can think of is "disgusting", and that is far from strong enough.

Well that's fucking horrible. Jesus christ on a bicycle.

What's CSAM?

Honestly, why can't we just say what it is instead of giving every-fucking-thing an acronym that people have to ask about in the first place?

This is what it is, though. The entire point of switching to the term "CSAM" is because it more accurately describes the content as abusive.

I think they're asking people to type out child sex abuse materials instead of CSAM. It's not a common enough acronym that people know what it means without explanation. As evidenced by every post using CSAM having someone ask what it means.

3 more...
5 more...
12 more...

Bless you for not knowing. It’s the more accurate description of what was called child porn. It stands for child sexual abuse material.

15 more...

What kind of lowlife piece of shit do you need to be to post some shit like that? Some people will stoop to the most depraved levels just to fuck with strangers, it's horrifying

1 more...

Thank you for your work.

That also means that they could post to other communities, so I guess moderators need to be vigilant

3 more...

Is there a list somewhere including detailed descriptions of the tools needed?

My field is QA, but I'm reasonably competent in a few scripting languages (mainly JS, but I can also code in Python and C#).

They would probably not be browser-embedded, at least not at first, but I can dedicate a couple of evenings a week to writing tools so long as there is a specification.

This is the kicker about the Lemmy backend. A ton of people on here are familiar with C#/JS-type languages (including me), but since it's written in Rust, that significantly narrows the pool of people who can help. Unfortunately it takes a non-negligible amount of time to learn to write quality Rust, with the different memory management, paradigms and everything.

Sure, but the mod tools on Reddit were able to be written without backend access. Lemmy does have a REST API that is pretty easy to authenticate against.

My problem is that I don't have a tonne of experience moderating, so I don't really know what's needed. That's why I need experienced mods to write specifications.

I and others like me can then write the tools.
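For anyone wanting to poke at that API: from what I remember of the 0.18-era HTTP API, login is a single POST that returns a JWT you pass to later calls. Paths and field names may differ between Lemmy versions, so treat this as a sketch rather than gospel.

```python
import requests  # pip install requests

INSTANCE = "https://lemmy.world"  # whichever instance you moderate

def login(user: str, password: str) -> str:
    """Log in and return the JWT used to authenticate later moderation calls."""
    resp = requests.post(
        f"{INSTANCE}/api/v3/user/login",
        json={"username_or_email": user, "password": password},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["jwt"]

# jwt = login("modbot", "...")  # then pass the token with subsequent mod-tool requests
```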

2 more...
2 more...
3 more...

Isn't this what 8chan is for? Seriously, what the fuck is wrong with people who think that CSAM is appropriate shitpost material? The internet really is a fucked-up place. Is there a way to set up something to automatically remove inappropriate posts, similar to YouTube's system for automatically removing inappropriate content?

Legally, in the US, you would have to remove it from public view, keep it stored on the server, collect identifying information about the uploader, and send all that data to the authorities, all while maintaining a low false-positive rate.
Kind of a tall order.
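Sketching what that workflow implies in code terms (this is not legal advice, and every name here is a placeholder): the key point is takedown without deletion, plus preserving the uploader metadata for the report.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Evidence:
    """What the comment above says must be preserved and handed to the authorities."""
    stored_path: str       # moved out of the public web root, NOT deleted
    uploader_account: str
    uploader_ip: str
    uploaded_at: datetime
    taken_down_at: datetime

def quarantine(stored_path: str, account: str, ip: str, uploaded_at: datetime) -> Evidence:
    """Remove from public view, keep the file, and bundle identifying info for the report."""
    return Evidence(stored_path, account, ip, uploaded_at,
                    taken_down_at=datetime.now(timezone.utc))
```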

3 more...

Are they only posting to this one community? If so, why? Would closing this community not just make them post on a different one?

1 more...

My guess is butthurt trumpolinas whose memes could not gain traction and were shat upon.

Eh. Some trolls do it for the lulz.

It's so funny to me that there are people who are legit willing to risk life in jail for trolling some people that may or may not disagree with them...

Those people either have developmental challenges, if they're adults (e.g. they belong to a mental hospital), or are edgy teens.

Dangerous tiktok challenges can say quite a lot about the subject.

1 more...

I had no idea that this was going on. I expect that, like me, most people are horrified.

Have you reported this to the FBI?

You don’t report it to the FBI. They need to register with the National Center for Missing and Exploited Children (NCMEC), where they will be given a login and then they report it through there. They also should make sure to not purge the CSAM as there are specific rules about removal.

1 more...
1 more...

That sucks! I saw one of these things and reported it. If you need moderators for when it opens I'm willing to help.

2 more...

CSAM (Child Sexual Abuse Material) posts

The federal governments of several nations should be in pursuit of this, and IP addresses and specific time logs shared.

2 more...

To the person uploading such things, don't they ask themselves, what if that was your child?

No, they don't.

These "people" have no empathy. They literally don't care, and nothing you can say or do will make them care.

They are scum. Simple as that.

1 more...

Thank you so much for the transparency and for all your (and the teams) work to keep things running for us <3

IP ban + report to the ISP or VPN provider should get the ball rolling. IP blacklist for known compromised hosts. Police report in the country with jurisdiction.

2 more...

For some reason I only got this on my feed now. But if you read this mods, thank you for all your hard work

I'm sorry you (and the other admins & mods of the community) have to deal with this shit, it's disgusting. Thank you for doing what you do.

Of all the ways to display a lack of positive role-model behaviour, it had to be this. Seeing that shit kind of fucked me up, NGL. Good health to the mods who are running defense for us!

1 more...

I'd like to do my part and report there are a ton of assholes on lemmy. It's REALLY bad here and I've been around a while.

That's the portion of the user base that didn't leave reddit voluntarily.

Dingding.

I also think we've caught the attention of some haters that just don't like hopeful people or nice things.

I appreciate all the hard work you're doing. Not only must it be exhausting to delete all this perverse filth, but I'm guessing you have to look at it too. At least the thumbnail.

Pedophiles are fighting with the only weapon they have that can fuck up someone’s life-

The irony.

Even if they aren't, they are providing it and weaponizing it to destroy communities. This is a sad day. To think people would stoop this low is truly unhinged and unforgivable.

1 more...
1 more...