A NSFW detector with CoreML

pexavc@lemmy.world to Open Source@lemmy.ml – 73 points –
GitHub - lovoo/NSFWDetector: A NSFW (aka porn) detector with CoreML (https://github.com/lovoo/NSFWDetector)

Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it's a good idea for those building native clients for Lemmy to implement projects like these and run offline inference on feed content for the time being, to cover content that isn't marked NSFW but should be.
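To give an idea of what a client-side check could look like on iOS, here's a rough sketch using Apple's Vision framework with a bundled CoreML classifier. The `NSFWClassifier` class name, the "NSFW" label, and the 0.8 threshold are placeholders for illustration, not the actual API of the NSFWDetector library:

```swift
import Vision
import CoreML
import UIKit

/// Sketch of an on-device NSFW check, assuming a compiled CoreML classifier
/// is bundled with the app. `NSFWClassifier`, the "NSFW" label, and the 0.8
/// threshold are placeholders, not NSFWDetector's real API.
func checkImage(_ image: UIImage, completion: @escaping (Bool) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? NSFWClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(false)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Treat the image as NSFW if the classifier is confident enough.
        let observations = request.results as? [VNClassificationObservation] ?? []
        let nsfwScore = observations.first { $0.identifier == "NSFW" }?.confidence ?? 0
        completion(nsfwScore > 0.8)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion(false)
        }
    }
}
```

A client could run something like this against thumbnails as the feed loads and blur anything over the threshold until the user taps through.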

What does everyone think about enforcing further censorship like this on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There's also this, which takes a bit more effort to implement properly but provides a hash that can be used for reporting needs: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX (a rough sketch of the general hashing idea follows after these links).

Python package (MIT): https://pypi.org/project/opennsfw-standalone/
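Just to illustrate the hashing idea mentioned above (this is not NeuralHash; it's a simple 64-bit average hash used as a stand-in), something like this could fingerprint an image so it can be matched against a report list without sharing the image itself:

```swift
import CoreGraphics
import Foundation

/// Stand-in for a perceptual hash: a 64-bit "average hash" (aHash).
/// Not Apple's NeuralHash; it only illustrates deriving a compact,
/// comparable fingerprint from an image for matching/reporting.
func averageHash(of image: CGImage) -> UInt64? {
    let side = 8
    // Downscale to an 8x8 grayscale bitmap.
    guard let context = CGContext(
        data: nil,
        width: side,
        height: side,
        bitsPerComponent: 8,
        bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceGray(),
        bitmapInfo: CGImageAlphaInfo.none.rawValue
    ) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: side, height: side))
    guard let data = context.data else { return nil }

    // Read pixel values row by row (respecting any row padding).
    let bytesPerRow = context.bytesPerRow
    let pixels = data.bindMemory(to: UInt8.self, capacity: bytesPerRow * side)
    var values: [Int] = []
    for y in 0..<side {
        for x in 0..<side {
            values.append(Int(pixels[y * bytesPerRow + x]))
        }
    }

    // Set one bit per pixel that is brighter than the mean brightness.
    let mean = values.reduce(0, +) / values.count
    var hash: UInt64 = 0
    for (index, value) in values.enumerated() where value > mean {
        hash |= UInt64(1) << UInt64(index)
    }
    return hash
}

/// Small Hamming distances between hashes suggest near-duplicate images.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}
```

Small edits to an image can flip an aHash, which is why more robust perceptual hashes like NeuralHash exist in the first place.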

Two of them are licensed under BSD-3, so not open source. The 3rd one uses Firebase, so no thanks.

Edit: BSD-3 is open source. I confused it with BSD-4. My bad.

How is BSD-3 not open source? I think you are confusing "Free/Libre" and Open Source. BSD-3/MIT licenses are absolutely open source. GPL is Free/Libre and Open Source (FLOSS)

It's not by the OSD's definition. Having source code available ≠ open source.

And most Lemmy clients I have seen use GPL or AGPL licences, so they couldn't use code licensed under BSD.

Edit: This is incorrect. I confused it with BSD-4. My bad.

What in the BSD-3 license goes against OSD exactly?

You are clearly confused. BSD-3 isn't only "having the source"; it gives you the right to package, distribute, and modify the source code at will. What it doesn't have compared to the GPL is protection against someone not sharing their modifications (for example, when it's used in closed-source products). In that sense it is more "freedom" than the GPL, but that freedom comes at a cost to the community and, in a sense, to the freedom afforded to the original author.

It is literally approved by the OSI itself: https://opensource.org/license/bsd-3-clause/

And yes, BSD-3 libraries are compatible with the GPL: https://fossa.com/blog/open-source-software-licenses-101-bsd-3-clause-license/

Is there a confidently wrong community on Lemmy yet?

You are correct. I'm sorry, I confused it with BSD-4 as that used to be the 3rd clause. I updated my post and thank you for calling me out.

That's still wrong, though. BSD-4 is literally FSF approved. It's just not GPL compatible and not technically OSI approved, but only on a technicality. The only difference between BSD-3 (BSD New) and BSD-4 (BSD Old) is the advertising clause. It has nothing to do with redistribution, packaging, or modification of the code. OSI doesn't agree with the advertising clause, so the license isn't officially approved, but that doesn't mean it isn't open source.

That's where I disagree. While it's true that the only difference is GPL compatibility, it's definitely against the spirit of open source and the OSD. So it is a source-available license, but calling it open source is a stretch. The simple fact that it's rendered unusable for GPL projects goes against what open source stands for.

True as that may be, your original statement that BSD-4 is not open source is still completely wrong, plain and simple. BSD-4 is not just having access to the source; it gives you significant rights over the source as well. The incompatibility lies with a technicality, an inconvenient one, but a technicality nonetheless. Even the FSF agrees.

I do agree with you that 4-clause BSD is open-source, but only just barely, and I agree with GP that it goes against the spirit of FOSS even if it is technically "open-source".

Plus the advertising clause is just an obnoxious thing to have in a license regardless.

Good point, but I was just providing samples. I myself would gladly create a simple package for inference using a properly licensed model file.

Edit: Linked an MIT-licensed Keras model, for instance. Also, thanks for the tip; I didn't know about the GPL/BSD relationship.

That person is wrong about the BSD-3 license, so it's not a very good "tip".

Oh I see, just saw your other comment as well.

By definition you can't have some of these things open source; CSAM/NSFW detection needs to be closed source because people are constantly trying to get around it.

Security through obscurity doesn't work. These systems need to be actually robust, which is only trustworthy with open source

That is literally not the problem; it's not about security. It's obfuscation on purpose so things can't be reverse engineered. I agree with you in most other cases, but this is one where I don't. It's the same reason there aren't public hash lists of these vile images out there: because then the people posting them would just alter them. Same with fuzzy hashing and other strategies; these lists and bits of code must remain private so offenders aren't tipped off that their content is tripping the filters.

This can't be a cat-and-mouse game all the time when it comes to CSAM; it has to work for a while. So I'm fully on board with keeping it private while we can; it's the one area I'm okay with doing that. If it's open, bad actors will just immediately find a way around detection, and every means of catching it will be obsolete until we find another one. While we're waiting to find that other way, they're going around posting that shit everywhere, and then it doesn't matter how open source Lemmy is, because all of our domains will be seized.

Because any detector has to be based on machine learning, you can open source all of the code provided you keep the model weights and training data private.
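Concretely, that split could look something like this: the detection code ships in the open, but the compiled weights are fetched from a location only trusted instance admins know. This is just a sketch; the download step, file name, and the idea of distributing weights this way are assumptions, not an existing Lemmy feature:

```swift
import CoreML
import Foundation

/// Sketch of the "open code, private weights" split: the detection code is
/// public, while the .mlmodel file is obtained from a private, admin-controlled
/// location. The download step and file name here are hypothetical.
func loadPrivateModel(from privateWeightsURL: URL) throws -> MLModel {
    // Fetch the raw model definition (synchronously, for brevity; a real
    // deployment would authenticate and download this asynchronously).
    let data = try Data(contentsOf: privateWeightsURL)
    let localCopy = FileManager.default.temporaryDirectory
        .appendingPathComponent("private-detector.mlmodel")
    try data.write(to: localCopy)

    // Compile on-device and load. None of the weights need to ship with
    // the open-source client itself.
    let compiledURL = try MLModel.compileModel(at: localCopy)
    return try MLModel(contentsOf: compiledURL)
}
```

Adversarial examples are still possible with black-box access (as pointed out below), so this only raises the bar rather than solving the problem.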

But there's a fundamental question here that comes from Lemmy being federated: how can you give CSAM-detecting code/binaries to every instance owner without trolls getting access to them?

Some instances will be run by trolls, and black-box access is enough to create adversarial examples that will bypass the model; you don't need the source code.

That discussion is happening. Right now the prevailing idea is that it's an instance-admin opt-in feature, where you can host it yourself or use a tool hosted elsewhere to prevent it. On top of that, instance admins should be allowed to block federating images, so things uploaded on other instances are not federated to us and those images are instead requested directly from the originating instance. That would help cut down on the spread of bad material, and if something was purged on the home instance it could be purged everywhere.

Just chiming in here to say that this is very much like security through obscurity. In this context the "secure" part is being sure that the images you host are ok.

For bad actors, using social engineering to get the banlist is much easier than getting around an open-source AI whose bugs we collectively fix whenever the trolls manage to evade it. It's not that easy to get around image filters like this, and having to do weird things to the pictures to get them past the filter could be enough work to keep most trolls from posting.

Using a central instance that filters all images is also not good, because then the person operating the service is responsible for a large chunk of your images, creating a single point of failure in the fediverse (and something that could be monetised to our detriment).

Closed source can't be the answer either, because if someone breaks the filter, the community can't fix it; only the developer can. So either the dev team is online 24/7 or it's paid, making hosting a fediverse server dependent on someone's closed-source product.

I do think, however, that disabling image federation should be an option. Turning image federation off for a particular server for a limited time could be a very effective tool against these kinds of attacks!