Man Arrested for Creating Child Porn Using AI

db0@lemmy.dbzer0.com to News@lemmy.world – 362 points –
futurism.com

A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious reasons.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led McCorkle, still in his work uniform, out of the theater in handcuffs.



How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of "imaginary" children doesn't negate the exploitation of children all the way down.

So no, you are making false equivalence with your video game metaphors.

A generative AI model doesn't need examples of the exact thing it creates in its training data. It most likely just combined regular nudity with pictures of children.

In that case, the images of children were still used without their permission to create the child porn in question.

That's not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.

Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.

Why does it need to be "nuanced" to be valid or correct?

Because the world we live in is complex, and rejecting complexity for a simple view of the world is dangerous.

See You Can’t Get Snakes from Chicken Eggs from the Alt-Right Playbook.

(Note I’m not accusing you of being alt-right. I’m saying we cannot ignore nuance in the world because the world is nuanced.)

That's a whole other thing than the AI model being trained on CSAM. I'm currently neutral on this topic, so I'd recommend replying to the main thread.

How is it different?

It's not CSAM in the training dataset, it's just pictures of children/people that are already publicly available. This goes on to the copyright side of things of AI instead of illegal training material.

It’s images of children used to make CSAM. No amount of mental gymnastics can change that, nor the fact that those children’s consent was not obtained.

Why are you trying so hard to rationalize the creation of CSAM? Do you actually believe there is a context in which CSAM is OK? Are you that sick and perverted?

Because it really sounds like that’s what you’re trying to say, using copyright law as an excuse.

It's every time with you people, you can't have a discussion without accusing someone of being a pedo. If that's your go-to that says a lot about how weak your argument is or what your motivations are.

It’s hard to believe someone is not a pedo when they advocate so strongly for child porn

You're just projecting your unwillingness to ever take a stance that doesn't personally benefit you.

Some people can think about things objectively and draw a conclusion that makes sense to them without personal benefit being a primary determinant of said conclusion.

You're just projecting your unwillingness to ever take a stance that doesn't personally benefit you.

I’m not the one here defending child porn

You're arguing against a victimless outlet that significant evidence suggests would reduce the incidence of actual child molestation.

So let's use your 'logic'/argumentation: why are you against reducing child molestation? Why are you against fake pictures but not actual child molestation? Why do you want children to be molested?

Your claim that it’s victimless is, of course, false, since real children are used in the training data without consent. This also ignores the fact that the result is child porn, which you are arguing in support of.

Lastly, your claim that any of this results in any reduction in child abuse is spurious and unsubstantiated.


It's hard to argue with someone who believes the use of legal data to create more data is illegal.


Lol, you don't understand that the faces the AI generates are not real. In any way.

I am not trying to rationalize it, I literally just said I was neutral.

How are you neutral about child porn? The vast majority of everyone on this planet is very much against it.

I'm not neutral about child porn, I'm very much against it, so stop trying to put words in my mouth. I'm saying this kind of AI use could fall into the same category as loli imagery, since it is not real child sexual abuse material.

I'm not neutral about child porn

Then why are you defending it?


Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.


Can you or anyone verify that the model was trained on CSAM?

Besides, a generative model doesn't need explicit content to derive from in order to create a naked child.

You’re defending the generation of CSAM pretty hard here, with some vague "but no child we know of was involved" as a defense.

I just hope that the models aren't trained on CSAM, which would make generating stuff they can fap to "ethically reasonable," as no children would be involved. And I hope that those who have those tendencies can be helped one way or another in a way that doesn't involve chemical castration or incarceration.

While I wouldn't put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don't think that is how this stuff works.

But the AI companies insist the outputs of these models aren't derivative works in any other circumstances!
