Man Arrested for Creating Child Porn Using AI

db0@lemmy.dbzer0.com to News@lemmy.world – 362 points
futurism.com

A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious purposes.

Phillip Michael McCorkle was arrested last week while he was working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the TV station captured the arrest, which made for dramatic video footage as officers led the uniformed McCorkle out of the theater in handcuffs.

I don't see how children were abused in this case? It's just AI imagery.

It's the same as saying that people get killed when you play first person shooter games.

Or that you commit crimes when you play GTA.

Not a great comparison, because unlike with violent games or movies, you can't say that there is no danger to anyone in allowing these images to be created or distributed. If they are indistinguishable from the real thing, it then becomes impossible to identify actual human victims.

There's also a strong argument that the availability of imagery like this only encourages behavioral escalation in people who suffer from the affliction of being a sick fucking pervert pedophile. It's not methadone for them, as some would argue. It's just fueling their addiction, not replacing it.

The difference is intent. When you're playing an FPS, the intent is to play a game. When you play GTA, the intent is to play a game.

The intent with AI generated CSAM is to watch kids being abused.

Who's to say there aren't people playing games to watch people die?

There may well be the odd weirdo playing Call of Duty to watch people die.

But everyone who watches CSAM is watching it to watch kids being abused.

When you're playing an FPS, the intent is to watch people being murdered.

How is this argument any different?

Punishing people for intending to do something is punishing them for thought crimes. That is not the world I want to live in.

This guy did do something - he either created or accessed AI generated CSAM.

I'm not talking about "this guy". I'm talking about what you just said.

Intent is defined as intention or purpose. So I'll rephrase for you: the purpose of playing an FPS is to play a game. The purpose of playing GTA is to play a game.

The purpose of AI generated CSAM is to watch children being abused.

I don't think that's fair. It could just as well be said that the purpose of violent games is to simulate real life violence.

Even if I grant you that the purpose of viewing CSAM is to see child abuse, it's still less bad than actually abusing them, just like playing violent games is less bad than participating in real violence. Also, despite the massive increase in violent games and movies, actual rates of violence are going down, so implying that viewing such content would increase the cases of child abuse is an assumption I'm not willing to make either.

The purpose of a game is to play a game through a series of objectives and challenges.

Even if I grant you that the purpose of viewing CSAM is to see child abuse

Very curious to hear what else you think the purpose of watching CSAM might be.

it’s still less bad than actually abusing them

"less bad" is relative. A bad thing is still bad. If we go by length of sentencing then rape is 'less bad' than murder. that doesn't make it 'not bad'.

so implying that viewing such content would increase the cases of child abuse is an assumption I’m not willing to make either.

OK?

I didn't claim that AI CSAM increased anything at all. Literally all I've said is that the purpose of AI generated CSAM is to watch kids being abused.

Neither did I claim that violent games lead to violence. You invented that strawman all by yourself.

A person said that there is no victim in creating simulated CSAM with AI, just like there isn't one in video games, to which you replied that the difference is intention. The intention when playing violent games is to play a game, whereas with CSAM the intention is to view abuse material.

Correct so far?

Of course the intent is that. For what other reason would anyone want to see CSAM than to see CSAM? What kind of argument / conclusion is this supposed to be? How else am I supposed to interpret this than as you advocating for the criminalization of creating such content despite the fact that no one is being harmed? How is that not pre-emptively punishing people for crimes they've yet to even commit? Nobody chooses to be born with such thoughts or desires, so I don't see the point of punishing anyone for that alone.

I've literally got no idea what you're talking about or what your point is. Are you saying this person hasn't committed a crime? Because that's incorrect. Lots of jurisdictions have laws against things like AI-generated CSAM imagery, deepfake porn, and a whole raft of other things. 'Harm' doesn't begin and end with something done to an individual for a lot of crimes.

Are you saying this person hasn’t committed a crime?

Yes, and if the law is interpreted in a way that makes it illegal, and the person is punished for it, then that's a moral injustice and the kind of senselessness we as humans should grow out of. The fact that this "crime" has no victim is the whole point of why punishing for it makes no sense.

CSAM is illegal for a very good reason; producing it without abusing children is by definition impossible. By searching for and viewing such content, the person becomes part of the causal chain that leads to it being produced in the first place. By criminalizing it we attempt to deter people from looking for it and thus bringing down the demand and disincentivizing the production of it.

Using AI that is not trained on such content is out of this loop. There is literally nobody being harmed if someone decides to use it to create depictions of such content. It's not actual CSAM it's producing; by the very definition it cannot be. Not any more than shooting a person in a video game is a murder. CSAM stands for Child Sexual Abuse Material (I hate even saying that); in other words, proof of the crime having happened. AI-generated images are fiction. Nobody is being harmed. It's just a more photorealistic version of a drawing. Treating it as actual CSAM in court is insanity.

Now, if the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. This is however not what we're talking about here.

Well, the image generator had to be trained on something first in order to spit out child porn. While it may be that the training set was solely drawn/rendered images, we don't know that, and even if the output were in that style, it might very well be photorealistic images generated from real child porn and run through a filter.

An AI that is trained on children and nude adults can infer what a nude child looks like without ever being trained specifically with those images.

Your argument is hypothetical. Real-world AI was trained on images of abused children.

https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
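
For context on how investigations like that one find such material: dataset audits typically hash-match every image against known-abuse hash lists maintained by child-safety organizations. A minimal sketch of that screening step in Python — the file names here are hypothetical, and real screeners use perceptual hashes such as PhotoDNA (which survive re-encoding) rather than plain SHA-256:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's raw bytes. Real audits use perceptual hashes
    (e.g. PhotoDNA) so re-encoded copies still match."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def screen_dataset(image_dir: str, blocklist_file: str) -> list[Path]:
    """Return all images whose hash appears on a known-abuse blocklist."""
    blocklist = set(Path(blocklist_file).read_text().split())
    return [p for p in Path(image_dir).rglob("*.jpg")
            if sha256_of(p) in blocklist]

# Hypothetical paths: a scraped training set and a hash blocklist.
flagged = screen_dataset("training_images/", "blocklist.txt")
print(f"{len(flagged)} images matched the blocklist and must be removed")
```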

Only because real world AI was trained on the dataset of ALL PUBLIC IMAGES, dumbass

So you're admitting they are correct?

No, I'm admitting they're stupid for even bringing it up.

Unless their argument is that all AI should be illegal, in which case they're stupid in a different way.

Do you think regular child porn should be illegal? If so, why?

Generally it's because kids were harmed in the making of those images. Since we know that AI is using images of children being harmed to make these images, as other posters have repeatedly sourced (but also, if you've looked up deepfakes, most deepfakes are an existing porn video with the face swapped over top. They do this with CP as well and must use CP videos to seed it, because the adult model would be too large)... why does AI get a pass for using children's bodies in this way? Why isn't it immoral when AI is used as a middle man to abuse kids?

Since we know that AI is using images of children being harmed to make these images

As I keep saying, if this is your reasoning then all AI should be illegal. It only has CP in its training set incidentally, because the entire dataset of images on the internet contains some CP. It's not being specifically trained on CP images.

You failed to answer my questions in my previous comment.

Ok, if you insist...yes, CP should be illegal, since a child was harmed in its making. It can get a bit nuanced (for example, I don't like that it can be illegal for underage people to take pictures of their own bodies) but that's the gist of it.

That's not all of the questions I asked.

They do this with CP as well and must use CP videos to seed it, because the adult model would be too large)… why does AI get a pass for using children’s bodies in this way? Why isn’t it immoral when AI is used as a middle man to abuse kids?

Yes, exactly. The people excusing this with "well, it was trained on all public images" are just admitting you're right, and that there is a level of harm here since real materials are used. Even if they weren't being used, or if it was just a cartoon, the morality is still shaky because of the role porn plays in advertising. We already have laws about advertising because it's so effective, including around cigarettes and prescriptions. Most porn, ESPECIALLY FREE PORN, is an ad to get you to buy other services. CP is not excluded from this rule - no one gets a free lunch, so to speak. These materials are made and hosted for a reason.

The role that CP plays in most countries is a complicated one. It is used for blackmail. It is also used to generate money for countries (intelligence groups around the world host illegal porn ostensibly "to catch a predator," but then why is it morally okay for them to distribute these images but no one else?). And it's used as advertising for actual human trafficking organizations. And similar organizations exist for snuff and gore btw. And ofc animals. And any combination of those 3. Or did you all forget about those monkey torture videos, or the orangutan who was being sex trafficked? Or Daisy's Destruction and Peter Scully?

So it's important to not allow these advertisers to combine their most famous monkey torture video with enough AI that they can say it's AI generated, but it's really just an ad for their monkey torture productions. And even if NONE of the footage was from illegal or similar events and was 100% thought of by AI - it can still be used as an ad for these groups if they host it. Cartoons can be ads ofc.

How many corn dogs do you think were in the training data?

Wild corn dogs are an outright plague where I live. When I was younger, me and my buddies would lay snares to catch corn dogs. When we caught one, we'd roast it over a fire to make popcorn. Corn dog cutlets served with popcorn from the same corn dog is a popular meal, especially among the less fortunate. Even though some of the affluent consider it the equivalent to eating rat meat. When me pa got me first rifle when I turned 14, I spent a few days just shooting corn dogs.

It didn't generate what we expect and know a corn dog to be.

Hence it missed, because it doesn't know what a "corn dog" is.

You have proven the point that it couldn't generate CSAM without some being present in the training data.

I hope you didn't seriously think the prompt for that image was "corn dog" because if your understanding of generative AI is on that level you probably should refrain from commenting on it.

Prompt: Photograph of a hybrid creature that is a cross between corn and a dog

Then if your question is "how many photographs of 'a hybrid creature that is a cross between corn and a dog' were in the training data?"

I'd honestly say: I don't know.

And if you're honest, you'll say the same.

But you do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.

This is because it doesn't need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two it will gladly do so.
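
For illustration, here's a minimal sketch of that kind of compositional prompting, assuming the Hugging Face diffusers library and the public Stable Diffusion v1.5 checkpoint (hypothetical choices; any text-to-image model behaves the same way):

```python
# pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

# Load a public text-to-image checkpoint (assumes a CUDA-capable GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The model has seen corn and dogs separately in training;
# the prompt alone asks it to compose the two concepts.
prompt = "Photograph of a hybrid creature that is a cross between corn and a dog"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("corn_dog_hybrid.png")
```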

But you do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.

Yeah, except photoshop and artists exist. And a quick google image search will find them. 🙄

And this proves that AI can't generate simulated CSAM without first having seen actual CSAM how, exactly?

To me, the takeaway here is that you can take a shitty 2 minute photoshop doodle and by feeding it thru AI it'll improve the quality of it by orders of magnitude.

I wasn't the one attempting to prove that. Though I think it's definitive.

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

To me, the takeaway here is that you can take a shitty 2 minute photoshop doodle and by feeding it thru AI it'll improve the quality of it by orders of magnitude.

To me, the takeaway is that you know less about AI than you claim. Much less. Because we have actual instances, and many of them, where CSAM is in the training data. Don't believe me?

Here's a link to it

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

I don't understand how you could possibly imagine that pic somehow proves your claim. You've made no effort to explain yourself. You just keep dodging my questions when I ask you to do so. A shitty photoshop of a "corn dog" has nothing to do with how the image I posted was created. It's a composite of corn and a dog.

Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn't in its training data.

Because we have actual instances, and many of them, where CSAM is in the training data.

If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. This is however not what we're talking about here.

Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept

If a human has never seen a dog before, they don't know what it is or what it looks like.

If it's the same as a human, it won't be able to draw one.

we don't know that

might

Unless you're operating under "guilty until proven innocent", those are not reasons to accuse someone.

How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn't negate the exploitation of children all the way down.

So no, you are making false equivalence with your video game metaphors.

A generative AI model doesn't require the exact thing it creates to be in its training data. It most likely just combined regular nudity with a picture of a child.

In that case, the images of children were still used without their permission to create the child porn in question

That's not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.

Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.
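
That is roughly how existing output filters already work: a separate model screens every generated image before it is returned. A minimal sketch of the idea using zero-shot CLIP matching via the Hugging Face transformers library — the label lists here are hypothetical placeholders; production safety checkers compare against curated concept embeddings instead:

```python
# pip install torch transformers pillow
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical label sets, for illustration only.
UNSAFE = ["nudity", "graphic violence"]
SAFE = ["a landscape photo", "a portrait", "an animal", "food"]

def is_unsafe(image: Image.Image, threshold: float = 0.5) -> bool:
    """Zero-shot check: does the image match an unsafe concept
    more strongly than the safe ones?"""
    inputs = processor(text=UNSAFE + SAFE, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    return probs[: len(UNSAFE)].sum().item() > threshold

# Usage: screen a freshly generated image before serving it.
if is_unsafe(Image.open("generated.png")):
    print("Output rejected by the safety filter")
```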

Why does it need to be “nuanced” to be valid or correct?

Because the world we live in is complex, and rejecting complexity for a simple view of the world is dangerous.

See You Can’t Get Snakes from Chicken Eggs from the Alt-Right Playbook.

(Note I’m not accusing you of being alt-right. I’m saying we cannot ignore nuance in the world because the world is nuanced.)

That's a whole different issue from the AI model being trained on CSAM. I'm currently neutral on this topic, so I'd recommend replying to the main thread.

How is it different?

It's not CSAM in the training dataset, just pictures of children/people that are already publicly available. That goes to the copyright side of AI, not illegal training material.

It’s images of children used to make CSAM. No amount of mental gymnastics can change that, nor the fact that those children’s consent was not obtained.

Why are you trying so hard to rationalize the creation of CSAM? Do you actually believe there is a context in which CSAM is OK? Are you that sick and perverted?

Because it really sounds like that’s what you’re trying to say, using copyright law as an excuse.

It's every time with you people, you can't have a discussion without accusing someone of being a pedo. If that's your go-to that says a lot about how weak your argument is or what your motivations are.

It’s hard to believe someone is not a pedo when they advocate so strongly for child porn

You're just projecting your unwillingness to ever take a stance that doesn't personally benefit you.

Some people can think about things objectively and draw a conclusion that makes sense to them without personal benefit being a primary determinant of said conclusion.

You're just projecting your unwillingness to ever take a stance that doesn't personally benefit you.

I’m not the one here defending child porn

You're arguing against a victimless outlet that significant evidence suggests would reduce the incidence of actual child molestation.

So let's use your 'logic'/argumentation: why are you against reducing child molestation? Why are you against fake pictures but not actual child molestation? Why do you want children to be molested?

It's hard to argue with someone who believes that using legal data to create more data could ever be illegal.

Lol, you don't understand that the AI-generated faces are not real. In any way.

I am not trying to rationalize it, I literally just said I was neutral.

How are you neutral about child porn? The vast majority of people on this planet are very much against it.

I'm not neutral about child porn, I'm very much against it; stop trying to put words in my mouth. I'm saying that this kind of use of AI could be in the very same category as loli imagery, since these are not real child sexual abuse material.

I'm not neutral about child porn

Then why are you defending it?

Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.

Can you or anyone verify that the model was trained on CSAM?

Besides, an LLM doesn't need explicit content to derive from in order to create a naked child.

You’re defending the generation of CSAM pretty hard here, with some vague "but no child we know of was involved" as a defense.

I just hope that the models aren't trained on CSAM. That would make generating stuff they can fap to "ethically reasonable," as no children would be involved. And I hope that those who have those tendencies can be helped, one way or another, in a way that doesn't involve chemical castration or incarceration.

While I wouldn't put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don't think that is how this stuff works.

But the AI companies insist the outputs of these models aren't derivative works in any other circumstances!
