Man Arrested for Creating Child Porn Using AI

db0@lemmy.dbzer0.com to News@lemmy.world – 362 points –
Man Arrested for Creating Child Porn Using AI
futurism.com

A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious purposes.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led McCorkle, still in his work uniform, out of the theater in handcuffs.

I wasn't the one attempting to prove that. Though I think it's definitive.

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

To me, the takeaway here is that you can take a shitty 2 minute photoshop doodle and by feeding it thru AI it'll improve the quality of it by orders of magnitude.

To me, the takeaway is that you know less about AI than you claim. Much less. Because we have actual instances, and many of them, where CSAM is in the training data. Don't believe me?

Here's a link to it

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

I don't understand how you could possibly imagine that pic proves your claim. You've made no effort to explain yourself; you just keep dodging my questions when I ask. A shitty photoshop of a "corn dog" has nothing to do with how the image I posted was created. It's just a composite of a corn cob and a dog.

Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn't in its training data.

Because we have actual instances, and many of them, where CSAM is in the training data.

If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. That is, however, not what we're talking about here.

Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept

If a human has never seen a dog before, they don't know what it is or what it looks like.

If it's the same as a human, it won't be able to draw one.