Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI's power to mislead

L4sBot@lemmy.world (mod) to Technology@lemmy.world
apnews.com

Among images of the bombed-out homes and ravaged streets of Gaza, some stood out for the utter horror: bloodied, abandoned infants.

Well, not necessarily. How about just embedding the following in the EXIF metadata: digital signatures from the original camera, digital hashes of the original image, and digital sigs for the publisher and the article where the pics will appear.

Any additional processing by a "social media content creator" - for example, adding captions to make a meme out of it - will also include the prior chain of digital sigs and hashes.
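Roughly what that chain could look like - just a minimal sketch, not any real EXIF standard; the field names, the Ed25519 keypair per party, and the helper below are all illustrative assumptions:

```python
# Minimal sketch of a per-image signature chain. Assumes each party
# (camera, publisher, editor) holds an Ed25519 keypair; the field
# names are made up for illustration, not a real EXIF standard.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_signature(chain, image_bytes, signer, private_key, note=""):
    """Hash the current image bytes and sign them together with the prior chain."""
    payload = json.dumps({
        "signer": signer,
        "note": note,                               # e.g. "original" or "edited: caption added"
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_chain": chain,                        # earlier sigs are part of what gets signed
    }, sort_keys=True).encode()
    return chain + [{"payload": payload.decode(),
                     "signature": private_key.sign(payload).hex()}]

# The camera signs the original capture...
original_bytes = b"\xff\xd8 ...raw JPEG bytes..."   # stand-in for the actual photo
camera_key = Ed25519PrivateKey.generate()
chain = append_signature([], original_bytes, "AhmedMohammed[camera]", camera_key, "original")

# ...and anyone who edits it (publisher, meme-maker) re-signs the new
# bytes with their own key, extending rather than replacing the chain.
```

So a captioned meme would carry both the original camera signature and the editor's signature, and the provenance survives the edit.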

Now when it pops up on social media sites/apps, there can be little info bubbles that link to the original pic or article, or provide info on ownership of the camera along with date and timestamps of the pics.

Garbage will always exist on social media, but at least we can have these little tools to verify authentic images.

How would they be made secure against faking?

If the cryptographic key itself was extractable, it'd be easy to sign fake images with just a bit of custom software.

If it isn't, there are still workarounds. Buy a professional photography camera, disassemble it, extract the chip that does the signing, feed it fake GPS and image data, and you have a fabricated image signed as legit. A country's intelligence agency could easily do that.

Even if the camera was made completely unmodifiable, you could put it in a Faraday cage, feed it a spoofed GPS signal for fake date/time/location data, and take a picture of a high resolution screen showing your photoshopped image.

Building a system where end users are told "this image is cryptographically confirmed to be legit" just makes it easier to convince users that your fake images are legit.

Oh no. No social media site should ever claim that a post, story, or image is legit.

For some viral pics/posts, it should probably show a warning that the image has no signatures, no valid signatures, or a revoked signature. Otherwise, it could just show the verified signature chain, for example: BleedingHeartInfluencer*[edited]* → NyTimes*[edited]* → AP*[story]* → AhmedMohammed*[photographer,2023-12-03]*.
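Continuing the earlier sketch (again purely illustrative - it assumes the site already has the signers' published public keys on file), the check behind those bubbles might look something like:

```python
# Sketch of the check a site could run before deciding which bubble or
# warning to show. Assumes the chain format from the earlier sketch and
# a lookup table of known public keys - both illustrative assumptions.
import hashlib
import json

from cryptography.exceptions import InvalidSignature

def chain_status(chain, image_bytes, known_keys):
    """Return a label a UI could render next to a viral image."""
    if not chain:
        return "no signatures"
    for entry in chain:
        payload = json.loads(entry["payload"])
        public_key = known_keys.get(payload["signer"])      # e.g. AP's published key
        if public_key is None:
            return "unknown signer: " + payload["signer"]
        try:
            public_key.verify(bytes.fromhex(entry["signature"]),
                              entry["payload"].encode())
        except InvalidSignature:
            return "invalid signature from " + payload["signer"]
    # Only the newest entry has to match the bytes being displayed;
    # earlier hashes cover earlier (pre-edit) versions of the image.
    newest = json.loads(chain[-1]["payload"])
    if newest["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return "image does not match its latest signature"
    return "verified: " + " → ".join(json.loads(e["payload"])["signer"] for e in chain)
```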

We can always assume nation states and other powerful people will know how to fake images, GPS, reality, etc. We can also always assume fakes will still be shared by many people without any proper authentication.

The main goal here would just be to reduce proliferation.

In this case you'd still need a way to know who the photographer is and whether they can be trusted. The photographer at the beginning of the chain can sign anything, regardless of whether it's a real photograph or an edited one (or a real photograph of a staged scene with fake location/time data). The cryptography could only tell you that the image originates with the same person or organisation associated with a specific key.
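Concretely, in terms of the sketches above: nothing stops the key holder from signing doctored bytes, and the chain check still comes back clean - it proves who signed, not what actually happened in front of the lens.

```python
# The signing key proves identity, not authenticity: a staged or
# photoshopped image signed by the same key verifies just as cleanly.
# (Reuses append_signature, chain_status, and camera_key from the sketches above.)
fake_bytes = b"\xff\xd8 ...photoshopped JPEG bytes..."
fake_chain = append_signature([], fake_bytes, "AhmedMohammed[camera]", camera_key, "original")

known_keys = {"AhmedMohammed[camera]": camera_key.public_key()}
print(chain_status(fake_chain, fake_bytes, known_keys))   # -> "verified: AhmedMohammed[camera]"
```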