A bride-to-be discovers a reality-bending mistake in Apple's computational photography

stopthatgirl7@kbin.social to Technology@lemmy.world – 258 points
appleinsider.com

A U.K. woman was photographed in front of a mirror in which her reflections didn't match, but not because of a glitch in the Matrix. Instead, it's a simple iPhone computational photography mistake.

This story may be amusing, but it's actually a serious issue if Apple is doing this and people are not aware of it because cellphone imagery is used in things like court cases. Relative positions of people in a scene really fucking matter in those kinds of situations. Someone's photo of a crime could be dismissed or discredited using this exact news story as an example -- or worse, someone could be wrongly convicted because the composite produced a misleading representation of the scene.

I see your point, though I wouldn't take it that far. It's an edge case that has to happen within a very short duration.
Similar effects can be achieved with traditional cameras with rolling shutter.
If you're only concerned about the relative positions of different people during a time frame, I don't think you need to be that worried. Being aware of it is enough.
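
For comparison, the rolling-shutter effect being referenced is easy to simulate: every row of the sensor is read out at a slightly different moment, so one frame mixes different points in time. A toy sketch (the frame count and readout mapping here are invented purely for illustration, not taken from any real camera):

```python
import numpy as np

def rolling_shutter(frames: list) -> np.ndarray:
    """Build one image where row r is copied from whichever frame was current
    while that row was being read out (the classic rolling-shutter effect)."""
    h, _ = frames[0].shape
    out = np.zeros_like(frames[0])
    for r in range(h):
        f = min(len(frames) - 1, r * len(frames) // h)  # later rows sample later frames
        out[r] = frames[f][r]
    return out

# Example: a bright vertical bar sliding right across 10 hypothetical frames.
frames = []
for t in range(10):
    img = np.zeros((100, 100))
    img[:, 10 + t * 5 : 15 + t * 5] = 1.0  # bar moves 5 px per frame
    frames.append(img)

skewed = rolling_shutter(frames)
cols = np.flatnonzero(skewed.sum(axis=0))
print(cols.min(), cols.max())  # roughly 10 and 59: the bar is smeared into a slant
```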

I don't think that's what's happening. I think Apple is "filming" over the course of the seconds you have the camera open, and using the press of the shutter button to select a specific shot from the hundreds of frames that have been taken as video. Then, some algorithm appears to be assembling different portions of those shots into one "best" shot.

It's not just a mechanical shutter effect.
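
Purely as an illustration of the kind of pipeline being described here (this is not Apple's actual algorithm; the tile size and the sharpness score are made up), a composite like that could be sketched as: buffer a short burst of frames and, for every tile of the output, copy that tile from whichever buffered frame looks best there.

```python
import numpy as np

def sharpness(tile: np.ndarray) -> float:
    """Crude sharpness score: variance of simple horizontal and vertical differences."""
    return float(np.var(np.diff(tile, axis=0)) + np.var(np.diff(tile, axis=1)))

def composite_best_tiles(frames: list, tile: int = 64) -> np.ndarray:
    """Assemble one photo by picking, for each tile, the sharpest of the buffered frames."""
    h, w = frames[0].shape
    out = np.zeros_like(frames[0])
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            candidates = [f[y:y + tile, x:x + tile] for f in frames]
            out[y:y + tile, x:x + tile] = max(candidates, key=sharpness)
    return out

# Example: three grayscale frames captured over the course of roughly a second.
rng = np.random.default_rng(0)
burst = [rng.random((256, 256)) for _ in range(3)]
photo = composite_best_tiles(burst)
print(photo.shape)  # (256, 256), with different tiles taken from different frames
```

In a scheme like this, different regions of the final photo genuinely come from different moments, which is exactly how a mirror reflection could end up out of sync with the subject.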

I'm aware of the differences. I'm just pointing out that similar phenomena have been observed and discussed since rolling shutter artifacts became a thing. It still only takes milliseconds for an iPhone to finish taking its plethora of photos to composite. For the majority of forensic use cases, it's a non-issue imo. People don't move quickly enough to substantially change relative positions irl.

Did you look at the example in the article? It's clearly not milliseconds. It's several whole seconds.

You don't need a few whole seconds to put an arm down.

Edit: I should rephrase. I don't think computational photography algorithms would risk compositing photos that are whole seconds apart. In well-lit environments, one photo only needs 1/100 of a second or less to expose properly. Using photos that are temporally too far apart risks objects moving too much in the frame, and thus failing to composite.

There are three different arm positions in a single picture. That doesn't happen in the blink of an eye.

The camera is taking many frames over a relatively long time to do this.

This is nothing at all like rolling shutter, and it's very obvious from looking at the example in the article.

Those arm positions occur over the course of a fluid motion in a single second. How long does it take for you to drop your hands to your sides or raise them from your sides to clasped? It doesn't take me more than about half a second as a deliberate movement.

It takes you several seconds to move your arm? I hope you don’t do manual work.

Also, have you used the iOS camera app before? You can see how long it takes for the iPhone to take multiple shots for the always-on HDR feature, and it isn't several seconds.

There are three different arm positions in a single picture. That doesn't happen in the blink of an eye.

It's a lot faster than you might be expecting. I find it helps to visualize it in person. Go to a mirror and start with your hands together, like in the right-hand mirror. Now let your arms down naturally, to the position in the left-hand mirror. If you don't move your arms at the exact same time, one elbow will still be parallel to the floor while the other elbow has already extended, just like in the middle position.

Thus, we can tell that the camera compiled the image from right to left.

I can also see the three arm positions being a single motion, just captured in three different time frames. If it really takes seconds to complete a composite, then it should also be very easy to reproduce, and not something so rare it makes it into the news. If I still can't convince you, I guess we'll have to agree to disagree.

then it should also be very easy to reproduce, and not something so rare it makes it into the news.

And it is, according to the article. Just in case you haven't read it.

It has made headlines not because it's rare, but because it's outrageous. Just in case you haven't noticed.

Please, feel free to reproduce one yourself then. And no, using the panorama trick doesn't count, which I think the "silly photos" in the article may actually be referencing instead of this.

And is it really "outrageous"? At most I think this is amusing. Nothing in the article gave me the impression that this is something people need to be extremely angry about, Mr. Just In Case.

It should be. All computational photography has zero business being used in court.

All digital photography is computational. I think the word you're looking for is composite, not computational.

Unless the dude is saying only film should be admissible, which doesn't sound all that bad.

Film is also subject to manipulation in the development stage even if you avoid compositing, e.g. dodging and burning. Photographic honesty is an open and active philosophical debate that has been going on since the medium's inception. It's not like you can really draw a line in the sand and blanketly say admissible or not, although I'm sure established guidelines would help. Ultimately, it's an argument about the validity of evidence that needs to be made on a case-by-case basis. The manipulations involved need to be fully identified and accounted for in those discussions.

With all the image manipulation and generation tools available to even amateurs, I'm not sure how any photography is admissible as evidence these days.

At some point there's going to have to be a whole bunch of digital signing (and timestamp signatures) going on inside the camera for things to be even considered.
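
Something along those lines, sketched very loosely (this is not any real vendor's scheme; the key handling and timestamp format are placeholders): the camera hashes the image bytes together with a capture timestamp and signs the hash with a key kept on the device, so any later edit to the pixels or the time fails verification.

```python
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera the private key would live in secure hardware; generating it
# here is purely for the sketch.
device_key = Ed25519PrivateKey.generate()

def sign_capture(image_bytes: bytes):
    """Return (timestamp, signature) binding the image bytes to the capture time."""
    timestamp = str(time.time()).encode()
    digest = hashlib.sha256(image_bytes + timestamp).digest()
    return timestamp, device_key.sign(digest)

def verify_capture(image_bytes: bytes, timestamp: bytes, signature: bytes) -> bool:
    """Recompute the digest and check the signature against the device's public key."""
    digest = hashlib.sha256(image_bytes + timestamp).digest()
    try:
        device_key.public_key().verify(signature, digest)
        return True
    except Exception:
        return False

photo = b"...raw sensor bytes..."
ts, sig = sign_capture(photo)
print(verify_capture(photo, ts, sig))              # True
print(verify_capture(photo + b"edited", ts, sig))  # False: the pixels changed
```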

I'm still waiting for the first time somebody uses it to zoom in on a car number plate and it helpfully fills it in with some AI bullshit, producing something else entirely.

We've already seen such a thing with image compression.

https://www.zdnet.com/article/xerox-scanners-alter-numbers-in-scanned-documents/

This was important in the Kyle Rittenhouse case. The zoom resolution was interpolated by software. It wasn't AI per se, but because a jury couldn't be relied upon to understand a black-box algorithm and its possible artifacts, the zoomed video was disallowed.

(this in no way implies that I agree with the court.)
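
For context, "interpolated by software" in the non-AI sense is just deterministic arithmetic over neighboring pixels. A minimal bilinear-upscaling sketch (not the actual pinch-to-zoom code at issue in that case) makes the point that the enlarged pixels are blends of existing ones, not recovered detail:

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by an integer factor using bilinear interpolation:
    every new pixel is a weighted average of its four nearest source pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A tiny 2x2 "plate" blown up 4x: every added pixel is a blend, not real detail.
plate = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
print(bilinear_upscale(plate, 4).round(2))
```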

The zoom resolution was interpolated by software. It wasn't AI per se

Except it was. All the "AI" junk being hyped and peddled all over the place as a completely new and modern innovation is really just the same old interpolation by software, albeit software fueled by bigger databases and with more computing power thrown at it.

It's all just flashier autocorrect.

As far as I know, nothing about AI entered into arguments. No precedents regarding AI could have been set here. Therefore, this case wasn't about AI per se.

I did bring it up as relevant because, as you say, AI is just an over-hyped black box. But that's my opinion, with no case law to cite (IANAL). So to say that a court would or should feel that AI and fancy photo editing are the same thing is misleading. I know that wasn't your point, but it was part of mine.

I watched that whole court exchange live, and it helped the defendant's case that the judge was computer illiterate.

As it usually does. But the court's ineptitude should favor the defense. It shouldn't be an arrow in a prosecutor's quiver, at least.

This isn't an issue at all; it's a bullshit headline. And it worked.

This is the result of shooting in panorama mode.

In other news, the sky is blue
