White AI faces judged human more often than actual human faces

cyu@sh.itjust.works (banned from community) to Technology@lemmy.world – 203 points

We used the 100 AI and 100 human White faces (half male, half female) from Nightingale and Farid. The AI faces were generated using StyleGAN2. The human faces were selected from the Flickr-Faces-HQ Dataset to match each of the AI faces as closely as possible (e.g., same gender, posture, and expression). All stimuli had blurred or mostly plain backgrounds, and AI faces were screened to ensure they had no obvious rendering artifacts (e.g., no extra faces in background). Screening for artifacts mimics how real-world users screen AI faces, either as scientists or for public use, and therefore captures the type and range of stimuli that appear online. Participants were asked to resize their screen so that stimuli had a visual angle of 12° wide × 12° high at ~50 cm viewing distance.
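For anyone wondering what "12° wide at ~50 cm" works out to on screen: the size follows from basic trigonometry, w = 2d·tan(θ/2). A quick sketch of that calculation (the function name is mine, not from the paper):

```python
import math

def stimulus_width_cm(visual_angle_deg: float, viewing_distance_cm: float) -> float:
    """On-screen width that subtends a given visual angle at a given viewing distance."""
    return 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg / 2))

# 12 degrees wide at ~50 cm, as in the quoted methods
print(round(stimulus_width_cm(12, 50), 1))  # ~10.5 cm
```

So participants were resizing the faces to roughly 10.5 × 10.5 cm on their screens.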

I don't know why people (not saying you, more directed at the top commenter) keep acting like cherry-picking AI images in these studies invalidates the results. Cherry-picking is how you use AI image generation tools; that's why most will (or can) generate several at once so you can pick the best one. If a malicious actor were trying to fool people, of course they'd use the most "real"-looking ones instead of just the first one generated.

Frankly, the studies would be useless if they didn't cherry-pick, because the stimuli wouldn't line up with real-world usage.

Tbh I'm more concerned about how they chose the human faces. I can't explain it, but it feels like they were biased toward choosing 'fake-looking' faces, lol
