Microsoft warns deepfake election subversion is disturbingly easy

boem@lemmy.world to Technology@lemmy.world – 290 points –
Microsoft GM on AI and elections: 'There will be fakes'
theregister.com

Well maybe stop shoving the tech that does that down everyone's throats? Just a thought 🤷‍♂️

The best solution to any problem is to go back in time to before the problem was created, sure. That cat's so far out of the bag, and it's only going to multiply and evolve.

I mean, yeah that's true, but harm reduction is also a thing that exists. Usually it's mentioned in the context of drugs, but it could easily apply here.

Interesting take, addiction to the convenience provided by AI driving the need to get more. I suppose at the end of the day it's probably the same brain chemistry involved. I think that's what you're getting at?

In any case, this tech is only going to get better, and more commonplace. Take it, or run for the hills.

No, harm reduction would be recognizing that an object is causing harm, accepting that people will use that object anyway, and doing what we can to minimize the harms caused by that use.

It's less about addiction and brain chemistry than simple math. If harm is being caused, and it can be reduced, reduce it.

Ah, so more like self-harm prevention, gotcha.

I guess like any tool, whether it is help or harm depends on the user and usage.

Right, so I think the person's point was that Microsoft is helping to manufacture the harm, and warning that the harm is there, but not doing much to actually reduce the harm.

Oh, right. Microsoft is a corp. They don't care about the harm they do until it costs them money.

e: also, I love to bash on MS, but they're not the problem here. These things are being built all over the place: in companies, in governments, in enthusiasts' backyards. You can tell Microsoft, Google, and Apple to stop developing the code; you can tell Nvidia to stop developing CUDA. It's not going to matter.

I just heard a news report on OpenAI developing technology to make deep fakes easier. They realized this could cause harm. So they're only releasing it to a few educational institutions.

This is harm reduction. And I realize corporate ethics is something of an oxymoron. But something along these lines was what the original person was meaning by a harm reduction approach by microsoft. If they're aware their technology is going to cause harms to democracy, they have an ethical duty to reduce those harms. Unfortunately, corporations often put ethical duties to increase shareholder value first. That doesn't mean they don't have other ethical responsibilities too.

I suppose, could be harm reduction. Like peeling a bandaid off slowly instead of ripping it off.

They're here, they might not be everywhere yet, but they're here to stay as much as photoshopped images or trick photography are. Just more lies to hide the truth.

All we can do now is get better at dealing with them.

I hear you about it just being an evolution of the propaganda machine. And I think it's going to reveal cracks in the system. That it's going to rip the bandaid off faster than climate change which is the slow peel we're all dealing with already.

Harm reduction would be investing money in government regulation. Lobbying for government regulation. Usually this is seen as a disaster for business, but in this case it would throttle competitors too. And possibly save a lot of lives. Because this sort of automated propaganda is going to create a lot of fascist regimes all over the planet. Propped up by the illusion of democracy.

More so than it already is.

I'm heading for the hills then. I'm perfectly capable of thinking for myself without delegating that to some chatbot.

Everyone is. As time and tech progresses, you're going to find that it becomes increasingly difficult to avoid without going off-grid entirely.

Do you really think corps aren't going to replace humans with AI the moment they can profit by doing so? That states aren't eventually going to do the same?


Is that the same Microsoft company that has poured billions of dollars into that same thing they're warning us about?

Yes, this is the "we're the good ones" flex. And anytime they do this, there has to be a big bad boogeyman elsewhere to blame without evidence or consequence.

He also revealed that about nine months ago his team conducted a deep dive into how these groups are using AI to influence elections.

"In just the last few months, the most effective technique that's been used by Russian actors has been posting a picture and putting a real news organization logo on that picture," he observed. "That gets millions of shares."

information as we know it is over, people have access to the most devilish of AI technology: Copy and Paste

wow this article is bad

I think the point was that it is easier and faster to generate that image you put the logo on than ever before, not that it was a comment on "logo inserting technology"

The only solution I can think of here is cryptographic signatures. That would prove:

  • the device/software in question stamped the video
  • the video was unaltered since it was stamped

Individuals can also stamp their own videos so people can decide whether to trust it. Then players like YouTube, PeerTube, etc could display the stamping information.

This is low-hanging fruit and should happen. All devices should cryptographically sign the video and audio they record. It's not foolproof, a state actor could probably extract the keys and forge a signature, but it would be better than nothing.

Each device should have its own key. It's quite difficult to hack a phone: you'd have to disassemble it, extract the private key from the hardware, reassemble the phone, and then forge a signature on the fake video. Yes, it could happen, but if it's a serious issue a court can inspect the phone the video allegedly came from, and normal people at least aren't going to be able to forge a signed video.

If we get serious about this devices could have security hardware that is difficult for even state-level actors to break.
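The per-device signing scheme described above can be sketched with an ordinary asymmetric signature. This is a minimal illustration, assuming the third-party `cryptography` package; the function names and the idea of signing a SHA-256 digest of the recording are illustrative choices, not any existing standard (real proposals in this space, like C2PA, sign structured metadata rather than a bare hash).

```python
# Sketch of per-device video signing with Ed25519.
# Assumes the `cryptography` package; names here are illustrative.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each device generates its own keypair once. In practice the private key
# would live in secure hardware (TPM / secure enclave), never in software.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_video(video_bytes: bytes) -> bytes:
    """Stamp the recording at capture time by signing its SHA-256 digest."""
    digest = hashlib.sha256(video_bytes).digest()
    return device_key.sign(digest)

def verify_video(video_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the device's public key can check the stamp."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

video = b"...raw recording bytes..."
sig = sign_video(video)
assert verify_video(video, sig)             # untouched video: stamp holds
assert not verify_video(video + b"x", sig)  # any edit invalidates the stamp
```

The second assertion is the whole point: a platform like YouTube could display "signed by device X, unaltered since capture" only when verification succeeds, and anything edited after capture would simply show as unstamped.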

As others have said, people will still believe what they want though. With or without fake videos and even with and without evidence.

That's true, but the more transparent and obvious the evidence against a piece of misinformation, the easier it is for people to disregard the misinformation. You'll always get a gullible minority that'll believe in whatever conspiracy, but democracy doesn't hinge on them; it hinges on the quiet majority.

That said, this should be done in a privacy respecting way. You should be able to choose not to have your videos signed, not to associate that signature with a given account, and to supply your own signing key. Device resets should reset the key as well. There should also be a mechanism to associate multiple devices as coming from the same source, while distinguishing between devices (so if one device is compromised, only those videos are suspect).

I think it could be done pretty unobtrusively, and it should start as a standard for journalism before going to the broader public.

I think one of the biggest issues is even if you come up with a way to verify what is and isn't AI generated, it might not actually matter. Already we've seen people just believing the most obviously fake posts, cause they're just that gullible. Like yes we should come up with a system to verify things, but I fear the genie is already out of the bottle.

Well, you can't fix stupid. Fortunately, there are enough not-stupid people that tooling can help fuel a misinformation correction campaign. The more transparent the edits are, the easier it is to fact check.

This is the best summary I could come up with:


As hundreds of millions of voters around the globe prepare to elect their leaders this year, there's no question that trolls will try to sway the outcomes using AI, according to Clint Watts, general manager of Microsoft's Threat Analysis Center.

Watts said his team spotted the first Russian social media account impersonating an American ten years ago.

Initially, Redmond's threat hunters (using Excel, of course) tracked Russian trolls testing their fake videos and images on locals, then moving on to Ukraine, Syria and Libya.

Watts' team tracks government-linked threat groups from Russia, Iran, China, plus other nations around the world, he explained.

He also revealed that about nine months ago his team conducted a deep dive into how these groups are using AI to influence elections.

Videos set in public with a well-known speaker at a rally or an event attended by a large group of people are harder to fake.


The original article contains 624 words, the summary contains 151 words. Saved 76%. I'm a bot and I'm open source!

I like that they chose the least convincing fake pictures. It's not really helping their argument.

The article is about faked images, so it makes sense they'd make the thumbnail image appear fake at a glance.