Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

return2ozma@lemmy.world to Technology@lemmy.world – 293 points –
arstechnica.com

“At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus”

Can we maybe stop making these? XD

Like, what even is a legitimate use case for these? It seems tailor-made for either misinformation or pointless memes, neither of which is a good sales pitch

Say you’re a movie studio director making the next big movie with some big-name celebs. Filming is in progress, and one of the actors dies in the most on-brand way possible. Everyone decides that the film must be finished to honor the actor’s legacy, but how can you film someone who is dead? This technology would let you create footage the VFX team can lay over the stand-in actor’s face and provide a better experience for your audience.

I’m sure there are other uses, but this one pops to mind as a very legitimate use case that could’ve benefited from the technology.

how can you film someone who is dead?

Hot take: don't? They're dead, leave them dead. Rewrite and reshoot if you really have to.

Sure that’s an entirely valid option; but not the one the producing team and the deceased’s family opted for… and they had a much larger say in it than you and I combined.

We've already recreated dead or older actors whole cloth with VFX. Plus it still seems like a niche use case for something VFX artists can already do, and they can do way more besides

Having done something before doesn’t mean they shouldn’t find ways to do it better, though. “Deepfake”-esque techniques can produce much higher-quality replicas. Not to mention, as resolution demands increase, it gets harder to stretch older assets and techniques to meet them.

A similar area is what LLMs are doing to/for developers. We already have developers, so why do we need AI to code? Well, it can help synthesize simpler code and free up devs to focus on more complicated problems. It can also democratize development for non-developers, just as deepfake-style tools could democratize content creation for less-skilled VFX specialists, helping the industry create better content for everyone.

They can also democratize the ability to develop solutions to non-developers,

This is insane. If you don't understand everything a piece of code is doing, publishing it is insanely reckless. You absolutely must know how to code to publish acceptable software.

Try telling that to businesses. Sadly, you’d be more likely to get laughed all the way to the door than taken seriously. The non-technical people leading businesses would rather have something that works 90% of the time today than 100% of the time next week.

Gotta crank up that dystopia meter.

This is slowly moving toward having Content On Demand. Imagine being able to prompt your content app for a movie/series you want to watch, and it just makes it and streams it to you.

This is so dystopian. Imagine spending your career honing your skill as an actor, dying, and then having a computer replace you with just a photograph as a source. How is that honoring an actor??

An actual, practical example is generating video for VR chats like Apple has somewhat tried to do with their headset. Rather than using the cameras/sensors to generate and animate a 3d model based on you, it could do something more like this, albeit 2d.

Maybe a historical biopic in the style of photos of the time. Like take pictures of Lincoln, Grant, Lee, etc., use voice actors plus modern reenactors for background characters, and build it into a whole movie.

I dunno, I'm probably reaching.

I think you're falling for the overblown fearmongering headline, and pointless memes is a great reason to make things.

Avatars for ugly people who are good at games and want to get into streaming


Vasa? Like, the Swedish ship that sank 10 minutes after it was launched? Who named that project?

They developed an AI to name all future AIs. Ironically, it is unnamed.

There are a lot of flying vehicles named after birds who famously plummet to the ground at breakneck speeds.

Combine this with an LLM with speech-to-text input and we could create talking paintings like in the Harry Potter movies. Heck, hang one on a door and hook it up to a smart lock to recreate the dorm doors in Harry Potter, and see if people can trick it into opening the door.
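For fun, the talking-door pipeline (mic → speech-to-text → LLM reply → smart lock) could be sketched like this. Everything here is a hypothetical stand-in — the STT, dialogue, and lock functions are stubs I made up so the control flow runs on its own, since no real services are named in the thread:

```python
# Hypothetical "talking portrait" door guard. All three components are
# stand-ins: a real build would swap in an actual STT model, an LLM call,
# and a smart-lock API, then animate the portrait speaking the reply.

PASSWORD = "caput draconis"  # made-up password for the sketch

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a speech-to-text model; here we just decode bytes.
    return audio.decode("utf-8")

def portrait_reply(utterance: str) -> str:
    # Stand-in for an LLM generating the painting's dialogue.
    if utterance.strip().lower() == PASSWORD:
        return "Quite right. In you go."
    return "That is not the password. Off with you."

def unlock_door(utterance: str) -> bool:
    # Stand-in for a smart-lock API call; returns whether the door opened.
    return utterance.strip().lower() == PASSWORD

def portrait_interaction(audio: bytes) -> tuple[str, bool]:
    """Run one exchange: hear the visitor, answer, maybe open the door."""
    text = speech_to_text(audio)
    reply = portrait_reply(text)
    opened = unlock_door(text)
    # A VASA-1-style model would then animate the portrait photo
    # lip-syncing `reply` over a TTS audio track.
    return reply, opened
```

The VASA-1 piece would only handle the last step (animating the still portrait to the reply audio); the lock and dialogue logic are separate systems.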

I like your optimism where this doesn't result in making everything worse.

I was actually discussing this very idea with my brother, who went to the Wizarding World of Harry Potter at Universal Studios, Orrrlandooooo recently, and while he enjoyed himself, he said it felt like not much is new in theme parks nowadays. Adding in AI-driven pictures you could actually talk to might spice it up.

These vids are just off enough that I think doing a bunch of mushrooms and watching them would be a deeply haunting experience

A long time ago, someone from a not-free country wrote a white paper on why we should care about privacy: written words can be edited to level false accusations backed by false evidence. This chills me to the bone.

This is turning into some Mistborn shit. “Don’t trust writing not written on metal”

"You shot that man, citizen. Here is video evidence. Put your hands against the wall." - and more coming to you soon!

I'd be less-concerned about the impact on not-free countries than free countries. Dictator Bob doesn't need evidence to have the justice system get rid of you, because he controls the justice system.

Freddie, this is your mom. Look all I want for my birthday is for you to please start using teams new. It's so much better than teams classic. I alread... Microsoft already installed it for you. Okay honey? And could you also start using a microsoft.com account so you can get financially hooked like all the Gmail users? It's pretty smart. Don't you want to be smart like Jonny? Tata!

Since it’s trained on celebrities, can it do ugly people or would it try to make them prettier in animation?

The teeth change sizes, which is kinda weird, but probably fixable.

It’s not too hard to notice for an up close face shot, but if it was farther away it might be hard - the intonation and facial expressions are spot on. They should use this to re-do all the digital faces in Star Wars.

One photo? That’s incredible.

Yeah. Incredibly horrific.

Yes, I hate what AI is becoming capable of. Last year everyone was laughing at the shitty fingers, but we're quickly moving past that. I'm concerned that in the near future it will be hard to tell truth from fiction.

The "why would they make this" people don't understand how important this type of research is. It's important to show what's possible so that we can be ready for it. There are many bad actors already pursuing similar tools if they don't have them already. The worst case is being blindsided by something not seen before.

how important this type of research

I hope they also figure out a way to find the bad actors who might use these tools for harmful purposes. You can't just create something like this for "research" purposes without also finding a way to stop bad actors from abusing it.

Paranoia vibes starting in 3, 2, 1..

Microsoft’s research teams always make some pretty crazy stuff. The problem with Microsoft is that they absolutely suck at translating their lab work into consumer products. Their labs’ publications are an amazing archive of shit that MS couldn’t get out the door properly or on time. Example: multitouch gesture UIs.

As interesting as this is, I’ll bet MS just ends up using some tech that Open AI launches before MS’s bureaucratic product team can get their shit together.

This is the best summary I could come up with:


On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track.

In the future, it could power virtual avatars that render locally and don't require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

To show off the model, Microsoft created a VASA-1 research page featuring many sample videos of the tool in action, including people singing and speaking in sync with pre-recorded audio tracks.

The examples also include some more fanciful generations, such as Mona Lisa rapping to an audio track of Anne Hathaway performing a "Paparazzi" song on Conan O'Brien.

While the Microsoft researchers tout potential positive applications like enhancing educational equity, improving accessibility, and providing therapeutic companionship, the technology could also easily be misused.

"We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection," write the researchers.


The original article contains 797 words, the summary contains 183 words. Saved 77%. I'm a bot and I'm open source!

One use of this I'm in favour of is recreating Majel Barret's voice as an AI for computer systems.

This project doesn't recreate or simulate voices at all.

It takes a still photograph and creates a lip-synced video of that person speaking a paired audio clip.

There are other projects that simulate voices.

Yep, generating the soundtrack is part of it

One of the videos shows the voice changing mid-sentence

No, it isn't. In that clip they're using two different sound clips as they switch faces. It's not changing the 'voice' of a phrase on the fly; it's two separate pre-recorded clips.

Literally from the article:

It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.