Using deepfakes for better lip-sync in animations?
I'm looking for better lip-sync in my animations, and before I dive deep into deepfakes I wanted to ask if anybody has experience with this? (I've seen MetaHuman doing nice facial animations, but I don't have an iPhone.)
I've done a little work on this professionally, and while I can't discuss our solution, here are some links I turned up while researching that might be what you're looking for:
https://www.robots.ox.ac.uk/~vgg/publications/2016/Chung16a/chung16a.pdf
https://arxiv.org/abs/2008.10010
https://arxiv.org/pdf/1809.02108.pdf
http://ailab.kaist.ac.kr/papers/pdfs/ACCV2016Workshop.pdf
I just googled "AI lip syncing" and got two interesting results: Gooey.ai and Sync AI. Sync AI seems to be the more developed of the two, but I don't know how much it will cost.
Looks pretty interesting. https://www.emotech.ai/solutions/sync-ai