Could we use AI to update 4:3 media to 16:9?
I've been re-watching Star Trek: Voyager recently, and I've heard that when filming, they didn't clear the wider frame of filming equipment, so it's not as simple as just going back to the original film. With the advancement of AI, is it only a matter of time until older programs like this are released in more modern formats?
And if yes, do you think AI could also upscale to 4K? So theoretically you could take an SD 4:3 program and make it 4K 16:9.
I'd imagine it would be easier for the early episodes of Futurama, for example, since it's a cartoon and therefore less detailed.
Do you mean something like this? (warning: reddit link)
Holy cow that is beyond impressive. Sure enough, sometimes it does hallucinate a bit, but it's already quite wild. Can't help but wonder where we'll be in the next 5-10 years.
Eh, doing this on cherrypicked stationary scenes and then cherrypicking the results isn't that impressive. I'll be REALLY impressed when AI can extrapolate someone walking into frame.
The video seems a bit misleading in this context. It looks fine for what it is, but I don't think they've accomplished what OP is describing. They've cherrypicked some still shots, used AI to add to the top and bottom of individual frames, and then given each shot a slight zoom to create the illusion of motion.
I don't think the person who made it was trying to be disingenuous; I'm just pointing out that we're still a long way from convincingly filling in missing data like this for video, where the AI has to understand things like camera moves and object permanence. Still cool, though.
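To put some rough numbers on how much the model has to invent: here's a minimal sketch of the padding geometry for outpainting to a new aspect ratio (the function name and values are mine, nothing from the linked video).

```python
def outpaint_margins(src_w, src_h, target_ratio):
    """Pixels of new content needed per side to reach target_ratio.

    Returns (left_right, top_bottom); one of the pair is zero, since
    outpainting only ever *adds* pixels, it never crops.
    """
    src_ratio = src_w / src_h
    if target_ratio > src_ratio:
        # Target is wider: keep the height, extend left and right.
        new_w = round(src_h * target_ratio)
        return ((new_w - src_w) // 2, 0)
    else:
        # Target is taller: keep the width, extend top and bottom.
        new_h = round(src_w / target_ratio)
        return (0, (new_h - src_h) // 2)

# A 640x480 (4:3) SD frame widened to 16:9 needs ~106 columns of
# hallucinated content on each side -- for every single frame, which
# is why temporal consistency is the hard part.
print(outpaint_margins(640, 480, 16 / 9))   # -> (106, 0)
```

The same function covers the vertical case from the video (filling top and bottom for a taller target ratio); either way, roughly a quarter of the final image would be generated rather than filmed.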
Great points. I agree.
A proper working implementation for the general case is still far off, and it would be much more complex than this experiment. Not only will it need the usual frame-to-frame temporal coherence, it will probably also need to take into account information from potentially any frame in the whole video in order to stay consistent across different camera angles of the same place.
It is the first iteration of this technology; things will only improve the more we use it.
That it can do still images is already infinitely more impressive than not being able to do it at all.
just fyi, your link is broken for me
i wonder if it's a new url scheme, as i've never seen "duplicates" in a reddit url before, and if i switch it out for "comments" it works fine
Thanks! Fixed
I think you're right. It should work with the old frontend (which I have configured as the default when I'm logged in):
https://old.reddit.com/r/StableDiffusion/duplicates/14xojmf/using_ai_to_fill_the_scenes_vertically/
that's weird. it's actually a pretty useful feature, but it's odd they'd add it to old reddit before new reddit, considering it's basically deprecated. maybe it's just an a/b rollout and i don't have it yet
i have old.reddit as default as well, but i'm not logged in on my phone browser and it wouldn't open in my app
Sorry, I think I didn't explain myself correctly. That feature is a very old one; it has been on old reddit for as long as I can remember. It has also worked on new reddit at some point, see the screenshot below from a comment I posted 6 months ago:
::: spoiler "View discussions in X other communities" feature in new reddit
(screenshot)
:::
::: spoiler In old reddit it's accessible from the "other discussions" tab
(screenshot)
:::
how the hell did i use reddit for almost a decade and not know about that feature
it wasn't your poor explanation, it was just me being an idiot i think - i just assumed it was new