When you have moving film, you have 24 frames per second of the same scene, shot through one lens and run through one compression algorithm.
There is a higher chance that pixel a is truly color y if, across a span of x frames, it keeps showing that value, because per-frame compression artifacts rarely persist from frame to frame.
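A minimal sketch of that idea (NumPy only, everything here is illustrative): given a stack of already motion-aligned frames, a per-pixel temporal median lets the value that persists across frames outvote a one-frame artifact.

```python
import numpy as np

def temporal_clean(frames: np.ndarray) -> np.ndarray:
    """Estimate the 'true' pixel values from a stack of aligned frames.

    frames: array of shape (T, H, W, 3) -- T motion-aligned frames.
    Returns an (H, W, 3) image where each pixel is the temporal median,
    so a value seen in most frames wins over a one-off artifact.
    """
    return np.median(frames, axis=0)

# Toy demo: a constant gray scene plus blocky noise in a few frames.
rng = np.random.default_rng(0)
frames = np.full((24, 64, 64, 3), 128, dtype=np.float32)
for t in rng.choice(24, size=6, replace=False):
    frames[t, 16:24, 16:24] += rng.normal(0, 40, size=(8, 8, 3))  # fake artifact

clean = temporal_clean(frames)
print(clean[20, 20])  # ~[128 128 128]: the artifact is voted out
```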
You also get the chance to track details that were visible in frames or stills from seconds or even minutes earlier, as long as the actor is wearing the same clothing.
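A toy illustration of such long-range tracking, using a brute-force patch search for clarity (real restoration pipelines use feature tracking or learned correspondences, so this is only a sketch): a detail cropped from one frame is re-located in a degraded frame from much later.

```python
import numpy as np

def find_patch(frame: np.ndarray, patch: np.ndarray) -> tuple[int, int]:
    """Locate `patch` in `frame` by minimizing sum-of-squared differences.

    frame: (H, W) grayscale image; patch: (h, w) detail cropped from
    another frame, possibly minutes apart in the movie.
    """
    H, W = frame.shape
    h, w = patch.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = np.sum((frame[y:y+h, x:x+w] - patch) ** 2)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos

rng = np.random.default_rng(1)
early = rng.random((48, 48))                      # frame from minutes ago
patch = early[10:18, 30:38].copy()                # a sharp detail, e.g. on the costume
late = early + rng.normal(0, 0.02, early.shape)   # same scene, degraded
print(find_patch(late, patch))                    # (10, 30): the detail is re-found
```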
And you can estimate the actor's face across the whole movie and build one consistent face model from it.
Nonetheless, even beyond this type of upscaling: if an AI is trained to upscale based on the statistics of the real world, and the movie was shot in the real world, the result is still more than just random guessing.
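As a rough sketch of what "upscaling based on probability of the real world" can mean, here is classic example-based super-resolution with a toy patch dictionary; the dictionary stands in for a learned prior, and all sizes and data here are made up for the demo.

```python
import numpy as np

def example_sr(lo: np.ndarray, lo_dict: np.ndarray, hi_dict: np.ndarray,
               p: int = 4, s: int = 2) -> np.ndarray:
    """Toy example-based super-resolution (a stand-in for a learned prior).

    lo:      (H, W) low-res image, H and W multiples of p.
    lo_dict: (N, p, p) low-res exemplar patches from real-world images.
    hi_dict: (N, p*s, p*s) the matching high-res patches.
    Each low-res patch is replaced by the high-res twin of its nearest
    exemplar -- the most probable real-world detail, not random noise.
    """
    H, W = lo.shape
    out = np.zeros((H * s, W * s))
    flat = lo_dict.reshape(len(lo_dict), -1)
    for y in range(0, H, p):
        for x in range(0, W, p):
            q = lo[y:y+p, x:x+p].reshape(-1)
            i = np.argmin(((flat - q) ** 2).sum(axis=1))  # nearest exemplar
            out[y*s:(y+p)*s, x*s:(x+p)*s] = hi_dict[i]
    return out

# Build a fake "real world" dictionary: high-res patches plus their 2x downsamples.
rng = np.random.default_rng(2)
hi_dict = rng.random((500, 8, 8))
lo_dict = hi_dict.reshape(500, 4, 2, 4, 2).mean(axis=(2, 4))

# Assemble a low-res input from four dictionary patches, then upscale it.
lo = lo_dict[rng.choice(500, 4)].reshape(2, 2, 4, 4).transpose(0, 2, 1, 3).reshape(8, 8)
print(example_sr(lo, lo_dict, hi_dict).shape)  # (16, 16)
```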
Btw, plenty of filmmakers made their movies the way they did because that was the only way they could make them. High-end low-light camera equipment is expensive.
And if you watch The Matrix unedited on a 60" OLED or larger, it looks off, because green-screen details are more noticeable than they ever were before. It's absolutely valid to revisit original material and adjust it for today's screens (at least for me).
Nvidia has been investing in platform strategy for a long time. I have been reading Nvidia research papers for a while; they probably invested a lot in their Omniverse and digital-twin toolchain too.
Nvidia was also the first company to bring real-time ray tracing to market.
I don't think Nvidia was 'just' lucky.
Even the Intel CEO came across as pretty cocky a few months back when he claimed as much.
Nonetheless, I do believe that most filmmakers actually want to make a film, not work around contemporary limitations.