AI-Generated Depth Maps
There are many experimental and research projects that have attempted to create depth maps from single non-stereoscopic 2D images. Many of these attempts have had great results, and there are ways to get this effect for yourself if you scour the internet and piece together multiple free and open-source applications, scripts, and machine learning models. But given that apps like Instagram already do a basic version of this in realtime, it's not unrealistic to expect the same effect applied to video, especially as computers get faster and the research/resource pool grows over the next couple of years. I know Adobe is investing in AI and machine learning, but this particular feature is an essential one to innovate on because it literally unlocks a new dimension in video work. (Imagine being able to add mist for atmospheric perspective in post, or composite objects by positioning them in Z-space rather than masking out foregrounds and backgrounds.)
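To give a sense of how accessible the piece-it-together route already is, here's a minimal sketch of single-image depth estimation using the open-source MiDaS model loaded through PyTorch Hub. The model choice, file paths, and normalization step are illustrative assumptions, not any particular product's implementation.

```python
# Minimal sketch: depth map from a single 2D frame with the open-source MiDaS model.
# Assumes PyTorch and OpenCV are installed; the first run downloads weights via torch.hub.
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "MiDaS_small" trades accuracy for speed; "DPT_Large" is heavier but sharper.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform  # preprocessing that matches the small model

# Load one frame (path is a placeholder) and convert BGR -> RGB for the model.
frame = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
input_batch = transform(frame).to(device)

with torch.no_grad():
    prediction = midas(input_batch)
    # Resize the low-resolution prediction back to the frame's dimensions.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=frame.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Normalize to 0-255 and save as a grayscale depth map (brighter = closer).
depth = depth.cpu().numpy()
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype("uint8")
cv2.imwrite("frame_depth.png", depth)
```

Run frame by frame (ideally with some temporal smoothing), something like this is the cobbled-together open-source version of the feature described above, which is exactly the kind of workflow a native, integrated tool would replace.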
