Researchers with Google, UC Merced and Shanghai Jiao Tong University have detailed DAIN, a depth-aware video frame interpolation algorithm that can seamlessly generate slow-motion videos from existing content without introducing excessive noise or unwanted artifacts. The algorithm has been demonstrated in a number of videos, including historical footage boosted to 4K/60fps.
Rapidly advancing technologies have paved the way for high-resolution displays and videos; the result is a mass of lower-resolution content, made for older display and video technologies, that looks increasingly poor on modern hardware. Remastering this content to a higher resolution and frame rate improves the viewing experience, but it has typically been a costly undertaking reserved for only the most popular media.
Artificial intelligence is a promising solution for updating older video content, as evidenced by the growing number of fan-remastered movies and TV shows. Key to these efforts are algorithms trained to upscale and, when necessary, repair the individual frames of a video, which are then recompiled into a higher-resolution ‘remaster.’
The newly detailed DAIN algorithm is different: rather than upscaling and repairing the individual frames in a video, it works by generating new frames and slotting them between the original ones, increasing the video’s frame rate for smoother and, depending on how many frames are generated, slower-motion playback.
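Conceptually, this kind of interpolation is an interleaving loop: for every adjacent pair of original frames, the model synthesizes one or more frames at intermediate timesteps and slots them in between. Below is a minimal Python sketch of that outer loop; the `interpolate` placeholder is a naive cross-fade standing in for a trained model such as DAIN, and none of the names here are taken from the DAIN codebase.

```python
import numpy as np

def interpolate(prev, nxt, t):
    # Placeholder: a naive linear cross-fade between two frames.
    # A real interpolator such as DAIN estimates motion (and depth)
    # instead of blending pixels, which is what avoids ghosting on
    # anything that moves between the frames.
    return ((1 - t) * prev + t * nxt).astype(prev.dtype)

def upsample_frames(frames, factor):
    """Interleave synthesized frames between each original pair.

    `frames` is a list of decoded frames as numpy arrays; `factor` is
    the frame-rate multiplier (e.g. 2 to go from 30fps to 60fps).
    """
    output = []
    for prev, nxt in zip(frames, frames[1:]):
        output.append(prev)
        for i in range(1, factor):
            t = i / factor  # evenly spaced timesteps in (0, 1)
            output.append(interpolate(prev, nxt, t))
    output.append(frames[-1])
    return output
```

The cross-fade placeholder also shows why a learned model is needed at all: simple pixel blending produces ghosting on moving objects, which is exactly the artifact frame interpolation research aims to eliminate.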
This process is called motion (video frame) interpolation, and it typically causes a drop in quality by adding unwanted noise and artifacts to the final video. The DAIN algorithm presents a solution to this problem, offering motion interpolation that can boost frame rates as high as 480fps without introducing any readily noticeable artifacts.
The resulting content is high-quality and nearly visually identical to the source footage, but with the added smoothness that comes from increasing the frame rate to 60fps. In addition, DAIN has been shown transforming ordinary 30/60fps footage into smooth slow-motion video without choppiness or decreased quality.
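The arithmetic behind those figures is straightforward: multiplying 30fps footage up to 480fps means synthesizing 15 new frames between every original pair, and the same 480fps output becomes slow motion when played back at a lower rate. A short illustrative sketch of that arithmetic (a hypothetical helper, not from the DAIN codebase):

```python
def interpolation_plan(source_fps, target_fps, playback_fps):
    """Work out the frame multiplier and perceived slow-motion factor."""
    assert target_fps % source_fps == 0, "whole multiples keep frame timing simple"
    factor = target_fps // source_fps      # e.g. 480 // 30 = 16
    new_per_pair = factor - 1              # frames synthesized per original pair
    slowdown = target_fps / playback_fps   # e.g. 480 / 30 = 16x slow motion
    return factor, new_per_pair, slowdown

# 30fps footage boosted to 480fps, then played back at the original 30fps:
print(interpolation_plan(30, 480, 30))   # (16, 15, 16.0) -> 16x slow motion
# The same footage played back at 60fps: still smooth, 8x slow motion.
print(interpolation_plan(30, 480, 60))   # (16, 15, 8.0)
```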
According to the researchers, DAIN is ‘compact, efficient, and fully differentiable,’ and it performs ‘favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.’ The technology has many potential uses, including recovering lost frames, making existing content more visually appealing to viewers, and generating slow motion from regular footage.
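The ‘depth-aware’ part of the name addresses a classic failure mode of flow-based interpolation: when the motion vectors of several pixels project onto the same location in an intermediate frame, the method must decide which motion wins. The paper resolves these conflicts by weighting the competing flows by inverse depth, so that closer objects dominate, mirroring real occlusion. A loose NumPy sketch of that weighting idea follows; it illustrates the concept rather than reproducing the authors’ implementation:

```python
import numpy as np

def depth_weighted_flow(candidate_flows, candidate_depths):
    """Aggregate conflicting flow vectors that project onto one pixel.

    Loose sketch of the depth-aware idea: each candidate flow is
    weighted by inverse depth, so flows from closer (smaller-depth)
    objects dominate and foreground correctly occludes background.
    """
    flows = np.asarray(candidate_flows, dtype=np.float64)    # shape (k, 2)
    depths = np.asarray(candidate_depths, dtype=np.float64)  # shape (k,)
    weights = 1.0 / depths                                   # closer => larger weight
    return (weights[:, None] * flows).sum(axis=0) / weights.sum()

# A fast foreground vector (depth 1.0) vs. a slow background vector (depth 10.0):
print(depth_weighted_flow([(8.0, 0.0), (1.0, 0.0)], [1.0, 10.0]))
# ~[7.36, 0.0] -- the nearer object's motion dominates the result
```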
Such technology is arguably necessary for preserving aging media in a useful way, making it possible for new generations to experience historical footage, old TV shows and movies, home videos and similar content on modern high-resolution displays. The technology could also be useful for content creators of all sorts, enabling them to salvage footage they already have and improve the quality of old clips for use in documentaries and similar projects.
The researchers explain on their project website:
Starting from the birth of photography in the [19th] century, video has become an important medium for preserving vivid memories of the age in which it was captured, appearing in varying forms including movies, animations, and vlogs. However, due to the limits of video technology, including sensor density, storage and compression, a great deal of older video content remains at low quality.
Among the metrics for video quality, the most important is temporal resolution, measured in frames per second, or fps for short. Higher-frame-rate videos provide a more immersive visual experience, making the captured content feel more real to viewers. The demand to improve low-frame-rate videos, particularly old films at 12fps; animations, pixel art and stop motion at 5~12fps; movies at 25~30fps; and video games at 30fps, is therefore becoming more and more urgent.
The public can view more examples of videos updated using the DAIN algorithm in the related collection playlist on YouTube. The full study is also available in PDF form on arXiv.