Feb 12, 2023

Runway Researchers Unveil Gen-1: A New Generative AI Model That Uses Language And Images To Generate New Videos Out of Existing Ones

Posted in categories: entertainment, robotics/AI

The current media environment is saturated with visual effects and video editing. As video-centric platforms have gained popularity, demand for more intuitive and effective video editing tools has skyrocketed. Because video data is temporal, however, editing in the format remains difficult and time-consuming. Modern machine learning models have shown considerable promise in improving editing, but existing techniques frequently compromise spatial detail or temporal consistency.

The recent emergence of powerful diffusion models trained on massive datasets has sharply increased both the quality and the popularity of generative techniques for image synthesis. With text-conditioned models such as DALL-E 2 and Stable Diffusion, casual users can produce detailed images from nothing more than a text prompt. Latent diffusion models synthesize images efficiently in a perceptually compressed latent space. Motivated by this progress in image synthesis, the researchers study generative models suitable for interactive video editing. Current techniques either propagate edits using methods that compute direct correspondences or re-purpose existing image models by finetuning them on each individual video.
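As a rough illustration of how such text-conditioned image models are used in practice, here is a minimal sketch that generates an image from a single text prompt with the Hugging Face diffusers library (an assumption about tooling; the prompt and output filename are illustrative):

```python
# Minimal sketch: text-to-image with a latent diffusion model via the
# Hugging Face `diffusers` library (assumes `pip install diffusers torch`).
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint; use the GPU if available.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# A text prompt is the only input the user needs to supply.
image = pipe("a watercolor painting of a city street at dusk").images[0]
image.save("generated.png")
```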

They try to avoid costly per-movie training and correspondence calculations for quick inference for every video. They suggest a content-aware video diffusion model with a configurable structure trained on a sizable dataset of paired text-image data and uncaptioned movies. They use monocular depth estimations to represent structure and pre-trained neural networks to anticipate embeddings to represent content. Their method gives several potent controls on the creative process. They first train their model, much like image synthesis models, so the inferred films’ content, such as their look or style, correspond to user-provided pictures or text cues (Fig. 1).
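To make the two conditioning signals concrete, here is a hedged sketch of how per-frame structure (monocular depth, here via the publicly available MiDaS model) and content (here a CLIP text embedding) might be extracted. Gen-1 itself is not public, so the final denoising call is only a hypothetical placeholder, and the choice of MiDaS and CLIP is an assumption for illustration:

```python
# Sketch of the two conditioning signals described above (assumptions:
# MiDaS for monocular depth, CLIP for content embeddings; the actual
# Gen-1 model is not released, so `video_diffusion_model` is hypothetical).
import torch
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Structure: per-frame monocular depth estimates (MiDaS via torch.hub).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
midas_tf = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def estimate_depth(frame):
    # frame: HxWx3 uint8 RGB numpy array -> 1xH'xW' depth map
    with torch.no_grad():
        return midas(midas_tf(frame).to(device))

# Content: a CLIP embedding of the user's text prompt (or reference image).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_prompt(prompt):
    inputs = proc(text=[prompt], return_tensors="pt").to(device)
    with torch.no_grad():
        return clip.get_text_features(**inputs)  # 1x512 content embedding

# Hypothetical final step: the depth maps constrain structure, the CLIP
# embedding steers content, and a latent video diffusion model denoises
# the frames into an edited video.
# edited = video_diffusion_model(depth_maps, content_embedding)
```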

Figure 1: Video Synthesis With Guidance. We introduce a method based on latent video diffusion models that synthesizes videos (top and bottom) guided by text- or image-described content while preserving the original video's structure (middle).