The arrival of Stable Diffusion last year made a splash on the Internet, as did its spiritual siblings DALL-E and Midjourney. These generative models use artificial intelligence to create images from scratch from a simple text description. Since their emergence, these tools have turned the creative professional sector upside down, culminating in class-action lawsuits filed by artists against the companies developing these AIs.
As if image and text were not enough, artificial intelligence has also arrived to conquer the world of video. That is the goal of Gen-1, a generative AI model developed by Runway that takes existing videos and transforms them in virtually any way imaginable.
In a demo video published by Runway we can see what Gen-1 can do, such as transforming a clip of a girl into a moving statue. As with the generative tools mentioned above, Gen-1 is driven by text descriptions, or prompts. Runway has high hopes for Gen-1 and expects it to make an impact in video similar to the one Stable Diffusion made in images.
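Runway has not published Gen-1's internals or an API, but the core idea of prompt-guided video-to-video generation can be sketched in a deliberately simplified form: apply an image-to-image diffusion model to each frame of a clip, conditioned on a text prompt. This naive per-frame approach lacks the temporal consistency that makes Gen-1 notable, and the model name and parameters below are illustrative assumptions, not Runway's method. A minimal sketch using the open-source diffusers library:

```python
# Naive prompt-guided video restyling: run an image-to-image diffusion
# model over each frame independently. This is NOT Gen-1's method, only
# an illustration of the video-to-video idea; Gen-1 additionally keeps
# the result coherent from frame to frame.
import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

def restyle_frames(frames, prompt):
    """Restyle a list of PIL image frames according to a text prompt."""
    styled = []
    for frame in frames:
        out = pipe(
            prompt=prompt,
            image=frame.convert("RGB").resize((512, 512)),
            strength=0.5,        # how far to move away from the source frame
            guidance_scale=7.5,  # how strongly to follow the prompt
        ).images[0]
        styled.append(out)
    return styled

# e.g. the statue transformation from Runway's demo:
# styled = restyle_frames(frames, "a marble statue, cinematic lighting")
```

Because each frame is denoised independently, the output of a sketch like this flickers between frames; maintaining temporal coherence is precisely the hard problem Gen-1 claims to address.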
The company’s CEO and co-founder, Cristóbal Valenzuela, believes that “2023 will be the year of video”. Although it is still too early to say, he certainly is not short on ambition. Runway was founded in 2018 and has been developing AI-powered video editing software for several years. While the company is not widely known, its tools are used by content creators around the world and even in film and television studios. In fact, the visual effects team behind the hit movie “Everything Everywhere All at Once” used Runway technology to create several scenes.
In 2021, Runway collaborated with researchers at the University of Munich on what became the first version of Stable Diffusion, while Stability AI (a UK-based startup) covered the costs of training the model. Given Runway’s track record, Gen-1’s prospects look promising: are we witnessing the start of a new audiovisual revolution?