Apple continues its quiet advance in artificial intelligence with the launch of a tool that can animate static images from simple text prompts using large language models (LLMs). The tool, named "Keyframer", represents a qualitative leap in how we can animate images, and it is one more piece of a puzzle that Apple should reveal soon.
Another piece of Apple’s strategy
The new Apple tool, described in depth in a recent paper titled "Keyframer: Empowering Animation Design Using Large Language Models", lets us turn scalable vector graphics (SVG) images into dynamic animations using plain text commands. For example, after loading an image of a spaceship, the user can request "generate three designs where the sky changes color progressively and the stars blink", and Keyframer will produce the CSS code needed to carry out that animation.
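To give an idea of what this looks like in practice, here is a hypothetical sketch of the kind of CSS animation Keyframer could generate for the spaceship example. The selector names, colors, and timings are invented for illustration and do not come from Apple's paper; they assume the SVG contains elements identified as the sky and the stars.

```css
/* Hypothetical sketch: CSS of the kind Keyframer generates for an SVG scene.
   Selector names, colors, and timings are invented for illustration. */

/* The sky fill shifts color progressively, back and forth */
@keyframes sky-shift {
  from { fill: #0b1a4a; } /* deep night blue */
  to   { fill: #7a3db8; } /* purple dawn */
}

/* The stars fade in and out to simulate blinking */
@keyframes star-blink {
  0%, 100% { opacity: 1; }
  50%      { opacity: 0.2; }
}

#sky  { animation: sky-shift 8s ease-in-out infinite alternate; }
.star { animation: star-blink 1.5s ease-in-out infinite; }
```

Because the output is ordinary CSS applied to the SVG's own elements, the user can read, tweak, or replace any of it by hand, which is what makes the iterative refinement described below possible.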
The main advantage of Keyframer, however, lies in letting users refine their creations through successive prompts, streamlining the creative process without having to define every design element up front. According to the designers and engineers Apple interviewed for the paper, the tool is "much faster when it comes to making animations" and "does things that used to take several hours to do".
With Keyframer, Apple once again reinforces its position in artificial intelligence and shows us one more card of its strategy, perhaps its application in Keynote, ahead of the arrival of iOS 18 and a complete change in what our iPhone will be able to do with a simple request to Siri.