News
This is Stable Diffusion 3, a next-generation AI image generator
With each step, reality becomes harder to identify.

- February 23, 2024
- Updated: July 2, 2025 at 12:01 AM

We have reached a point where our eyes are not enough to discern what is true and what is not. While governments and administrations propose solutions, AI continues its path without looking back.
On Thursday, Stability AI announced Stable Diffusion 3, a next-generation open-weight image synthesis model. Building on its predecessors, it generates detailed images of multiple subjects with higher quality and accuracy in text generation.
The brief, light-on-detail announcement was not accompanied by a public demonstration, but Stability is opening a waiting list today for those who want to try the model.
What’s new in Stable Diffusion 3
Stability says its Stable Diffusion 3 family of models (which take text descriptions called “prompts” and turn them into matching images) ranges in size from 800 million to 8 billion parameters.
This range of sizes allows different versions of the model to run locally on different devices, from smartphones to servers. Parameter count roughly corresponds to a model’s capacity: how much detail it can generate.
Since 2022, we have seen Stability release a progression of AI image generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now Stable Diffusion 3.
Why Stable Diffusion matters when DALL-E exists
Stability has made a name for itself by offering a more open alternative to proprietary image synthesis models, such as OpenAI’s DALL-E 3, although not without controversy due to the use of copyrighted training data, bias, and potential for abuse.
Stable Diffusion models are open-weight, which means the model files can be downloaded, run locally, and fine-tuned to change their results.
On the technical improvements, Stability’s CEO, Emad Mostaque, wrote on Twitter: “This uses a new type of diffusion transformer (similar to Sora) combined with flow matching and other improvements. This takes advantage of transformer improvements & can not only scale further but accept multimodal inputs.”
As Mostaque said, the Stable Diffusion 3 family uses a diffusion transformer architecture, a newer approach to AI image generation that replaces the usual image-building backbone with a system that operates on small patches of the image.
The method is inspired by transformers, which are good at handling patterns and sequences. It is not only effective at scale, but also produces higher-quality images.
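To make the “patches” idea concrete, here is a minimal, hypothetical sketch (not Stability’s actual code) of how an image can be split into flattened patch tokens, the kind of sequence a transformer processes:

```python
import numpy as np

def patchify(image: np.ndarray, patch_size: int) -> np.ndarray:
    """Split an (H, W, C) image into flattened patch tokens.

    Each patch_size x patch_size square becomes one token, so the
    transformer sees the image as a sequence rather than a grid.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)          # group the patches together
             .reshape(-1, patch_size * patch_size * c)  # flatten each patch
    )

# A 64x64 RGB image with 8x8 patches yields 64 tokens of 192 values each.
tokens = patchify(np.zeros((64, 64, 3)), patch_size=8)
print(tokens.shape)  # (64, 192)
```

The function name and shapes are illustrative only; real diffusion transformers typically patchify a compressed latent representation of the image rather than raw pixels.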
Journalist specialized in technology, entertainment and video games. Writing about what I'm passionate about (gadgets, games and movies) allows me to stay sane and wake up with a smile on my face when the alarm clock goes off. PS: this is not true 100% of the time.