As artificial intelligence moves to smarter ways of producing images from text, many artists fear their work will be replicated without their permission. While the legality of this is still hotly debated, Stable Diffusion seems to be moving in the right direction: its latest release limits how closely a result can resemble the original artwork and removes the ability to generate pornographic content.
Stable Diffusion 2.0 was released recently with some stunning improvements. With competitors like DALL-E still drawing complaints about replicated artwork, Stability AI, the company behind the model, decided to make a few changes. The update specifically addresses these fears by reducing how much a rendered image can look like the original artwork.
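For readers who want to try the new release themselves, here is a minimal sketch using Hugging Face's diffusers library. The checkpoint name (stabilityai/stable-diffusion-2), the prompt, and the CUDA device are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: generating an image with Stable Diffusion 2.0 via the
# diffusers library. Assumes the stabilityai/stable-diffusion-2 checkpoint
# and a CUDA-capable GPU; both are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

# Load the 2.0 weights in half precision to reduce GPU memory usage.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipe("a watercolor landscape of rolling hills at dawn").images[0]
image.save("landscape.png")
```

Because the pipeline call is the same one used for earlier checkpoints, the behavioral changes in 2.0 come from the model weights and training data rather than the interface.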
While there’s still some contention over whether AI-rendered images can be copyrighted, most artists don’t like their work being replicated without permission, though some see it as an opportunity to market their work. Stable Diffusion’s developers have said a future update will let artists choose whether they want their work protected from AI renderings.

Another issue Stable Diffusion 2.0 deals with is NSFW, or pornographic, content. There have been too many instances of users placing celebrities and other real people in explicit images without their consent. To protect people from this kind of technological abuse, the update removes the ability to generate such content. Users have since taken to Reddit, saying the developers have nerfed the open-source model.
The most significant complaint is that restricting an open-source model like Stable Diffusion means it is no longer truly open source. Some argue that the decision to replicate an artist's style or include NSFW content should be left up to the user. It certainly makes for an interesting debate about the nature of open-source systems and what should or should not be allowed.