Good morning, dear readers of Tecnogalaxy! Today we will talk about the latest news around Stable Diffusion.

Stability AI announced that it would allow artists to remove their work from the training dataset for the forthcoming Stable Diffusion 3.0. However, the details of how the plan will be implemented remain incomplete and unclear.


Stable Diffusion, an AI image synthesis model, acquired its ability to generate images by "learning" from a large set of images scraped from the Internet, without asking any rights holders for permission. Some artists are upset because Stable Diffusion can generate images that potentially compete with human artists, in unlimited quantities.

To understand how the Stable Diffusion 3 opt-out system is supposed to work, we created an account on Have I Been Trained and uploaded an image. After the site's search engine found matches in the LAION (Large-scale Artificial Intelligence Open Network) image database, we right-clicked several thumbnails individually and selected "Disable this image" from a pop-up menu.
Once flagged, the images appeared in a list of images we had marked as disabled.

Other problems: to remove an image from the training data, it must already be present in the LAION dataset and must be searchable on Have I Been Trained. At the moment there is no way to exclude large groups of images at once, or the many copies of the same image that might be present in the dataset.

The system, as currently implemented, raises questions that have been echoed in posts on Twitter and YouTube. For example, if Stability AI, LAION, or Spawning undertook the enormous effort of legally verifying ownership in order to control who opts images out, who would pay for it?

In addition, placing the burden on artists to register on a site with no binding connection to Stability AI or LAION, merely in the hope that their request will be honored, seems unreasonable. In response to Spawning's statements about consent in its announcement video, some people noted that the opt-out process does not fit the definition of consent in Europe's General Data Protection Regulation, which states that consent must be given actively, not assumed by default ("Consent must be freely given, specific, informed and unambiguous. To obtain freely given consent, it must be given on a voluntary basis."). Along these lines, many argue that the process should be opt-in only, and that all artworks should be excluded from AI training by default.

Currently, it appears that Stability AI is operating within US and European law when it trains its AI on images scraped and collected without permission (although the issue has not yet been tested in court). But the company is also moving to acknowledge the ethical debate that has sparked large protests against AI-generated art online.

Is there a balance that will satisfy artists while allowing the advancement of AI image synthesis technology to continue? For now, Stability CEO Emad Mostaque is open to suggestions, tweeting: "The @laion_ai team is super open to feedback and wants to create better data sets for everyone and is doing a great job." For our part, we believe this is an innovative technology that should not be limited too much, as it probably only needs some adjustment to comply with copyright laws.

That is all on Stable Diffusion for now; we will cover the lawsuit being filed against it in a forthcoming article.


Was this article helpful to you? Help this site to keep the various expenses with a donation to your liking by clicking on this link. Thank you!

Follow us also on Telegram by clicking on this link to stay updated on the latest articles and news about the site.

If you want to ask questions or talk about technology you can join our Telegram group by clicking on this link.

© - It is forbidden to reproduce the content of this article.