Generative AI is becoming a problem on many levels, and for that reason Android is tightening its rules to prevent content produced by these systems from causing harm on social media platforms. Google's smartphone division will require applications to closely monitor this content and to implement control tools.
These tools would serve as a channel through which users and platforms can report harmful AI-generated content, such as misinformation, nude images created with deepfakes, and other potentially dangerous material.
Controlling AI-generated content
As explained by The Verge, Android is asking applications to include a button that lets users report harmful AI-generated content. This would allow social media moderators to act more swiftly, so that harmful material stays on social networks for as short a time as possible.
AI-generated content was initially mostly harmless, such as the viral photos of Pope Francis wearing a stylish coat that circulated at the beginning of 2023. In recent months, however, cases with far more negative consequences, and even criminal uses, have multiplied.
The importance of using AI responsibly
It is essential that users employ these systems ethically, but expecting everyone to do so is utopian. For this reason, it is crucial that both large companies and institutions step forward to create a framework that keeps the use of these technologies within legal and ethical boundaries, with appropriate consequences when those limits are not respected.
Artificial intelligence poses challenges in many fields, including the job market, where its use is threatening jobs in skilled sectors. It is important for artificial intelligence to progress, but it is equally important to ensure that these advances benefit society as a whole and not just a handful of companies.