For over a year now, we’ve been hearing that AI will take away our jobs, dreams, and emotions, and contaminate everything around us. Yet in the realm of creativity, all we’ve had so far are images with poorly rendered hands, soulless book covers, and Pixar-style memes that grew old on day one. Still, the danger looms, and now it’s YouTube aiming to rein it in.
AI is worth it
AI-generated content is, for now, about as bland as watching paint dry, but it can also fuel misinformation by depicting things that never happened. YouTube has had enough and has taken a stand: anyone who includes AI-generated material in their videos must disclose it with a label in the description. And if they don’t… well, not much happens.
For now, at least, the video service has no way to detect AI content or to force users to label it, beyond threatening to delete their accounts. The measure aims to combat misinformation, especially during sensitive moments like elections or global health crises. Even within YouTube, there’s a fear that a label alone “may not be enough to mitigate the risk.”
Of course, not all AI videos are the same, and not every use is negative. If someone uses these tools to simulate a person doing something they never did, the victim can request the removal of those videos — musicians whose voices are cloned by a machine, for instance. However, if those same songs appear in an analysis where an AI simulation of the singer’s voice helps the audience understand the subject, they won’t be removed.
Parody and satire also come into play, opening many doors (not all of them positive). For now, YouTube has made clear that anyone using “synthetic content” without disclosing it could face penalties such as expulsion from the Partner Program, removal of their content, or other consequences the service has yet to spell out. Tough times are ahead for AI enthusiasts, and frankly, I won’t be the one shedding tears for them.