Since the last time the United States held a general election, the world has changed a great deal. There was no war in Ukraine, and Israel had not yet invaded Gaza. But perhaps what affects the lives of ordinary people most: there were no artificial intelligence tools this easy to use… for better or for worse.
OpenAI, out of necessity, has been thinking about the same thing, and today it updated its policies to start addressing the issue.
The Wall Street Journal reported on the policy change, which was first published on OpenAI's blog: users and creators of ChatGPT, DALL-E, and other OpenAI tools are now prohibited from using them to impersonate candidates or local governments, and from using them for campaigns or lobbying groups.
Users are also not allowed to use OpenAI tools to discourage voting or misrepresent the voting process.
Necessary measures for fair elections
In addition to tightening its policies on electoral disinformation, OpenAI also plans to incorporate digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) into images generated by DALL-E “early this year”.
Microsoft, Amazon, Adobe, and Getty are also working with the C2PA to combat misinformation spread through AI-generated images.
The digital credentials system would encode images with their provenance, making it much easier to identify artificially generated images without having to hunt for telltale flaws such as strange hands or improbably lavish backdrops.
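For the curious, the C2PA project already publishes open-source tooling for inspecting these credentials. The snippet below is a rough, illustrative sketch rather than anything OpenAI has described: it assumes the open-source `c2patool` command-line utility is installed locally and simply asks it to print whatever Content Credentials manifest an image carries.

```python
import json
import subprocess
import sys


def read_content_credentials(image_path: str) -> dict | None:
    """Try to read a C2PA (Content Credentials) manifest from an image.

    Assumes the open-source `c2patool` CLI is installed and on PATH;
    by default it prints any embedded manifest as JSON.
    """
    try:
        result = subprocess.run(
            ["c2patool", image_path],
            capture_output=True,
            text=True,
            check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Tool missing, or the image carries no readable manifest.
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None


if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found (or c2patool unavailable).")
    else:
        # A manifest typically records the generating tool and edit history.
        print(json.dumps(manifest, indent=2))
```

In practice, a manifest like this records which tool produced the image and what edits were applied, which is exactly the kind of origin information the C2PA credentials are meant to surface.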
OpenAI’s tools will also start directing questions about voting in the United States to CanIVote.org, one of the best online authorities on where and how to vote.
But all of these measures are still being rolled out, and they depend largely on users reporting bad actors. And since AI itself is a fast-moving technology that regularly surprises us with both wonderful poetry and blatant lies, it is unclear how well any of this will work to combat misinformation during election season.
For now, the best defense is still media literacy. That means questioning any news story or image that seems too good to be true, and at least running a quick Google search if ChatGPT tells you something completely absurd.