Artificial intelligence opens up as many possibilities as it does threats. While ChatGPT can give you a summary of Don Quixote, another tool can impersonate the President of the most powerful country in the world.
In the United States, as part of a disinformation tactic ahead of this year's presidential election, someone has been sending automated calls impersonating President Joe Biden.
The voice, which sounded like Biden's, urged people not to vote in the primary elections, and it may have been AI-generated. But no one, not even vendors of deepfake-detection software, can agree on whether it was.
The electoral fraud attempt poses a different kind of challenge
A Bloomberg article this week analyzed what may have been the first audio-deepfake dirty trick aimed at Joe Biden. But no one knows whether it was a human impersonator or an AI.
Citing two deepfake-detector makers, ElevenLabs and Clarity, Bloomberg was unable to reach any certainty.
ElevenLabs' software considered it unlikely that the attack was the result of voice cloning. Clarity disagreed, reportedly estimating an 80% likelihood that the audio was a deepfake.
A team of students and alumni from the University of California, Berkeley claims to have developed a detection method that works with almost no errors.
Those results come from a laboratory setting, of course, and the research team believes the method's output will need "appropriate context" to be interpreted.
The team feeds raw audio into a deep learning model, which processes it to extract multidimensional representations. The model then uses those representations to distinguish real speech from fake. It has yet to prove itself in the real world, though. We'll have to see what its test against the Biden audio says.
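The general shape of such a pipeline, raw waveform in, multidimensional embedding, then a real-versus-fake score, can be sketched as follows. This is an illustrative toy, not the Berkeley team's actual model: the frame sizes, the embedding dimension, and the random placeholder weights (which a real system would learn from training data) are all assumptions.

```python
# Toy sketch of a raw-audio deepfake classifier (NOT the Berkeley method):
# raw waveform -> overlapping frames -> multidimensional embedding -> score.
# All weights below are random placeholders standing in for learned parameters.
import numpy as np

rng = np.random.default_rng(0)

def frame_audio(waveform, frame_len=400, hop=160):
    """Slice a raw waveform into overlapping fixed-length frames."""
    n = 1 + max(0, (len(waveform) - frame_len) // hop)
    return np.stack([waveform[i * hop : i * hop + frame_len] for i in range(n)])

def embed(frames, dim=64, weights=None):
    """Project each frame into a multidimensional representation, then pool."""
    if weights is None:  # placeholder for a learned feature-extraction layer
        weights = rng.standard_normal((frames.shape[1], dim)) / frames.shape[1] ** 0.5
    reps = np.tanh(frames @ weights)  # per-frame embeddings
    return reps.mean(axis=0)          # average over time into one vector

def fake_score(embedding, w=None, b=0.0):
    """Logistic head: estimated probability that the clip is synthetic."""
    if w is None:  # placeholder for a trained classifier head
        w = rng.standard_normal(embedding.shape[0])
    return 1.0 / (1.0 + np.exp(-(embedding @ w + b)))

audio = rng.standard_normal(16000)  # one second of random "audio" at 16 kHz
score = fake_score(embed(frame_audio(audio)))
print(f"estimated probability of deepfake: {score:.2f}")
```

A production detector would replace the random projections with trained convolutional or transformer layers and calibrate the score threshold on labeled real and synthetic speech.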