I’m sure you’re aware of the mess that went down recently with a “dangerous” Facebook experiment. I’m talking about the news that Facebook had supposedly shut down an artificial-intelligence bot in development because it was becoming too dangerous for humanity.
In recent days, we’ve seen disturbing headlines like “Facebook engineers panic, pull plug on AI after bots develop their own language”; “Facebook shuts down AI after it invents its own creepy language”; or articles about “the danger that comes with artificial intelligence.”
For crying out loud, just look at some of the images alongside these news articles:
Has Facebook accidentally created a series of bots that talk to each other in a language they invented and conspire against us? No, it was just something that got out of hand, thanks to the media.
So, what happened? What’s the truth behind the hysteria and controversy? It all started a few weeks ago, when FastCo Design, a tech and marketing site, revealed that two bots created by Facebook were talking to each other in a strange way.
These two bots, “Bob” and “Alice,” are part of a project that the Facebook Artificial Intelligence Research unit (or FAIR) publicly announced this June.
The aim of the project was to find out if it’s “possible for dialog agents with differing goals to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.”
These bots tried to divvy up a series of objects (represented by virtual objects such as books, hats and balls) so that both were in agreement.
Here you’ve got the bots negotiating among themselves:
The idea behind this project was to find out if a bot could learn enough about human interaction to negotiate with a user without that user knowing he/she was talking to a machine. On a practical level, a bot of this calibre could help companies cater to client needs 24/7, without having to hire “humans” for this.
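To make the task concrete, here’s a toy sketch of what “divvying up items so both agents agree” can look like. This is a hypothetical illustration, not FAIR’s actual reinforcement-learning agents: the item pool, the private value tables, and the `negotiate` helper are all made up, and the splitting rule (maximize the worse-off agent’s payoff) is just one simple way to model an agreement.

```python
# Toy sketch (hypothetical, not FAIR's code): split a pool of items between
# two agents that value them differently, accepting the split that maximizes
# the worse-off agent's payoff.
from itertools import product

ITEMS = {"book": 2, "hat": 1, "ball": 3}      # items on the table
VALUES_A = {"book": 3, "hat": 1, "ball": 1}   # agent A's private values
VALUES_B = {"book": 1, "hat": 4, "ball": 1}   # agent B's private values

def payoff(values, share):
    """Total value an agent assigns to its share of the items."""
    return sum(values[item] * n for item, n in share.items())

def negotiate(items, values_a, values_b):
    """Enumerate every split and return the one with the best worst-case payoff."""
    names = list(items)
    best, best_score = None, -1
    for counts in product(*(range(items[n] + 1) for n in names)):
        share_a = dict(zip(names, counts))
        share_b = {n: items[n] - share_a[n] for n in names}
        score = min(payoff(values_a, share_a), payoff(values_b, share_b))
        if score > best_score:
            best, best_score = (share_a, share_b), score
    return best

share_a, share_b = negotiate(ITEMS, VALUES_A, VALUES_B)
print("A gets:", share_a, "| B gets:", share_b)
```

Even in this crude version you can see the shape of the problem: each side values the items differently (A prefers books, B prefers the hat), and a deal only happens when the split is acceptable to both. FAIR’s bots learned this through dialogue rather than brute-force enumeration.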
The project was a success: “The performance of FAIR’s best negotiation agent, which makes use of reinforcement learning and dialog roll-outs, matched that of human negotiators… demonstrating that FAIR’s bots not only can speak English but also think intelligently about what to say.”
So far so good, right? There’s nothing ominous or strange about it. It got complicated when FAIR decided to take their most intelligent bots, called “Bob” and “Alice,” and have them talk to each other. Out of curiosity.
It’s then that these strange and somewhat creepy conversations happened. Here’s a complete extract, courtesy of Facebook itself:
The press misinterpreted the conversations (or didn’t finish reading the original article from FastCo Design) and thought the end of Facebook’s experiment was due to company fear that their AI had created its own secret language.
Facebook DID shut down the conversation between the two bots. But they did so because they had made a mistake in the experiment. FAIR overlooked something: they gave the bots no incentive to communicate using English rules that a human could understand. The bots simply spoke among themselves and created language shortcuts that, yes, give us the creeps because they look nuts, but are completely harmless.
“Agents will drift off understandable language and invent code words for themselves,” FAIR explained. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
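The kind of shorthand FAIR describes can be mimicked in a few lines. This is a toy illustration under my own assumptions (the `encode_request`/`decode_request` helpers are invented for this example, not anything from the actual experiment): a degenerate “language” in which repeating an item’s name encodes how many copies the speaker wants, exactly like the “say ‘the’ five times” example.

```python
# Toy illustration (not FAIR's code): encode item counts by token repetition,
# mirroring the "say 'the' five times means five copies" shorthand.
from collections import Counter

def encode_request(wants):
    """Repeat each item's token once per desired copy."""
    return " ".join(item for item, n in wants.items() for _ in range(n))

def decode_request(utterance):
    """Recover item counts by counting token repetitions."""
    return dict(Counter(utterance.split()))

msg = encode_request({"ball": 3, "hat": 1})
print(msg)                  # ball ball ball hat
print(decode_request(msg))  # {'ball': 3, 'hat': 1}
```

The point is that this encoding is perfectly unambiguous between the two bots, while reading as gibberish to us. That gap between “looks creepy” and “is actually a boring compression trick” is the entire story.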
Facebook tossed out the experiment because they wanted bots that learn to talk to people. The result wasn’t dangerous; it just wasn’t practical. The bots had learned to divide virtual balls, but without using English rules that we all understand. So the experiment wasn’t worth much.
It’s easy to see how a story that starts with “two bots manage to exchange virtual balls, but do it in their own way, which isn’t helpful” becomes “Facebook shuts down two bots because they created a new language and could pose a threat to humanity.” The second version is a thousand times more appealing than the first.
In short: neither “Bob” nor “Alice” is conspiring against us, and our lives are not in danger. If anything, the only ones who should be worried are professional negotiators; if these experiments keep going well, they’ll be out of work.