
The baffling truth behind the AI Facebook shut down “out of fear it was conspiring against humanity”

Daniel Caceres

I’m sure you’re aware of the recent fuss over a “dangerous” Facebook experiment: the news that Facebook shut down an artificial intelligence bot in development because it was supposedly becoming too dangerous for humanity.

In recent days, we’ve seen disturbing headlines like “Facebook engineers panic, pull plug on AI after bots develop their own language” and “Facebook shuts down AI after it invents its own creepy language,” along with articles about “the danger that comes with artificial intelligence.”

For crying out loud, just look at some of the images alongside these news articles:

[Images: menacing robots and stills from Terminator 2: Judgment Day]

Has Facebook accidentally created a series of bots that talk to each other in an invented language and conspire against us? No; it’s just a story that got out of hand, thanks to the media.

So, what happened? What’s the truth behind the hysteria and controversy? It all started a few weeks ago, when FastCo Design, a tech and marketing site, revealed that two bots created by Facebook were talking to each other in a strange way.

These two bots, “Bob” and “Alice,” are part of a project that the Facebook Artificial Intelligence Research unit (or FAIR) publicly announced this June.

The aim of the project was to find out if it’s “possible for dialog agents with differing goals to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.”

These bots tried to divvy up a collection of items (virtual books, hats, and balls) in a way that both could agree on.
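
To make the setup concrete, here’s a minimal Python sketch of how such a negotiation can be scored. This is not FAIR’s actual code; the item pool and the bots’ valuations are invented for illustration. Each bot values the items differently (in FAIR’s setup those valuations were hidden from the other side), and its reward is the total value of whatever it walks away with.

    # A minimal sketch, not FAIR's actual code. The item pool and the
    # bots' valuations are invented for illustration.
    POOL = {"book": 2, "hat": 1, "ball": 3}  # items on the table

    def reward(values, share):
        """Total value an agent assigns to the items it ends up with."""
        return sum(values[item] * count for item, count in share.items())

    # Hypothetical private valuations for each bot.
    alice_values = {"book": 1, "hat": 4, "ball": 1}
    bob_values = {"book": 3, "hat": 0, "ball": 2}

    # One possible agreed split of the pool.
    alice_share = {"book": 0, "hat": 1, "ball": 1}
    bob_share = {item: POOL[item] - alice_share[item] for item in POOL}

    print(reward(alice_values, alice_share))  # 5
    print(reward(bob_values, bob_share))      # 10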

Here you’ve got the bots negotiating among themselves:

[Image: screenshot of the two bots negotiating]

The idea behind this project was to find out if a bot could learn enough about human interaction to negotiate with a user without that user realizing they were talking to a machine. On a practical level, a bot of this calibre could help companies cater to client needs 24/7, without having to hire “humans” for the job.

The project was a success: “The performance of FAIR’s best negotiation agent, which makes use of reinforcement learning and dialog roll-outs, matched that of human negotiators… demonstrating that FAIR’s bots not only can speak English but also think intelligently about what to say.”
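
That quote mentions “dialog roll-outs,” which deserves a quick unpacking: before sending a message, the agent simulates several ways the rest of the negotiation could play out and picks the message with the best average final score. Here’s a hedged Python sketch of that planning idea; it is not FAIR’s code, and the simulator below is a made-up stub standing in for the learned dialogue model.

    import random

    def rollout_value(candidate, simulate_dialogue, n_rollouts=10):
        """Average final reward over simulated continuations of a message."""
        return sum(simulate_dialogue(candidate) for _ in range(n_rollouts)) / n_rollouts

    def choose_utterance(candidates, simulate_dialogue):
        """Send whichever candidate message scores best in simulation."""
        return max(candidates, key=lambda c: rollout_value(c, simulate_dialogue))

    # Made-up stub: the real system would use the learned dialogue model
    # to play both sides of the conversation to the end and score the deal.
    def fake_simulator(candidate):
        return random.random() + (1.0 if "ball" in candidate else 0.0)

    print(choose_utterance(["i want the ball", "you take the hats"], fake_simulator))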

So far so good, right? There’s nothing ominous or strange about it. Things got complicated when FAIR decided, out of curiosity, to take its most intelligent bots, “Bob” and “Alice,” and have them talk to each other.

That’s when the strange and somewhat creepy conversations happened. Here’s an extract, courtesy of Facebook itself:

[Image: transcript of Bob and Alice’s conversation]

The press misinterpreted the conversations (or didn’t finish reading the original FastCo Design article) and assumed Facebook ended the experiment out of fear that its AI had created a secret language of its own.

Facebook DID shut down the conversation between the two bots, but only because of a mistake in the experiment’s design. FAIR had overlooked something: nothing gave the two bots an incentive to communicate using English rules that a human could understand. Left to themselves, the bots created language shortcuts that, yes, give us the creeps because they seem nuts, but are completely harmless.

“Agents will drift off understandable language and invent code words for themselves,” FAIR explained. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
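
As a toy illustration of that kind of shorthand (this is not Facebook’s code, and the vocabulary is invented for the example), here’s how an utterance could be decoded when the number of repetitions of an item’s name encodes how many of that item the speaker wants:

    from collections import Counter

    ITEMS = {"book", "hat", "ball"}  # invented vocabulary for the example

    def decode(utterance):
        """Read each repetition of an item's name as +1 of that item."""
        return Counter(word for word in utterance.split() if word in ITEMS)

    print(decode("ball ball ball to me to me"))  # Counter({'ball': 3})
    print(decode("hat hat book"))                # Counter({'hat': 2, 'book': 1})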

Facebook tossed out the experiment because they want bots to learn to talk to people. The result wasn’t dangerous. It just wasn’t practical. The bots had learned to divide virtual balls, but without using English rules that we all understand. So, the experiment wasn’t worth much.

It’s easy to see how a story that starts with “two bots manage to exchange virtual balls, but do it in their own way, which isn’t helpful” becomes “Facebook shuts down two bots because they created a new language and could pose a threat to humanity.” The second version is a thousand times more appealing than the first.

In short: neither “Bob” nor “Alice” is conspiring against us, and our lives are not in danger. If anyone should be worried, it’s professional negotiators; if these experiments keep going well, they’ll be out of work.

Source: Gizmodo

Daniel Caceres

I learned how to read thanks to Monkey Island’s insult sword fighting. I love to write fiction, meditate, and spy on strangers in bars and restaurants. I have a backlog of unfinished games. I practice NLP and am training as an improv theater actor.

Email: daniel.caceres@softonic.com | Twitter: https://twitter.com/lorddevries
