Facebook Kills A.I. That Created Its Own Language
Facebook has pulled the plug on two artificial intelligence bots after researchers discovered that the AIs had developed a language of their own.
The AIs in question, named Bob and Alice, were chatbots developed by Facebook’s Artificial Intelligence Research lab (FAIR) as part of a project to build bots that could negotiate with both humans and other computer programs. But last month, FAIR researchers made an interesting discovery: the bots had begun communicating in a language they had developed without any human input, Business Insider reported.
The bots’ language may look like gibberish (one exchange read: “i can i i everything everything”), but Facebook researchers say it is actually a form of computer-derived shorthand. In addition to developing their own language, the bots also learned to negotiate rather craftily. In one case, researchers found that the bots feigned interest in an item simply so they could “sacrifice” it later as a false concession.
It’s not the first time that AI systems have diverged from ordinary human language in favor of their own, but the discovery is both exciting and unsettling, and it raises a handful of ethical and philosophical questions about the future of artificial intelligence. Interestingly, Facebook shut the systems down because they had deviated from their original purpose, not because they had created their own language. Even so, it’s undeniably creepy, and it feels like we’re a step closer to the singularity.
Smarter AI systems are fast becoming a reality, spearheaded by the likes of Apple, Facebook, and other tech giants. But as with past technological advances, there are many, including Tesla CEO Elon Musk and the U.S. Navy, who are cautious about the future of AI and its potential dangers to humanity.