- Tesla and SpaceX CEO Elon Musk has repeatedly said that he thinks artificial intelligence poses a threat to humanity.
- Of the companies working on AI technology, Musk is most concerned by the Google-owned DeepMind project, he said in an interview with The New York Times.
- “The nature of the AI that they’re building is one that crushes all humans at all games,” he said. “It’s basically the plotline in ‘WarGames.’”
- In the 1983 film “WarGames,” starring Matthew Broderick, a supercomputer trained to test wartime scenarios is accidentally triggered to start a nuclear war.
Elon Musk has been sounding the alarm about the potentially dangerous, species-ending future of artificial intelligence for years.
In 2016, the billionaire said human beings could become the equivalent of “house cats” to new AI overlords. He has since repeatedly called for regulation and caution when it comes to new AI technology.
But of all the various AI projects in the works, none has Musk more worried than Google’s DeepMind.
“Just the nature of the AI that they’re building is one that crushes all humans at all games,” Musk told The New York Times in an interview. “I mean, it’s basically the plotline in ‘WarGames.’”
In “WarGames,” a teenage hacker played by Matthew Broderick connects to an AI-controlled government supercomputer trained to run war simulations. When he attempts to play a game titled “Global Thermonuclear War,” the AI convinces government officials that a nuclear attack from the Soviet Union is imminent.
In the end (spoiler for those who haven’t seen the 37-year-old movie), the computer runs enough simulations of global thermonuclear war to conclude that no one can win — the only way to win is not to play. The 1983 film is a direct reflection of its time and place, when fear in the US of nuclear war with the Soviet Union still loomed, along with concerns over increasingly advanced technology.
But Musk wasn’t just talking about old films when he compared DeepMind to “WarGames” — he also said AI could surpass human intelligence in the next five years, even if we don’t see the impact of it immediately.
“That doesn’t mean that everything goes to hell in five years,” he said. “It just means that things get unstable or weird.”
Musk was an early investor in DeepMind, which Google acquired in 2014 for a reported sum of more than $500 million. He said in a 2017 interview that he made the investment to keep an eye on burgeoning AI developments, not for a return on investment.
“It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize,” he said in the 2017 interview. “Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”
But Musk thinks artificial intelligence deserves a far more serious reputation than household robots suggest.
“I think generally people underestimate the capability of AI — they sort of think it’s a smart human,” Musk said in an August talk with Alibaba co-founder Jack Ma at the World AI Conference in Shanghai. “But it’s going to be much more than that. It will be much smarter than the smartest human.”
It is “hubris,” he said in The Times interview this week, that keeps “very smart people” from realizing the dangers of AI.
“My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are,” he said. “And this is hubris and obviously false.”