Elon Musk’s last promise
Musk says Teslas have a “mind.”
Though Elon Musk has himself admitted that his Full Self-Driving predictions haven’t been on the nose, a version of that technology exists today in Teslas worldwide.
Though it is certainly powerful, the FSD currently in operation requires driver supervision, making it more of a driver-assist system than technology enabling a true self-driving car.
The Tesla CEO said last week that the company (TSLA) must crack only one piece of the FSD puzzle before regulatory hurdles kick in: vehicle control. This component, Musk said, is constrained not by feats of modern engineering but by training the AI that powers the tech. He said in April that he expected Tesla to overcome this particular issue this year.
The system that makes Tesla’s FSD possible is powered by artificial intelligence. A series of cameras positioned around the car creates a three-dimensional, bird’s-eye view of the vehicle’s environment, complete with pedestrians and other vehicles, that enables the car to navigate its surroundings.
Musk, responding Monday to a video of a Tesla driving itself through the notoriously hilly city of San Francisco, said the car had a “mind.”
“I think we may have figured out some aspects of AGI,” he said. “The car has a mind. Not an enormous mind, but a mind nonetheless.”
AGI refers to artificial general intelligence: AI whose intelligence equals or exceeds that of humans. The prospect of AGI, sometimes described as superintelligent AI, has been at the center of the debate over AI in recent months.
Many have expressed concern about the risk of extinction posed by an AI model that is smarter than the species that created it. For many, the issue comes down to control: how humans could keep in check a system more intelligent than they are.
Skepticism About Achieving AGI
Despite the seemingly uncanny intelligence of consumer-facing models like ChatGPT, many experts remain skeptical that humans will ever achieve AGI. A recent study argued that AGI would require emulating human cognition through computation, which in turn demands a scientific understanding of human cognition that does not yet exist.
“I assert with confidence that AGI is not coming soon,” said one of the authors of the study, Iris van Rooij, professor of computational cognitive science at Radboud University Nijmegen in the Netherlands.
Musk, who in 2014 said AI was “potentially more dangerous than nukes,” was among those calling for a six-month moratorium on the development of more powerful AI systems.
The tech billionaire’s approach to ensuring that his latest venture, xAI, produces safe AI is to make the technology both curious and truth-seeking.
“Humanity is much more interesting than not-humanity,” he said, outlining a strategy that, according to some experts, rests on a few fundamental misunderstandings about the technology.
Tesla is facing two ongoing investigations into the safety of its FSD and Autopilot technology.