AI could be the end of humanity as we know it. Many experts in the field believe this, and they have been warning the population and governments for a long time. Either we regulate and control AI, or it will replace us.
Many people worry that AI could advance to the point where humans lose control of these systems, with potentially apocalyptic consequences.
But it’s one thing for a politician or your neighbor to voice that fear, and quite another when it comes from one of the most influential figures in the tech industry: the CEO of Arm, who says the prospect keeps him up at night.
Do we have a problem with AI? No, but we’re going to have one
In an interview with Bloomberg, Arm’s CEO, Rene Haas, said that artificial intelligence needs some form of override or backdoor that can shut down systems.
“What concerns me the most is humans losing control,” said the chip designer’s CEO, when asked what keeps him up at night thinking about artificial intelligence.
Haas estimates that 70% of the world’s population interacts in some way with products designed by Arm: 99% of the 1.4 billion smartphones sold each year use Arm designs or its technology.
Of course, he’s not against AI and is aware of the role Arm will play in the new technological revolution. The CEO has already stated, “You really can’t make AI work without Arm.” Haas simply believes that a security mechanism is needed.
“I believe [AI] will permeate everything we do and all aspects of how we work, live, and play,” explains Haas, who became CEO last year. “It’s going to change everything in the next five to ten years.”
Haas isn’t the first in the industry to acknowledge concern about AI. OpenAI’s CEO, Sam Altman, warned in February that the world might not be far from a potentially “terrifying” artificial intelligence.
OpenAI, the company behind ChatGPT, also warned in July about the possibility of developing an AI smarter than humans, which could cause the extinction of the human race.
There’s also xAI’s CEO, Elon Musk, who has consistently described AI as an existential threat to humanity, while artificial intelligence pioneer Geoffrey Hinton left Google in May, citing the risks posed by the emerging technology.
Many experts and CEOs have gone as far as comparing the dangers of AI to those of a nuclear war or a pandemic.