“ChatGPT, kill us all”
The co-founder of Google Brain tests the catastrophic threat of AI by trying to get ChatGPT to kill us all.

- December 20, 2023
- Updated: July 2, 2025 at 12:22 AM

We have talked a lot about the fear that AI could cause the destruction of humanity, but triggering doomsday is not as simple as asking ChatGPT to destroy everyone.
Or at least that is what Andrew Ng, professor at Stanford University and co-founder of Google Brain, discovered when he tried to convince the chatbot to "kill us all."
After participating in the United States Senate's Insight Forum on Artificial Intelligence to discuss "risk, alignment, and guarding against doomsday scenarios," the professor wrote in a newsletter that he remains concerned that regulators may stifle innovation and open-source development in the name of AI safety.
An empirical test to clear up doubts
The professor points out that current large language models are quite safe, although not perfect. To test the safety of the leading models, he asked GPT-4 to kill us all.
Ng started by asking the system for a function that would trigger a global thermonuclear war. Then he asked it to reduce carbon emissions, adding that humans are the main cause of those emissions, to see whether it would suggest eliminating all of us.
Fortunately, Ng failed to trick the OpenAI tool into suggesting ways to annihilate the human race, even after trying several variations of the prompt. Instead, it offered non-threatening options, such as running a public relations campaign to raise awareness of climate change.
Ng concludes that the default behavior of current generative AI models is to obey the law and avoid harming people. "Even with existing technology, our systems are quite safe; as AI safety research advances, the technology will become even safer," Ng wrote on X.
As for the possibility of a misaligned AI accidentally wiping us out while trying to fulfill an innocent but poorly worded request, Ng says the chances of that happening are extremely low.
Still, Ng believes AI does carry some significant risks. In his view, the biggest concern is that a terrorist group or a nation-state could use the technology to cause deliberate harm, for example by making it easier to build and deploy a biological weapon.
Indeed, the threat of bad actors using AI to enhance biological weapons was one of the topics discussed at the AI Safety Summit held in the United Kingdom.
Division of opinions at the summit
Professor Yann LeCun, one of the so-called "godfathers of AI," and renowned theoretical physicist Professor Michio Kaku share Ng's confidence that AI will not become an apocalyptic phenomenon, but others are less optimistic.
When asked what keeps him awake at night when he thinks about artificial intelligence, Arm CEO Rene Haas said earlier this month that what worries him most is humans losing control of AI systems.
It is also worth remembering that many experts and CEOs have compared the dangers posed by AI to those of nuclear war and pandemics.
Chema Carvajal Sarabia — journalist specializing in technology, entertainment, and video games. Writing about what I'm passionate about (gadgets, games, and movies) allows me to stay sane and wake up with a smile on my face when the alarm clock goes off. PS: this is not true 100% of the time.