
- Neil deGrasse Tyson is calling for a global treaty to ban the development of AI superintelligence
- He argues that highly advanced AI could pose risks on the scale of nuclear weapons if left unchecked
- The debate highlights a growing tension between rapid AI progress and concerns about long-term safety
Neil deGrasse Tyson is not usually the person in the room calling for a global ban on anything. He is better known for explaining black holes with a smile than for advocating international treaties.
But in a recent talk that has been circulating widely online, the astrophysicist delivered a stark warning about artificial intelligence that sounded less like a science lecture and more like a line from a disaster movie.
“That branch of AI is lethal,” he said. “We’ve got to do something about that. Nobody should build it.”
The “branch” he is referring to is artificial superintelligence, a hypothetical future form of AI that would surpass human intelligence across nearly all domains. For Tyson, the concern is not incremental improvements in chatbots or image generators. It is the possibility of something far more powerful, something that could outthink, outmaneuver, and potentially outlast its creators.
Most people’s daily experience of AI is a chatbot drafting emails, a phone organizing photos, or a navigation app rerouting around traffic. Tyson’s warning, though, taps into a growing debate that has moved from academic papers into mainstream conversation. How far should AI be allowed to go?
Super AI
The idea of banning superintelligence is not new. Researchers and public figures have been discussing it for years, often framing it as a precaution against an “intelligence explosion,” where AI systems rapidly improve themselves beyond human control.
Some proponents argue that once such systems exist, it may be impossible to contain them or align them with human values. The counterargument is that these fears are speculative and risk slowing down beneficial innovation.
Tyson’s contribution stands out for its bluntness and for the concrete remedy he proposes: global cooperation on a ban.
“Everyone needs to agree to that by treaty,” he said. “Treaties are not perfect, but they’re the best we have as humans.”
International treaties are one of the few mechanisms humanity has for managing existential risks. Nuclear weapons, chemical weapons, and even ozone-depleting substances have all been subject to global agreements. The logic is simple, even if the execution is not.
If a technology is too dangerous for any one country to handle alone, then it becomes everyone’s problem. But AI is software, not a bomb, and software has a way of slipping across borders.
AI proliferation and fear
High-profile voices have continuously warned that AI could be dangerous enough to warrant global intervention, even as the technology becomes ubiquitous. You might use AI to plan a weekend trip or summarize a meeting, all while hearing that the same underlying technology could one day become uncontrollable.
Tyson’s call for a treaty does not resolve that tension. If anything, it sharpens it. Regulation has often lagged behind innovation; usually, by the time governments act, a technology has already become widespread. Seen that way, calling for a treaty while superintelligence remains purely theoretical is less absurd than it might sound.
AI may be different in that its potential risks are being discussed before its most advanced forms exist. That creates an opportunity, but also a dilemma. Acting too early could stifle progress. Acting too late could make control impossible.
What Tyson is suggesting is that the answer should not be left to chance. But like most collective decisions, it is likely to be messy, contested, and far from unanimous.
