December 16, 2025

Mounting Public Support for Superintelligence Bans and Regulation  

A powerful statement, signed by hundreds of prominent figures and now circulating publicly online, might keep superintelligence at bay. Published in October 2025 with just over 850 initial supporters, the document now has more than 120,000 signatures (and counting), including those of Apple co-founder Steve Wozniak; computer scientists and "godfathers" of modern AI Yoshua Bengio and Geoffrey Hinton; and leading AI researchers such as Stuart Russell of UC Berkeley.

Companies are competing to release the most advanced large language models (LLMs), sparking debate over just how intelligent we want AI to become. Signatories warn that a future with superintelligence "[raises] concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction."

The statement calls for a prohibition on developing superintelligent AI until there is both strong public buy-in and a scientific consensus that it can be built and controlled safely. According to recent polling, only 5% of adults support "the status quo of fast, unregulated development," while 73% want comprehensive regulation. Another 64% say superintelligence must be proven safe and controllable before it is developed, or should never be built at all.

In 2015, even Sam Altman, who went on to co-found OpenAI, wrote these words of warning: "[The] development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity." And while Bengio sees the positive impact superintelligence could have on global challenges, he cautions that "AI systems could surpass most individuals in most cognitive tasks within a few years."

AI and tech figures aren't the only ones backing the cause. The statement also carries the names of academics, media personalities, religious leaders, and a bipartisan group of U.S. politicians and officials, including former National Security Advisor Susan Rice. Prince Harry, Duke of Sussex, and his wife, Meghan, Duchess of Sussex, also added their names to the list, with the prince sharing this poignant message: "The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer."

Does Advanced AI Pose Real Existential Risks? 

Public anxiety around superintelligent AI is not merely theoretical, even if the technology itself remains hypothetical for now. The concern is that, should it ever arrive, such an AI could exceed human intelligence across every domain, including creativity, problem-solving, and social skills, potentially causing widespread, irreversible harm to humanity.

A 2024 report commissioned by the U.S. Department of State warned of "substantial national security risks" and an "extinction-level threat to the human species" if the U.S. government did not act "quickly and decisively" to mitigate or avert the risks of advanced AI and AGI (artificial general intelligence). AGI appears to be the more immediate concern, with many experts expecting it to arrive by 2030 or sooner.

That said, many researchers dispute the idea of "human extinction" while still agreeing that AI safety measures are wise. Some contend the bigger threat is to job security, with some students reportedly dropping out of college over AI's potential impact on their career prospects. Still others argue that "it's just marketing hype," with New York University professor emeritus Gary Marcus noting that, even with more data and computing power, AI models continue to fail at sophisticated human tasks.

In fact, today's LLMs aren't showing the same leaps in capability seen in prior years. OpenAI, for instance, reportedly downgraded its GPT-5 project and released it as GPT-4.5 after testing showed only "modest" improvements, with the model "hallucinating" (making up answers) about 37% of the time, versus roughly 60% for its predecessor.

Meanwhile, newer reasoning systems are proving even less reliable than the initial models, and some AI researchers believe the field's focus on language is the problem. Many predict that "scaling up current AI approaches" is "unlikely" or "very unlikely" to produce general intelligence without complementary machine learning paradigms, such as symbolic reasoning systems or learning through interaction with the environment.

Oxford Can Help 

As organizations grapple with the opportunities and challenges presented by advanced AI, partnering with a trusted advisor is more important than ever. Our deep expertise in AI, IT, and digital transformation uniquely positions us as a partner of choice for companies seeking not only to harness the power of cutting-edge technology but also to navigate the complexities and risks involved. By working with us, you gain access to both technical excellence and strategic guidance, ensuring your AI initiatives are implemented with safety, responsibility, and a focus on sustainable value.  

Whether you are embarking on your first AI project or driving large-scale digital transformation, our services offer the experience, innovation, and collaborative approach needed to achieve your goals while safeguarding your organization’s future.   

 

Quality. Commitment. Trust.

Whether you want to advance your business or your career, Oxford is here to help. With 40 years’ experience, we know that a great partnership is key to success. Start a conversation today.
