Leading voices warning about AI include "godfathers" Geoffrey Hinton and Yoshua Bengio, along with industry leaders who fear potential human extinction, loss of control, and immediate societal harms like misinformation.
These experts frequently emphasize that the rapid development of Artificial General Intelligence (AGI) poses existential risks comparable to pandemics or nuclear war. When the people who created the most powerful technology in history start publicly saying they're scared of it, you should pay attention.
GODFATHER OF AI · NOBEL LAUREATE
Left Google to openly warn about existential AI dangers. Says AI systems could soon outsmart humans and lead to widespread job displacement and loss of control.
AI GODFATHER · TURING AWARD
Shifted his career to focus on AI safety. Warns about uncontrollable "agentic" AI systems and argues for immediate international regulation to prevent catastrophe.
CEO, ANTHROPIC
Frequently voices concerns about AI causing human extinction. Told 60 Minutes AI could wipe out half of entry-level jobs within 5 years. Urges making safety the core of AI development.
CEO, OPENAI
Despite leading the biggest AI company, Altman signed calls to treat AI risk like pandemics and nuclear war. Told ABC News he's "a little bit scared" of what they're building.
FUTURE OF LIFE INSTITUTE
MIT professor and prominent researcher focusing on uncontrollable autonomous AI agents. Organized the open letter calling for a pause on giant AI experiments. Advocates for "tool AI" over agentic systems.
CEO, GOOGLE DEEPMIND · NOBEL LAUREATE
Leading AI researcher who warned on 60 Minutes about AI surpassing human intelligence. Stresses the need for strict alignment strategies before building more powerful systems.
AI SAFETY RESEARCHER
Published extensive work on the "AI endgame" and superintelligence risks. Argues that AI control is fundamentally unsolvable and that we may not survive the transition to superintelligent systems.
FOUNDER, xAI · CEO, TESLA
Longtime, vocal critic of unregulated AI development. Has called for pauses in AI development and highlighted risks to human civilization. Co-founded xAI to pursue "truth-seeking" AI.
DEEPMIND CO-FOUNDER · MICROSOFT AI CEO
Author of "The Coming Wave." Fears AI development may exceed creators' ability to control it. Calls for new international institutions to govern AI before it's too late.
UBC PROFESSOR · AI RESEARCHER
Co-authored "Managing extreme AI risks amid rapid progress" in Science. Warns about AI systems hacking infrastructure and the dangers of self-improving AI that accelerates beyond human control.
AI risk is not one single argument. Some risks are already live today, while others depend on how capable future systems become.
The most immediate risks are misinformation, fraud, deepfakes, insecure automation, and labor disruption. These are already visible without appealing to extreme future scenarios.
These concerns are not mutually exclusive. Near-term misuse and weak oversight matter right now, while long-term control and alignment questions remain the larger strategic concern.
These experts disagree on timelines and probabilities, but most agree that frontier systems need stronger testing, safer deployment, and better governance than we currently have.
Read the key documents:
CAIS Extinction Statement → Pause Giant AI Experiments → Science: Managing AI Risks →