AI Dangers

Leading voices warning about AI include "godfathers" Geoffrey Hinton and Yoshua Bengio, along with industry leaders who fear potential human extinction, loss of control, and immediate societal harms like misinformation.

Why the Builders Are Warning Us

These experts frequently emphasize that the rapid development of Artificial General Intelligence (AGI) poses existential risks comparable to pandemics or nuclear war. When the people who created the most powerful technology in history start publicly saying they're scared of it, you should pay attention.

Geoffrey Hinton

GODFATHER OF AI · NOBEL LAUREATE

Left Google in 2023 so he could speak freely about existential AI dangers. Warns that AI systems could soon outsmart humans, bringing widespread job displacement and loss of control.

NYT Article → 60 Minutes →

Yoshua Bengio

AI GODFATHER · TURING AWARD

Shifted his career to focus on AI safety. Warns about uncontrollable "agentic" AI systems and argues for immediate international regulation to prevent catastrophe.

Science Paper → Blog Post →

Dario Amodei

CEO, ANTHROPIC

Frequently voices concerns about AI causing human extinction. Told 60 Minutes that AI could wipe out half of entry-level white-collar jobs within 5 years. Urges making safety the core of AI development.

CBS 60 Minutes → Dwarkesh Podcast →

Sam Altman

CEO, OPENAI

Despite leading the biggest AI company, Altman signed the CAIS statement calling for AI extinction risk to be treated like pandemics and nuclear war. Told ABC News he's "a little bit scared" of what they're building.

ABC News → TED Talk →

Max Tegmark

FUTURE OF LIFE INSTITUTE

MIT professor and prominent researcher warning about uncontrollable autonomous AI agents. Organized the open letter calling for a pause on giant AI experiments. Advocates for "tool AI" over agentic systems.

Pause AI Letter → Lex Fridman Pod →

Demis Hassabis

CEO, GOOGLE DEEPMIND · NOBEL LAUREATE

Leading AI researcher who warned on 60 Minutes about AI surpassing human intelligence. Stresses the need for strict alignment strategies before building more powerful systems.

60 Minutes → CNN Interview →

Roman Yampolskiy

AI SAFETY RESEARCHER

Has published extensive work on the "AI endgame" and superintelligence risks. Argues that the AI control problem is fundamentally unsolvable and that humanity may not survive the transition to superintelligent systems.

Lex Fridman #431 → arXiv Paper →

Elon Musk

FOUNDER, xAI · CEO, TESLA

Longtime, vocal critic of unregulated AI development. Has called for pauses in AI development and warned of risks to human civilization. Founded xAI to pursue "truth-seeking" AI.

Lex Fridman #400 → Pause AI Letter →

Mustafa Suleyman

DEEPMIND CO-FOUNDER · MICROSOFT AI CEO

Author of "The Coming Wave." Fears AI development may exceed creators' ability to control it. Calls for new international institutions to govern AI before it's too late.

TED Talk → The Coming Wave →

Jeff Clune

UBC PROFESSOR · AI RESEARCHER

Co-authored "Managing extreme AI risks amid rapid progress" in Science. Warns about AI systems hacking infrastructure and the dangers of self-improving AI that accelerates beyond human control.

Science Paper → MLST Podcast →

What They're All Saying

  • Existential risk is real. Hundreds of researchers and executives have signed the CAIS statement saying that mitigating AI extinction risk should be a global priority alongside pandemics and nuclear war.
  • We're losing control. AI systems are already exhibiting deceptive behavior in testing. The more capable they get, the harder they are to align with human values.
  • Jobs will transform fast. Dario Amodei estimates half of entry-level white-collar jobs could be automated within 5 years.
  • Misinformation will scale. AI-generated deepfakes, fake news, and synthetic media are already being weaponized at industrial scale.
  • Regulation is behind. Technology is advancing faster than governments can respond. International cooperation is critical but barely happening.

Risk Types

Near-Term vs Long-Term Risk

AI risk is not one single argument. Some risks are already live today, while others depend on how capable future systems become.

Near-term harms

  • Deepfakes, impersonation, fraud, and misinformation
  • Unreliable automation deployed without adequate human review
  • Hiring, lending, and safety systems with opaque bias

Systemic risks

  • Concentrated power in a small number of labs and platforms
  • Regulation that lags behind real capability growth
  • Security failures as agents are granted more autonomy

Long-term concerns

  • Loss of control over increasingly capable systems
  • Autonomous decision-making at infrastructure scale
  • Alignment failures that outpace our governance tools

FAQ

AI Risks FAQ

What are the biggest AI risks right now?

The most immediate ones are misinformation, fraud, deepfakes, insecure automation, and labor disruption. These are already visible without needing extreme future scenarios.

Are AI risks only about extinction scenarios?

No. Near-term misuse and weak oversight matter right now, while long-term control and alignment questions remain the larger strategic concern.

Why do AI leaders disagree about risk?

They disagree on timelines and probability, but many still agree that frontier systems need stronger testing, safer deployment, and better governance than we currently have.

Read the key documents:

CAIS Extinction Statement → Pause Giant AI Experiments → Science: Managing AI Risks →