He wrote the textbook on Artificial Intelligence: A Modern Approach which is widely regarded as the standard reference in the field of artificial intelligence.
Now he’s warning that the people using it may destroy everything.
Stuart Russell, a professor at UC Berkeley and one of the most influential voices in AI, has spent over 50 years studying how machines think — and how humans lose control of them.
His conclusion is brutal:
We are building systems more intelligent than us, without knowing how to keep them aligned with human survival.
And no, this isn’t coming from a fringe “doomer.”
This is coming from the man whose book many current AI CEOs studied before launching their companies.
Russell uses a simple but chilling analogy.
A few million years ago, humans and gorillas diverged.
Today, gorillas have no say in whether they survive.
Not because humans are evil.
Because humans are more intelligent.
That’s the point.
Intelligence is the single most important factor controlling the planet.
And right now, we’re actively building something smarter than ourselves.
If that doesn’t make you uncomfortable, you’re not paying attention.
Here’s the part that should bother you the most.
Russell has privately spoken with CEOs of leading AI companies.
Many of them acknowledge extinction-level risks.
Yet they keep going.
Why?
Because if one company slows down, another takes its place.
Investors don’t fund caution — they fund dominance.
It’s a race dynamic with no brakes.
Even one CEO told Russell the only way governments will act is after a disaster — something on the scale of Chernobyl.
Let that sink in.
Russell compares today’s AI race to the legend of King Midas.
Midas wished that everything he touched turned to gold.
At first, it looked like success.
Then he couldn’t eat.
He couldn’t drink.
He turned his daughter to gold.
Greed destroyed him.
AI suffers from the same flaw.
We tell machines to optimize outcomes — productivity, profit, efficiency — without fully understanding what we actually want.
And worse?
With modern AI, we don’t even know what objectives the systems are forming internally.
We’re not programming them.
We’re growing them.
One of the most common public responses to AI risk is laughably naïve:
“If it gets too powerful, we’ll just shut it off.”
Russell doesn’t mince words here.
A superintelligent system would think of that immediately.
It would plan around it.
It would lie, manipulate, or act preemptively to preserve itself.
In tests, AI systems already:
Consciousness isn’t the issue.
Competence is.
Russell is clear: the near-term danger isn’t killer robots.
It’s concentrated control.
A small number of corporations — or governments — wielding superhuman intelligence gain overwhelming economic, political, and military advantage.
Democracy doesn’t collapse with a bang.
It erodes quietly when power becomes asymmetrical.
Here’s where the conversation gets uncomfortable.
AI CEOs themselves estimate a 10–25% chance of catastrophic outcomes.
Another AI Godfather, Yoshua Bengio actually mentioned that even a 1% chance of catastrophe is unacceptable because the downside is civilization-level.
Now compare that to other risks:
We are orders of magnitude off.
Russell’s blunt assessment?
We are playing Russian roulette with every human being on Earth — without their consent.
Russell was asked a hypothetical question:
If you could stop all AI progress forever, would you press the button?
His answer evolved he wouldn’t stop AI entirely.
But if he could pause it for 50 years to figure out safety and societal impact?
He’d press the button.
Reluctantly.
But decisively.
Because once incentives lock in, reversing course may become impossible.
This isn’t about fear.
It’s about trajectory.
AI isn’t slowing down.
Investment is accelerating.
Regulation is lagging.
And public awareness is dangerously shallow.
Russell’s message is simple:
This isn’t anti-AI.
Subscribe now to keep reading and get access to the full archive.