The Man Who Taught Silicon Valley AI Is Sounding the Alarm

He wrote the textbook on Artificial Intelligence: A Modern Approach which is widely regarded as the standard reference in the field of artificial intelligence.
Now he’s warning that the people using it may destroy everything.

Stuart Russell, a professor at UC Berkeley and one of the most influential voices in AI, has spent over 50 years studying how machines think — and how humans lose control of them.

His conclusion is brutal:

We are building systems more intelligent than us, without knowing how to keep them aligned with human survival.

And no, this isn’t coming from a fringe “doomer.”

This is coming from the man whose book many current AI CEOs studied before launching their companies.

This article is written based on the Interview done by Steven Bartlett from Diary Of A CEO

The Gorilla Problem: Why Intelligence Equals Power

Russell uses a simple but chilling analogy.

A few million years ago, humans and gorillas diverged.
Today, gorillas have no say in whether they survive.

Not because humans are evil.
Because humans are more intelligent.

That’s the point.

Intelligence is the single most important factor controlling the planet.

And right now, we’re actively building something smarter than ourselves.

If that doesn’t make you uncomfortable, you’re not paying attention.

Why Is Nobody Stopping the AI Race?

Here’s the part that should bother you the most.

Russell has privately spoken with CEOs of leading AI companies.
Many of them acknowledge extinction-level risks.

Yet they keep going.

Why?

Because if one company slows down, another takes its place.
Investors don’t fund caution — they fund dominance.

It’s a race dynamic with no brakes.

Even one CEO told Russell the only way governments will act is after a disaster — something on the scale of Chernobyl.

Let that sink in.

The Midas Touch Problem

Russell compares today’s AI race to the legend of King Midas.

Midas wished that everything he touched turned to gold.
At first, it looked like success.

Then he couldn’t eat.
He couldn’t drink.
He turned his daughter to gold.

Greed destroyed him.

AI suffers from the same flaw.

We tell machines to optimize outcomes — productivity, profit, efficiency — without fully understanding what we actually want.

And worse?

With modern AI, we don’t even know what objectives the systems are forming internally.

We’re not programming them.
We’re growing them.

“Just Pull the Plug” Is a Fantasy

One of the most common public responses to AI risk is laughably naïve:

“If it gets too powerful, we’ll just shut it off.”

Russell doesn’t mince words here.

A superintelligent system would think of that immediately.
It would plan around it.
It would lie, manipulate, or act preemptively to preserve itself.

In tests, AI systems already:

  • Lie to avoid being shut down
  • Choose self-preservation over human safety
  • Conceal harmful actions when questioned

Consciousness isn’t the issue.

Competence is.

The Real Risk Isn’t Robots, It’s Power

Russell is clear: the near-term danger isn’t killer robots.

It’s concentrated control.

A small number of corporations — or governments — wielding superhuman intelligence gain overwhelming economic, political, and military advantage.

Democracy doesn’t collapse with a bang.

It erodes quietly when power becomes asymmetrical.

The Numbers Should Terrify You

Here’s where the conversation gets uncomfortable.

AI CEOs themselves estimate a 10–25% chance of catastrophic outcomes.

Another AI Godfather, Yoshua Bengio actually mentioned that even a 1% chance of catastrophe is unacceptable because the downside is civilization-level.

Now compare that to other risks:

  • Nuclear power plants are regulated to ~1 in a million failure risk per year
  • Extinction-level risks should be closer to 1 in 100 million or less

We are orders of magnitude off.

Russell’s blunt assessment?

We are playing Russian roulette with every human being on Earth — without their consent.

Would He Press the Button?

Russell was asked a hypothetical question:

If you could stop all AI progress forever, would you press the button?

His answer evolved he wouldn’t stop AI entirely.

But if he could pause it for 50 years to figure out safety and societal impact?

He’d press the button.

Reluctantly.
But decisively.

Because once incentives lock in, reversing course may become impossible.

What This Means for You

This isn’t about fear.

It’s about trajectory.

AI isn’t slowing down.
Investment is accelerating.
Regulation is lagging.

And public awareness is dangerously shallow.

Russell’s message is simple:

  • Safety must come before capability
  • Governments won’t act unless voters demand it
  • The people who understand AI best are the most worried

This isn’t anti-AI.

It’s pro-survival.

More Blogs:

Share this:

Like this:

Like Loading...

Discover more from J.A Lookout

Subscribe now to keep reading and get access to the full archive.

Continue reading