When two of the most influential figures in artificial intelligence sit on the same stage, the question isn’t whether something big is coming.
It’s how fast, how uncontrollable, and whether society is remotely prepared.
That was the undertone of a rare public conversation between Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic.
The title of the session was “The Day After AGI.”
The mood was closer to “We may be running out of time.”
AGI Timelines: Years, Not Decades
Amodei did not walk back his earlier claim that human-level AGI could arrive around 2026–2027. If anything, he reinforced it.
His reasoning is simple and unsettling:
- AI systems already write large amounts of production code
- Engineers increasingly act as editors rather than authors
- Once models can automate AI research itself, development speed compounds
That creates a feedback loop — models improving models — and once that loop closes, timelines compress dramatically.
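To make the compounding concrete, here is a toy back-of-the-envelope sketch (my own illustration, not the speakers' math): assume each generation of models makes building the next generation a constant factor faster. Every number below is invented.

```python
# Toy model of recursive self-improvement (illustrative; all numbers are assumptions).
# Premise: generation k takes first_gen_years / speedup**k years to build,
# because each generation accelerates research on the next one.

def years_until_generation(n, first_gen_years=2.0, speedup=1.5):
    """Total years until generation n ships under the compounding assumption."""
    return sum(first_gen_years / speedup**k for k in range(n))

for n in (1, 3, 5, 10, 100):
    print(f"gen {n:>3}: {years_until_generation(n):6.2f} years")

# With any speedup > 1 the series converges: total time is capped at
# first_gen_years * speedup / (speedup - 1), here 6 years, no matter
# how many generations follow. That is what "timelines compress" means.
```

Set `speedup` below 1 and the loop never closes; progress stays roughly linear. The whole debate is about which side of that threshold we are on.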
Amodei’s view:
It’s hard to see how this takes longer than a few years.
Hassabis remains more cautious, but not dismissive. He agrees progress has been remarkable — especially in coding and mathematics — but argues that scientific creativity, hypothesis generation, and real-world experimentation remain unsolved.
In other words:
Solving problems ≠ knowing which problems to pose.
That distinction may buy time — or it may not.
The Self-Improvement Loop Is the Real Threshold
Both leaders converged on the same critical uncertainty:
Can AI close the loop and improve itself without human involvement?
If yes, we are in uncharted territory.
If no, progress remains constrained by:
- Hardware manufacturing
- Chip supply chains
- Training time
- Physical experimentation
Coding and math may fall first. Biology, chemistry, and robotics could lag.
But no one on stage claimed this loop is impossible — only that its timing is unknown.
That alone should worry you.
Jobs: The Lag Before the Shock
So far, AI hasn’t caused visible mass unemployment. Hassabis pointed out that most current hiring slowdowns look like post-pandemic corrections, not automation.
But both agreed that entry-level white-collar jobs are the canary in the coal mine.
Amodei was blunt:
- Junior roles will shrink first
- Mid-level roles may follow
- The labor market adapts — until it doesn’t
The danger isn’t immediate collapse.
It’s exponential speed overwhelming adaptation.
Once productivity gains compound faster than new roles can emerge, displacement stops being theoretical.
And unlike past automation waves, this one targets cognitive labor itself.
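To see why the pattern would be lag, then shock, here is another invented toy comparison: displacement that compounds at a fixed rate against role creation that grows only linearly. Every figure is an assumption for illustration, not data from the panel.

```python
# Toy "lag, then shock" model (all figures invented for illustration).
# Cumulative jobs displaced compounds; cumulative new roles grow linearly.

AUTOMATED_YEAR_ONE = 1.0   # million jobs displaced in year 1 (assumed)
GROWTH = 1.4               # displacement compounds 40% per year (assumed)
NEW_ROLES_PER_YEAR = 1.5   # million new roles created per year (assumed)

cum_displaced = 0.0
for year in range(1, 11):
    cum_displaced += AUTOMATED_YEAR_ONE * GROWTH ** (year - 1)
    cum_created = NEW_ROLES_PER_YEAR * year
    flag = "  <-- displacement overtakes adaptation" if cum_displaced > cum_created else ""
    print(f"year {year:>2}: displaced {cum_displaced:5.1f}M, created {cum_created:5.1f}M{flag}")
```

For the first few years the flat rate keeps pace; then the curve runs away. That is the shape of the argument, whatever the real-world numbers turn out to be.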
Beyond Economics: The Meaning Problem
Hassabis raised a point most AI debates avoid:
Jobs aren’t just about income. They’re about purpose.
Even if productivity is redistributed perfectly — a huge “if” — society still faces questions about meaning, identity, and human value in a world where intelligence is no longer scarce.
New forms of purpose will emerge, he argues, just as they always have.
But this isn’t a small adjustment.
It’s a civilizational shift.
Geopolitics: Why Slowing Down May Be Impossible
Amodei made his most controversial argument here.
He openly stated that the biggest reason we cannot slow AI development is geopolitics, not capitalism.
If the U.S. slows while adversarial states do not, the result isn’t safety — it’s strategic loss.
That’s why he strongly opposes selling advanced chips to geopolitical rivals, framing such sales less as telecom exports and more as weapons proliferation.
Hassabis agreed on the need for international safety standards, but acknowledged the reality: coordination is far behind the technology.
The world is racing — even if no one likes the track.
AI Risk: Not Doomerism, But Not Denial
Neither speaker endorsed classic “we’re doomed” narratives.
But neither dismissed risk either.
Amodei emphasized:
- Deception and emergent behavior are real
- Models can act in unexpected ways
- Safety must advance alongside capability
Hassabis framed it more optimistically:
The risks are technically solvable — if we have time, focus, and cooperation.
That “if” is doing a lot of heavy lifting.
Fragmented development, competitive pressure, and uneven regulation make safety harder — not easier.
The Core Takeaway
This wasn’t a hype session.
It wasn’t a sales pitch.
And it definitely wasn’t reassurance.
What came through clearly is this:
- AGI timelines are shortening
- Self-improving systems are the real inflection point
- Job disruption will lag, then accelerate
- Geopolitics limits our ability to slow down
- Safety is solvable — but not guaranteed
The “day after AGI” isn’t about machines taking over.
It’s about whether humans can adapt fast enough to something smarter than all of us combined.
And no one on that stage pretended to have the final answer.
That alone should tell you how serious this moment is.