Two predictions. Same conclusion. Different timelines.
Ben Goertzel, CEO of SingularityNET, says artificial intelligence will surpass humans in high-level strategic thinking in about two years.
Gracy Chen, CEO of Bitget, says AI trading bots are “interns” now but will be “full employees” in 3-5 years, capable of replacing most human traders.
Both are telling us the same thing: the human brain’s advantage is fading fast.
And honestly? I wouldn’t be surprised if we’re already past the point of no return.
AI Development Is Moving Faster Than We Can Process
Here’s what makes this different from previous waves of automation: the speed.
AI isn’t gradually improving over decades. It’s leaping forward in months. Countries are racing to develop the most advanced and powerful AI systems. Tech companies are pouring hundreds of billions into compute infrastructure. Models that seemed cutting-edge six months ago are obsolete today.
The timeline Goertzel suggests—two years until AI surpasses human strategic thinking—might even be conservative.
Think about what’s happened in just the past 18 months. AI went from being a niche research topic to dominating business strategy across every industry. Companies that ignored AI are now scrambling to integrate it or risk irrelevance. Entire job categories are being redefined around what AI can and can’t do.
And the pace isn’t slowing down. It’s accelerating.
So when Goertzel says we should “enjoy our imaginative edge for a couple more years,” he might be giving us more credit than we deserve. AI development is moving too fast, and it’s taking over industries faster than most human minds can keep up.
If the question is whether Goertzel is being overly optimistic about AGI or whether the exchange CEO is being too conservative, I lean toward Goertzel being closer to reality. The technology is advancing at a pace that makes even aggressive predictions look moderate in retrospect.
The Last Human Edge: Imagination and the Unknown
Goertzel makes an interesting claim: “The human brain is better at taking the imaginative leap to understand the unknown.”
For now.
But let’s think about what “imagination” actually is. It’s estimating the likelihood of events you didn’t witness directly. It’s weighing the plausible ways a situation could play out. It’s navigating uncertainty by constructing mental models of possible futures.
In other words, imagination is probabilities. And probabilities are mathematics.
Which is exactly what machine learning excels at.
If venturing into the unknown and estimating probabilities is what helps us advance, then AI could probably do it much better and faster than we can. It can process more data, simulate more scenarios, update its models in real time, and sidestep the cognitive biases that distort human judgment.
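To make the “imagination is probabilities” point concrete, here is a toy Bayes-rule update: a trader’s hunch that a crash is coming, revised when a warning signal appears. Every number here is made up purely for illustration.

```python
# Toy Bayesian update: "imagination as probabilities" in a few lines.
# All probabilities below are hypothetical, chosen only to illustrate the math.
prior = 0.10                 # P(crash): initial belief a crash is coming
p_signal_given_crash = 0.80  # P(signal | crash): warning sign if a crash is real
p_signal_given_calm = 0.20   # P(signal | no crash): false-alarm rate

# Total probability of seeing the signal at all
evidence = prior * p_signal_given_crash + (1 - prior) * p_signal_given_calm

# Bayes' rule: revised belief after observing the signal
posterior = prior * p_signal_given_crash / evidence
print(f"P(crash | signal) = {posterior:.2f}")
```

This is exactly the kind of update a human does fuzzily, by gut, and a machine does explicitly, at scale, across thousands of signals at once.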
That’s a scary thought. But it is what it is.
The idea that humans have some special creative spark that machines can never replicate is comforting. But it might also be wrong. Or at best, it might be a temporary advantage that disappears the moment AI systems become sophisticated enough to model uncertainty at scale.
When that happens—when AI can not only process known information but also navigate the unknown better than we can—what’s left?
Black Swan Events: The Last Test for Human Judgment?
Both articles point to a current limitation of AI: it struggles with totally unfamiliar market events.
AI trading bots are trained on historical data. They’ve never seen massive single-day liquidations like the 10/10 event. They find extreme volatility “very unfamiliar.” In those moments, human intervention is still needed.
But does that mean humans will always be needed for crisis management?
I don’t think so.
Experience is the greatest teacher. And if AI is designed to learn from real-world events, then every crisis it experiences should add to its learning. It should improve. It should avoid similar mistakes when similar events come up.
The difference is that AI learns faster than humans. One human trader might experience a handful of black swan events in their entire career. An AI system can be trained on every historical crisis simultaneously, simulate thousands of variations, and update its models instantly when new data arrives.
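The “simulate thousands of variations” claim can be sketched in a few lines: bootstrap-resample the daily returns of one historical crash to generate synthetic crisis paths, far more than any trader could live through. The return figures are invented for illustration; this is a toy sketch, not a real risk model.

```python
import random

# Hypothetical daily returns from one historical crash (illustrative only)
historical_crash_returns = [-0.12, -0.08, 0.03, -0.15, 0.05, -0.07]

def simulate_scenarios(returns, n_scenarios=10_000, horizon=6, seed=42):
    """Bootstrap-resample crisis returns to generate synthetic crash variations."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        # Build one synthetic crisis path by sampling daily returns with replacement
        total = 1.0
        for _ in range(horizon):
            total *= 1 + rng.choice(returns)
        scenarios.append(total - 1)  # cumulative return for this scenario
    return scenarios

scenarios = simulate_scenarios(historical_crash_returns)
worst_5pct = sorted(scenarios)[len(scenarios) // 20]  # rough 5% tail estimate
print(f"Simulated {len(scenarios)} crisis variations; 5% worst case: {worst_5pct:.1%}")
```

A human trader gets a handful of real crises in a career; a model can rehearse ten thousand synthetic ones before breakfast. That asymmetry is the whole argument.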
And here’s the kicker: AI might be able to handle unknown events better than humans precisely because it lacks emotional intelligence.
When markets panic, humans freeze. They make irrational decisions driven by fear. They sell at the bottom because they can’t handle the psychological pressure. AI doesn’t have that problem. It processes information, evaluates probabilities, and executes without hesitation.
Now, whether that’s a good thing for us is a different question. But from a pure performance standpoint, emotionless decision-making in a crisis might actually be superior.
Emotion vs. Logic: Who Do You Trust When Markets Collapse?
The startup founder at Consensus Hong Kong made a blunt claim: “90% of day traders lose money. As humans we are too emotional. We can’t compete with AI solutions.”
Is he right that emotion is a weakness? Or is human intuition actually an advantage AI can’t replicate?
My instinct is to trust humans, but only because AI is still relatively new to me.
But that’s not a rational argument. That’s tribalism. That’s “I trust my own species over a machine, even if the machine is objectively better.”
And honestly, I get it. There’s something fundamentally uncomfortable about siding with a machine over another human. If we ever reach a point where a technology we believe is smarter than us pushes against what humanity considers the right direction, and I have to choose between my own species and a superior intelligence… I’d go with my gut.
But gut feeling isn’t strategy. It’s emotion. And emotion is exactly what the startup founder says makes humans inferior traders.
When markets panic—when everything is collapsing and no one knows what’s happening—who do you trust more? An AI model trained on decades of data and optimized for probabilistic decision-making, or an experienced human trader who’s just as terrified as everyone else?
Logically, the answer should be AI. But emotionally, most of us would still choose the human.
And that might be our downfall.
What Happens When We’re All Competing Against Machines?
If AI really does replace most human traders and strategists in 3-5 years, what does that mean for the average person trying to make money in crypto or traditional finance?
Are we all just going to be competing against machines we can’t beat?
That’s the existential question. And there are two camps:
Camp 1: Human ingenuity will always find new problems to solve. Human nature—our drive to want more, to create, to explore—will keep us in a loop of innovation. AI might handle trading and strategy, but humans will move on to higher-order challenges we couldn’t even conceive of before AI handled the lower-level tasks.
Camp 2: AI will completely overtake everything. Especially when robotics reaches a prime state, there will literally be no more need for human labor. We become obsolete. The economy runs without us. We’re passengers in a world optimized by machines.
I personally believe we will still find more things to work on.
But I also think the transition will be brutal. A lot of people who built their careers on skills that AI now does better are going to get left behind. The idea that “everyone will just retrain for new jobs” is optimistic to the point of delusion.
What actually happens is this: a small percentage of people adapt, learn to work alongside AI, and thrive. The majority struggle to keep up, lose competitive advantage, and face declining economic relevance.
That’s not dystopian speculation. That’s what happens every time a major technological shift occurs. The printing press. The industrial revolution. The internet. Each one created new opportunities—but not for everyone, and not immediately.
AI will be the same. Except faster.
The Uncomfortable Reality
Here’s what these two predictions are really telling us:
- AI is advancing faster than we can psychologically process. The two-year timeline might be optimistic. We might already be there in certain domains.
- The last human advantage—imagination, strategic thinking, navigating the unknown—is based on probability calculation, which is exactly what AI does better than us.
- AI’s inability to handle black swan events is temporary. Experience teaches, and AI learns from experience faster than humans ever could.
- Emotion makes humans inferior traders in most scenarios, but we still trust humans over machines because of tribalism, not rationality.
- The average person will struggle to compete in markets dominated by AI, and while new opportunities will emerge, the transition will be painful and exclusionary.
This isn’t about whether AI will surpass human strategic thinking. It’s about accepting that it probably already has in many domains, and the rest is just a countdown.
Goertzel says we should enjoy our edge for a couple more years. But I think the edge is already gone. We just haven’t fully realized it yet.
And when we do, the question won’t be “Can we compete with AI?”
The question will be “What do we do when we can’t?”
References & Sources
Primary Source 1: “The human brain’s edge is fading. AI could outhink us in 2 years, Ben Goertzel says” – CoinDesk, Olivier Acuna (Edited by Sheldon Reback)
Primary Source 2: “In unfamiliar market conditions, historical data-driven AI trading bots will falter” – CoinDesk, Ian Allison (Edited by Jamie Crawley)

