Hassabis: AGI Likely Within Five Years — Ten Industrial Revolutions at Ten Times the Speed
Source: 20VC with Harry Stebbings | Published: 2026-04-07
DeepMind predicted in 2010 that AGI was roughly 20 years away. Fifteen years later, Hassabis says they're largely on track — the biggest missing piece is that current systems stop learning once training ends.
In 2010, when almost nobody was working on AI research, DeepMind co-founder Shane Legg wrote a prediction on his blog: AGI would arrive in roughly 20 years. Fifteen years later, Demis Hassabis says they're basically still on track.
AGI's Definition Isn't Mysticism — It's an Engineering Standard
Hassabis's definition of AGI has never changed: a system possessing all the cognitive capabilities of the human brain. He stresses the word "all" — because the brain is the only known proof that general intelligence is feasible. The practical value of this definition is that it gives the team a clear engineering target, not a goalpost that can be moved at will.
His timeline: a "high probability" of achieving it within five years. This isn't a recent call — extrapolating from the compute and algorithmic progress curves since DeepMind's founding in 2010, the 20-year prediction has held up remarkably well.
The Compute Bottleneck Isn't Just "Make the Model Bigger"
When asked about the biggest bottleneck, Hassabis gave a more layered answer than the standard "more compute." Compute matters, of course, but he pointed to a commonly overlooked factor: experimentation itself is extraordinarily compute-intensive. The cloud is a researcher's workbench; a new algorithmic idea must be validated at sufficient scale, or it won't hold up when integrated into the main system. With a large number of researchers generating a large number of new ideas, compute demand far exceeds what model training alone requires.
"The computer, the cloud — that's our workbench. If you have a new idea, you have to test it at a reasonable scale, otherwise it won't hold up when you put it into the main system."
Scaling Laws Haven't Hit a Wall — the Returns Are Just Shifting
On the claim that "scaling laws have plateaued," Hassabis thinks the verdict is too simplistic. Early on, each generation of large models nearly doubled in performance — that kind of exponential growth obviously can't last forever. But diminishing returns don't mean zero returns — he says the gains frontier labs are seeing from scaling compute remain "very substantial."
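The "diminishing but not zero returns" picture can be made concrete with a toy sketch. Published scaling-law work typically fits loss as a power law in training compute; the curve below uses invented constants (a = 10, b = 0.1) purely for illustration, not anyone's actual data:

```python
# Hypothetical illustration of power-law scaling: loss = a * C^(-b).
# The constants a and b are invented for this sketch, not fitted values.
a, b = 10.0, 0.1

def loss(compute: float) -> float:
    """Model loss as a power law in training compute (arbitrary units)."""
    return a * compute ** (-b)

# Each 10x jump in compute multiplies the loss by the same factor,
# 10**(-b) (about 0.794 here), so relative gains persist while
# absolute gains shrink: diminishing returns, not zero returns.
for c in (1e0, 1e1, 1e2, 1e3):
    print(f"compute = {c:8.0e}   loss = {loss(c):.3f}")
```

On such a curve, every tenfold increase in compute buys the same fractional improvement, so each successive generation looks less dramatic in absolute terms even though scaling keeps paying, which is consistent with Hassabis's framing.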
What he cares about more is the trend that follows: once existing approaches are squeezed dry, labs that can invent new algorithms will pull further ahead. Coding tools and math tools are themselves helping build the next generation of systems, creating an accelerating flywheel. The lead held by the three or four top labs is widening.
The Key Missing Pieces in Today's AI
Hassabis laid out a checklist of unsolved problems, and several point to the same issue: these systems stop learning once training ends.
Continual learning is the gap he emphasized most. The human brain elegantly integrates new information into existing knowledge through memory replay during sleep, but no lab has found a reliable way to keep a trained large model learning. Memory systems are another frontier — he sees today's long context windows as essentially brute force, cramming everything in, with more elegant architectures yet to be invented. Long-term planning is also missing; current systems can't do hierarchical planning on multi-year timescales.
He used a vivid phrase: these systems have "jagged intelligence" — ask a question one way and the performance is stunning; rephrase it and they stumble on elementary problems. A truly general intelligence shouldn't have these gaps.
How DeepMind Caught Up
Hassabis was candid about what drove DeepMind's recent acceleration: it wasn't a technical breakthrough — it was organizational change. He believes that over the past decade-plus, roughly 90% of the breakthroughs underpinning the modern AI industry — from AlphaGo to reinforcement learning to Transformers — came out of Google Brain, Google Research, or DeepMind. The talent and technical reserves were always there; they were just scattered across the company.
What changed was consolidating all of that talent behind a single direction and pooling all compute resources to build the largest models, instead of running two or three versions simultaneously inside the company. His exact words: "operate like a startup" — focus, speed, no fragmentation.
Open-Source Models Will Always Trail the Frontier by Half a Step
On open source, Hassabis's take is pragmatic: open-source models will lag roughly six months behind the absolute frontier — that's the time the open-source community needs to reimplement and absorb new ideas. DeepMind will continue releasing results in scientific applications — AlphaFold being the prime example — while shipping best-in-class open-source models through the Gemma series for small developers, academia, and edge computing.
As for the post-LLM world, he disagrees with Yann LeCun. Hassabis believes foundation models won't be replaced, only augmented — future AGI systems will likely use LLMs as a core component, layered with world models and other modules. He puts the odds of needing genuinely new breakthroughs at fifty-fifty.
The Path to Curing Disease Comes in Two Steps
Hassabis points to science and medicine as AGI's greatest positive impact. After AlphaFold, he founded Isomorphic Labs to tackle the entire drug design pipeline beyond protein folding — compound design, toxicity screening, safety property verification. He expects this general-purpose drug design engine to be ready within five to ten years.
But drug design is only step one. Clinical trials still take years. His envisioned step two: once a dozen or so AI-designed drugs have completed the full pipeline, regulators will have enough data to back-test the accuracy of model predictions. Another decade out, it may be possible to skip certain stages — no more animal testing, faster dose escalation — because model predictions will have earned sufficient trust. This is a transformation that takes twenty years to fully unfold, but the path is clear.
AI Safety Needs an "Atomic Energy Agency"
Hassabis worries about two things: bad actors misusing these dual-use technologies, and whether we can maintain control at the technical level as systems become more autonomous. His ideal solution is an organization modeled on the International Atomic Energy Agency, backed by national AI safety institutes providing technical support and setting minimum standards and benchmarks.
He gave a specific example: AI systems should not output tokens that humans can't read — communication in some machine language would introduce new security vulnerabilities. Most leading labs would agree to rules like this; the key is having an independent body to inspect and audit. He also acknowledged the timing is terrible — humanity's most consequential technology is arriving at the most fractured moment in the international order.
Ten Times the Industrial Revolution, at Ten Times the Speed
On labor displacement, Hassabis didn't take Marc Andreessen's line and simply say "history will repeat itself." He acknowledged that every past technological revolution created more and better jobs, but he was equally clear: this time the scale is different. His way of quantifying it: "AGI is the equivalent of ten Industrial Revolutions, unfolding at ten times the speed" — ten years instead of a hundred.
He's read extensively on the Industrial Revolution. Before it, child mortality was 40%; modern medicine is its product — no one would wish it hadn't happened. But this time we should do a better job cushioning the blow. His proposals include pension funds investing in AI companies so everyone shares the upside, sovereign wealth funds, and using AI-driven productivity gains to fund infrastructure.
AI Will Solve Its Own Energy Problem
When asked about AI's energy demands, Hassabis's answer was straightforward: in the medium to long term, AI will more than pay for itself. He outlined three layers: optimizing existing infrastructure (grid efficiency can improve 30–40%), climate and weather modeling to pinpoint problems, and most fundamentally — using AI to drive breakthroughs in nuclear fusion, next-generation batteries, and superconductors. If fusion is achieved, humanity will have near-unlimited energy, and could even produce cheap rocket fuel by electrolyzing seawater.
Staying in London Is a Structural Advantage
Hassabis never moved to Silicon Valley. His reasoning isn't sentimental — it's strategic: the UK has Cambridge, Oxford, Imperial, and UCL, producing world-class graduates and PhDs, but that talent had never been organized into a deep-tech venture ambitious enough to absorb it. That's exactly what DeepMind did, and in London the competition for talent is lower while still attracting the best people from across Europe.
Distance from the Valley has its benefits too. You don't get distracted by the latest hype cycle, and for a project that knew from day one it was a twenty-year bet, that space for deep thinking is valuable. Palmer Luckey has made a similar point about Anduril — being 400 miles from Silicon Valley is a core condition for original thinking.
As for whether Europe can produce a trillion-dollar company, Hassabis says he intends to find out with Isomorphic Labs. But he also identified the real bottleneck: it's not early-stage startup capability — it's growth-stage capital markets. Where do billion-dollar funding rounds come from? That was missing when he raised for DeepMind a decade ago, and it's still missing today. Loosening pension fund investment restrictions for growth-stage companies may be the single highest-leverage policy change.