AI Summary
In a Google I/O fireside chat, DeepMind CEO Demis Hassabis and Google co-founder Sergey Brin discussed the future of AI, from current frontier models to the philosophical implications of Artificial General Intelligence (AGI). They highlighted the importance of both scaling current techniques and pursuing breakthroughs to achieve AGI, emphasizing the power of "test-time compute" for improved AI reasoning. While acknowledging today's AI limitations, they believe AGI is achievable, with Brin noting a "constant leapfrog" among companies in the AI space and Hassabis stressing the critical need for safety in initial AGI systems.
May 22, 2025

In a fireside chat at Google I/O, Demis Hassabis, CEO of Google DeepMind, and Google co-founder Sergey Brin shared their insights on the cutting edge of artificial intelligence. Moderated by Alex Kantrowitz of the Big Technology podcast, the discussion explored everything from the current state of frontier models to philosophical questions about AGI (Artificial General Intelligence) and simulation theory.
The Current State of AI Models
When asked about the potential for improvement in today's frontier models, Hassabis emphasized the remarkable progress being made with existing techniques while acknowledging that reaching AGI may require "one or two more breakthroughs." He advocated for a dual approach: maximizing scale with current techniques while simultaneously investing in innovation that might yield exponential improvements.
Brin agreed, noting that both algorithmic and computational improvements are essential. "Historically, if you look at things like the N-body problem simulating gravitational bodies, the algorithmic advances are probably going to be more significant than the computational advances. But both of them are coming up now."
The Power of Test-Time Compute
The conversation highlighted the significance of "thinking" paradigms in AI systems. Hassabis pointed to DeepMind's history with this approach, dating back to AlphaGo and AlphaZero:
AlphaGo and AlphaZero with thinking turned off... maybe it's like master level or something like that. But if you turn the thinking on, it's way beyond champion level. It's like 600 ELO [higher].
This test-time compute approach—exemplified in the newly announced Deep Think capability—allows models to reason through multiple parallel processes that check each other. As Brin noted, "Most of us get some benefit by thinking before we speak," a capability that makes AI systems "much stronger."
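Deep Think's internals have not been published, but the core idea of spending extra inference-time compute on parallel reasoning paths that check one another can be illustrated with a minimal self-consistency sketch. In the Python below, `query_model` and `answer_with_thinking` are hypothetical placeholders, not Google or DeepMind APIs:

```python
# Minimal sketch of the test-time compute idea: sample several independent
# reasoning paths and let them "check each other" via a majority vote.
# This is an illustrative assumption, not how Deep Think actually works.
from collections import Counter

def query_model(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical LLM call that returns a final answer string."""
    raise NotImplementedError("plug in your model API here")

def answer_with_thinking(prompt: str, num_paths: int = 8) -> str:
    # Spend more compute at inference time: draw several reasoning paths...
    candidates = [query_model(prompt) for _ in range(num_paths)]
    # ...then keep the answer the paths agree on most often.
    answer, _count = Counter(candidates).most_common(1)[0]
    return answer
```

The point of the sketch is only that accuracy can be traded for inference-time compute: more sampled paths generally means a more reliable consensus answer.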
The Path to AGI
When discussing AGI, Hassabis offered a nuanced perspective on what constitutes true artificial general intelligence:
What I would call AGI is more a theoretical construct which is: what is the human brain as an architecture able to do? The human brain is an important reference point because that's the only evidence we have in the Universe that general intelligence is possible.
He emphasized that today's systems, despite their impressive capabilities, still have obvious limitations that can be discovered within minutes of testing. For something to qualify as AGI, Hassabis suggested it would need to be much more consistent across domains—to the point where "it should take a couple of months for maybe a team of experts to find a hole in it."
On whether AGI will be achieved by a single company, Brin acknowledged that one entity would likely reach it first but suggested multiple organizations would follow quickly:
In our AI space, when we make a certain kind of advance, other companies are quick to follow. And vice versa... it's kind of a constant leapfrog. So I think there's an inspiration element that you see. And that would probably encourage more and more entities to cross that threshold.
Hassabis stressed the importance of safety in these first systems: "I think it's important that those first systems are built reliably and safely. And I think after that, if that's the case, we can imagine using them to shard off many systems."
Emotions and Self-Improving AI Systems
An interesting question arose about whether AI needs emotions to be considered AGI. Hassabis suggested that while understanding emotions would be necessary, implementing them would be "sort of almost a design decision." He noted it might not be "necessary or in fact not desirable for them to have this sort of emotional reaction that we do as humans."
The discussion touched on self-improving systems like AlphaEvolve, which helps design better algorithms and improve LLM training. While acknowledging the potential for such systems to accelerate progress, Hassabis cautioned that the real world is "far messier and far more complex" than the game domains where self-improvement has proven successful.
Sergey Brin's Return to Google
When asked why he returned to Google, Brin offered a passionate response about the historic moment in computer science:
As a computer scientist, it's a very unique time in history. Like, honestly, anybody who is a computer scientist should not be retired right now and should be working on AI... There's just never been a greater sort of problem and opportunity, greater cusp of technology.
He described his current role as being "deep in the technical details" of Gemini text models and other AI systems, calling it a luxury to focus on the algorithms while leaders like Hassabis manage the organization.
Google's Vision for AI Products
The conversation touched on Google's approach to AI assistants, which often emphasize visual understanding through cameras rather than just voice interfaces. Hassabis explained this reflects multiple threads coming together:
We've always been interested in agents... We're trying to build AGI which is a full general intelligence. Clearly that would have to understand the physical environment, physical world around you.
He identified two major use cases: a truly useful assistant that can accompany you through your daily life and understand your physical context, and advanced robotics that can finally realize its potential once the software is intelligent enough.
The Simulation Question
In a lighter moment, the discussion turned to whether we might be living in a simulation. Hassabis offered a nuanced view, suggesting that while the underlying physics of our universe is computational in nature, it's not necessarily "just a straightforward simulation."
Brin added a recursive perspective: "If we're in a simulation, then by the same argument, whatever beings are making the simulation are themselves in a simulation for roughly the same reasons, and so on and so forth."
The conversation concluded with both leaders expressing excitement about the transformative potential of AI, which Brin suggested would ultimately have an even greater impact than the web or mobile revolutions.