geeky NEWS: Navigating the New Age of Cutting-Edge Technology in AI, Robotics, Space, and the Latest Tech Gadgets
As a passionate tech blogger and vlogger, I specialize in four exciting areas: AI, robotics, space, and the latest gadgets. Drawing on my extensive experience working at tech giants like Google and Qualcomm, I bring a unique perspective to my coverage. My portfolio combines critical analysis and infectious enthusiasm to keep tech enthusiasts informed and excited about the future of technology innovation.
DeepMind's Demis Hassabis on AI's Future and Why He Won't Rush Into Advertising
AI Summary
Demis Hassabis, the Nobel-winning CEO of Google DeepMind, envisions a future where AI transcends its current hurdles—such as long-term planning and "true creativity"—through recursive self-learning, provided the field can close the gaps in world modeling and verification. While he remains cautious about introducing advertising into AI assistants for fear of eroding user trust, he argues that fierce commercial demand for reliable enterprise tools will, paradoxically, drive better practical safety guardrails on the path to AGI.
January 25, 2026 14:34
Demis Hassabis has spent much of his career thinking about what comes next. As the co-founder and CEO of Google DeepMind, he's been at the forefront of some of artificial intelligence's most dramatic breakthroughs. From AlphaGo's historic victory over a world champion Go player to the 2024 Nobel Prize in Chemistry for AlphaFold's protein structure predictions, Hassabis has earned a reputation for turning theoretical AI concepts into reality.
So when he speaks about the future of AI, people listen. In a recent conversation at Davos with Axios chief technology correspondent Ina Fried, Hassabis laid out his vision for where AI is heading in the next few years and revealed the thorniest questions keeping him up at night.
What AI Can't Do Yet
Today's AI systems are impressive. They can write code, analyze images, and carry on conversations that feel remarkably human. But Hassabis is candid about their limitations.
"There's flaws that still need to be addressed," he said. The systems still struggle with continuous learning, the ability to keep learning after their initial training is complete. They can't handle true long-term planning. And despite generating text and images that can seem creative, they lack what Hassabis calls "true creativity."
These aren't just technical quibbles. They represent fundamental barriers to AI systems becoming genuinely useful as autonomous agents in workplaces or homes. An AI assistant that can't learn from its interactions with you or plan beyond the immediate conversation isn't much of an assistant at all.
The question is whether simply scaling up existing techniques will solve these problems, or whether the field needs one or two major breakthroughs. Hassabis isn't sure. But he's betting DeepMind can figure it out either way.
The Promise and Peril of Self-Learning AI
One of the most intriguing areas Hassabis discussed is recursive self-learning: systems that improve themselves by learning from their own outputs rather than waiting for humans to train new versions every few months.
DeepMind pioneered this approach years ago with AlphaGo and AlphaZero. These systems learned to play Go and chess by playing millions of games against themselves. The results were striking. Starting with just the rules of chess in the morning, an AlphaZero system could surpass a chess master by lunch and reach world champion level by evening. All in less than 24 hours.
"It's quite extraordinary to see something like that improvement curve in real time," Hassabis said.
But games are simple compared to the real world. They have clear rules, defined win conditions, and you can verify whether a move was good or bad. The real world is messy.
Hassabis thinks self-learning could work in specific domains like coding and mathematics because you can verify answers. Write a program, run it, see if it works. Prove a mathematical theorem, check if it's valid. That verification loop is what makes self-improvement possible.
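As a rough illustration of that loop, here is a sketch in the same spirit. The candidate generator below is a hard-coded stub standing in for a model, and nothing about it reflects any lab's actual pipeline; the point is the structure: propose a program, run it against tests, and keep only what the verifier accepts.

```python
import os
import subprocess
import sys
import tempfile

# Sketch of the "write a program, run it, see if it works" loop.
# generate_candidate() is a stub standing in for a model proposing code;
# it and the toy tests are illustrative assumptions only.

TESTS = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"


def generate_candidate(attempt):
    # Stand-in for a model sampling a new program on each attempt;
    # the first proposal is deliberately buggy so the loop has work to do.
    if attempt == 0:
        return "def add(a, b):\n    return a - b\n"      # wrong operator
    return "def add(a, b):\n    return a + b\n"


def passes_tests(candidate_source):
    # The verifier: run the candidate plus its tests in a separate process
    # and treat a clean exit as success.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_source + TESTS)
        path = f.name
    try:
        return subprocess.run([sys.executable, path], capture_output=True).returncode == 0
    finally:
        os.unlink(path)


def improve(max_attempts=5):
    # Close the loop: propose, verify, keep only what passes.
    for attempt in range(max_attempts):
        candidate = generate_candidate(attempt)
        if passes_tests(candidate):
            return candidate
    return None


if __name__ == "__main__":
    print(improve())
```

The whole scheme hinges on the verifier being a trustworthy oracle. In coding and mathematics that check is cheap and unambiguous, which is exactly why Hassabis singles out those domains.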
The catch? Once you close the loop on self-improvement with no human oversight, things can get unpredictable. Capabilities could proliferate in ways you didn't anticipate. Even with coding assistants today, human developers make the final architectural decisions. But what happens when they don't?
The Two Missing Pieces
Hassabis identified two major obstacles preventing self-learning AI from working in the real world today.
First, we don't have good world models. In a game, the rules are fixed. Move a chess piece, and the outcome is deterministic. But in the real world, if you want an AI system to plan a robot's route to a conference center, it needs to simulate countless variables. What if the robot encounters ice? What if there's unexpected traffic? Without a realistic simulation of the world to test plans before executing them, self-learning systems can't safely improve.
Second, even if you have a plan that works, you need to verify it leads to a good outcome. In chess, that's easy: count the pieces, see who won. But most real-world objectives aren't so clear-cut. How do you know if a business strategy was optimal? Or a medical treatment? These are the verification problems that make self-learning AI so challenging beyond games and code.
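A toy way to see both gaps at once: in the sketch below, a crude "world model" simulates a delivery plan under random ice and traffic, and a "verifier" scores each plan by how often it beats a deadline. The scenario, probabilities, and threshold are all invented for illustration; the hard part in reality is that neither the simulator nor the success criterion is anywhere near this clean.

```python
import random

# Toy version of both missing pieces: a crude world model (simulate_route)
# and a verifier (verify_plan). All numbers here are invented for illustration.

def simulate_route(plan, rng):
    # World model: estimate travel time for one imagined run of the plan.
    minutes = plan["base_minutes"]
    if rng.random() < 0.15:                      # icy patch slows the robot
        minutes += 20
    if rng.random() < 0.30:                      # unexpected traffic
        minutes += plan["traffic_penalty"]
    return minutes


def verify_plan(plan, deadline=60, trials=10_000, seed=0):
    # Verifier: here a "good outcome" is simply arriving on time in most
    # simulated worlds. Real objectives are rarely this easy to pin down.
    rng = random.Random(seed)
    on_time = sum(simulate_route(plan, rng) <= deadline for _ in range(trials))
    return on_time / trials


if __name__ == "__main__":
    plans = {
        "highway":  {"base_minutes": 40, "traffic_penalty": 35},
        "backroad": {"base_minutes": 50, "traffic_penalty": 5},
    }
    for name, plan in plans.items():
        print(name, verify_plan(plan))
```

Even in this toy, the hazard probabilities are guesses and "arrived before the deadline" is a stand-in for objectives that real tasks rarely state so crisply, which is why both pieces remain open problems.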
The Advertising Question
When OpenAI recently announced plans to introduce advertising to its products, it sparked intense debate about the future of AI chatbots. The argument for ads is straightforward: they could fund services for people who can't pay, just as they've funded the consumer internet for decades.
But Hassabis is skeptical about moving too quickly.
"I was a little bit surprised they've moved so early into that," he said, noting that while ads can be useful when done well, they introduce a fundamental tension. Today's chatbots present themselves as helpful assistants with a single goal: answering your questions accurately. Introduce advertisers, and you suddenly have two customers with potentially conflicting interests.
"You want to have trust in your assistant," Hassabis said. "So how does that all work?"
He drew a distinction between advertising in search, where user intent is already clear, and advertising in chat-based assistants. Search is straightforward: you're looking for something, and relevant ads can genuinely help. But assistants are different. As they become more powerful and know more about your life, the potential for manipulation grows.
Google DeepMind has no current plans to introduce ads into Gemini, Hassabis said. They're thinking about it, brainstorming different approaches, but there's no rush. "We don't feel any immediate pressure to make knee-jerk decisions," he said, emphasizing that the company wants to be "scientific and rigorous and thoughtful" about each step.
It's a notable position given that his boss, Google CEO Sundar Pichai, runs a company that makes most of its money from advertising. But Hassabis said he feels no pressure from above. The focus remains on building the most useful technology possible.
The Safety Conversation Has Changed
Over the past few years, discussions about AI safety have shifted from theoretical to practical. It's no longer about debating hypothetical scenarios of artificial general intelligence gone wrong. It's about making sure the systems deployed today, used by billions of people, behave appropriately.
"These systems are out in the world, billions of people are using them," Hassabis said. "What are the practical guard rails and how should those work?"
He views this as a valuable learning opportunity. Working out safety measures while the stakes are relatively low prepares companies for when AI systems become more autonomous and the consequences more severe. It's a proving ground for the harder challenges ahead.
But is safety really the guiding factor, or has competitive pressure pushed it to the background? Hassabis acknowledged the competition is "ferocious" with an unprecedented concentration of talent and resources. However, he argued that commercial pressures might actually drive better safety practices.
Enterprise customers, he explained, won't adopt AI systems unless they have guarantees about data security, customer privacy, and predictable behavior. "That demand is going to push the AI frontier labs to be more responsible," he said. These commercial requirements serve as a training run for the higher stakes that come with AGI.
Too Few Companies or Too Many?
One of the central debates in AI concerns industry concentration. Is it better to have a small number of well-resourced companies at the frontier, or should we prefer more competition and diversity?
Hassabis sees pros and cons. The industry appears headed toward having several frontier labs, maybe half a dozen. That's good for consumer choice and competition, which drives prices down and accelerates progress. But it also creates intense commercial pressure.
"We got to all remember, all of us that are leading frontier labs, that there's a bigger picture at stake," he said. Safety and responsibly shepherding AGI into the world should be the overriding priority, even above commercial pressures.
More companies at the frontier could make collaboration on safety standards harder. But Hassabis expressed hope that the industry can find the right balance between competition and cooperation on the issues that matter most.
Are We Ready for What's Coming?
Hassabis doesn't think we're prepared for AGI. Institutions, governments, and businesses are adapting too slowly to the pace of change.
"I don't think we're ready," he said bluntly. "Unfortunately."
The good news, in his view, is that we have a bit more time than some of his peers believe. While others in the industry predict AGI could arrive within a year or two, Hassabis puts the timeline at five to ten years. That's still not much time given how transformative AGI will be.
He hopes that as more agentic, autonomous systems emerge, it will become clearer to the broader world that we need to get organized quickly. That means international cooperation, minimum safety standards, and serious dialogue about economic impacts.
When asked what decision the AI community is most likely to regret five years from now, Hassabis reframed the question. The progress is amazing, he said. AI is advancing materials science, fusion research, quantum computing, and disease treatment. Isomorphic Labs, the DeepMind spinoff Hassabis also leads, is building on AlphaFold to pursue cures for diseases. These are technologies society needs as fast as possible.
"I genuinely believe that's the way we're going to deal with climate change," he said, pointing to AI's potential to design new materials and energy sources.
The challenge is that rapid progress leaves less time to prepare institutions and complete necessary safety work. "It's not just a question of the technology companies doing that on their own," he emphasized. "Society doesn't, you know, that's not enough. It needs to be broader society that's involved in that. It can't just be the technology companies."
A Childhood Lesson
The conversation ended with a glimpse into Hassabis's childhood, revealing something about how he became who he is today. As a young chess prodigy, his parents made an unusual choice. After he won the London Under-8 Championship at age six, they stopped entering him in tournaments he could easily win.
Instead, they put him in competitions with 14-year-olds, then adults. He didn't win many medals. Often he finished second or third. But the constant challenge accelerated his development.
"They were trying to always put me at the most challenging end of what I could deal with," Hassabis recalled.
Looking back, he admitted he wouldn't have minded winning a few more trophies along the way. But the approach worked. Maybe, he suggested, he got a medal in the end that mattered more than those childhood tournaments: the Nobel Prize.