Why the Godmother of AI Says We Are Getting Regulation All Wrong
AI Summary
Fei-Fei Li, the "godmother of AI," advocates for a human-centered approach to AI development, emphasizing its dual nature as a tool with immense potential but also inherent risks like bias and job displacement. She stresses the importance of diversity in the AI field, increased university funding for foundational research, and pragmatic, application-specific regulation. Li believes AI education is crucial for responsible use, urging humanity to intentionally shape AI's future to reflect democratic values and enhance human agency.
May 27 2025 15:18
The computer scientist known as the "godmother of AI" recently received a lifetime achievement award at the Webbys for her groundbreaking work teaching computers to see and understand the world around them. But today, she's more concerned with what humans can't see coming.
"Even when humans discover fire, it could be deadly," Li tells Firing Line Margaret Hoover, drawing a parallel that has become central to her thinking about AI. "So every technology is a double-edged sword."
It's a measured response from someone who has spent decades at the forefront of AI development, watching her field transform from academic curiosity to the technology reshaping everything from healthcare to warfare. As co-founder of Stanford's Institute for Human-Centered AI, Li has become one of the most thoughtful voices calling for a different approach to how we develop and deploy these powerful systems.
The Intelligence Question
Ask Li how intelligent AI really is, and she doesn't give you the typical tech industry hype. "It's very intelligent," she says. "But can it think like humans? I don't think so yet."
This distinction matters more than you might think. While AI systems can now write poetry, diagnose diseases, and beat humans at complex games, Li points out they still lack what she calls "emotional intelligence or uniquely human creativity." It's a gap that shapes how she thinks we should approach AI regulation and development.
The current AI boom has created systems that excel in narrow domains while remaining fundamentally limited in others. Language models can craft convincing prose but struggle with basic reasoning about the physical world. Computer vision systems can identify objects in photos but fail to understand context in ways that would be obvious to a child.
The Bias Problem We Can't Ignore
Li's concerns about AI aren't theoretical. She's seen firsthand how bias creeps into these systems, often reflecting the narrow perspectives of their creators. In her research, she's documented troubling examples:
AI systems mislabeling Black Americans as gorillas in photo recognition software
Self-driving cars that are less likely to detect pedestrians with darker skin
Image generators that produce explicitly racist or sexist content when prompted
"When we designed this technical system, every step of the way, people are involved," Li explains. "So when we invite more people with different backgrounds, their insights, their knowledge, their emotional understanding of the downstream application will impact the input of the system."
The problem runs deeper than individual algorithms. The AI field itself lacks diversity, with few women and people of color in leadership roles. This homogeneity shows up in the data sets used to train AI systems, the problems researchers choose to solve, and the safeguards they think to implement.
Despite making up roughly half the population, women hold only about 20% of AI research positions at major tech companies. The underrepresentation of people of color is even more stark, particularly in senior technical roles.
The Resource Drain Crisis
While tech giants pour billions into AI development, Li sees a troubling trend that few people are talking about: universities are being left behind. "The chips are not in universities. The data are very rarely available in universities. And a lot of talents are going only into industry," she says.
This shift represents more than just a brain drain. Universities have traditionally been where the most fundamental AI research happens, the kind of curiosity-driven exploration that leads to breakthrough discoveries. They're also where the next generation of AI researchers learns to think about the broader implications of their work.
"When students come to the universities and study under the best researchers, getting to labs, go to lectures, that they can glean the latest knowledge, this is a fundamentally a critical thing for our society," Li argues.
The solution, she believes, requires treating AI development like a relay race, where government funding supports basic research in universities, which then gets handed off to industry for product development. Right now, that handoff is happening too early, with companies focusing on immediate commercial applications rather than longer-term scientific understanding.
Regulation Without Stifling Innovation
The policy debate around AI often gets stuck between two extremes: those who want to heavily regulate the technology and those who fear any oversight will kill innovation. Li offers a third path, one she calls "pragmatic governance."
Her approach focuses on specific applications rather than trying to regulate AI as a whole. Just as we have the FDA to oversee drugs and medical devices, Li suggests updating existing regulatory frameworks to account for AI's role in different industries.
This application-specific approach would mean different rules for AI in healthcare, transportation, and finance, tailored to the unique risks and benefits in each domain. It's a more complex regulatory framework, but Li argues it's also more effective than broad restrictions that might hamper beneficial uses of the technology.
The China Challenge
The recent breakthrough by Chinese AI startup DeepSeek, which created a chatbot that outperformed American models at a fraction of the cost, has rattled policymakers in Washington. The development highlights a growing concern: what if authoritarian countries outpace democratic ones in AI development?
Li's response is characteristically thoughtful. "It matters what values we care about," she says. "There's no independent machine values. Machine values are human values."
This framing shifts the competition from purely technical to fundamentally moral. If AI systems reflect the values of their creators, then the question isn't just who builds the most powerful AI, but whose values those systems embody. Democratic societies that prioritize human dignity, individual agency, and transparency will create different AI systems than authoritarian regimes focused on control and surveillance.
The challenge is ensuring that democratic values don't become a competitive disadvantage. Vice President JD Vance has argued that "excessive regulation of the AI sector could kill a transformative industry just as it's taking off." Li agrees that balance is crucial but warns against abandoning ethical considerations in the race to compete.
Kids and AI: A Teaching Moment
Perhaps nowhere is Li's pragmatic approach more evident than in her thinking about children and AI. Google's recent decision to make its Gemini chatbot available to kids has sparked debate about whether AI is ready for young users.
Li, who is herself a mother, sees this as a teaching opportunity rather than a threat. "In general, I think anyone who is a learner, and our kids learn since the beginning of their life, should use AI as a tool," she says.
The key is education. Rather than banning students from using AI tools or worrying about cheating, Li argues we should be teaching responsible use. "If that happens, it's the failure of education, not the failure of the students," she says about concerns over AI-assisted cheating.
Her analogy is characteristically practical: "As mothers, we teach our kids to use fire. Think about the day you teach them how to turn on the stove, right? It's kind of frightening, but we still have to teach them. They have to learn both the utility and the harm of fire."
The Agency Question
The threat to humanity from AI isn't necessarily the technology itself, but our response to it. If we abdicate responsibility for shaping how AI is developed and deployed, we risk losing control over our own future.
Li's vision of human-centered AI isn't just about building better technology. It's about maintaining human agency in a world where machines can increasingly do things we once thought only humans could do. That requires active engagement, not passive acceptance.
Her path forward isn't about slowing down AI development or accepting whatever the market produces. Instead, it's about being intentional. It's about ensuring diverse voices shape the technology. It's about creating regulatory frameworks that protect people without stifling innovation. It's about teaching the next generation to use these tools responsibly.
Most importantly, it's about remembering that AI isn't inevitable in any particular form. The choices we make today about funding, regulation, education, and values will determine what kind of AI future we get.
As Li puts it: "I do believe humanity invents tools by and large with the intention to make life better, make work better. Most tools are invented with that intention, and so is AI."