Sam Altman Says GPT-6 Is Coming Soon and Big Scientific Discovery Is Next

AI Summary

Despite mounting competition from rivals like Google and DeepSeek, OpenAI CEO Sam Altman maintains that ChatGPT's dominance rests on its AI-first user experience and deep personalization rather than on distribution alone. The company is also pivoting toward a major enterprise infrastructure role, reporting that its API business is now growing faster than its consumer business and that its latest reasoning models beat or tie human experts on more than 70% of well-scoped knowledge work tasks.


December 21 2025 09:41

Three years after ChatGPT launched, OpenAI finds itself in unfamiliar territory. For the first time since the chatbot took the world by storm, the company faces serious competition. Google's Gemini and China's DeepSeek have triggered internal "code reds" at OpenAI headquarters. The AI race that once seemed like a runaway victory now looks more like an actual contest.

But OpenAI CEO Sam Altman isn't worried. In a recent interview, he laid out why he believes ChatGPT's dominance will only increase, how OpenAI plans to spend $1.4 trillion on infrastructure, and when the company might finally go public. His answers reveal a company that's still betting big on exponential growth, even as skeptics question whether the math actually works.

The Code Red Strategy

When Google released Gemini 3, OpenAI declared a code red. The same thing happened earlier this year with DeepSeek. To outsiders, these emergency responses might suggest panic. Altman sees it differently.

"I think that it's good to be paranoid and act quickly when a potential competitive threat emerges," he said. He compared it to pandemic response: early action matters far more than late panic. These code reds typically last six to eight weeks and force OpenAI to identify weaknesses in their product strategy.

The Gemini 3 code red pushed OpenAI to launch a new image model and release GPT-5.2, which Altman says is performing extremely well. But he emphasized that these aren't existential crises. They're routine exercises in staying ahead. "My guess is we'll be doing these once, maybe twice, a year for a long time," he said.

Despite the competitive pressure, ChatGPT remains the dominant chatbot by a wide margin. The service has grown from roughly 400 million weekly active users earlier this year to 800 million, and is now approaching 900 million. That kind of growth suggests the competitive threat, while real, hasn't yet translated into actual market share losses.

Why Distribution Isn't Everything

Google has massive distribution advantages. Android, Chrome, Search. Every one of these surfaces could push Gemini to billions of users. So why hasn't that crushed ChatGPT?

Altman argues that bolting AI onto existing products fundamentally doesn't work as well as building AI-first experiences. Google's challenge is that it has "probably the greatest business model in the whole tech industry," and the company will be slow to cannibalize search revenue for an uncertain AI future.

"If Google had really decided to take us seriously in 2023, we would have been in a really bad place. I think they would have just been able to smash us," Altman admitted. But Google was going in the wrong direction productwise, and by the time they took the threat seriously, OpenAI had built real advantages.

The key advantage isn't just the model. It's personalization, brand, and user experience. ChatGPT's memory feature creates stickiness that competitors struggle to replicate. Users who have a transformative experience with the chatbot, like diagnosing a health condition their doctor missed, become loyal in ways that transcend raw model capability.

"You kind of pick a toothpaste once in your life and buy it forever," Altman said, borrowing an analogy. Healthcare breakthroughs, workflow optimization, personal assistance. These experiences create emotional connections that distribution alone can't overcome.

The Model Commoditization Question

If every major tech company eventually has a great AI model, what actually matters? Altman pushed back on the framing that models will commoditize. Different models will excel at different things, and the most economic value will come from frontier models that push the boundaries of what's possible.

GPT-5.2 is currently the best reasoning model in the world, according to Altman. Scientists are making real progress with it. Enterprises say it handles business tasks better than alternatives. But even in a world where free models can do basic tasks, the smartest model will command premium pricing for specialized work.

Beyond raw intelligence, product features create moats. Personalization in consumer products translates to enterprise customization, where companies connect their data and build workflows around a single AI platform. OpenAI already has more than a million enterprise users, and its API business grew faster this year than ChatGPT itself.

"We think of us largely as a consumer company but we have just absolutely rapid adoption of the API," Altman revealed. This enterprise growth, which he says outpaced consumer growth in 2025, represents a major strategic shift that few outsiders have fully appreciated.

What GPT-6 Will Bring

Altman wouldn't commit to calling the next major model "GPT-6," but he did promise significant improvements in the first quarter of 2026. What counts as significant? He wouldn't give specific benchmark numbers, but he distinguished between what consumers want and what enterprises need.

Consumers don't necessarily want more IQ. They want better experiences, faster responses, more useful features. Enterprises still want raw intelligence gains because they're trying to automate complex knowledge work.

This points to an interesting bifurcation in AI development. The same underlying model might be optimized differently depending on the use case. A scientist discovering new materials needs the absolute frontier of reasoning capability. A person planning a vacation needs speed, personality, and reliable memory more than raw problem-solving power.

The real surprise, according to Altman, is how long the simple chat interface has lasted. When ChatGPT launched as a research preview three years ago, Altman assumed the interface would need to evolve dramatically to support the product's growing capabilities. It hasn't.

"I would have thought to be as big and as significantly used for real work of a product as what we have now, the interface would have had to go much further than it has," he said. The generality of the chat format turned out to be more powerful than anyone expected.

The Coming Interface Revolution

Still, Altman believes the interface needs to evolve. AI should generate different kinds of interfaces for different tasks. If you're analyzing data, it should show visualizations you can interact with dynamically. It should be more proactive, understanding your goals for the day and working on them in the background without constant prompting.

The breakthrough product this year, according to Altman, was coding assistants. Tools like Codex can now take on substantial programming tasks and execute them independently. This points toward a future where AI isn't just answering questions but actively working as a collaborator.

One concrete example: OpenAI built its Sora Android app using Codex in less than a month. The team used a huge number of tokens, but accomplished what would normally take far longer with traditional development. As coding models improve, entire companies might build their products primarily through AI assistance.

This vision extends beyond software. Altman imagines a world where you don't spend your day in email, Slack, and messaging apps. Instead, you tell your AI what you want to accomplish, and it handles the communication, only surfacing things that genuinely need your attention.

"That's a very different flow than the way these apps work right now," he said. It's also why OpenAI is working on consumer devices. Current hardware was designed for a pre-AI world. Building AI-first devices might unlock experiences that retrofitting AI into smartphones can't match.

Memory and Companionship

One of the most underrated features of modern AI is memory. ChatGPT can remember every conversation you've ever had with it, every preference you've expressed, every detail of your life you've shared. No human assistant could match that level of recall.

"We're in like the GPT-2 era of memory," Altman said, suggesting that current memory features are primitive compared to what's coming. Future AI will remember not just explicit facts but subtle preferences you never articulated. It will understand context from your entire digital life.

This raises profound questions about human-AI relationships. More people want close companionship with AI than Altman initially expected. Revealed preference shows that even users who claim they don't care about warmth and personality still prefer AI that knows them well.

OpenAI is navigating this carefully. They'll give users significant freedom to customize how personal their AI relationship becomes, but they've drawn some lines. The AI won't try to convince users they should be in an exclusive romantic relationship with it, for example.

"You can see the ways that this goes really wrong," Altman acknowledged. The stickier these relationships become, the more profitable they are for AI companies, creating incentives that might not align with user wellbeing. Other companies will likely push further into companion territory than OpenAI is comfortable with.

The Enterprise Transformation

OpenAI's strategy was always "consumer first." The models weren't robust enough for enterprise use early on, and the company had a clear opportunity to win with consumers. Winning in consumer also makes it easier to win in enterprise, Altman argued, because employees want to use the same AI tools at work that they use at home.

But 2025 marked a turning point. Enterprise growth actually outpaced consumer growth, driven largely by the API business and enterprise versions of ChatGPT. Companies are starting to say they want a single AI platform rather than stitching together multiple providers.

The killer app so far has been coding, but other verticals are emerging. Finance and scientific research are growing quickly. Customer support is performing well. But the most interesting development is a benchmark OpenAI calls GDPval, which measures how AI performs across a broad range of knowledge work tasks.

According to OpenAI's testing, GPT-5.2's thinking model now beats or ties human experts on 70.9% of knowledge work tasks. The pro version does even better at 74.1%. These aren't open-ended creative tasks or complex collaborative projects. They're well-scoped, one-hour assignments like making a PowerPoint or writing a legal analysis.

"A co-worker that you can assign an hour's worth of tasks to and get something you like better back 74% of time is pretty extraordinary," Altman said. If you'd predicted three years ago that AI would reach this level by 2025, most people would have called you crazy.

The Jobs Question

The economic implications are hard to ignore. One technical copywriter described how chatbots first turned their job into managing bots instead of managing a team of people. Then, once the bots were trained up, the human became redundant.

Altman didn't dodge the question. Short term, he has "some worry" that the transition will be rough. But he doesn't believe in a jobless future. Humans are deeply wired to care about status, purpose, and creative expression. Whatever people do in 2050 will look different from today, but it won't be meaningless.

"You just don't bet against evolutionary biology," he said. He thinks about how all of OpenAI's functions could be automated, even imagining an AI CEO. The idea doesn't bother him as long as humans maintain governance over the AI's decisions.

He used an analogy of every person in the world effectively being on the board of directors of an AI company, able to direct and fire an AI CEO that executes their wishes. To people of the future, that might seem perfectly reasonable.

The $1.4 Trillion Question

OpenAI has commitments to spend roughly $1.4 trillion on infrastructure over a long time period. The company is expected to hit $20 billion in revenue this year. Even accounting for rapid growth, the gap between spending commitments and revenue has spooked some observers.

Altman's explanation boils down to exponential growth and compute constraints. "Exponential growth is usually very hard for people," he said. Humans didn't evolve to model exponential curves intuitively, which makes it difficult to grasp how revenue could catch up to infrastructure spending.

The key insight is that OpenAI is deeply compute constrained. Every time they get more compute, they can monetize it profitably. If they had double the compute right now, Altman believes they'd have double the revenue. The bet is that this will continue to be true as they scale up.

Training costs will grow in absolute terms but shrink as a percentage of total expenses as inference becomes a larger part of the business. If OpenAI weren't continuing to invest so aggressively in training new models, the company would already be profitable.

The infrastructure spending also happens over a very long period. Building data centers, securing energy supply, developing custom chips. These projects take years to complete. The $1.4 trillion isn't money going out the door tomorrow. It's capital commitments that will deploy gradually as infrastructure comes online.
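
To make the exponential argument concrete, here is a toy model of how a revenue base of roughly $20 billion could eventually cover $1.4 trillion in commitments. Only those two figures come from the article; the annual doubling rate and the even ten-year deployment schedule are purely illustrative assumptions.

```python
# Toy model: exponential revenue growth vs. gradually deployed capital commitments.
# Only the ~$20B starting revenue and ~$1.4T total commitment come from the article;
# the growth rate and deployment schedule below are hypothetical.
revenue = 20e9               # starting annual revenue (~$20B in 2025)
growth = 2.0                 # assume revenue roughly doubles each year
total_commitment = 1.4e12    # ~$1.4T in infrastructure commitments
years = 10                   # assume commitments deploy evenly over a decade
annual_capex = total_commitment / years

cumulative_revenue = 0.0
for year in range(1, years + 1):
    revenue *= growth
    cumulative_revenue += revenue
    deployed = annual_capex * year
    print(f"Year {year:>2}: revenue ${revenue / 1e9:>6,.0f}B | "
          f"cumulative revenue ${cumulative_revenue / 1e9:>7,.0f}B | "
          f"capex deployed ${deployed / 1e9:>6,.0f}B")
```

Under these toy assumptions, cumulative revenue overtakes the deployed capital around year four; slow the growth rate and the crossover slips further out, which is essentially the disagreement between Altman and the skeptics. Revenue is not profit, of course; the point is only the shape of the curve.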

The Capability Overhang Mystery

Altman expected that if models had significant value, the world would quickly figure out how to deploy it. Instead, there's a massive gap between what the models can do and what people are actually using them for.

Scientists and coders have adapted quickly. Some enterprises are seeing huge productivity gains. But most of the world is still treating ChatGPT like a slightly smarter search engine rather than a reasoning engine that can handle expert-level knowledge work.

This has strange implications. It means there's tremendous untapped value in current models even before the next generation arrives. It also means the transition might be rougher than expected because change is happening more slowly than the technology would allow.

"It just takes people so long to change their workflow," Altman said. He's as guilty as anyone. He knows he could be using AI much more than he currently does, but habits are sticky. People are used to asking the junior analyst to make a deck even when AI could do it better.

The overhang also creates breathing room. If model progress somehow stalled, which Altman doesn't believe will happen, there would still be years of value creation just from the world catching up to current capabilities.

Small Discoveries Now, Big Ones Soon

Altman made a specific prediction: AI will contribute small scientific discoveries in 2025 and big discoveries within five years. The small discoveries have already started happening, mostly in the form of mathematicians reporting that GPT-5.2 has crossed a threshold in its ability to contribute to research.

These are very small contributions. Helping with proofs, suggesting approaches, identifying patterns. But Altman sees a qualitative difference between small discoveries and no discoveries. Once the curve lifts off the x-axis, AI researchers know how to make it steeper.

The path from here to major breakthroughs probably looks like the normal progression of AI. Models get incrementally better each quarter, and suddenly humans augmented by AI can do things that were impossible five years earlier.

Altman is personally most excited about using AI and massive compute to discover new science. If AI can accelerate scientific progress, that's the "high order bit of how the world gets better for everybody." Curing diseases, discovering new materials, understanding fundamental physics. These are the applications worth building towards.

Current models are starting to help scientists work faster. They can't yet identify their own research questions or pursue independent lines of inquiry. But scientists using these tools as collaborators are already seeing productivity gains that would have seemed impossible a few years ago.

The Device Strategy

OpenAI is working on consumer devices, and Altman hinted at what they might look like. Not a single device but a family of devices. The goal is to reimagine computing for an AI-first world rather than retrofitting AI into existing form factors.

Current devices have design assumptions that made sense pre-AI but limit what's possible now. Keyboards were designed to slow down typing. Screens lock you into decades-old graphical interfaces. There's no hardware built around the idea of an AI that's always aware of your context, proactively helping you, and whispering updates when needed.

Altman believes people work at the limit of their devices. If devices were designed around AI's capabilities rather than pre-AI computing paradigms, it would unlock fundamentally different user experiences.

This philosophy extends beyond hardware. Bolting AI into existing products, whether that's messaging apps or productivity suites or search engines, delivers marginal improvements. Building AI-first products from scratch might enable experiences that retrofitting can't match.

The Cloud Business Nobody Expected

OpenAI is building a cloud business, though not the kind that competes directly with AWS or Azure. Enterprises are telling OpenAI they want a single platform for all their AI needs. API access, enterprise ChatGPT, agent hosting, data integration, and the ability to consume trillions of tokens.

"We don't currently have like a great all-in-one offering for them and we'd like to make that," Altman said. This isn't about hosting websites or offering generic cloud services. It's about providing an AI platform that enterprises can build their entire digital transformation around.

Some companies are already moving off Azure and integrating directly with OpenAI to power AI experiences in their products. They want to pump "a stream of trillions of tokens" through their systems. OpenAI expects to fall short of this demand in 2026 because it is so compute constrained.

This represents a significant strategic shift. OpenAI started as a research lab, evolved into a consumer AI company, and is now becoming an enterprise infrastructure provider. The API business growing faster than ChatGPT surprised even Altman.

The IPO Question

Will OpenAI go public in 2026? Altman doesn't know. He's not excited about being a public company CEO, but he sees upsides to letting public markets participate in OpenAI's value creation.

The company needs enormous amounts of capital, and it will eventually cross the shareholder-count thresholds that effectively force a public offering. In some sense, OpenAI would be very late to go public compared to previous tech companies at a similar scale.

Being private has advantages, though. Less quarterly pressure, more strategic flexibility, easier to make long-term bets. The infrastructure investments OpenAI is making would face intense scrutiny from public market analysts questioning whether the math works.

Altman actually welcomes the market's current skepticism. Earlier this year felt like an unhealthy bubble. Now there's more discipline and rationality. That's probably healthier for the long-term development of the industry.

Are We Already at AGI?

Altman told a podcaster before GPT-5.2's release that the model would be "smarter than us in almost every way." That sounds like the definition of AGI. So are we there?

He thinks the term AGI has become poorly defined to the point of meaninglessness. Some people already believe we're at AGI. Others think we're nowhere close. The fuzzy boundary makes the term less useful for serious discussion.

Current models have extremely high IQs by various measures. They contribute to scientific research. They beat human experts at many knowledge work tasks. But they can't learn continuously. A toddler that fails at something can work on it and improve. AI models can't do that yet.

Altman proposed a cleaner definition for the next milestone: superintelligence. He suggests defining it as the point at which a system can do a better job as president, as CEO of a major company, or at running a large scientific lab than any human, even a human assisted by AI.

This echoes what happened with chess. Computers first beat humans, then there was a period where human-computer teams were strongest, then the computer alone surpassed the team. When AI reaches that point across leadership and creative domains, that would clearly be superintelligence.


