OpenAI Adopts Anthropic's Model Context Protocol (MCP) for Enhanced Connectivity

Updated: March 29, 2025, 10:27

OpenAI has announced it will implement Anthropic's Model Context Protocol (MCP) across its product line, marking a noteworthy collaboration between the two AI rivals. This adoption signals a growing consensus around how AI models should interact with external data sources, potentially accelerating the development of more capable AI assistants. OpenAI CEO Sam Altman also shared the news in a post on X (formerly Twitter).


The implementation will roll out in phases, with immediate availability in the Agents SDK, followed by support for the ChatGPT desktop app and Responses API.

For developers, this means they can now build MCP servers that will work seamlessly with both Anthropic's Claude and OpenAI's ChatGPT, simplifying the development process and expanding the potential user base for their tools.

While OpenAI has promised more details about its MCP plans in the coming months, the initial announcement suggests a commitment to making the protocol a core part of its AI ecosystem.

New ChatGPT Advanced Voice Mode and instruction-following updates

OpenAI also announced new updates for ChatGPT's Advanced Voice Mode, where post-training researcher Manuka Stratta explains that the model will no longer interrupt during pauses. It is also better at following detailed instructions, especially prompts containing multiple requests, handles complex technical and coding problems more capably, and shows improved intuition and creativity, with fewer emojis.

“The model interrupts you less, giving you more time to gather your thoughts without feeling pressured to fill every silence,” Stratta says.

To demonstrate, Stratta engages in a conversation with ChatGPT, intentionally pausing in unusual places. The model adapts, waiting for her to finish before responding. See the video below for more details.



What is the Model Context Protocol (MCP)?


MCP is an open protocol created by Anthropic that standardizes how applications provide context to Large Language Models (LLMs). Think of it as a standardized interface that allows AI models to interact with various data sources, tools, and applications in a consistent way.

The protocol enables developers to build two-way connections between data sources and AI-powered applications. This means AI models can more effectively draw data from business tools, software, content repositories, and development environments to complete tasks with greater relevance and accuracy.

Before MCP, AI models like ChatGPT and Claude faced a fundamental limitation: they were effectively isolated from the systems where useful data resides. Even as these models became more sophisticated in their reasoning abilities, they remained cut off from databases, content repositories, and development environments that contain critical context for many tasks.

MCP addresses this problem by providing a universal standard for connecting AI systems with data sources. Instead of fragmented integrations that require custom implementations for each new data source, developers can use a single protocol to give AI assistants access to the information they need.

This standardization makes it much easier for developers to build applications that leverage AI capabilities while maintaining access to their existing data infrastructure. It also enables more complex workflows where AI models can maintain context as they move between different tools and datasets.

How MCP Works


MCP operates through two main components:

  • MCP Servers: Expose data through standardized interfaces
  • MCP Clients: Applications and workflows that connect to those servers

Developers can deploy MCP servers in two ways:

  • STDIO servers: Run as subprocesses of the host application, executing locally (a minimal server sketch follows this list)
  • HTTP over SSE servers: Run remotely and connect via URLs
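
To make the server side concrete, here is a minimal sketch of a STDIO-style MCP server written against Anthropic's reference Python SDK (the `mcp` package and its `FastMCP` helper). The tool name and its logic are illustrative assumptions, not part of the protocol itself.

```python
# pip install mcp  (Anthropic's reference Python SDK)
from mcp.server.fastmcp import FastMCP

# Create a named MCP server; the name is what connecting clients see.
mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a (hypothetical) forecast for the given city."""
    # A real server would call out to a weather API or database here.
    return f"Forecast for {city}: sunny, 22°C"

if __name__ == "__main__":
    # Runs over STDIO by default, so a client can launch it as a subprocess.
    mcp.run()
```

A client such as Claude Desktop (or, now, the Agents SDK) launches this script as a subprocess, discovers the `get_forecast` tool over the protocol, and lets the model invoke it when relevant.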

The Agents SDK from OpenAI now has built-in support for MCP, allowing developers to leverage a wide range of MCP servers to provide tools to their AI agents.
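
Below is a rough sketch of what that looks like with the `openai-agents` Python package: the agent is handed an MCP server (launched here over STDIO), and the server's tools become available to the model. The server command, script name, and prompt are placeholders, and exact class names may differ between SDK versions, so treat this as illustrative rather than canonical.

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio  # MCP support in the OpenAI Agents SDK

async def main() -> None:
    # Launch an MCP server as a subprocess; the command is a placeholder for
    # whatever server you want to expose (filesystem, git, database, ...).
    async with MCPServerStdio(
        params={"command": "python", "args": ["weather_server.py"]}
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the available tools to answer questions.",
            mcp_servers=[server],  # tools from the server are offered to the model
        )
        result = await Runner.run(agent, "What's the forecast for Oslo?")
        print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```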

Industry Adoption of MCP

The industry response to MCP has been largely positive. Since Anthropic open-sourced the protocol, numerous companies have integrated it into their platforms, including Block, Apollo, Replit, Codeium, and Sourcegraph. This broad adoption suggests MCP is well on its way to becoming the de facto standard for connecting AI models to external data sources.

Anthropic's chief product officer, Mike Krieger, welcomed OpenAI's adoption in an X post.


Developer Reactions and Security Concerns

While many developers have welcomed MCP as a major step forward for AI interoperability, some have raised concerns about its complexity, particularly for remote HTTP servers. As one commenter on Hacker News noted: "MCP is just too complex for what it is supposed to do... A simpler HTTP-based OpenAPI service would have been a lot better and it is already well supported in all frameworks."

One developer who implemented MCP integration shared a positive experience: "I have a little 'software team in a box' tool... v2, I wiped that out and have a simple git, github and architect MCP protocol written up. Now I can have Claude as a sort of mastermind, and just tell it 'here are all the things you can do, please'. It wipes out most of the custom workflow coding and lets me just tell Claude what I'd look to do."

Echoing that criticism, other developers have suggested MCP may be overengineered for simpler use cases, especially remote HTTP connections, arguing that traditional REST APIs with OpenAPI specifications could accomplish many of the same goals with less complexity.

Several developers have highlighted security concerns, with one noting: "This sounds like a security nightmare." Another pointed out that "You're putting a LOT of trust in both the model and the system prompt when you start attaching MCPs that provide unfettered access to your file system, or connect up to your REST API's POST endpoints."

To address these concerns, implementations like Claude Code include permission systems where the AI must request approval before taking actions. This allows users to maintain control while still benefiting from the AI's capabilities.
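
The exact mechanism is implementation-specific, but the idea can be sketched as a thin approval gate sitting between the model's tool request and its execution. The function names below are hypothetical and not part of Claude Code or MCP.

```python
from typing import Any, Callable

def approval_gate(tool_name: str, tool_fn: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool so every invocation is shown to the user before it runs."""
    def guarded(**kwargs: Any) -> Any:
        print(f"The assistant wants to call {tool_name} with {kwargs!r}")
        if input("Allow? [y/N] ").strip().lower() != "y":
            # The denial is returned to the model as an ordinary result, not an
            # exception, so it can explain the refusal or try another approach.
            return {"error": "user denied the request"}
        return tool_fn(**kwargs)
    return guarded

# Hypothetical usage: gate a destructive filesystem tool before exposing it.
def delete_file(path: str) -> dict:
    # Real deletion logic would go here; this stub just reports what it would do.
    return {"deleted": path}

safe_delete = approval_gate("delete_file", delete_file)
```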

The Future of AI Connectivity

OpenAI's adoption of MCP represents a significant step toward standardizing how AI models interact with the external world. This collaboration between competitors suggests a recognition that some degree of interoperability benefits the entire AI ecosystem.

As more companies implement MCP, we can expect to see more sophisticated AI applications that seamlessly integrate with existing systems. This integration will likely be crucial for AI to deliver on its promise of enhancing productivity across various domains.

However, the protocol is still evolving, and there may be further refinements based on developer feedback and practical experience. The ongoing challenge will be balancing power and flexibility with security and simplicity.

GitHub: Model Context Protocol open source repo
