The four companies that have dominated the AI race for the past three years just did something unusual: they decided to cooperate.
Microsoft, Google, OpenAI, and Anthropic have announced the formation of the Agentic Artificial Intelligence Foundation, a new body hosted under the Linux Foundation. The goal is to develop open-source tools and shared standards specifically for AI agents, the next wave of AI that can take actions, make decisions, and operate across multiple systems rather than just responding to prompts.
Key contributions to the foundation include Anthropic’s Model Context Protocol (MCP), which standardises how AI agents connect to external applications and data sources. OpenAI contributed Agents.md, a specification for coding agent instructions. Block, the fintech company behind Square and Cash App, contributed its open-source agent framework Goose.
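For context on what the Agents.md contribution actually looks like in practice: it is a plain markdown file placed in a repository to give coding agents project-specific instructions. A hypothetical example is sketched below; the project, commands, and conventions are invented for illustration, and the real specification does not mandate any particular section names.

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install` (hypothetical project)

## Code style
- TypeScript strict mode is enabled; run `npm run lint` before committing

## Testing
- Run `npm test`; all tests must pass before proposing a change
```

Because it is ordinary markdown rather than a vendor-specific config format, the same file can in principle be read by any coding agent that adopts the spec.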
This is a rare moment of aligned self-interest from companies that otherwise compete aggressively on every front.
Why Open Standards for AI Agents Matter
For the past year, AI agent development has been fragmented. Every major vendor has built its own protocols, its own integration patterns, and its own ways of connecting agents to the applications businesses already use. For companies trying to deploy AI agents, this has created real friction: which standard do you build to? What happens when your vendors use incompatible approaches?
The model has worked before. Neutral standards bodies are how the internet standardised on HTTP and TCP/IP, and the Linux Foundation itself is how the enterprise software world standardised on containerisation through Kubernetes. Open governance created a foundation stable enough for trillion-dollar companies to build on.
If MCP and the other contributed protocols become the industry standard for agentic AI, it has two major practical effects. First, AI agents from different vendors can interoperate, meaning a business can use the best agent for each job rather than being locked into one vendor’s full stack. Second, the integration work for connecting agents to existing business software becomes a solved problem rather than a bespoke engineering challenge every time.
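To make the second point concrete: MCP is built on JSON-RPC 2.0, so a tool invocation looks the same on the wire regardless of which vendor's agent sends it. The sketch below shows the shape of an MCP `tools/call` request; the tool name `get_invoice` and its arguments are hypothetical examples, not part of the protocol.

```python
import json

# MCP messages are JSON-RPC 2.0. This is the standard shape of a
# "tools/call" request; "get_invoice" and its arguments are invented
# placeholders for a tool an MCP server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_invoice",                    # tool exposed by the server
        "arguments": {"invoice_id": "INV-1042"},  # tool-specific inputs
    },
}

# Serialise for transport (stdio or HTTP, depending on the server).
wire_message = json.dumps(request)
print(wire_message)
```

Because every compliant agent and server speaks this same envelope, swapping the agent on one side of the connection does not require rebuilding the integration on the other.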
For businesses evaluating AI agent deployment right now, this is news worth tracking.
The Vendor Lock-In Conversation Changes
One of the most common objections Enterprise DNA hears when businesses are evaluating AI agents is the lock-in concern. “What if we build everything on one vendor and they get acquired, pivot, or triple their prices in three years?”
It is a legitimate concern. Every enterprise software deployment in history has had to navigate this question, and the solutions have ranged from expensive middleware to painful migrations.
Open standards do not eliminate vendor lock-in entirely, but they change the risk profile considerably. If the underlying protocols are standardised and open, the switching cost for the application layer drops dramatically. You can move from one agent framework to another without rebuilding every integration from scratch.
The fact that Anthropic specifically contributed MCP is notable. MCP is already in active production use and well-regarded among developers who have worked with it. Having it become an industry standard rather than an Anthropic-proprietary specification is a meaningful commitment.
What This Signals About Where AI Agents Are Heading
The formation of a formal standards body is not something that happens in early, experimental markets. Standards bodies emerge when a technology has matured enough that multiple major players believe they will all benefit more from a stable common base than from continuing to compete on the foundation layer.
This move signals the industry’s confidence that AI agents are not a passing trend. The biggest players are acting as if agent-based workflows will be a permanent part of business infrastructure, in the same way cloud infrastructure and API-based software became permanent.
For businesses still in a wait-and-see posture on AI agents, this is a credible signal that the floor is being laid. Waiting another 12 months to evaluate means watching competitors who deployed early build operational advantages that compound.
What This Means for Business
If you are currently evaluating AI agents for your business, this announcement is relevant in three ways.
Reduced fragmentation risk. The major vendors are aligning on shared protocols. The risk that you build on a standard that becomes obsolete in two years just got smaller.
Better integration options ahead. As MCP and the other contributed standards get adopted, connecting AI agents to your existing software stack will become progressively easier. The early implementations are already good; the mature ecosystem will be better.
The time to build is now, not later. Open standards accelerate adoption curves. When interoperability becomes easy, deployment barriers fall. The businesses that have already built operational experience with AI agents will have a significant head start when the ecosystem reaches full maturity.
Enterprise DNA’s Omni Ops service is built around agent deployments that integrate with your existing business tools. If you want to understand what an agent workforce could look like in your specific business, book a session with our team.
Source
Tom's Hardware