MCP adoption moved faster than anyone expected. What started as an Anthropic open standard is now the de facto protocol for agent-to-tool communication across OpenAI, Google, Microsoft, and most major frameworks. The question has shifted from “should we support MCP” to “how do we run it at enterprise scale.”
Google Cloud launched managed remote MCP servers in December 2025, with Apigee providing the governance layer.
The Tool Problem for AI Agents
An AI agent that can only reason without acting is not very useful in enterprise applications. To do anything meaningful, an agent needs to call external systems: look up a customer record, trigger a workflow, query a database, send a notification. These are tool calls. The agent needs to know what tools exist, what each one does, what parameters it takes, and what it returns.
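Concretely, the MCP specification describes each tool by a name, a human-readable description, and a JSON Schema for its parameters. A minimal sketch of what a customer-lookup tool descriptor could look like (the tool name and fields are hypothetical, but the `name`/`description`/`inputSchema` shape follows the spec):

```python
# Hypothetical MCP tool descriptor: a name, a description the model can read,
# and a JSON Schema ("inputSchema") declaring the expected parameters.
lookup_customer_tool = {
    "name": "lookup_customer",  # hypothetical tool name
    "description": "Fetch a customer record by ID from the CRM.",
    "inputSchema": {  # standard JSON Schema
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "CRM customer ID"},
        },
        "required": ["customer_id"],
    },
}

# An agent reads this descriptor to learn what the tool does and what
# arguments it expects before ever calling it.
print(lookup_customer_tool["name"])
```

The descriptor is the contract: the agent never needs framework-specific glue to know how to call the tool.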
Before MCP, every agent framework handled this differently. OpenAI had its function-calling format. Anthropic had tool use. LangChain had its own abstraction. Developers had to write integration code for each framework separately, and when they switched models, they rewrote it.
MCP standardizes all of that. An MCP-compatible agent can discover available tools from any MCP server, read their schemas, and call them without hardcoding every integration. It is a standardization moment comparable to what REST brought to web APIs two decades ago, and it is happening fast.
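Under the hood, MCP rides on JSON-RPC 2.0, and the discovery-then-call pattern boils down to two message types: `tools/list` and `tools/call`. A sketch of both exchanges as plain wire messages (the tool name and arguments are illustrative):

```python
import json

# Discovery: the agent asks the server what tools exist (MCP "tools/list").
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: the agent calls a discovered tool by name (MCP "tools/call").
# "lookup_customer" and its arguments are hypothetical.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",
        "arguments": {"customer_id": "C-1042"},
    },
}

# On the wire these are ordinary JSON-RPC messages; any MCP-compatible
# agent can send them to any MCP server, regardless of framework.
print(json.dumps(call_request, indent=2))
```

That protocol-level simplicity is exactly why switching models no longer means rewriting integrations.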
What Google Cloud Actually Built
The managed MCP server offering does something concrete: it takes existing REST APIs and surfaces them as MCP tools without requiring any API changes. Apigee handles the REST-to-MCP transcoding automatically. You configure an Apigee MCP proxy, point it at your existing APIs, and they become tools that any MCP-compatible agent can discover and call.
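The transcoding idea is mechanical: a REST operation already carries a name, a summary, and a parameter schema, and those map directly onto an MCP tool descriptor. A hypothetical sketch of that mapping over a simplified OpenAPI-like operation (this illustrates the concept, not Apigee's actual implementation):

```python
def rest_to_mcp_tool(operation: dict) -> dict:
    """Map a simplified OpenAPI-like REST operation to an MCP tool descriptor.
    Illustrative sketch only -- not Apigee's transcoding logic."""
    return {
        "name": operation["operationId"],
        "description": operation["summary"],
        "inputSchema": {
            "type": "object",
            "properties": {
                p["name"]: {"type": p["type"]} for p in operation["parameters"]
            },
            "required": [
                p["name"] for p in operation["parameters"] if p.get("required")
            ],
        },
    }


# Hypothetical existing REST operation; the API itself is untouched.
op = {
    "operationId": "getCustomer",
    "summary": "Fetch a customer record by ID.",
    "parameters": [{"name": "customerId", "type": "string", "required": True}],
}
tool = rest_to_mcp_tool(op)
print(tool["name"])
```

Because the mapping is this direct, an existing API catalog becomes a tool catalog without any backend changes.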
The governance layer is what makes this enterprise-ready. Every MCP tool call routes through Apigee, which applies the same policies as any other API: OAuth 2.0 authorization, rate limiting, token quotas, and full audit logging. An agent cannot call a tool it is not authorized to call. A runaway agent hitting the same tool call repeatedly will hit rate limits and stop. Every call logs to Cloud Logging, and compliance documentation generates automatically.
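Conceptually, the gateway sits in front of every tool call and applies policy before anything reaches the backend. A toy sketch of that enforcement loop, with hypothetical agents, tools, and limits (this models the idea, not Apigee's implementation):

```python
from collections import defaultdict


class GatewaySketch:
    """Toy model of gateway-side governance: authorization check, then rate limit."""

    def __init__(self, rate_limit: int):
        self.rate_limit = rate_limit      # max calls per agent per window
        self.calls = defaultdict(int)     # calls observed this window
        self.grants = defaultdict(set)    # agent -> tools it may call

    def authorize(self, agent: str, tool: str) -> None:
        self.grants[agent].add(tool)

    def handle(self, agent: str, tool: str) -> str:
        if tool not in self.grants[agent]:
            return "403 Forbidden"            # agent not authorized for this tool
        self.calls[agent] += 1
        if self.calls[agent] > self.rate_limit:
            return "429 Too Many Requests"    # runaway agent hits the limit
        return "200 OK"                       # forwarded to the backend API


gw = GatewaySketch(rate_limit=3)
gw.authorize("agent-a", "lookup_customer")
print(gw.handle("agent-a", "lookup_customer"))  # authorized and under limit
print(gw.handle("agent-b", "lookup_customer"))  # never authorized
```

The point is that none of this logic lives in the agent: governance is infrastructure, applied uniformly to every tool call.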
On top of that, Apigee API Hub registers all MCP proxies as a searchable tool catalog. Agents discover available tools at runtime rather than from a fixed hardcoded list. As the catalog grows, dynamic discovery scales in ways that static configuration simply cannot.
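The difference between a static tool list and a searchable catalog can be sketched in a few lines; the catalog entries and search logic here are purely illustrative:

```python
# Toy model of runtime tool discovery against a searchable catalog,
# rather than a hardcoded tool list. Entries are hypothetical.
CATALOG = [
    {"name": "lookup_customer", "description": "Fetch a customer record by ID."},
    {"name": "send_notification", "description": "Send a notification to a user."},
    {"name": "run_billing_report", "description": "Generate a billing report."},
]


def discover(query: str) -> list:
    """Return catalog entries whose name or description mentions the query term."""
    q = query.lower()
    return [
        t for t in CATALOG
        if q in t["name"].lower() or q in t["description"].lower()
    ]


# The agent searches at runtime; adding a tool to the catalog makes it
# discoverable with no agent-side configuration change.
print([t["name"] for t in discover("customer")])
```

Registering a new proxy in the catalog is all it takes to make the tool visible to every agent, which is why dynamic discovery scales where static configuration does not.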
Why the Open Standard Part Matters
Building on MCP rather than a proprietary format gives you portability. AWS Bedrock uses its own tool-use format, so integrations built for Bedrock do not work with Claude, Gemini, or open-source agents without a rewrite. By contrast, MCP integrations built on Google Cloud work across any MCP-compatible agent: Claude, Gemini, GPT-4, LangChain, AutoGen, and whatever comes next.
For ISVs integrating with customers’ AI workflows, that matters. A customer running Anthropic agents and a customer running Google agents can both call the same MCP tools through the same Apigee-governed endpoint. The ISV writes the integration once.
That said, MCP does not solve everything. It handles how agents talk to tools, not how agents talk to each other. Discovering what other agents exist, what they can do, and how to delegate tasks between them is a separate problem, one that A2A is designed to address. The two protocols are complementary, and serious agentic architectures will need both.
The Practical Implication
If you have an existing REST API catalog and you are thinking about agentic features, the path is shorter than it was. Your existing APIs become the tool library that agents can call, with enterprise governance at the infrastructure layer and no new servers to build. The question is no longer “how do we build MCP tooling.” It is “which of our existing APIs do we want agents to use, and at what access tiers.”
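One way to start that inventory is a simple triage of the existing API catalog into agent-access tiers; the API names and tier labels below are made up for illustration:

```python
# Hypothetical triage of an existing REST API catalog for agent access.
# API names and tier labels are illustrative only.
AGENT_ACCESS = {
    "customers/v1": "read-only",  # safe to expose as a lookup tool
    "billing/v2": "internal",     # agents inside the org only
    "admin/v1": "blocked",        # never exposed as an MCP tool
}


def exposable(tier: str) -> bool:
    """Tiers that may be surfaced as MCP tools (under gateway policy)."""
    return tier in {"read-only", "internal"}


tools = [api for api, tier in AGENT_ACCESS.items() if exposable(tier)]
print(tools)
```

The exercise forces the real decision: not whether to expose APIs to agents, but which ones, to whom, and under what limits.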
A few things worth considering: Are your customers already asking whether their AI agents can call into your product? If MCP becomes the default integration protocol for enterprise agentic AI, what does your API catalog look like as a competitive asset? And if a competitor surfaces their APIs as governed MCP tools before you do, how does that change the conversation?
Want to go deeper? Here are a few links worth your time:
- MCP support for Apigee: how Apigee manages MCP servers, handles REST-to-MCP transcoding, and integrates with API Hub for tool discoverability.
- Official MCP support for Google services: first-party managed MCP servers for BigQuery, Cloud Storage, and other GCP services.
- Google debuts managed MCP servers (InfoQ): third-party technical coverage of the December 2025 launch.
