Nobody gives the intern sudo access to production on their first day. We all understand why. Sure, they might write a clever query that speeds up the checkout process, but they also might accidentally drop the users table and bring down the whole system. We put guardrails around humans for a reason.
So why are engineering teams so incredibly lax when they deploy AI agents?
Right now, developers are relying on system prompts to save them from disaster. They write instructions like “never delete data” or “leave production alone” and cross their fingers. That’s a dangerous gamble. Telling an LLM to “do as I say” is like hanging a “please do not rob the bank” sign on an open vault. It’s not enough if the stakes are high.
When Agents Ignore The Rules
We witnessed the inevitable result of this blind trust just last year. In July, a development team decided to run a “vibe coding” experiment using a widely known AI assistant. The engineers gave the system extremely clear text instructions: leave the live production environment completely alone. You know where I’m going here. The agent ignored them entirely.
In nine seconds, the AI wiped a live production database belonging to Jason Lemkin, the founder of SaaStr. The immediate fallout was brutal. That single deletion took out records for over a thousand executives and nearly as many businesses.
But the story gets much worse. The AI realized it made a catastrophic mistake, and then it tried to cover its tracks. It generated thousands of fake user profiles and tried to pass off those fabricated records as legitimate test data. When the engineering team finally pulled the system logs, the agent basically admitted it had ignored every safety rule it was given.
The moral of the story is that you absolutely can’t rely on an agent’s system prompt and internal reasoning loop for security.
Why Standard Firewalls Fail
If system prompts are useless, how do you actually secure an agent? You need a solution that operates at the network fabric level. Historically, securing infrastructure meant deploying enterprise firewalls. Standard firewalls look at IP addresses and ports to determine if traffic is malicious. That approach doesn’t help when what you’re trying to do is inspect AI intent. Traditional security tools don’t speak the language of agents. They can’t parse traffic over agentic protocols, which means they can’t tell the difference between an agent reading a table and an agent dropping a table.
Because they lack context, traditional network tools force you to either block the agent entirely or grant it overly broad access. Neither option works for securing agents at scale. You need a control plane that natively understands AI interactions; User-to-Agent, Agent-to-Agent, and Agent-to-Tool.
Enter Google Cloud Agent Gateway
This is exactly the architectural gap Google Cloud just closed with Agent Gateway. It serves as the dedicated networking control plane for the Gemini Enterprise Agent Platform. It secures connectivity across every layer. Whether users are chatting with an agent, an agent is calling a database tool, or multiple agents are negotiating with each other, the gateway monitors the wire.
It provides several key benefits that fundamentally change how we deploy AI:
- Protocol-Aware Governance: Agent Gateway natively parses agent traffic, including the Model Context Protocol (MCP) and Agent-to-Agent (A2A) communications. This lets administrators set highly specific rules. You can restrict an agent to only accessing specific tools, or limit those tools strictly to read-only operations.
- Zero-Appliance Networking: The gateway sits natively on every network path within your environment. Platform engineers don’t have to string together disparate proxy servers. They don’t have to manage complex network overlays. It operates entirely as native infrastructure without requiring new virtual appliances.
- Full Visibility: You can’t govern what you can’t see. The gateway automatically pipes every single action and Trace ID directly to Cloud Observability. When a complex multi-agent workflow breaks, debugging is incredibly straightforward. Administrators see exactly which tool an agent tried to call, when it tried to call it, and exactly why the gateway rejected the request.
- Customizable Security: Teams can easily inject custom logic directly into the traffic flow. This capability lets you strip out sensitive data before it ever reaches the model. Additionally, inline security tools like Model Armor allow you to actively block prompt injection attacks. It even integrates with Semantic Governance Policies to make sure toxic combinations of tools never execute at the same time.
- How to set up Agent Gateway
ISVs And The Internal Agent Threat
Most conversations around software vendors and AI focus on customer-facing features. You build an intelligent bot, package it up, and sell it. However, that perspective ignores a growing risk. ISVs are flooding their own internal operations with AI agents. Your infrastructure team is using them to debug cluster drift. Site reliability engineers are hooking them directly into incident response channels. DevOps teams are letting them review pull requests. These internal agents carry the exact same destructive payload as the bot that wiped the SaaStr database. If an internal SRE agent goes off the rails, it doesn’t just break a sandbox. It can take down your entire production SaaS environment.
This reality makes infrastructure-level governance an absolute requirement. You cannot rely on an agent’s internal prompt guardrails to protect your core pipelines. When an enthusiastic bot decides to push a highly questionable configuration change to Kubernetes, you need the network layer to step in and stop it.
With Agent Gateway, the network blocks the call outright because that specific agent identity lacks the IAM permission to write to production. The data stays perfectly safe. Meanwhile, the security team gets a friendly alert instead of an outage notice.
Why ISVs Need This Now
Your customers want the power of agentic AI, but they absolutely demand that you’re in full control of its awesome power. You can’t sell an enterprise platform if the underlying AI is free to ignore its system prompt and delete client records. Agent Gateway provides a defensible security posture impervious to rogue agents. With it, you’re able to prove that your network will not allow your product’s AI to misbehave.
Stop trying to secure and govern your AI by reasoning with it through prompting. Lock it down, and sleep well.
