
AI agents have moved from experimentation to execution. Across enterprises, autonomous systems are organizing data, writing code, and triggering workflows with growing authority. This is not an incremental change; it is a structural shift.
As AI moves from advising to acting, risk changes shape, as machine-speed execution can amplify small errors into cascading incidents. Accountability remains with leadership, yet many lack the runtime controls needed to contain autonomous behavior in real time.
A year ago, AI agents dominated headlines and strategy decks, but they were largely limited to pilots and copilots. Humans remained the final circuit breaker. That moment is over.
Gartner predicts that by 2028, 60% of brands will use agentic AI to deliver streamlined one-to-one interactions, signaling that autonomous systems are moving rapidly from experimentation into operational reality across business functions.
Today, these AI agents have real authority.
Human error unfolds at human speed. Autonomous systems operate at machine speed, chaining actions across applications and data sources in seconds. A new class of “long-running” AI agents can operate autonomously in the background, dynamically selecting tools, opening and manipulating web pages, and executing workflows across live systems. A flawed instruction, a simple prompt injection, or an unintended permission can cascade across environments before anyone can intervene.
Accountability, however, does not transfer to the machine. Leaders remain responsible for outages, cost spikes, and compliance failures. The fallout is concrete: costly data breaches, reputational damage, and manual remediation that consumes time and money and forces changes to infrastructure.
At the same time, shadow agent sprawl is emerging. As tools become easier to use, business units deploy their own automations outside governed pathways. What begins as innovation can quickly outpace oversight.
Enterprise leaders are already feeling this tension. At a recent WSJ CIO Summit, 29% of technology leaders cited cybersecurity and data privacy as their primary concern around deploying AI agents.
And when incidents occur, reconstruction becomes difficult. Many agentic workflows lack clear, end-to-end traceability, creating exposure in a regulatory environment that increasingly demands transparency. Emerging regulations such as the EU AI Act are increasing expectations for traceability and accountability in automated systems, raising the stakes for organizations that cannot fully reconstruct how autonomous decisions were made.
The Lesson: Policy is not a guardrail if it does not sit in the execution path.
If machines are acting, the next question is: where? Increasingly, the answer is the browser.
With 85% of modern work occurring in the browser, it has become the primary interface for accessing applications and data, and for retrieval-augmented generation (RAG) browsing.
AI sidebars now operate directly inside enterprise browsers, executing tasks within SaaS applications in real time. Agents log into business platforms through web sessions, navigating dashboards, updating records, triggering workflows, and pulling reports. Retrieval-augmented generation pulls live web content into decision chains. AI coding agents access cloud repositories, CI/CD pipelines, and infrastructure consoles through browser interfaces and APIs.
For the first time, humans and machines are executing side by side inside the same web session. The browser is no longer just a user interface but a shared execution surface where work happens, where data moves, and where actions are taken across enterprise systems and the open web. This changes the security model: organizations must adopt a combined human-and-AI security mindset, a shift that is not always feasible for enterprises lacking the time or expertise to pivot before misuse occurs.
To secure dynamic web sessions, the browser must be more than the runtime for agentic work; it must become the control point.
While legacy controls that rely on the endpoint protect the operating system and device posture, they don’t see or govern what happens inside the browser DOM or across chained agent workflows. Network controls secure the pipe, inspecting traffic as it moves from point A to point B, but they cannot understand or constrain in-session behavior once a web session is established.
Sure, policy platforms such as DSPM tools or Microsoft Purview can define what should happen with sets of access rules, permissions, and acceptable use standards, but defining a rule is not the same as enforcing it at the moment an action occurs. If a control does not sit directly in the execution flow, it cannot stop a decision made in milliseconds. AI agents make those decisions constantly, and they do not pause between steps unless something forces them to.
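The difference between defining a rule and enforcing it can be made concrete. Below is a minimal, illustrative sketch (all names and rules are hypothetical, not any specific product's API) of a policy gate that sits directly in the execution path: every agent action passes through it at the moment of execution, and an out-of-policy step is stopped before it runs rather than flagged afterward.

```python
# Hypothetical policy definitions: scoped privileges and blocked targets.
ALLOWED_ACTIONS = {"read_report", "update_record"}
BLOCKED_DOMAINS = {"paste-site.example"}

class PolicyViolation(Exception):
    """Raised when an agent action falls outside defined policy."""

def gate(action: str, target_domain: str) -> None:
    """Enforcement point: runs before the action, not after."""
    if action not in ALLOWED_ACTIONS:
        raise PolicyViolation(f"action '{action}' not permitted")
    if target_domain in BLOCKED_DOMAINS:
        raise PolicyViolation(f"domain '{target_domain}' is blocked")

def run_agent_step(action: str, target_domain: str) -> str:
    gate(action, target_domain)  # the gate sits in the execution flow
    return f"executed {action} on {target_domain}"
```

The point of the sketch is placement: the same rules stored in a policy platform accomplish nothing at machine speed unless a check like `gate()` runs inline, between the agent's decision and its effect.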
Gartner believes that agentic systems require admission control before actions begin. They require execution gating to validate what can run and under what conditions. Privileges must be tightly scoped, not broadly inherited. And when something deviates from expected behavior, circuit breakers and defined kill paths must be in place to halt execution immediately.
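The circuit-breaker idea above can be sketched in a few lines (names are illustrative, under the assumption that "deviation" events are reported by surrounding monitoring): the breaker counts deviations from expected behavior and, past a threshold, opens and halts all further execution until a human resets it.

```python
class CircuitBreaker:
    """Halts agent execution after repeated deviations from expected behavior."""

    def __init__(self, max_deviations: int = 3):
        self.max_deviations = max_deviations
        self.deviations = 0
        self.open = False  # open breaker == execution halted

    def record_deviation(self) -> None:
        """Called by monitoring when behavior crosses a defined threshold."""
        self.deviations += 1
        if self.deviations >= self.max_deviations:
            self.open = True  # kill path: stop the agent immediately

    def allow(self) -> bool:
        """Checked before every agent step; False means do not execute."""
        return not self.open

    def reset(self) -> None:
        """Human-in-the-loop recovery after review."""
        self.deviations = 0
        self.open = False
```

The design choice worth noting is that recovery is deliberately manual: an agent that tripped the breaker should not be able to re-enable itself.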
Translated to the browser, this becomes concrete. You must control what agents can access inside web sessions. You must govern what web content is allowed to execute and interact with enterprise data. You must enforce boundaries on what information can be copied, transferred, or exfiltrated. And when behavior crosses a defined threshold, you must be able to contain it in-session, not after the fact.
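The data-boundary control in particular lends itself to a simple illustration. The sketch below (patterns are examples only, not a production detection set) checks content before it is allowed to leave a session via copy, transfer, or an agent tool call:

```python
import re

# Example patterns for content that must not cross the session boundary.
EXFIL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like pattern
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access key ID shape
]

def may_leave_session(text: str) -> bool:
    """Return False if the text matches any restricted pattern."""
    return not any(p.search(text) for p in EXFIL_PATTERNS)
```

In a real deployment this check would run in-session, at the moment of the copy or transfer, for the same reason discussed above: a boundary evaluated after the fact is a forensic record, not a control.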
Rollback assumes damage has already occurred. Containment limits the blast radius before it spreads. In a machine-speed environment, that distinction is everything.
As AI agents take on more responsibility, organizations need clear evidence that those systems behave within defined limits.
Over-restricting AI may feel safe in the short term, but it carries its own risk. Companies that block or severely limit agent adoption will struggle to match the speed, efficiency, and output of peers who integrate autonomy into daily operations.
At the same time, under-securing AI is equally dangerous. Deploying autonomous agents without runtime guardrails invites operational incidents, compliance exposure, and reputational damage. A single uncontrolled cascade at machine speed can erase the very gains autonomy promised.
Autonomy must be observable, with full visibility into the actions taken, the data accessed, and the decisions made. And most importantly, it must be controllable. Once autonomous agents cross defined thresholds, execution becomes hard to stop, which brings us back to runtime protection: guardrails applied to agentic AI that serve as a kill path against data exfiltration, unprivileged access, and threat-actor interference.
The time to build those guardrails is now. Agent adoption is accelerating. Soon, automated workflows will outnumber the humans supervising them. Organizations that embed deterministic controls into the execution layer today will scale with confidence tomorrow.
If you are evaluating how to enable AI agents without expanding risk, it starts with the runtime. Start a conversation with our team to see how Menlo Security can serve as the execution-layer guardrail between your users, your agents, and the web.
Menlo Security
