
The traditional corporate network perimeter has not just shifted—it has entirely dissolved. Today, the "office" is wherever a web browser is open. Because employees rely heavily on the browser to access internal data, SaaS applications, and Generative AI tools, the browser has inherently become the new security edge for the modern enterprise.
But with this shift comes a new wave of sophisticated, AI-driven cyberattacks. In a recent interview hosted by Information Security Media Group (ISMG), Ramin Farassat (Chief Product Officer, Menlo Security) and Daniel Lees (Cloud Security Architect, Google Cloud) sat down to dissect how threat actors are exploiting Large Language Models (LLMs) and why securing the browser is the most critical component of a modern enterprise security strategy.
In this guide, we will explore the surge in Highly Evasive Adaptive Threats (HEAT), why legacy firewalls are failing in the age of AI, and how Menlo Security is partnering with Google Cloud to build a secure enterprise browser experience.
We are witnessing a fundamental shift in how cybercriminals operate. Rather than simply brute-forcing networks, attackers are now leveraging the same Generative AI tools that businesses use to drive productivity. These modern attacks are specifically designed to bypass traditional security stacks.
One of the most prevalent threats facing organizations today is prompt injection. Also known as "jailbreaking" an LLM, this technique mirrors the old days of attempting to break into a computer's operating system.
Attackers use natural language to trick an LLM into bypassing its built-in safety filters and guardrails. By feeding the AI carefully crafted, manipulative instructions, threat actors can force the model to extract and leak sensitive corporate data. Furthermore, prompt injection can be used to execute unauthorized API commands, granting attackers access to backend systems and data repositories they would otherwise be blocked from viewing.
The danger does not stop at direct chat inputs. As Daniel Lees highlighted during the discussion, attackers are increasingly using indirect methods, such as data poisoning.
In these scenarios, threat actors embed malicious code or hidden prompts within documents—such as a poisoned PDF file. When an unsuspecting employee uploads this document into a corporate LLM for summarization or analysis, the hidden prompt executes. This poisons the data the LLM is processing, allowing the attacker to compromise the integrity of the model from the inside out.
Attackers are also conducting reconnaissance at an unprecedented scale. By leveraging APIs, they can systematically fingerprint the tools, connections, and external databases an LLM has access to. Once they map out these connections, they can pivot their attacks, using the LLM as a backdoor to breach deeper enterprise systems. Without deep visibility into how data moves through the browser, identifying these API-level attacks is incredibly difficult.
If your organization relies on traditional firewalls or legacy Secure Web Gateways (SWGs) to stop AI-driven threats, your enterprise is exposed.
As Daniel Lees eloquently explained, a traditional firewall acts like a security guard checking IDs at the front gate. It examines IP addresses and network-layer headers. Once the "ID" is verified, the traffic is allowed to pass through unimpeded.
The problem is that AI inputs are inherently unstructured. They are based on natural language, meaning a traditional firewall cannot distinguish between a helpful, legitimate user question and a malicious prompt injection. It simply does not understand semantic intent.
In the past, security teams relied on Data Loss Prevention (DLP) tools that used static rules—scanning traffic for specific "bad words," known malware signatures, or rigid data patterns. Today, static rules are obsolete.
To secure the browser against GenAI threats, security systems must understand the contextual intent of the data being transmitted. Protection requires continuous behavioral analysis. A modern secure browser solution must look for "behavioral drift"—detecting when a seemingly legitimate request is actually attempting to trick an AI model into performing an unauthorized action.
Looking ahead to 2026, the cybersecurity conversation is heavily focused on "agent identity" and the agency granted to AI.
When autonomous AI agents were first introduced, organizations were eager to deploy them, often giving them excessive permissions to act on behalf of human users. However, this creates a massive risk for agent impersonation. If a malicious actor hijacks an AI agent, they can execute high-stakes transactions disguised as a legitimate employee.
To combat this, enterprises must pull back excessive permissions and implement what is known as Contextual Governance.
Instead of granting blanket access, organizations must use cryptographic controls to verify the identity of the AI agent and the context of the human user issuing the request. Before an AI agent is allowed to communicate with external models or execute a high-stakes transaction, the secure browser layer verifies its identity and context through specific protocols. If the behavior deviates from the established baseline, the action is blocked, and human-in-the-loop verification can be triggered.
AI isn't just attacking models; it's also revolutionizing phishing and social engineering. Threat actors use GenAI to create zero-hour phishing sites that look visually perfect, complete with flawless grammar and accurate branding. These sites vanish and mutate so quickly that traditional, reputation-based domain filters cannot flag them in time.
To stop these evasive attacks, you cannot wait for an end-user to make a mistake. You need proactive, predictive threat prevention.
Ramin Farassat detailed how Menlo Security tackles this by utilizing Google Gemini’s powerful AI technology. Instead of relying on a list of known bad domains, Menlo Security’s technology acts in real time. It uses a combination of computer vision and machine learning to analyze a webpage precisely how a human would—but with the added ability to read the underlying source code simultaneously.
By executing web content within a secure enterprise browser platform in the cloud, Menlo Security ensures that active, potentially malicious code never touches the user's local endpoint.
While the content is safely processed in the cloud, our Gemini-powered engine scans for fake logos, scam text, and anomalous page structures. If an anomaly is detected, the system instantly neutralizes the threat, rendering the page in a safe, read-only mode, or blocking it entirely before the user can input their corporate credentials. This secure, cloud-native architecture effectively air-gaps the user from the threat without impacting their native browsing experience.
The collaboration between Menlo Security and Google Cloud is central to redefining how we protect the modern workspace. By running our advanced security controls natively alongside Google Chrome and deeply integrating Google Gemini’s intelligence, we are co-engineering a next-generation enterprise browser experience.
A prime example of this partnership is the development of Menlo Security's HEAT Shield technology. Defending against Highly Evasive Adaptive Threats requires a multi-stage process:
Because AI threats evolve so rapidly, this tight, continuous feedback loop between Menlo Security and Google Cloud engineering teams ensures that our preventative controls adapt faster than the adversaries can innovate.
As we continue to push the boundaries of browser security, the future of edge protection relies on localized, intelligent decision-making.
To ensure split-second security decisions without compromising performance, the next step in enterprise browser security is pushing smaller, highly efficient language models directly to the edge—running close to the user's device. This hybrid approach ensures that content is pre-filtered for privacy and safety before it even traverses the broader network, providing a local gateway for rapid threat mitigation.
Perhaps the most exciting advancement is the concept of self-evolving security policies. Managing corporate security rules is historically manual and prone to human error. Soon, the AI within the secure browser layer will learn from the environment. If it detects a "messy" or high-risk scenario within a specific organizational unit, the system will automatically tighten security parameters for those users—without requiring a human administrator to intervene.
The browser is the most utilized application in your entire enterprise. It is time we start treating it as your most critical security asset. By embracing a secure enterprise browser strategy, you can empower your workforce to leverage the immense productivity benefits of GenAI and SaaS applications without sacrificing data integrity or falling victim to evasive threats.
Want to dive deeper into the conversation? Watch the full video interview with Ramin Farassat and Daniel Lees hosted by ISMG to learn more about how attackers are exploiting LLM guardrails to breach enterprise APIs.
What is prompt injection in AI? Prompt injection is a cyberattack where a user inputs manipulative or hidden instructions into a Large Language Model (LLM) to trick it into ignoring its safety guidelines. This can lead to the AI leaking sensitive data or executing unauthorized commands.
How does a secure enterprise browser stop phishing? A secure enterprise browser processes active web content in a remote, cloud-based environment. It uses computer vision and machine learning to analyze page source code, logos, and text in real time, neutralizing visually convincing zero-hour phishing sites before a user can enter their credentials.
Why are traditional firewalls ineffective against GenAI threats? Traditional firewalls evaluate network traffic based on static rules, IP addresses, and specific data signatures. Because GenAI inputs are written in unstructured natural language, firewalls cannot understand the contextual intent behind the text, allowing malicious prompts to pass through undetected.
What is contextual governance for AI agents? Contextual governance is a security framework that limits the autonomy of AI agents. It uses cryptographic verification to continuously authenticate the identity of both the AI agent and the human user, ensuring they are authorized before allowing the execution of high-stakes tasks or data transfers.
Menlo Security
