
The initial panic over AI in cybersecurity often painted a picture of "Uber Malware"—an unstoppable, AI-generated super-virus that would dismantle security stacks overnight. But as the dust settles, the reality of the AI threat landscape is both more subtle and more dangerous.
Attackers aren’t just building better malware; they are building better lures. They are using Large Language Models (LLMs) to craft sophisticated social engineering campaigns that target the weakest link in the enterprise: the user. And they are doing it almost exclusively through the browser.
In our recent webinar, "Browser Threat Prevention in the World of AI," we sat down with experts from Menlo Security and Google to dissect this shift. We explored how attackers are weaponizing AI and, more importantly, how we are partnering with Google to fight back.
Attackers have pivoted. Why spend weeks searching for a zero-day vulnerability in a firewall when you can just ask a user to open the door for you?
This isn't just a theory—it's what Google is seeing across its vast threat landscape. Aaron Sutton, Financial Services Technical Solutions Lead at Google, joined the discussion to confirm that this pivot is happening at scale.
"We've been seeing a drastic increase in the use of LLMs for malicious behavior," Sutton noted. "Specifically, we are seeing evasive capabilities... and a huge increase in the amount of fraud especially related to users."
This new wave of attacks focuses on evasion and speed. As detailed in our State of Browser Security Report, threat actors are using legitimate LLMs, as virtually every major AI vendor has reported, to craft elements of attacks at every stage, from reconnaissance to harvesting results.
Amelia Squires, Senior Threat Intelligence Analyst at Menlo, walked us through real-world examples of these highly evasive threats, which make the most of GenAI capabilities and have led analysts to conclude that “the impact of generative AI on social engineering is undeniable.”3 One standout social engineering tactic is ClickFix.
In a ClickFix attack, a user visits a compromised or malicious site and is presented with a fake error message (e.g., "Word failed to load" or a fake CAPTCHA). The "fix" offered is to copy a script to their clipboard and paste it into a Windows Run prompt or terminal window.
To the user, they are just fixing a glitch. To the security team, this is a nightmare. Because the malicious payload is often generated locally in the browser or fetched via legitimate system tools (like PowerShell), it bypasses traditional network inspection. It’s a "fileless" attack that leverages human trust.
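The pattern described above does leave a fingerprint in page content: a fake error message paired with instructions to paste something into a Run prompt or terminal. As a toy illustration of that idea (this is not Menlo's detection logic, and the phrase lists are illustrative, not a production signature set), a minimal heuristic might look like:

```python
import re

# Phrases commonly seen in ClickFix-style lure pages (illustrative examples only).
FAKE_ERROR_PATTERNS = [
    r"failed to load", r"verification required", r"captcha",
    r"browser update required",
]
PASTE_INSTRUCTION_PATTERNS = [
    r"press\s+win\s*\+\s*r", r"windows\s+run",
    r"paste.*(terminal|powershell|run prompt)",
    r"copy.*(command|script)",
]

def looks_like_clickfix(page_text: str) -> bool:
    """Flag pages that pair a fake error with paste-into-terminal instructions."""
    text = page_text.lower()
    has_error = any(re.search(p, text) for p in FAKE_ERROR_PATTERNS)
    has_paste = any(re.search(p, text) for p in PASTE_INSTRUCTION_PATTERNS)
    return has_error and has_paste

lure = ("Word failed to load this document. To fix it, press Win + R, "
        "then paste the copied command and hit Enter.")
print(looks_like_clickfix(lure))                          # True
print(looks_like_clickfix("Welcome to our docs site."))   # False
```

Keyword matching like this is trivially evaded, of course, which is exactly why intent-based analysis of the full page (discussed below) matters.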
The success of ClickFix attacks, which grew more than 500% in the last year, has spawned a legion of similar exploits, including FileFix. Another technique Amelia demonstrated was the malicious use of remote monitoring and management (RMM) tools. Because RMM tools are legitimate IT software, they rarely trip AV or malware sandboxes, making them a prime example of “good tools gone bad.”
So, how do you stop an attack that looks legitimate, leverages human behavior, and hides in the browser? You need a defense that sees everything.
This is where Menlo Security’s Secure Enterprise Browser solution changes the game. As traffic passes through the Menlo Cloud, we build a replica of the user’s browser in a virtualized container, rendering content safely in the cloud. That gives us deep, real-time visibility into the web session: the document object model (DOM), page structure, and session behavior. The result is that zero-day threats are stopped before they ever reach the endpoint.
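To make the idea of session-level visibility concrete: rendering the page in the cloud means every script the page ships is available for inspection before the user ever sees it. The toy sketch below (not Menlo's inspection engine; just an illustration of the kind of signal DOM-level access exposes) scans a page's inline scripts for clipboard writes, the mechanism ClickFix lures rely on:

```python
from html.parser import HTMLParser

class ClipboardScriptFinder(HTMLParser):
    """Scan rendered HTML for inline scripts that write to the clipboard,
    one of many signals full-session visibility makes available."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Script bodies arrive here as raw data while in_script is set.
        if self.in_script and ("clipboard.writeText" in data
                               or "execCommand('copy')" in data):
            self.hits.append(data.strip()[:60])

page = """<html><body><button>Fix now</button>
<script>navigator.clipboard.writeText('powershell -enc ...');</script>
</body></html>"""

finder = ClipboardScriptFinder()
finder.feed(page)
print(len(finder.hits))  # 1
```

A clipboard write is not malicious on its own (copy buttons are everywhere); the value is in combining this signal with page text and structure, which is where the intent-based analysis described next comes in.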
But stopping new social engineering attacks requires the ability to dig more deeply while maintaining performance. We are proud to partner with Google to integrate Menlo HEAT Shield AI with Google Gemini. By feeding Menlo’s inspection telemetry, derived from years of stopping zero-day phishing attacks, into Gemini’s multimodal AI models, we can perform intent-based analysis in real time. Pairing Menlo’s unique model with Gemini’s inference speed and reasoning capabilities delivers both accuracy and performance, and Menlo’s alliance with Google Threat Intelligence allows us to dig even deeper into each attack and share that intelligence with other users.
As Jonathan Lee, Principal Product Manager at Menlo, explained: "We can take images, text, and page structure and ask Gemini: 'Is this page asking the user to paste a clipboard command into a terminal?'"
This allows us to detect malicious intent—like a ClickFix attack—instantly, even if the domain is brand new and has no bad reputation yet.
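To sketch what such an intent query could look like in practice: the snippet below assembles page signals into a single question of the kind Jonathan describes. Everything here is hypothetical, not Menlo's or Google's actual pipeline or API; in particular, the model call is replaced by an offline keyword stub so the sketch is runnable, where a real system would send the prompt (plus a screenshot) to a multimodal model.

```python
import json

def build_intent_prompt(page_text: str, dom_summary: str) -> str:
    """Assemble an intent query from extracted page signals.
    (Illustrative only; a real pipeline would also attach a screenshot.)"""
    return json.dumps({
        "question": ("Is this page asking the user to paste a clipboard "
                     "command into a terminal or Run prompt? Answer yes/no."),
        "page_text": page_text[:2000],  # truncate to keep the prompt small
        "dom_summary": dom_summary,
    })

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call: a keyword check so the sketch runs offline."""
    p = json.loads(prompt)
    signals = ("paste", "terminal", "powershell", "win + r", "run prompt")
    hit = any(s in p["page_text"].lower() for s in signals)
    return "yes" if hit else "no"

prompt = build_intent_prompt(
    "Verification failed. Press Win + R and paste the copied command.",
    "<body><div class=error></div><button id=copy-btn>Copy fix</button></body>",
)
print(stub_model(prompt))  # yes
```

The design point is that the question targets what the page is asking the user to do, not any known-bad indicator, which is why a brand-new domain with a clean reputation can still be flagged.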
AI has drastically reduced the technical skills required to launch attacks, allowing attackers to move faster and smarter. As Aaron Sutton warned during the webinar, the availability of AI tools means "the barrier to entry... has been lowered for attackers," allowing even less sophisticated actors to launch convincing campaigns. At the same time, as Menlo’s Roslyn Rissler emphasized, “AI also allows attackers to vastly increase the scale of their operations.”
To stay ahead, defenders must do the same. By combining Menlo’s unique ability to secure the browser with the intelligence of Menlo HEAT Shield AI, now augmented by Google Gemini, we protect users from even never-before-seen threats.
Want to see these attacks in action? Watch the full on-demand webinar here to see deep dives into attack flows and a demo of our intent-based detection engine.
------------------------------------------------------------
1 Microsoft Digital Defense Report 2025
2 https://www.techrepublic.com/article/news-vercel-ai-tool-phishing-okta
3 How to Respond to the 2025-2026 Threat Landscape, Gartner, June 2025
Menlo Security
