
It’s that time of year again. The new year is upon us, and cybersecurity companies everywhere are dusting off their crystal balls. But this year feels different. For the last few years, we’ve talked about AI as a novelty or a tool for generating text. As we look toward 2026, the landscape has shifted. We are moving from AI as a "tool" to AI as a "collaborator"—autonomous agents that execute tasks on our behalf. I recently sat down with Ed Wright to unpack what this shift means for security teams. Here are my top five predictions for the threats (and opportunities) defining 2026
We are adding AI everywhere—building agents that leverage AI or using third-party tools that incorporate it. We are rushing to give these agents more autonomy, moving them from simple tools to "collaborators" or "experts" (Level 3 or 4 autonomy).
However, Large Language Models (LLMs) suffer from a significant, unsolved flaw: prompt injection. LLMs do not separate data from instructions. Any data—the content of a web page, an email, or a log entry—can effectively turn into instructions.
This creates an explosive cocktail. When an attacker successfully uses prompt injection, they can turn what you thought was a trusted entity (your AI agent) into a malicious one. If that agent has access to your internal data—your OneDrive, Google Drive, or Salesforce—it effectively becomes an insider threat, working against you from the inside. While real-world impact has been limited so far, I predict this will change significantly in 2026.
Supply chain attacks aren't new (remember SolarWinds?), but in 2025 we saw a shift toward targeting SaaS platforms to facilitate lateral movement between vendors.
High-profile incidents involving Salesloft and Gainsight—likely perpetrated by the "ShinyHunters" group—exposed a harsh reality: we have blind spots in our SaaS environments. Investigating these breaches revealed two major issues for security teams:
We have a playbook for human identities: use an IdP, enforce posture requirements, and use phishing-resistant MFA. We do not have a playbook for the explosion of non-human identities, such as AI agents.
These agents don’t fit the existing IdP model. They don’t change their passwords. There is no orderly HR process to offboard them when they are no longer needed.
In 2026, CISOs will have to start thinking about a privilege matrix for an order of magnitude more roles than they have today. How do you define "least privilege" for an AI agent that needs to read your email to do its job?
There is a debate raging on whether AI helps attackers or defenders more. At BlackHat this year, we heard differing takes. Mikko Hypponen noted limited evidence of attackers using AI effectively, while Nicole Perlroth predicted AI would be a net negative—primarily due to poorly written code.
While I am cautiously optimistic that AI will help defenders more, the pressure to use AI coding tools is tremendous, meaning we will ship more code with less human oversight. There will be areas of your codebase that no human understands—written by AI and reviewed by AI. Benchmarks show that LLMs currently do not do a great job writing secure code. The threat of 2026 may be less about "super-malware" and more about vulnerabilities introduced by "slop code."
Security vendors have been hyping up AI-generated attack threats non-stop. However, I believe the immediate AI security challenges will not be primarily due to GenAI helping attackers.
The more pressing challenge is internal: the use of AI by your own employees. This creates acute problems regarding insider threats, managing non-human identities, and data leakage.
There is no silver bullet, but you must balance preventative measures with damage limitation.
If you want to dive deeper into these topics, I highly recommend reading the OWASP Securing Agentic AI guide before the year ends.
And if you want to see how we tackle this at Menlo, join us for our upcoming webinar on January 7th, where we’ll dig into securing your enterprise against browser-borne AI attacks.
Menlo Security
