Menlo+Votiro_Logo Lockup
Menlo Security Acquires Votiro to Deliver Easy, AI-driven Data Security to Enterprises
Icon Rounded Closed - BRIX Templates

Predictions for 2026: Why AI Agents Are the New Insider Threat

|
January 6, 2026

It’s that time of year again. The new year is upon us, and cybersecurity companies everywhere are dusting off their crystal balls. But this year feels different. For the last few years, we’ve talked about AI as a novelty or a tool for generating text. As we look toward 2026, the landscape has shifted. We are moving from AI as a "tool" to AI as a "collaborator"—autonomous agents that execute tasks on our behalf. I recently sat down with Ed Wright to unpack what this shift means for security teams. Here are my top five predictions for the threats (and opportunities) defining 2026

1. AI Agents Will Become the New "Insider Threat"

We are adding AI everywhere—building agents that leverage AI or using third-party tools that incorporate it. We are rushing to give these agents more autonomy, moving them from simple tools to "collaborators" or "experts" (Level 3 or 4 autonomy).

However, Large Language Models (LLMs) suffer from a significant, unsolved flaw: prompt injection. LLMs do not separate data from instructions. Any data—the content of a web page, an email, or a log entry—can effectively turn into instructions.

This creates an explosive cocktail. When an attacker successfully uses prompt injection, they can turn what you thought was a trusted entity (your AI agent) into a malicious one. If that agent has access to your internal data—your OneDrive, Google Drive, or Salesforce—it effectively becomes an insider threat, working against you from the inside. While real-world impact has been limited so far, I predict this will change significantly in 2026.

2. Supply Chain Attacks Targeting SaaS Platforms Will Accelerate

Supply chain attacks aren't new (remember SolarWinds?), but in 2025 we saw a shift toward targeting SaaS platforms to facilitate lateral movement between vendors.

High-profile incidents involving Salesloft and Gainsight—likely perpetrated by the "ShinyHunters" group—exposed a harsh reality: we have blind spots in our SaaS environments. Investigating these breaches revealed two major issues for security teams:

  • The "Audit Log Tax": Similar to the SSO tax, many SaaS vendors charge extra for quality audit logs. Companies that don't pay often find themselves guessing at the extent of a breach.
  • Orphaned and Overprivileged Accounts: Connections between SaaS tools (like Salesforce and Snowflake) are often created and then abandoned, leaving behind valid tokens that no one is monitoring.

3. Managing Non-Human Identities Will Get More Complex

We have a playbook for human identities: use an IdP, enforce posture requirements, and use phishing-resistant MFA. We do not have a playbook for the explosion of non-human identities, such as AI agents.

These agents don’t fit the existing IdP model. They don’t change their passwords. There is no orderly HR process to offboard them when they are no longer needed.

In 2026, CISOs will have to start thinking about a privilege matrix for an order of magnitude more roles than they have today. How do you define "least privilege" for an AI agent that needs to read your email to do its job?

4. Whether AI is a Net Positive for Security Won't Be Decided This Year 

There is a debate raging on whether AI helps attackers or defenders more. At BlackHat this year, we heard differing takes. Mikko Hypponen noted limited evidence of attackers using AI effectively, while Nicole Perlroth predicted AI would be a net negative—primarily due to poorly written code.

While I am cautiously optimistic that AI will help defenders more, the pressure to use AI coding tools is tremendous, meaning we will ship more code with less human oversight. There will be areas of your codebase that no human understands—written by AI and reviewed by AI. Benchmarks show that LLMs currently do not do a great job writing secure code. The threat of 2026 may be less about "super-malware" and more about vulnerabilities introduced by "slop code."

Security vendors have been hyping up AI-generated attack threats non-stop. However, I believe the immediate AI security challenges will not be primarily due to GenAI helping attackers.

The more pressing challenge is internal: the use of AI by your own employees. This creates acute problems regarding insider threats, managing non-human identities, and data leakage.

How to Prepare for 2026

There is no silver bullet, but you must balance preventative measures with damage limitation.

  • Get Visibility: You cannot secure what you cannot see. Ensure you have visibility into both fat clients and web apps.
  • Use Browser Isolation: For browser-based agents, do not give them free rein. Protect them with browser isolation so that if they go to a dark corner of the web, the malicious code cannot execute on your endpoint.
  • Hard Guardrails: Use DLP to limit what information is exposed to the agent and what content the agent can exfiltrate.

If you want to dive deeper into these topics, I highly recommend reading the OWASP Securing Agentic AI guide before the year ends.

And if you want to see how we tackle this at Menlo, join us for our upcoming webinar on January 7th, where we’ll dig into securing your enterprise against browser-borne AI attacks.

Menlo Security

menlo security logo
linkedin logotwitter/x logoSocial share icon via eMail