Your New Inside Man Runs on Prompts and Doesn’t Need Social Engineering

Let’s be honest: most enterprises spent the last decade defending against human threats inside their networks. In 2026, according to Palo Alto Networks Chief Security Intelligence Officer Wendi Whitmore, that threat gets a software upgrade.

According to Gartner, 40 percent of enterprise applications will integrate with task-specific AI agents by the end of this year. That’s up from less than 5 percent in 2025. Think about it this way: security teams are already drowning in alerts and missing critical skills. Now they need to babysit an army of autonomous agents that might go rogue.

“The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible,” Whitmore told The Register. “And that’s created this concept of the AI agent itself becoming the new insider threat.”

Here’s the thing – it’s not that AI agents are inherently malicious. The problem is they’re being deployed faster than security teams can understand them, configure them properly, or monitor what they’re actually doing with their access.

The Double-Edged Sword

To be fair, AI agents can help defenders finally catch up in the cyber-skills arms race. These systems can correct buggy code, automate log analysis, triage alerts, and block threats at speeds humans simply can’t match.

Whitmore recently spoke with one of Palo Alto’s SOC analysts who built an AI program that indexed public threat intelligence against the company’s private data. The system analyzed their security posture and identified which issues were most likely to cause real damage.

“When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation,” Whitmore explained.

The company is working through stages of implementation – categorizing alerts as actionable, auto-close, or auto-remediate. Like any good DevOps shop knows, you start with simple use cases and progressively automate as confidence builds.
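To make that progression concrete, here is a minimal sketch of staged triage under the assumption of a simple rule-based pipeline; the rule names, severity scale, and thresholds are illustrative, not Palo Alto’s actual implementation.

```python
# A minimal sketch of staged alert handling: start by only labeling alerts,
# then let automation act once confidence in each rule is established.
# Rule names, severity scale, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    ACTIONABLE = "route_to_analyst"    # human triage required
    AUTO_CLOSE = "auto_close"          # known-benign, close automatically
    AUTO_REMEDIATE = "auto_remediate"  # high-confidence playbook exists

@dataclass
class Alert:
    rule: str
    severity: int           # 1 (low) .. 10 (critical)
    asset_criticality: int  # 1 (lab box) .. 10 (domain controller)

# Rules get promoted into these sets only after analysts have validated them.
AUTO_CLOSE_RULES = {"expired_tls_cert_on_test_host"}
AUTO_REMEDIATE_RULES = {"known_commodity_malware_hash"}

def triage(alert: Alert) -> Disposition:
    if alert.rule in AUTO_REMEDIATE_RULES and alert.asset_criticality < 8:
        return Disposition.AUTO_REMEDIATE
    if alert.rule in AUTO_CLOSE_RULES and alert.severity <= 3:
        return Disposition.AUTO_CLOSE
    return Disposition.ACTIONABLE  # default: a human looks at it

if __name__ == "__main__":
    print(triage(Alert("known_commodity_malware_hash", severity=7, asset_criticality=4)))
```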

What most people miss is that these same capabilities that help defenders also create massive attack surfaces. Depending on configurations and permissions, these agents may have privileged access to sensitive data and systems – and they don’t need to be social engineered.

The Superuser Problem

The first major risk stems from what Whitmore calls the “superuser problem.” This occurs when autonomous agents receive broad permissions that allow them to chain together access across sensitive applications without security teams knowing or approving those connections.

“It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans,” Whitmore said.

In practice, that means applying zero-trust principles to AI agents. Provision them with minimal access. Monitor their behavior. Have controls ready to detect when an agent goes off-script.
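As a rough illustration of what least-privilege provisioning could look like for agents, the sketch below gives each agent an explicit allowlist of data and tool scopes, denies everything else by default, and leaves an audit trail; the agent names and scope strings are hypothetical.

```python
# A minimal sketch of least-privilege provisioning for AI agents: each agent
# gets an explicit allowlist of scopes, and anything outside that scope is
# denied and logged. Agent names and scope strings are illustrative.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("agent-guard")

AGENT_PERMISSIONS = {
    "contract-review-agent": {"read:contracts", "read:crm"},
    "log-triage-agent": {"read:siem", "write:tickets"},
}

def authorize(agent_id: str, requested_scope: str) -> bool:
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if requested_scope in allowed:
        return True
    # Deny by default and leave an audit trail for the SOC.
    log.warning("DENIED: %s requested %s (allowed: %s)",
                agent_id, requested_scope, sorted(allowed))
    return False

# A contract agent trying to chain into payments should trip the guard:
assert authorize("contract-review-agent", "read:contracts")
assert not authorize("contract-review-agent", "write:payments")
```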

Does Your CEO Have an AI Doppelganger?

The second risk is more dystopian, and we haven’t seen it in investigations yet – but it’s coming. Whitmore calls it the “doppelganger” problem.

Think about task-specific AI agents approving transactions or reviewing contracts on behalf of C-suite executives. According to Whitmore, “We think about the people who are running the business, and they’re oftentimes pulled in a million directions throughout the course of the day. So there’s this concept of: We can make the CEO’s job more efficient by creating these agents.”

Consider this: an agent approves a fraudulent wire transfer on the CEO’s behalf. Or picture a merger scenario in which an attacker manipulates the model, forcing the AI agent to act with malicious intent.

By exploiting a single prompt injection or tool misuse vulnerability, adversaries now “have an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database,” according to Palo Alto’s 2026 predictions.

If that sounds like Office Space meets WarGames, you’re getting the picture. Except instead of stealing fractions of pennies, we’re talking about agents with legitimate credentials executing malicious actions that look completely authorized.
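One way teams try to blunt that scenario is to require out-of-band human approval for high-impact tool calls, regardless of what the model decides. The sketch below is a hedged illustration of that pattern; the tool names, high-impact set, and approval token are hypothetical, not any vendor’s feature.

```python
# A minimal sketch of an approval gate: no matter what the model decides,
# high-impact tool calls (wire transfers, deletions, bulk exports) require an
# out-of-band human approval token before they execute. Names are illustrative.
HIGH_IMPACT_TOOLS = {"approve_wire_transfer", "delete_backup", "export_customer_db"}

class ApprovalRequired(Exception):
    pass

def execute_tool_call(tool: str, args: dict, human_approval_token: str | None = None) -> str:
    if tool in HIGH_IMPACT_TOOLS and not human_approval_token:
        # The agent's own output is never sufficient authorization.
        raise ApprovalRequired(f"{tool} needs an out-of-band approval")
    return f"executed {tool} with {args}"

# An injected instruction ("...and wire $1M to account X") gets this far and
# no further, even though the agent holds legitimate credentials:
try:
    execute_tool_call("approve_wire_transfer", {"amount": 1_000_000, "to": "attacker"})
except ApprovalRequired as exc:
    print("blocked:", exc)
```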

Prompt Injection: The Gift That Keeps on Giving

Despite a year of researchers demonstrating prompt injection vulnerabilities, there’s no fix in sight. “It’s probably going to get a lot worse before it gets better,” Whitmore said. “Meaning, I just don’t think we have these systems locked down enough.”

Part of the problem is intentional: model creators need people probing for creative attack vectors, and that kind of probing requires manipulation. The bottom line is that development and innovation within AI models are happening faster than security can keep up.

“This means that we’ve got to have security baked in, and today we’re ahead of our skis,” Whitmore explained. “The development and innovation within the AI models themselves is happening a lot faster than the incorporation of security, which is lagging behind.”

How Attackers Are Using AI Right Now

In 2025, Palo Alto’s Unit 42 incident response team observed attackers abusing AI in two ways: running traditional cyberattacks faster and at greater scale, and mounting new attack types built on model manipulation and exploitation of AI systems.

“Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller,” Whitmore said. “We don’t see that as much now. What we’re seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf.”

The September campaign involving Anthropic’s Claude Code serves as the perfect example. Chinese cyberspies used the AI coding tool to automate intelligence-gathering attacks against multiple high-profile companies and government organizations, and in some cases they succeeded.

While Whitmore doesn’t anticipate fully autonomous AI attacks this year, she expects AI to be a massive force multiplier. “You’re going to see these really small teams almost have the capability of big armies,” she said. “They can now leverage AI capabilities to do so much more of the work that previously they would have had to have a much larger team to execute against.”

Defense Against the Dark Prompts

Watch for parallels to the cloud migration from two decades ago. “The biggest breaches that happened in cloud environments weren’t because they were using the cloud, but because they were targeting insecure deployments of cloud configurations,” Whitmore noted. “We’re really seeing a lot of identical indicators when it comes to AI adoption.”

For CISOs, that means establishing best practices around AI identities now. Provision agents and AI-based systems with access controls limiting them to only the data and applications needed for their specific tasks.

“We need to provision them with least-possible access and have controls set up so that we can quickly detect if an agent does go rogue,” Whitmore said.
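A detective counterpart to that provisioning is to compare each agent’s observed tool calls against its declared task profile and flag drift. The sketch below shows one illustrative way to do that; the profiles, burst threshold, and call-log format are assumptions, not a specific product’s telemetry.

```python
# A minimal sketch of detective controls: compare an agent's observed tool
# calls against its declared task profile and flag out-of-scope calls or
# unusual volume. Profiles and thresholds are illustrative assumptions.
from collections import Counter

DECLARED_PROFILE = {
    "contract-review-agent": {"read:contracts", "read:crm"},
}

def find_rogue_behavior(agent_id: str, recent_calls: list[str],
                        burst_threshold: int = 50) -> list[str]:
    findings = []
    declared = DECLARED_PROFILE.get(agent_id, set())
    for scope, count in Counter(recent_calls).items():
        if scope not in declared:
            findings.append(f"out-of-scope call: {scope} x{count}")
        elif count > burst_threshold:
            findings.append(f"unusual volume: {scope} x{count}")
    return findings

calls = ["read:contracts"] * 12 + ["read:customer_db"] * 3
print(find_rogue_behavior("contract-review-agent", calls))
# -> ['out-of-scope call: read:customer_db x3']
```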

The reality is that AI agents represent both tremendous opportunity and significant risk. The organizations that get this right will treat their AI agents like they treat privileged users – because that’s exactly what they are. The ones that don’t will be explaining to their board how an AI agent they deployed approved a million-dollar wire transfer to an attacker.

It’s your move. Make it count.

Ideas or Comments?

Share your thoughts with me on LinkedIn or X.