1.1 C
Casper
Tuesday, February 3, 2026

AI Agents Drive Rising Insider Cybersecurity Threats

Must read

A new Akati Sekurity report warns that AI agents are linked to 40% of insider threats, exposing businesses to risks that current security tools are not ready to manage.

Artificial intelligence is reshaping corporate security risks in unexpected ways. A new report from managed security service provider Akati Sekurity finds that AI agents are now involved in 40 percent of insider cybersecurity threats, signaling a fast-growing challenge for businesses.

The problem is compounded by scale. On average, non-human digital identities outnumber human users by 144 to one, creating an attack surface that many IT teams, service providers and vendors are not equipped to defend, Akati CEO Krishna Rajagopal told Channel Dive.

“Partners are focused on making sure that large language models are secure and assessing the security of the MCP server,” Rajagopal said. “But there is this little worm — the agentic agent — that can go rogue. And if that happens, most MSPs and MSSPs currently don’t have an answer for it.”

Also Read: Why AI Detection and Response is the New Baseline for Survival

A New Kind of Insider Threat

Akati’s findings shift the conversation around AI security from external attacks to internal vulnerabilities. While cybercriminals’ use of generative AI for phishing and social engineering is well documented, the report warns that attackers are increasingly targeting the autonomous agents operating inside organizations.

“If you’ve got a generative AI implementation with GPUs running in the cloud, attackers want to piggyback on that and use it to run their own queries,” Rajagopal said.

A foiled cyber-espionage campaign last fall offers a glimpse of what could come. Hackers linked to a state-affiliated group breached Anthropic’s Claude AI coding agent and attempted to infiltrate more than two dozen organizations using the platform. Rajagopal believes the effort was likely a proof-of-concept.

“I think they were testing the waters to see what they could potentially do, and at what scale and speed, should they attempt another SolarWinds-type supply chain attack,” he said.

Security Models Built for Humans, Not Bots

The 2020 SolarWinds breach devastated many managed service providers that relied on the platform for IT management. Rajagopal warns that AI agents could become a similar weak link if companies fail to adapt.

Existing security models were built around people, not machines. “Our pricing models have always been per employee or per device,” he said. “But with this explosion of non-human identities, service providers need to rethink how they protect organizations.”

He argues that traditional user behavior analytics must evolve into agent behavior analytics, with tools designed to monitor and control AI systems the way companies currently oversee employees.

Also Read: Data Privacy Day Isn’t a Celebration. It’s an Indictment.

A Roadmap for Defense

Akati outlined a 12-month roadmap to help organizations reduce rogue-agent risks. In the first 30 days, companies should inventory all non-human identities, audit high-privilege agents and implement blocklists for risky prompts. Within 60 days, they should deploy logging systems to track agent decisions, establish incident-response procedures and limit agents to just-in-time access.

Rajagopal also urged providers to study the MITRE Atlas Framework, which maps how future insider threats may arise from misplaced trust in AI systems rather than from malicious employees.

“This attack chain is going to blow up,” he warned. “You’re going to see a lot more of it in 2026.”

The message is clear: as AI agents multiply across enterprises, security strategies must evolve just as quickly — or risk being outpaced by the very tools meant to improve productivity.

More articles

Latest posts