1.7 C
Casper
Tuesday, May 5, 2026

You Gave Your AI Agent Access. Now, What Does It Have?

Must read

David Bellini
David Bellini
David Bellini is a co-founder and the Chief Executive Officer for CyberFOX. Serving as the Chief Operating Officer and working with his brother Arnie Bellini, the duo spun the ConnectWise software company out of their Tampa-based IT service provider more than four decades ago.

Copilots and agentic workflows promised to do the boring work. The part nobody mentioned is that the access you granted to make that happen never sleeps.

The pitch for AI agents is that they’ll do the boring work nobody has time for. Read the emails. Route the tickets. Summarize the meeting. Draft the code. The pitch is true. Nobody mentions that whatever access you gave those agents to make the pitch come true, they now have twenty-four hours a day.

That’s the part I keep coming back to in conversations with IT leaders. We’ve spent the last two years rolling out copilots, assistants, and agentic workflows inside email, ticketing, CRM, and source control. Every one of those integrations needed some level of access to live systems. And access was granted the way it always is at mid-sized companies. Broadly, quickly, and without a plan to revisit it.

Here’s the part people keep missing. AI inherits your access. If you can see it, the agent can see it. If you can do it, the agent can do it.

The Attack Surface Moved Inside the Walls

For most of my career, we talked about security in terms of the perimeter. Firewalls. VPNs. A trusted inside and a hostile outside. Agents kicked that model apart in about eighteen months. They live inside your mail, your tickets, your repos, your chat channels.

The way agents process information makes them easy to trick. A large language model doesn’t separate the content it’s reading from instructions buried inside that content. To the model, it’s all just tokens. If a user forwards a helpful-looking document to a summarization bot, and that document contains a hidden instruction to send the last ten customer records to an outside address, the bot might just do it. Researchers have already shown this works.

Put three capabilities in one agent, and you have a real problem. The ability to read untrusted content. The ability to communicate externally. The ability to touch sensitive data. Anyone is manageable. All three in the same agent, with broad permissions, is a breach waiting to happen. That’s the default shape of most copilots being rolled out right now.

Identity Is the New Pressure Point

Credential-based attacks are up 71 percent year over year, and that’s before you factor in agents. Agents can run password spraying and credential stuffing faster and more patiently than any human. Defenses built for human speed attacks get swamped.

Most mid-sized organizations also have identity systems that grew up in pieces. One for email. One for the file server. One for SaaS. One for the cloud. Access policies drift, permissions go uncorrelated across disconnected systems, and gaps appear where lateral movement can hide. Add a few hundred agents to that picture, and human review can’t keep up.

Every bot or agent needs to tie back to an identity. Every identity needs to tie back to credentials and a clearly defined scope of access. If you can’t draw that line from an action to an account to a permission, you can’t defend it. You can’t even audit it.

Least Privilege Works on Agents, Too

Least privilege has been in every serious security framework for decades. NIST 800-171. CIS Controls 5 and 6. The Australian Cyber Security Centre’s Essential Eight. UK Cyber Essentials. Microsoft’s own admin guidance. Give users, accounts, and services the minimum access they need. That rule applies more to agents, because agents scale. A human admin with too much access might log in once a day. An agent with too much access might make two thousand API calls before lunch.

The hard part is enforcing it without breaking production. I’ve watched this pattern play out with admin rights on Windows endpoints for years. IT strips privileges. Applications stop working. Users revolt. Management revolts. Privileges get quietly put back. Nothing changes.

Three things have to be true for least privilege to stick in an AI-heavy environment. First, no standing admin. No shared service credentials, no agents running with permanent admin rights because it was easier at setup time. Start with nothing, grant access only for specific tasks, for specific windows of time.

Second, granting access has to be fast. If your model forces a help desk ticket every time an agent needs to run a legitimate task, the agent will get worked around. People find shortcuts. So do agents, because they’re working toward the goal you gave them, not your security posture.

Third, every privileged action has to be logged and tied to an identity. Not “the AI assistant did this.” Which agent, acting on whose behalf, with what permissions, touching what data, and why? A missing log is why most investigations stall.

Where to Start This Week

Pick your highest exposure agent. Inventory what it can actually touch in practice, with the credentials it’s been given. Not what the vendor documentation says. Write it down. Then ask the uncomfortable question: Does this agent need all of that to do its job? If the answer is anything other than a confident yes, start reducing. One afternoon and a willingness to say no get you further than a six-month project.

Credential-based attacks won’t slow down because organizations are making good progress. They’ll slow down when attackers and the agents working for them can’t get anywhere useful with what they steal. That’s what least privilege does. It shrinks the blast radius when something goes wrong. And with agents in the mix, something is going to go wrong.

You can control the agent, or you can let the agent control you. There isn’t a third option.

More articles

Latest posts