Tuesday, February 3, 2026

Corporate AI Tools Remain Highly Vulnerable, Report Says

A Zscaler study finds that most enterprise AI systems fail security tests within minutes, even as adoption surges across industries and companies feed the tools ever more sensitive data.

Artificial intelligence systems are spreading rapidly through corporate networks—but many remain alarmingly fragile from a security standpoint.

That is the central conclusion of a new threat report from cybersecurity firm Zscaler, which warns that enterprise AI platforms are highly susceptible to attacks even as organizations rely on them for more critical functions.

The report found that companies are feeding far greater volumes of sensitive information into AI tools, effectively creating an “expanding target” for cybercriminals worldwide. Zscaler urged organizations to strengthen visibility, real-time defenses, and governance controls to reduce risk.

Systems Fail Fast Under Pressure

One of the most striking findings involves how quickly AI systems can break down under adversarial testing.

“They break almost immediately,” Zscaler researchers wrote. “When full adversarial scans are run, critical vulnerabilities surface within minutes—and sometimes faster.”

During red-teaming exercises across 25 corporate environments, Zscaler found that AI platforms experienced their first major failure after a median of just 16 minutes. Within 90 minutes, 90 percent of systems had failed. In one case, a system collapsed after only one second of testing.

Failures ranged from biased or off-topic responses to faulty URL verification and privacy violations. “Models can still be coerced into exposing sensitive data or participating in harmful workflows,” the report noted.

In 72 percent of environments, Zscaler uncovered a critical vulnerability during the very first test of an AI system.

The implication for chief information security officers is clear: risk is present from day one, even in otherwise mature security programs. The company recommends continuous testing and strict governance as essential safeguards.

A Surge in AI Use—and Oversight

Despite the vulnerabilities, Zscaler’s data also reflects how deeply AI is becoming embedded in business operations.

The firm analyzed nearly one trillion AI-related data transactions across its cloud platform in 2025—specifically 989.3 billion, a 91 percent increase from 2024. Activity spanned more than 3,400 different AI tools, highlighting the breadth of adoption.

Encouragingly, organizations appear to be taking governance seriously. About 40 percent of attempted AI transactions were blocked by corporate security policies, a sign that many companies are trying to balance innovation with risk management.

“Governance is in action,” the report said, as leaders work to manage the tradeoffs between rapid AI experimentation and the need for tighter controls.

Where AI Is Growing Fastest

Use of AI tools was most concentrated in North America and India. The United States accounted for 38 percent of AI transactions in 2025, followed by India at 14 percent and Canada at 5 percent.

By industry, finance and manufacturing continued to lead adoption for the third consecutive year, representing 23 percent and 20 percent of AI activity, respectively.

Proceeding With Caution

Zscaler’s report underscores a central tension in the AI boom: companies are moving quickly to harness new capabilities, but many systems are not yet ready for secure, large-scale deployment.

The firm’s advice to enterprises is straightforward: invest in continuous testing, enforce consistent controls, and maintain real-time visibility into how AI tools are being used.

Without those safeguards, the very technologies designed to boost productivity could become some of the weakest links in corporate defenses.
