Casper
Tuesday, February 3, 2026

Most Workers Use Unapproved AI to Meet Deadlines


A BlackFog survey finds 60% of employees would use shadow AI tools to finish work faster, raising security risks as companies prioritize speed over safeguards.

Corporate enthusiasm for artificial intelligence is colliding with security reality. A new report from cybersecurity firm BlackFog finds that a majority of employees are willing to bypass official channels and use unapproved AI tools if it helps them get work done faster.

According to the study, six in 10 workers said they would turn to so-called “shadow AI” applications to meet deadlines. At the same time, AI usage is becoming nearly ubiquitous: 86 percent of respondents reported using AI tools at least once a week, and more than a third said they rely on free versions of tools formally approved by their employers.

The pressure to adopt AI appears to be coming from the top. The report found that seven in 10 C-level executives are prepared to prioritize faster output over stronger security controls.

“The consistent story we have heard is that CEOs have mandated the adoption and use of AI and have allocated significant funds to do this, and this is taking precedence over security concerns,” said Darren Williams, founder and chief executive of BlackFog. “The efficiency gains are too large to ignore.”

Security Teams Struggle to Keep Up

The rush to deploy AI is leaving information security teams scrambling to catch up. As workers experiment with consumer-grade tools and personal accounts, corporate data protections are often bypassed.

The findings were based on a survey of 2,000 employees at companies in the United States and the United Kingdom, conducted by Sapio Research on behalf of BlackFog. Respondents were evenly split between the two countries.

The report highlights a growing dilemma for organizations: how to balance productivity goals with the need to protect sensitive information. Security leaders have repeatedly warned that AI adoption must be accompanied by clear guardrails and governance policies to prevent data leaks and other risks.


A Broader Pattern of Risk

BlackFog’s conclusions echo concerns raised by other cybersecurity firms. A separate report released earlier this month by Netskope found that many employees are accessing AI tools through personal accounts, effectively sidestepping corporate security protocols.

Such practices can expose confidential company information, create compliance problems, and increase the risk of follow-on cyberattacks.

The latest findings suggest that enthusiasm for AI’s potential is outpacing the controls needed to manage it safely.

For now, the tension between speed and security shows little sign of easing. As Williams put it, “AI is being embraced at all levels of the enterprise. The challenge is ensuring that adoption doesn’t come at the expense of protection.”
