
The Missing Layer in AI’s Enterprise Ambition


Khushbu Raval
Khushbu is a Senior Correspondent and content strategist with a particular focus on DataTech and MarTech. A keen researcher in the tech domain, she is also responsible for strategizing social media scripts to streamline collateral creation.

As AI agents scale inside enterprises, OPAQUE CEO Aaron Fulkerson argues that cryptographic proof—not policy—will determine who can deploy AI safely.

Artificial intelligence has advanced at a pace that governance frameworks were never designed to match. Models improve quarterly. Agents deploy weekly. Boards meet monthly. And somewhere in between, enterprise leaders are being asked to trust systems they cannot fully see.

Aaron Fulkerson, Chief Executive Officer of OPAQUE, believes that gap — between capability and proof — is now the defining challenge of enterprise AI.

“I’ve spent 20 years watching the same pattern repeat,” Fulkerson said. “The early internet needed SSL certificates before e-commerce could scale. Cloud computing needed SOC 2 and ISO certifications before enterprises would move workloads off-premise. The technology was ready years before the trust infrastructure caught up. AI is in that same gap right now.”

The difference, he argues, is speed.

AI agents are not waiting for governance frameworks to mature. They are already reading emails, executing commands and operating with system privileges once reserved for human employees. Tools like Anthropic’s Claude Cowork now extend that reach into local file systems and external services — a capability that is both transformative and destabilizing.

“That’s powerful,” Fulkerson said. “It’s also a fundamentally new trust problem.”

The Trust Chasm

Surveys from consulting firms and cybersecurity vendors point to a consistent theme: executives are intrigued by AI’s promise but constrained by its opacity. Data privacy, ethical concerns and board-level scrutiny continue to stall broader deployment. According to Fulkerson, the issue is not cultural hesitation or regulatory overreach. It is architectural.

“Culture and regulation matter,” he said. “But they’re downstream of the architectural problem. Fix the architecture, and the trust follows.”

He points to recent episodes in the open-source AI community, where popular autonomous agents rapidly amassed users before researchers uncovered widespread data exposure — leaked API keys, exposed credentials, and unauthorized data exfiltration. Those were consumer-grade incidents. The enterprise version, he warns, operates on similar technical foundations but with far higher stakes.

“Every agent is a new identity, a new access path, and a new attack surface that traditional security tools can’t see,” he said.

The result is a paradox: companies are building increasingly capable AI systems while leaving vast troves of sensitive enterprise data untouched. Fulkerson estimates that hundreds of billions of dollars’ worth of high-value data remains unused, not because models are inadequate, but because organizations lack a trusted way to process it.

Trust as Architecture

Fulkerson has seen this pattern before. At ServiceNow, where he helped scale one of the company’s fastest-growing product lines, the lesson was less about speed to market than about overcoming institutional resistance.

“Change is uncomfortable,” he said. “We succeeded because governance, security, role-based access and auditability weren’t afterthoughts. They were built into the architecture from day one.”

When trust is embedded, adoption accelerates. When it is bolted on later, he argues, organizations stall in political debates and incremental compromises.

The same logic, he believes, applies to AI.

“Organizations that treat security and privacy as a Phase 2 problem will never get to Phase 2,” he said. “The ones embedding verifiable guarantees into their AI stack from the start are the ones who’ll scale.”

From Policy to Proof

What distinguishes OPAQUE’s approach is its emphasis on what Fulkerson calls “verifiable guarantees.” Traditional enterprise security, he argues, operates on intent: access controls, configurations and policies that assume compliance.

“You set up the rules and hope everyone follows them,” he said.

That model falters when autonomous agents operate at machine speed. Instead of assuming systems behave as configured, Fulkerson advocates cryptographic verification — the ability to prove what code executed, under which constraints, and what data was accessed.

“Not ‘we configured it correctly.’ Not ‘we have a policy,’” he said. “Mathematical verification. Cryptographic proof.”

The arithmetic of risk compounds quickly. Even a small per-agent probability of exposure adds up to a substantial chance of at least one incident once an organization deploys dozens or hundreds of agents. In such environments, Fulkerson argues, policy controls alone are insufficient.

“You can’t policy-control your way out of that,” he said. “You need mathematical guarantees.”
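A back-of-the-envelope calculation makes the compounding concrete. The short Python sketch below assumes agents fail independently and uses an illustrative 1% per-agent exposure probability; neither figure comes from Fulkerson or OPAQUE.

```python
# Back-of-the-envelope: probability that at least one agent in a fleet causes
# an exposure, assuming independent failures. The 1% per-agent figure is
# purely illustrative.

def fleet_exposure_probability(per_agent_p: float, num_agents: int) -> float:
    """P(at least one exposure) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - per_agent_p) ** num_agents

for n in (1, 10, 50, 100, 500):
    p = fleet_exposure_probability(0.01, n)
    print(f"{n:>3} agents -> {p:.1%} chance of at least one exposure")

# Approximate output: 1 -> 1.0%, 10 -> 9.6%, 50 -> 39.5%, 100 -> 63.4%, 500 -> 99.3%
```

At that illustrative rate, a hundred-agent deployment is more likely than not to suffer at least one exposure, which is the intuition behind Fulkerson’s argument that policy alone cannot hold the line.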

Confidential AI as Competitive Advantage

Many companies frame confidential computing as a defensive measure — a way to reduce risk. Fulkerson sees it differently.

“It’s actually an unlock,” he said.

Without verifiable guarantees, enterprises limit AI autonomy and restrict access to their most valuable workflows. With runtime proof of policy enforcement and data protection, companies can deploy more broadly and extract value from assets competitors are too cautious to touch.

“That’s not managing risk,” he said. “That’s removing stagnation.”

OPAQUE’s model embeds confidentiality into execution itself. Before runtime, it verifies configuration and integrity. During execution, it enforces cryptographic policies and isolates workloads. After execution, it generates hardware-signed audit logs that demonstrate what ran and how data was handled.

“The old trade-off between security and speed is a false choice,” Fulkerson said. “Confidential AI eliminates it.”
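The before, during, and after sequence Fulkerson describes can be sketched in a few dozen lines. The Python below is a conceptual illustration under loose assumptions: a hash stands in for a hardware attestation measurement, an HMAC stands in for a hardware-rooted signature, and none of the names correspond to OPAQUE’s product or API.

```python
# Conceptual sketch of a verify / enforce / attest workflow for a confidential
# AI workload. Hashing and HMAC stand in for hardware attestation and signing;
# every name here is a placeholder, not a real product API.

import hashlib
import hmac
import inspect
import json

SIGNING_KEY = b"demo-root-of-trust"  # stand-in for a hardware-held signing key

def summarize(record: dict) -> dict:
    """The 'AI workload': a toy summarizer acting on an allowed field."""
    return {"summary": f"customer in {record['region']}"}

def measure(fn) -> str:
    """Measurement: a digest of the exact code that will run."""
    return hashlib.sha256(inspect.getsource(fn).encode()).hexdigest()

def verify_before(fn, expected: str) -> None:
    """Pre-runtime: refuse to release data unless the workload matches the
    measurement the data owner approved."""
    if measure(fn) != expected:
        raise RuntimeError("attestation failed: unexpected code measurement")

def enforce_during(fn, data: dict, allowed: set) -> dict:
    """Runtime: the workload only ever sees the fields the policy allows."""
    return fn({k: v for k, v in data.items() if k in allowed})

def attest_after(measurement: str, allowed: set, output: dict) -> dict:
    """Post-runtime: emit a signed record of what ran, under which policy,
    and what it produced, so auditors get proof rather than configuration."""
    payload = json.dumps(
        {"code": measurement, "policy": sorted(allowed), "output": output},
        sort_keys=True,
    ).encode()
    return {
        "payload": payload.decode(),
        "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest(),
    }

approved = measure(summarize)
verify_before(summarize, approved)                                            # phase 1
result = enforce_during(summarize, {"region": "EU", "ssn": "000"}, {"region"})  # phase 2
print(attest_after(approved, {"region"}, result))                             # phase 3
```

The point of the sketch is the shape of the flow rather than the mechanics: data is released only to measured code, the policy is applied at the moment of access, and the audit trail is signed evidence rather than a configuration screenshot.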

Sovereignty and the Next Frontier

The language of “sovereign AI” has entered policy debates in Europe, North America and Asia, often centered on data localization and hardware capabilities. Fulkerson argues that hardware encryption alone is insufficient.

“It’s like having a locked room with no security camera and no sign-in sheet,” he said. “The room is secure, but you can’t prove who went in, what they did, or what they took.”

True sovereignty, in his view, requires lifecycle verification — workload attestation, model validation, enforceable policies and auditability. Post-quantum cryptography will eventually strengthen these foundations, but the immediate gap is more basic.

“Most organizations have no verifiable proof of what’s happening inside their AI stack right now,” he said.

In enterprise AI, ambition is abundant. What remains scarce is proof.

And as Fulkerson suggests, in environments where intellectual property, customer data and regulatory exposure are on the line, hope is not a control mechanism. It is a liability.
