
Healthcare Is Scaling AI Without the Infrastructure to Manage It

Errol Weiss, CSO of Health-ISAC

Healthcare organizations are adopting AI faster than they’re learning to manage it. According to Menlo Ventures, the share of health organizations implementing domain-specific AI tools increased sevenfold last year alone.

AI vendors have flooded the market, and pilot programs have multiplied. Clinical and administrative teams are spinning up AI tools to solve problems, often without anyone tracking what’s being deployed, where, or why. 

That’s a governance problem, and in healthcare, it’s more serious than in most sectors. When left unmanaged, AI can touch patient data, clinical workflows, and regulatory standing simultaneously. A single ungoverned tool can compromise all three before anyone can identify an issue. 

What concerns me is that this is one of the most consistent shortcomings I see across organizations in the health sector. A few organizations have baked governance into their new systems from the start and are scaling without major incidents.

Others haven’t, and the bill is coming due. 

When No One Is Keeping Track, Everything Is a Surprise

AI sprawl tends to follow a predictable pattern. A team identifies a problem, finds a tool that addresses it, and the tool works well enough that word spreads. Six months later, it’s embedded across multiple departments, touching patient data, informing clinical decisions, and generating outputs that no one has formally reviewed.

Nobody set out to create a governance problem. It just happened because there was no process to prevent it. 

This is an inventory issue at heart. If you don’t have a current and accurate list of all the AI systems in your environment, you cannot assess your risk posture. Every unvetted AI tool expands your attack surface, and unlike most threats, this one is self-inflicted. 

The inventory problem is also what gives shadow AI its teeth, a risk distinct from the shadow IT problem most security teams are familiar with. An unsanctioned SaaS tool is containable, but AI systems operating outside formal frameworks are not. The large and small language models underpinning these tools can suffer algorithmic drift, bias, and compounding errors that may not surface until they’ve already influenced a significant number of decisions.

In healthcare, that means delayed or incorrect diagnoses, biased treatment recommendations, and data exposures that trigger regulatory action. The tool nobody tracked becomes the incident everybody has to explain: first to regulators, then to the press. 

The Cost of Skipping Risk Assessment 

The argument for moving fast is understandable. Pilots are low-stakes by design, and in a resource-constrained environment, it’s easy to defer additional processes. But there’s a difference between moving quickly and skipping the steps that sustain that speed.

A risk assessment is a set of questions: What does this tool do? What data does it access? Who is accountable for it? What decisions does it inform, directly or indirectly? What happens when it fails or produces a false output? 

Any security or operations team would ask similar questions before deploying any significant system. AI tools shouldn’t get a pass just because they arrive through informal channels.
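
To make those questions concrete, here is a minimal sketch of how they might be captured as a structured intake record, assuming a Python-based process. The field names and the "ambient-scribe" example are hypothetical illustrations, not a prescribed standard.

from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    tool_name: str
    purpose: str                   # What does this tool do?
    data_accessed: list[str]       # What data does it access?
    accountable_owner: str         # Who is accountable for it?
    decisions_informed: list[str]  # What decisions does it inform?
    failure_impact: str            # What happens when it fails?

    def is_complete(self) -> bool:
        # A tool with unanswered questions should not clear review.
        return all([self.purpose, self.data_accessed, self.accountable_owner,
                    self.decisions_informed, self.failure_impact])

review = AIRiskAssessment(
    tool_name="ambient-scribe",
    purpose="drafts visit notes from recorded encounters",
    data_accessed=["PHI", "audio recordings"],
    accountable_owner="",  # left unanswered, so the review fails
    decisions_informed=["clinical documentation"],
    failure_impact="incorrect notes entered into the patient record",
)
print(review.is_complete())  # False until every question has an answer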

The lifecycle matters as much as the initial vetting. A tool that clears a reasonable review on day one is not automatically safe on day 180: models can drift out of alignment with their original context, vendors can push system-breaking updates, and regulatory requirements will evolve.

More than 700 large healthcare data breaches are reported to HHS each year, and third-party vendors consistently top the list of sources exposing data. Every AI vendor in your environment is a third-party relationship, which is why your AI tools require ongoing monitoring, clear contractual expectations, and someone at your organization who is responsible for fixing systems when they break. 

Practical Steps for Any Organization

Healthcare organizations already have the processes to accomplish most of what’s needed. The work is applying them consistently to AI. 

Start by taking inventory. Ask teams what AI tools they’re using, including the free ones, consumer apps used for work, and anything that touches patient data or clinical workflows. In most organizations, the actual scope of AI use is larger than leadership believes. 

Once you have the inventory, classify it. A scheduling assistant and a clinical decision-support tool do not carry the same risk, and should not be evaluated the same way. Triage by potential impact and scrutinize proportionally. 
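
One way to operationalize that triage, sketched here under the assumption of a simple Python inventory: record what each tool touches and derive a risk tier from potential impact. The tiers, fields, and example tools are illustrative assumptions, not a formal taxonomy.

from dataclasses import dataclass

@dataclass
class InventoryEntry:
    tool_name: str
    touches_phi: bool            # accesses protected health information
    informs_clinical_care: bool  # influences diagnosis or treatment
    vendor: str

def risk_tier(entry: InventoryEntry) -> str:
    if entry.informs_clinical_care:
        return "high"    # clinical decision support gets the deepest scrutiny
    if entry.touches_phi:
        return "medium"  # patient data exposure risk
    return "low"         # e.g., a scheduling assistant

tools = [
    InventoryEntry("scheduling-assistant", touches_phi=False,
                   informs_clinical_care=False, vendor="VendorA"),
    InventoryEntry("sepsis-risk-model", touches_phi=True,
                   informs_clinical_care=True, vendor="VendorB"),
]
for tool in tools:
    print(tool.tool_name, "->", risk_tier(tool))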

Assign ownership. Every AI deployment must be owned by someone accountable, not just for the vendor relationship, but also for monitoring performance and flagging changes over time. 

Build monitoring in from the beginning. Define what an ideal deployment looks like, and schedule formal checkpoints to verify integrity. Put it in the vendor contract and make it an internal expectation as well. 
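
As a rough sketch of what such a checkpoint could look like, the snippet below compares a model’s current performance against the baseline recorded at deployment and flags drift for the accountable owner. The accuracy metric, tolerance, and tool name are hypothetical placeholders; the real metric and threshold belong in the vendor contract.

from dataclasses import dataclass
from datetime import date

@dataclass
class Checkpoint:
    tool_name: str
    checked_on: date
    baseline_accuracy: float  # recorded when the deployment was vetted
    current_accuracy: float   # measured against a held-out review set

    def drift_flagged(self, tolerance: float = 0.05) -> bool:
        # Flag when performance degrades past the agreed tolerance.
        return (self.baseline_accuracy - self.current_accuracy) > tolerance

checkpoint = Checkpoint("sepsis-risk-model", date.today(), 0.91, 0.83)
if checkpoint.drift_flagged():
    print(f"{checkpoint.tool_name}: drift detected, escalate to the owner")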

Finally, use your network. ISACs, peer organizations, and sector working groups exist to help their members avoid solving the same problems repeatedly. Healthcare is not short on AI governance frameworks, risk assessment templates, or hard-won lessons. Organizations that draw on that collective wisdom will move faster and more safely than those building from scratch. 

The Window for Getting This Right Is Narrowing

Healthcare organizations aren’t going to slow their adoption of AI tools, and more are coming every day, so the window to get governance right is narrowing. Each month without a framework is another month of unvetted tools in the environment, unmonitored vendors in the supply chain, and decisions being made by systems nobody is watching.

Companies that build strong governance models now will scale their AI applications more effectively, safely, and with far less exposure than those still operating on pilot logic. This is not an argument against moving fast, but for building a tech stack that can manage the strain of moving fast.
