Monday, March 30, 2026

Most Boards Were Built for a Pre-AI World. The Bill Is Coming Due.

Martin Rowinski
Martin Rowinski is CEO of Boardsi and an author and keynote speaker focused on executive branding and board leadership. He helps senior leaders translate experience into long-term influence.

As AI governance shifts from white papers to boardrooms, companies with the wrong board composition are accumulating risk they won’t fully understand until it’s too late.

There are inflection points when a risk stops being theoretical and becomes operational. When it moves out of white papers and into real decisions that carry consequences.

The recent standoff between Anthropic and the Pentagon was one of those moments. This was not simply a disagreement over deployment or safeguards. It was a clear signal that AI governance has arrived in the boardroom. Not as a future agenda item, but as an immediate responsibility.

And many boards are not prepared for it.

The issue is not awareness. Nearly every board today understands that artificial intelligence represents both opportunity and risk. Approximately 72 percent of S&P 500 companies now identify AI as a material business risk. Yet only about 35 percent of boards have integrated AI into their oversight activities in a meaningful way.

That gap is often misunderstood. It is not a knowledge problem. It is a composition problem.

The Wrong Table for the Right Conversation

Most boards were assembled for a different operating environment. One where disruption was more predictable, where technology adoption followed a clearer curve, and where risks could be contained within functional boundaries. AI does not behave that way. It cuts across the enterprise simultaneously. It impacts legal exposure, operational execution, cybersecurity, product integrity, and reputation in ways that are interconnected and often difficult to isolate.

Despite this, many boards are still trying to manage AI within existing structures. Oversight is delegated to committees. Briefings are scheduled. External advisors are brought in to educate. These are rational steps. They are also insufficient. 

Because governance is not about being informed. It is about being able to challenge. And challenge requires experience.

Why Education Will Not Close the Gap

There is a growing reliance on AI literacy initiatives at the board level. Workshops, briefings, and structured education programs are becoming common. They have value. But they are not a substitute for lived experience.

You cannot compress years of operating judgment into a series of presentations. You cannot replicate the pattern recognition that comes from building or scaling AI systems in real environments. And you cannot expect a board to effectively challenge management on AI strategy if no one at the table has been accountable for those outcomes.

This is where I see strong companies make avoidable mistakes. They invest in educating the board they have, instead of questioning whether they have the right board in the first place. They treat AI as a topic to be added, rather than a capability that must be represented. That distinction matters.

The Coming Exposure

Over the next two years, the market will begin to reset its expectations. A new class of companies is approaching the public markets. Organizations like OpenAI and Anthropic are not simply technology companies. They are AI-native businesses, built with entirely different risk profiles and governance requirements. Their boards will reflect that reality. They will be evaluated not only on independence and diversity, but on technical credibility and the ability to oversee systems that operate with autonomy and complexity far beyond traditional software.

This creates a new benchmark. And governance is always relative to the benchmark.

When that standard becomes visible, the gap between AI-literate boards and traditional boards will not be subtle. It will show up in investor scrutiny, in regulatory attention, and ultimately in valuation.

Regulation Will Accelerate the Divide

At the same time, regulation is moving quickly. Frameworks such as the EU AI Act, along with emerging U.S. guidelines, are shifting AI oversight from a strategic consideration to a fiduciary expectation. Boards will not only need to demonstrate that they discussed AI risks. They will need to show that they understood them and governed them appropriately.

That raises a simple question. How does a board demonstrate effective oversight of a domain where it has no direct experience? At a certain point, process is no longer enough. Competence becomes the standard.

What the Right Board Looks Like Now

An AI-literate board does not require every director to be a technologist. But it does require intentional composition.

First, it includes operators. Individuals who have built, deployed, or scaled AI systems and understand their real-world implications. Second, AI oversight is integrated across the board’s work. It is not isolated within a single committee. It informs risk, compensation, and strategic decision-making. Third, the dialogue changes. The questions become more precise. Not whether an AI strategy exists, but where it breaks. How outputs are validated. Where the organization is most exposed.

Those conversations only happen when the right experience is present.

The Compounding Effect of Getting It Right

Board composition is not a static decision. It compounds over time. Boards that move early to integrate AI-relevant expertise will make better decisions, allocate capital more effectively, and operate with greater clarity. Boards that delay will find themselves reacting to events rather than shaping outcomes. In a cycle moving this fast, that difference becomes material very quickly.

Most boards were not built for this moment. But they are being evaluated in it. And the consequences of those earlier composition decisions are no longer theoretical. 

They are now showing up in real time.