
When the CEO’s Avatar Speaks, Who Is Actually Responsible?


Victor Cho
Victor Cho is the CEO of Emovid, where he explores how AI can support more authentic, emotionally intelligent communication. With a background in product innovation and digital leadership, he’s focused on building tools that help people connect more effectively, without losing the human touch.

AI governance focuses on model safety and the quality of training data. It ignores the bigger problem: how AI is quietly reshaping communication, accountability, and trust inside organizations.

When the CEOs of Zoom and Klarna used AI avatars to fill in for them during their companies’ earnings presentations, the backlash was immediate. Employees, investors, and the public felt it crossed a line: The polished, lifelike avatars delivered the information perfectly, but the gimmick left people wondering if these leaders really stood behind their avatars’ statements.

That reaction contains a lesson regulators haven’t yet absorbed.

Organizations are moving quickly to establish AI governance rules, and rightly so. Debates about model transparency, training data, safety guardrails, and algorithmic accountability are important, especially as AI tools change how we work and live. 

But inside organizations, the daily flow of communication matters more than model architecture or guardrails. AI is changing how decisions are documented, how leadership shows up, and how trust is built or quietly lost. As these systems spread, organizations will increasingly need mechanisms for verified human communication – clear assurance that a real person authored a message and stands behind it. Any policy that ignores that reality risks solving the wrong problem.

The Governance Gap No One Talks About 

I don’t see companies hesitating over AI. The tech has already quietly entered daily workflows: Encouraged by leadership, employees are freely using AI tools to draft messages, summarize meetings, and generate updates. And by the time those very leaders realize they need new policies to address this massive change in behavior, the habits are already ingrained. 

That leaves a governance gap that has little to do with the systems themselves. The new problem is how those systems are reshaping everyday communication.

When an AI tool summarizes a two-hour strategy meeting, it dutifully produces the minutes, outlining the points raised and the decisions made. But these tools often omit dissenting voices, unresolved tensions, and the caveats raised at the end.

The summary, therefore, becomes the official record, and decisions are made based on a compressed account of what was actually discussed. No one intended to obscure anything; the tool just did what it was designed to do.

Now consider this happening across thousands of organizations and hundreds of meetings every day, and it’s easy to envision a systematic erosion of how decisions are documented and who is held accountable.  

Model safety frameworks are not designed to address such issues. The problem is rooted in how communication is recorded and interpreted inside organizations, and regulation isn't addressing it.

Not All Communication Is the Same

To understand what AI governance needs to address, it helps to recognize something that’s obvious within organizations but rarely acknowledged in policy discussions: not all communication serves the same purpose.

Some communication is transactional. AI is well-suited for exchanges designed to move work forward, such as scheduling, status updates, support tickets, and meeting recaps. Automating a meeting summary or a follow-up email doesn’t damage trust; it removes friction.

Communication that shapes trust and relationships is different. When leaders explain strategy, managers deliver feedback, or organizations share sensitive news, the exchange is built on credibility. The people receiving these messages are asking a deeper question: Is there a real person standing behind what’s being said?

Trust starts eroding when leaders treat relational moments as if they were transactional – delegating the drafting to AI, the delivery to an avatar, or both. It's not always immediately visible, but over time, employees will start to question whether leadership cares.

Eight years ago, 60 percent of employees said they didn't trust their CEOs. By 2023, that number had climbed to 79 percent. AI isn't the only reason for that erosion, but it certainly accelerates the trend when leaders use it to automate the very moments when their presence matters most.

What Regulation Is Missing

Current AI policy discussions focus almost entirely on the systems themselves: how models are built, what data is used to train them, how outputs are audited, and where safety guardrails are warranted. Those are necessary questions. But they miss the layer where most governance problems actually surface: communication.

Workplace trust is not shaped by the model's output; it is shaped by what comes afterward: how that output moves through an organization, who it is attributed to, and whether decisions can be traced back to the people who made them.

AI regulation must evolve to consider communication environments, too. Specifically, it must address three things most frameworks ignore: 

  • Authorship transparency: People should know when AI has materially shaped communication. 
  • Documentation integrity: AI-generated records must be distinguishable from firsthand accounts.
  • Relational and verified human accountability: For certain categories of high-stakes communication, a human being must stand behind the message. This is the principle of verified human communication; a sketch of what such attestation could look like follows this list.
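
To make that last principle concrete, here is one way verified human communication could work in practice. The sketch below is purely illustrative, not an existing standard: it assumes a hypothetical message format and uses the open-source Python cryptography package for Ed25519 signatures, so that a named person signs both the message and a disclosure of whether AI helped draft it.

# Illustrative sketch only: a hypothetical record format in which a named
# human signs both the message body and an AI-assistance disclosure.
# Requires the third-party "cryptography" package (pip install cryptography).
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_message(author: str, body: str, ai_assisted: bool,
                 key: Ed25519PrivateKey) -> dict:
    """Bundle a message with its authorship disclosure and sign both."""
    record = {"author": author, "body": body, "ai_assisted": ai_assisted}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = key.sign(payload).hex()
    return record


def verify_message(record: dict, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the author's key signed this exact record."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Usage: a leader signs an AI-drafted update, disclosing the assistance;
# anyone holding the public key can verify both the author and the flag.
ceo_key = Ed25519PrivateKey.generate()
update = sign_message("Jane Doe, CEO", "Q3 strategy update ...",
                      ai_assisted=True, key=ceo_key)
assert verify_message(update, ceo_key.public_key())

The design choice worth noting: the AI-assistance flag sits inside the signed payload, so a leader cannot later add or remove the disclosure without invalidating the signature.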

Who Owns This?

For business leaders, the first practical step involves assigning ownership. Someone must decide when human presence will trump automation. That responsibility forms what I would call the trust function, whether it exists as a formal role, like a Chief Trust Officer, or not. Without someone taking up that ownership, organizations will default to efficiency, and efficiency alone will automate away the moments that matter most.

For regulators, the ask is simple to state but harder to act on: stop designing policy as if AI systems' outputs are the only thing that matters. Start asking what those systems are doing to the communication environments people rely on to make decisions, identify responsibility, and maintain trust.

AI will keep advancing, and its adoption is inevitable. What matters is whether, when a message arrives, a decision is recorded, or a leader speaks, a person can be identified as the one responsible.

That’s not a technical problem; it’s a governance problem. Right now, no one owns it.
