On the gap between AI ambition and operational reality — and why closing it is less a technology problem than a leadership one.
Most enterprises have an AI strategy. Fewer have an AI reality.
The gap between the two — between the pilot that impressed the board and the production system that never quite arrived — is where Meng Khong Tong spends most of his working life. As Chief Executive of Sotatek USA, he advises organizations navigating the unglamorous middle distance of enterprise AI: past the proof-of-concept stage, short of meaningful scale, and increasingly uncertain about how they got there.
His diagnosis is consistent across clients and industries. The problem is rarely the technology. It is the organizational infrastructure that should surround it — the data readiness, the clarity of governance, the alignment on risk tolerance, and the willingness of leadership to put their names to accountability frameworks before something goes wrong.
In this conversation, Tong addresses the questions that enterprise AI conversations too often avoid: who actually owns the data, what accountability looks like when autonomous systems fail, and whether tightening regulation will concentrate AI’s benefits among those already best positioned to capture them. His answers are precise, occasionally uncomfortable, and grounded in the operational reality that most AI commentary skips past entirely.
Excerpts from the interview:
Enterprises talk about AI at scale, yet most remain stuck in pilot mode. Is the industry overestimating readiness—or underestimating execution complexity?
A bit of both, honestly. Many organizations still don’t have a clear picture of what outcome they actually want from AI. They underestimate how much foundational work is required before AI can scale: things like data availability, workflow visibility, documentation, governance, and even basic prompting capability within teams.
At the same time, some companies are aiming too big too quickly. We’ve seen cases where clients want to build AI-driven fraud detection systems that can learn transaction anomalies, scan applicant backgrounds, and automate decision-making, but internally they haven’t aligned on policies, rules, risk tolerance, or governance. So the project becomes a constant cycle of learning, redesigning, fixing, and reworking.
That’s the common trap today. The ambition is there, but operational readiness and organizational alignment are often not yet there.
Governance and compliance make AI safer—but also more expensive. Are we heading toward a future where only large enterprises can afford to do AI “right”?
I don’t fully agree with that. Governance and compliance are definitely necessary, but governance doesn’t always mean expensive technology. Sometimes, simple things like clear ownership, accountability, and a proper RACI matrix already solve many governance problems. I’ll come back to accountability in the last question.
If something breaks, you need to know whether the issue came from the data, the AI model, the guardrails, or the workflow itself. That clarity is more important than throwing more tools at the problem.
That said, large enterprises will naturally drive more advanced AI adoption because they have larger datasets, more inefficiencies and business use cases to address, more complex workflows, and larger budgets. SMBs will focus more on targeted AI point solutions that improve efficiency or reduce cost. And that’s perfectly fine. In both scenarios, AI plays a role.
AI needs deep access to data to deliver value, but that access creates risk. Where do you draw the line—and who enforces it?
The line should always be drawn around business necessity and explainability. Just because AI can access certain data doesn’t mean it should.
Organizations need to classify which data are truly required for the outcome they are pursuing. Sensitive data should be segmented, masked, or controlled with clear access boundaries and auditability.
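To make that concrete, here is a minimal Python sketch of field-level classification, masking, and access logging. The field names, the hash-based masking scheme, and the audit format are illustrative assumptions, not a prescription:

import hashlib
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_access_audit")

# Hypothetical classification for a fraud-detection use case: the fields
# the model actually needs, and the sensitive ones that must be masked.
REQUIRED_FIELDS = {"transaction_amount", "merchant_category", "country"}
SENSITIVE_FIELDS = {"account_number", "applicant_name"}

def mask(value: str) -> str:
    # Replace a sensitive value with a stable, non-reversible token.
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]

def prepare_record(record: dict, requester: str) -> dict:
    # Release only fields justified by business necessity, mask the
    # sensitive ones, and log who accessed what for auditability.
    released = {}
    for field_name, value in record.items():
        if field_name not in REQUIRED_FIELDS | SENSITIVE_FIELDS:
            continue  # not required for the outcome, so the AI never sees it
        if field_name in SENSITIVE_FIELDS:
            released[field_name] = mask(str(value))
        else:
            released[field_name] = value
    audit_log.info("requester=%s fields=%s", requester, sorted(released))
    return released

Calling prepare_record on a raw customer record would hand the model only the whitelisted fields, tokenized where sensitive, with an audit entry recording the access.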
The bigger challenge is not only technology, it’s also ownership. In many companies, no one is fully aligned on who owns the data, who approves the policies, or who is accountable when AI behaves unexpectedly. Without that clarity, governance quickly becomes fragmented.
In practice, enforcement should be shared between business owners, risk and compliance teams, and the technology platform itself. AI governance cannot and should not sit with IT alone.
As AI begins auditing AI, does accountability improve—or does it create a system no human fully understands?
It definitely improves scalability and monitoring, but it also blurs accountability if the governance model is unclear.
The important thing is to separate accountability for the system from accountability for every individual AI decision. Nobody manually reviews every trade in an algorithmic trading system today. Instead, leadership is accountable for the operating boundaries, surveillance mechanisms, and governance framework around that system.
The same principle applies to AI agents. If AI is auditing AI, then the organization needs clear policies, audit trails, escalation paths, and observability built into the architecture. Otherwise, complexity becomes overwhelming very quickly.
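As a rough sketch of what that could look like in code, assuming invented names and thresholds: an auditing layer that checks each agent decision against a permitted action set and escalates anything it cannot clear to a human queue.

from dataclasses import dataclass

@dataclass
class AgentDecision:
    agent_id: str
    action: str
    confidence: float  # the acting agent's self-reported confidence

@dataclass
class AuditResult:
    decision: AgentDecision
    passed: bool
    reason: str

CONFIDENCE_FLOOR = 0.8       # assumed threshold, set by governance, not IT
escalation_queue: list = []  # decisions a human must review

def audit(decision: AgentDecision, allowed_actions: set) -> AuditResult:
    # A second system checks the first. Anything it cannot clear is
    # escalated to a human rather than silently passed through.
    if decision.action not in allowed_actions:
        result = AuditResult(decision, False, "action outside permitted set")
    elif decision.confidence < CONFIDENCE_FLOOR:
        result = AuditResult(decision, False, "confidence below floor")
    else:
        result = AuditResult(decision, True, "within operating boundaries")
    if not result.passed:
        escalation_queue.append(result)  # explicit escalation path
    return result

The design point is the one Tong makes: the audit result, the reason, and the escalation are all recorded, so accountability attaches to the boundaries and the surveillance mechanism rather than to each individual decision.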
I don’t think we’re at a stage where humans completely lose understanding, but we are definitely moving toward environments where continuous monitoring and machine-readable governance become critical.
As customer journeys become more predictive and automated, where does personalization cross the line into a loss of user control?
That’s exactly why many people say the future will become even more human-centric despite the rise of AI. In the world of robotics and AI, people are yearning to interact with real humans more than ever.
AI is very good at optimization, automation, and prediction. But if companies over-automate everything, personalization eventually becomes artificial and repetitive. We already see this on social media today: a lot of AI-generated content feels template-driven and emotionally disconnected. It is cookie-cutter, mass-produced for speed and productivity rather than personalization and effectiveness.
The balance is important. Some parts of customer journeys can absolutely be automated, but moments that require trust, empathy, negotiation, or emotional understanding still need strong human involvement.
The risk is not AI itself. The risk is when organizations optimize too aggressively for efficiency and slowly remove meaningful human interaction from the experience. This is what solutions architects need to weigh with business stakeholders when designing an AI system.
Does stricter AI regulation enable better innovation—or quietly limit who gets to participate in it?
Probably both. Some level of regulation is necessary because AI is becoming too powerful to operate without guardrails, especially in industries like banking, healthcare, and public services. Regulation can actually increase trust and accelerate adoption if done correctly.
But there is also a risk that excessive regulation favors only large enterprises that can afford compliance, legal teams, and governance infrastructure. Smaller companies may struggle to keep up.
That’s why regulators need to strike a balance between protecting society and still allowing innovation to happen. Otherwise, AI innovation may become concentrated among only a few dominant players.
As AI systems begin making decisions independently, what does leadership accountability actually look like when things go wrong?
I think the biggest risk is not that nobody is accountable. The real risk is that the boundaries between business, risk, and technology accountability become ambiguous.
Each team can legitimately say, “I did my part,” and yet the system can still fail because the operating assumptions between those domains were never fully aligned.
For example, the business defines the operating goals, the risk team defines the acceptable boundaries, and the technology team implements the controls. But if there’s a gap between those layers, autonomous agents can make decisions that nobody explicitly intended or prohibited.
That’s why governance for agentic AI needs to become much more structured and machine-readable. Policies, guardrails, escalation rules, tool permissions, and audit trails all need to be codified and continuously monitored rather than reviewed periodically in committees.
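One way to picture “machine-readable governance” is policy-as-data that is checked on every tool call. The agent names, tools, and limits in this Python sketch are illustrative assumptions:

# A minimal sketch of machine-readable governance: the policy is data,
# enforced on every call rather than reviewed periodically in a committee.
# Agent names, tools, and limits below are illustrative assumptions.
POLICY = {
    "agent:credit_review": {
        "allowed_tools": {"read_application", "score_risk"},
        "max_transaction_usd": 10_000,
        "escalate_to": "risk_committee",
    },
}

def authorize(agent: str, tool: str, amount_usd: float):
    # Returns (allowed, reason) so every denial is itself auditable.
    rules = POLICY.get(agent)
    if rules is None:
        return False, "no policy defined for this agent"
    if tool not in rules["allowed_tools"]:
        return False, "tool not permitted; escalate to " + rules["escalate_to"]
    if amount_usd > rules["max_transaction_usd"]:
        return False, "amount over limit; escalate to " + rules["escalate_to"]
    return True, "authorized"

Because the policy lives in code rather than in a slide deck, a gap between what the business intended and what the agent is permitted to do surfaces as a failed authorization check, not as a post-incident dispute.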
In the future, leadership accountability will be less about reviewing every AI decision manually and more about being accountable for the system’s design, operating boundaries, assumptions, and the surveillance mechanisms around it.