
UK AI CEO Warns No Model Can Fully Stop Explicit Content


Locai Labs CEO James Drayson tells MPs no AI can guarantee safety from explicit images, urging tougher rules and British-built models.

James Drayson, chief executive of Locai Labs, has warned that no technology company can guarantee its artificial intelligence systems will never generate explicit or harmful images—accusing Silicon Valley rivals of downplaying a problem that is growing more acute.

Drayson, the son of former science minister Lord Drayson, is due to appear before MPs on Wednesday as part of an inquiry by Parliament’s Human Rights and the Regulation of AI Committee into the risks AI poses to privacy, safety, and human rights.

His remarks follow a series of controversies highlighting the darker side of generative AI. A recently released image-editing feature in Grok enabled users to manipulate images of women and children, including private individuals and public figures, placing them in sexualised or violent scenarios. In the United States last year, the family of 14-year-old Sewell Setzer III linked his death to alleged manipulation by an AI chatbot, intensifying scrutiny of the technology's psychological impact.

Parliament’s inquiry is examining whether existing UK laws are sufficient to regulate AI developers or whether new legislation is required to ensure accountability as the technology advances.


A British Challenger Stakes Its Ground

Launched last year by brothers James and George Drayson, Locai positions itself as the UK’s answer to ChatGPT. The company claims its model already outperforms US rivals on several benchmarks, even as public trust in AI is shaken by high-profile safety lapses.

Against that backdrop, Drayson says Locai is deliberately taking a more cautious path. The company has declined to release image-generation tools until they can be deployed safely, has barred under-18s from accessing its chatbot, and is calling for greater transparency across the industry.

“It’s impossible for any AI company to promise its model can’t be tricked into creating harmful content, including explicit images,” Drayson said. “These systems are powerful, but they’re not foolproof. The public deserves honesty.”

He added that Locai is “openly working to fix these problems, not pretending they don’t exist,” arguing that acknowledging risk is a prerequisite to managing it.


Drayson also warned that the UK’s growing dependence on foreign-built AI systems risks importing values that do not align with British laws or social norms.

“We need our own models, built for Britain, with British ethics and regulation at their core,” he said. “That’s how we protect our rights—and our children.”

While acknowledging that AI is here to stay, Drayson said the challenge for policymakers and industry alike is to ensure the technology becomes safer, fairer, and more trustworthy than it is today. “The question isn’t whether AI will shape our future,” he said. “It’s whether we’re willing to shape AI responsibly in return.”
