Thursday, December 4, 2025

Study: Big AI’s Safety Playbook Isn’t Up to Code

A global review finds major AI firms lag far behind emerging safety standards, raising concerns about superintelligent systems and industry resistance to regulation.

A new global review of artificial intelligence safety has delivered a stark verdict: the world’s leading AI developers are racing ahead on capability, but not on control.

According to the latest AI Safety Index from the Future of Life Institute, the practices at companies such as Anthropic, OpenAI, xAI and Meta fall “far short of emerging global standards,” despite rapid advances toward systems capable of human-level or even superhuman reasoning.

The evaluation — conducted by an independent panel of experts — found that none of the companies has a credible, comprehensive strategy for governing increasingly powerful models. The findings come amid rising public unease, following reported cases in which AI chatbots were linked to self-harm or suicide.

“Despite uproar over AI-powered hacking and AI driving people to psychosis and self-harm, U.S. AI companies remain less regulated than restaurants,” said Max Tegmark, MIT professor and president of the Future of Life Institute. “And they continue lobbying against binding safety standards.”

The institute, founded in 2014 and once supported by Elon Musk, has long warned about the risks of building superintelligent systems without guardrails. Those concerns crescendoed in October, when prominent scientists Geoffrey Hinton and Yoshua Bengio called for a moratorium on developing superintelligence until society can fully understand — and safely contain — it.

Industry responses were mixed. Google DeepMind said it is working to advance safety "at pace with capabilities." OpenAI stressed that it invests heavily in frontier safety research and rigorously tests its models. xAI sent only a terse automated reply: "Legacy media lies." Other major developers, including Anthropic, Meta, Z.ai, DeepSeek and Alibaba Cloud, declined to comment.

As governments worldwide begin crafting regulations for advanced AI, the report adds fuel to a growing debate: whether the companies building the technology are prepared — or even willing — to meet the standards required to keep it safe.
