Proper AI Regulation Boosts Progress, Says Scientist

Microsoft’s chief scientist argues that AI regulation, done properly, can speed up innovation, challenging the Trump administration’s proposed 10-year ban on state-level AI laws.

Microsoft’s chief scientist has said that regulation, if “done properly”, could actually accelerate advances in artificial intelligence rather than hinder them.

Dr Eric Horvitz, a former technology adviser to Joe Biden, said it was up to scientists to communicate to governments that guidance and controls could potentially speed up progress.

His comments follow a proposal from the Trump administration for a 10-year ban on US states creating “any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems”.

The proposal is driven in part by White House fears that China could otherwise win the race to human-level AI, and in part by pressure from tech investors such as Andreessen Horowitz, an early investor in Facebook, which argues that consumer uses of AI should be regulated rather than research efforts. Its co-founder, the Trump donor Marc Andreessen, said earlier this month that the US was in a two-horse race for AI supremacy with China. The US vice-president, JD Vance, recently said: “If we take a pause, does [China] not take a pause? Then we find ourselves … enslaved to [China]-mediated AI.”

Speaking at a meeting of the Association for the Advancement of Artificial Intelligence last week, Horvitz said: “It is up to us as scientists to communicate to government agencies, especially those right now who might be making statements about ‘no regulation, this is going to hold us back’. Guidance, regulation… reliability, controls, are part of advancing the field, making the field go faster, in many ways.

“We need to be very cautious about jargon and terms like regulation or bumper stickers that say no regulation because it’s going to slow us down. It can speed us up done properly. We should be cautious and care and communicate to governments about that.”

Horvitz said he was already concerned about “AI being leveraged for misinformation and inappropriate persuasion” and for its use “for malevolent activities, for example, in the biological hazard space”.

Horvitz’s comments came despite reports that Microsoft is part of a Silicon Valley lobbying push, alongside Google, Meta and Amazon, to support the ban on individual US states regulating AI for the next decade, a measure included in Trump’s budget bill now passing through Congress.

Microsoft is part of a lobbying drive to urge the US Senate to enact a decade-long moratorium on individual states introducing their own efforts to legislate, the Financial Times reported last week. The ban has been written into Trump’s “big beautiful bill” that he wants passed by Independence Day on 4 July.

Speaking at the same seminar as Horvitz, Stuart Russell, the professor of computer science at the University of California, Berkeley, said: “Why would we deliberately allow the release of a technology which even its creators say has a 10% to 30% chance … of causing human extinction? We would never accept anything close to that level of risk for any other technology.”

The apparent contradiction between Microsoft’s chief scientist and reports of the company’s lobbying effort comes amid rising fears that unregulated AI development could pose catastrophic risks to humanity and is being driven by companies prioritising short-term profit.

Microsoft has invested $14bn (£10bn) in OpenAI, the developer of ChatGPT, whose chief executive, Sam Altman, predicted this week: “In five or 10 years we will have great human robots and they will just walk down the street doing stuff … I think that would be one of the moments that … will feel the strangest.”

Predictions of when human-level artificial general intelligence (AGI) will be reached vary from a couple of years to decades. The Meta chief scientist, Yann LeCun, has said AGI could be decades away, while last week his boss, Mark Zuckerberg, announced a $15bn investment in a bid to achieve “superintelligence”.

Fred Humphries, corporate vice president of US government affairs for Microsoft in Washington, D.C., said: “We cannot afford to wake up to a future where 50 different states have enacted 50 conflicting approaches to AI safety and security. That’s why we support federal preemption on frontier models and their security and safety—while still carving out space for states to act in areas where they have traditionally exercised authority.”
