Thursday, November 20, 2025

Google CEO Warns: Don’t Blindly Trust AI’s Answers

Sundar Pichai urges users not to rely solely on AI, warning models still make errors as Google races to improve Gemini and rebuild trust amid rising scrutiny.

As Google races to deploy its most advanced artificial intelligence software to date, the company’s chief executive, Sundar Pichai, has issued a stark caveat to the billions of users expected to rely on it: Do not trust it blindly.

In an exclusive interview with the BBC, Pichai conceded that AI models remain “prone to errors”. He cautioned that they should be used as a supplement to, rather than a replacement for, traditional information sources.

“People have to learn to use these tools for what they’re good at, and not blindly trust everything they say,” Pichai said.

The admission strikes a dissonant chord during a week of triumphalism for the tech giant. On Tuesday, Google unveiled Gemini 3.0, a model it claims will unleash “a new era of intelligence” across its ecosystem, from smartphone assistants to its flagship search engine.

The Reliability Gap

Pichai’s comments highlight the central tension facing Silicon Valley: the commercial imperative to move fast against competitors like OpenAI versus the reputational risk of deploying technology that frequently fabricates information.

“We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors,” Pichai told the BBC. He noted that while AI excels at creative tasks, users should verify factual claims using other products, such as Google Search.

However, critics argue that Google is shifting the burden of verification onto the consumer while simultaneously eroding the visibility of traditional sources.

Gina Neff, a professor of responsible AI at Queen Mary University of London, criticized this approach.

“The company now is asking to mark their own exam paper while they’re burning down the school,” Neff told the BBC. She argued that while hallucinations might be acceptable when asking for movie recommendations, they pose significant risks in high-stakes queries regarding health, science, or news.

A History of Hallucinations

Google’s caution is rooted in recent experience. The company’s rollout of AI Overviews earlier this year was marred by high-profile errors, including instances where the search engine provided erratic or dangerous advice with confidence.

Independent research underscores the scope of the problem. A study conducted by the BBC earlier this year found that AI assistants from major tech firms, including Google, Microsoft, and OpenAI, misrepresented news stories 45% of the time. When tested on content from the BBC website, these models frequently produced answers containing “significant inaccuracies.”

Gemini 3.0 and the ‘New Era’

Despite these concerns, Google is pressing ahead. The newly launched Gemini 3.0 is designed to reclaim market share from ChatGPT by offering “state-of-the-art” reasoning capabilities and the ability to process inputs across multiple formats, including photos, audio, and video.

Pichai described the integration of Gemini into Google Search as a “new phase of the AI platform shift.” The company maintains it is balancing this speed with safety, employing a strategy Pichai describes as “bold and responsible at the same time.”

To that end, Google announced it is open-sourcing technology to help detect AI-generated images, a move aimed at mitigating the spread of misinformation.

Competition and Control

Addressing the broader dynamics of the industry, Pichai responded to concerns raised by Elon Musk years ago that Google’s acquisition of DeepMind could lead to an AI “dictatorship.”

“No one company should own a technology as powerful as AI,” Pichai said, pointing to the current diversity of the market as a safeguard. “If there was only one company that was building AI technology and everyone else had to use it, I would be concerned about that too, but we are so far from that scenario right now.” 
