
Responsible AI, Not AI-First: Why the White House’s Executive Order Needs Guardrails


Khushbu Raval
Khushbu is a Senior Correspondent and content strategist with a special focus on DataTech and MarTech. A keen researcher in the tech domain, she is responsible for strategizing social media scripts to optimize the collateral creation process.

The White House’s AI order is a leap forward—but Panaseer’s Thordis Thorsteins warns that without transparency, safeguards, and alignment with NIST AI RMF, the risks may outweigh the rewards.

The White House’s recent executive order to accelerate AI adoption within US federal agencies marks a significant step towards integrating this transformative technology into government operations. However, as Thordis Thorsteins, Senior Data Scientist at Panaseer, points out, this push for innovation must be balanced with a strong emphasis on responsible AI practices.

Thorsteins acknowledges the “enormous value” AI offers when applied judiciously. However, she cautions against a blanket “AI-first” approach, emphasizing that “it’s not always the answer.” She highlights the importance of considering the specific application and its potential risks, stating, “Some applications are too high risk (e.g., social scoring, which the EU AI Act prohibits) and others are too costly or unreliable (e.g., determining a bank balance).” This nuanced perspective underscores the need for careful evaluation before deploying AI solutions.

A central concern raised by Thorsteins is the issue of transparency and explainability, particularly in government settings. She argues that when AI makes decisions that trigger real-world actions, such as “automatically approving applications and prioritizing tasks,” it is crucial to understand and validate the reasoning behind those choices. “Without transparency,” she warns, “agencies risk deploying tools they can’t fully trust or control.” This lack of trust can have significant consequences, potentially leading to flawed decision-making and a loss of public confidence.


The executive order’s focus on streamlining acquisition and reducing reporting requirements, while intended to speed up deployment, raises concerns about potential risks. Thorsteins cautions that “without strong safeguards around security, bias, and data protection, it could expose systems to serious vulnerabilities and cause unintended harm to people and organizations.” She stresses the importance of embedding responsible practices from the outset, including “documentation, risk classification, and validation.” This proactive approach is essential to mitigate potential negative consequences and ensure that AI is deployed safely and ethically.

Thorsteins acknowledges the executive order’s positive aspects, particularly its mandate for “meaningful public transparency into the Federal Government’s use of AI,” “human oversight, intervention, and accountability suitable for high-impact use cases,” and “documenting provenance of the data used to train, fine-tune, or operate the AI.” These measures, she suggests, are critical steps towards ensuring responsible AI adoption.

However, she argues that these requirements should not exist in isolation. To “fully realize AI’s value for society,” Thorsteins believes these mandates should be implemented in alignment with existing frameworks such as the NIST AI Risk Management Framework (AI RMF) or ISO 42001. These frameworks provide comprehensive guidance on managing AI risks and ensuring that AI systems are developed and deployed responsibly. By integrating the executive order’s mandates with these established frameworks, agencies can ensure that AI is “applied to the right problems with the necessary controls in place.”


In conclusion, Thorsteins’ commentary highlights the importance of a balanced approach to AI adoption in the US government. While the White House’s executive order represents a positive step towards accelerating AI deployment, it is crucial to prioritize responsible practices. By emphasizing transparency, explainability, and robust safeguards, and by aligning the order’s mandates with existing risk management frameworks, federal agencies can harness the power of AI while mitigating its potential risks and ensuring that its application serves the best interests of society. The goal, as Thorsteins aptly puts it, should not be “AI-first,” but rather the development and deployment of AI systems that are “secure, transparent, fit-for-purpose.”
