Kamala Harris urges urgent action to address near-term AI threats to democracy and privacy while also acknowledging long-term existential risks. She outlines principles for AI development and testing and announces the US AI Safety Institute.
Short-term threats posed by artificial intelligence to democracy and privacy need to be addressed as urgently as longer-term existential threats, Kamala Harris, the US vice-president, is expected to say in a speech setting out the Biden administration’s vision before the UK’s Bletchley Park summit on AI.
In a speech in London on Wednesday before attending the conference, she will say: “We reject the false choice that suggests we can either protect the public or advance innovation. We can – and we must – do both. And we must do so swiftly, as this technology rapidly advances.”
Harris wants to move beyond debates about the potential, sometimes speculative, existential threats AI may pose in the future to examine harms that are already happening, including those associated with discrimination and disinformation.
She will say the existential threats are “without question, profound, and demand global action. But let us be clear: there are additional threats that also demand our action, threats that are currently causing harm and which, to many people, also feel existential.”
Harris is particularly interested in technology to combat AI-generated voice calls that may seek to steal from vulnerable people. She also wants measures to authenticate government-produced digital content and to identify AI-generated or manipulated content, including through digital signatures, watermarking, and other labeling techniques.
She will set out a series of tests for the development, testing, and use of AI, including: “Whose biases are being written into the code? Whose interests are being served? Who reaps the reward of speedy adoption? Who suffers the harms most acutely? Who will be hurt if something goes wrong? Who has been at the table?”
She will also reveal that 30 countries have agreed to sign a US-sponsored political declaration on the use of AI by national militaries. The vast majority of the signatories are Western-oriented nations, suggesting a cold war-style division over AI may be starting to form. She warns of “AI-enabled cyber-attacks at a scale beyond anything we have seen before, to AI-formulated bioweapons that could endanger the lives of millions”.
The goals of the political declaration, first set out in February, include having states commit to “strong and transparent norms that apply across military domains, regardless of a system’s functionality or scope of potential effects. States would also commit to pursue continued discussions on how military AI capabilities are developed, deployed and used in a responsible manner, and to continue to engage the rest of the international community to promote these measures.”
The political declaration, she promises, would preserve the right to self-defense and countries’ ability to responsibly develop and use AI in the military domain.
Her two days in the UK represent a chance for her personally to show American leadership on an issue that offers a fresh dimension for her often-criticized vice-presidency.
Harris confirmed that the US Department of Commerce would establish the United States AI Safety Institute (US AISI) that will create “guidelines, tools, benchmarks and best practices for evaluating and mitigating dangerous capabilities and conducting evaluations to identify and mitigate AI risk”.
The institute will develop technical guidance on issues such as authenticating content created by humans, watermarking AI-generated content, identifying and mitigating harmful algorithmic discrimination, ensuring transparency, and enabling the adoption of privacy-preserving AI. It will also serve as a driver of the future workforce for safe and trusted AI.
The body would share information and collaborate on research with peer institutions internationally, including the UK’s planned AI Safety Institute.
Inside the US administration, the Biden team is setting up governance boards to track the progress of AI, advise agency leadership on AI, and coordinate and track agencies’ AI activities.
Defending the need for state regulation, Harris will say: “History has shown in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies.
“One important way to address these challenges – in addition to the work we have already done – is through legislation. Legislation that strengthens AI safety without stifling innovation.”