The UK invests £8.5 million to combat AI threats like deepfakes and cyberattacks. Research focuses on “systemic AI safety” to protect society and harness AI benefits.
The UK has promised £8.5m ($10.8m) to fund new AI safety research to tackle cyber threats, including deepfakes.
Technology secretary Michelle Donelan announced at the AI Seoul Summit today that the research grants will focus on “systemic AI safety” – understanding how to better protect society from AI risks and harness the technology’s benefits.
The research program will be led by researcher Shahar Avin at the government’s AI Safety Institute and delivered in partnership with UK Research and Innovation and The Alan Turing Institute. Although applicants must be based in the UK, they will be encouraged to collaborate with other researchers in AI safety institutes worldwide.
AI represents a two-pronged threat to economic and social stability. On the one hand, AI systems could be targeted by techniques such as prompt injection and data poisoning, and on the other, threat actors themselves could use the technology to gain an advantage.
The UK’s National Cyber Security Centre (NCSC) warned in January that malicious AI use will “almost certainly” increase the volume and impact of cyber-attacks, particularly ransomware, over the next two years.
In fact, new research from compliance specialist ISMS.online released this week revealed that 30% of information security professionals experienced a deepfake-related incident in the past 12 months, the second most popular answer after “malware infection.”
At the same time, three-quarters (76%) of respondents claimed that AI technology improves information security, and 64% said they are increasing their budgets accordingly over the coming year.
AI Safety Institute research director Christopher Summerfield claimed the new funding represents a “major step” toward ensuring AI is deployed safely in society.
“We need to think carefully about how to adapt our infrastructure and systems for a new world in which AI is embedded in everything we do,” he added. This program is designed to generate a huge body of ideas for tackling this problem and help ensure that great ideas can be implemented.”
Also Read: Why Identity Security Should Be the Foundation of Modern Cybersecurity
The institute has already been conducting valuable research into AI threats. A May update published Monday revealed four of the most used generative AI chatbots are vulnerable to basic jailbreak attempts.
Yesterday, the UK and South Korea hailed a “historic first” as 16 major AI companies signed new commitments to develop AI models safely.