25.5 C
Casper
Friday, July 26, 2024

ChatGPT-like AI Can be Tricked to Produce Malicious Code, Cyber Attacks

Must read

Researchers demonstrate how text-to-SQL systems can lead to cyber-attacks.

A team of researchers from the University of Sheffield has demonstrated that popular artificial intelligence applications like OpenAI’s ChatGPT, among five others, can be manipulated to produce potentially harmful Structured Query Language (SQL) commands and can be exploited to attack computer systems in the real world.

The applications they used in their study included BAIDU-UNIT, ChatGPT, AI2SQL, AIHELPERBOT, Text2SQL, and ToolSKE.

“In reality, many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood,” said Xutan Peng, a PhD student and co-lead of the research.

“At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but we found that it can be tricked into producing malicious code that can seriously harm other services,” added Peng.

The team conducted vulnerability tests on text-to-SQL systems commonly used to create natural language interfaces to databases. Text-to-SQL is a technique that automatically translates a question in the human language to an SQL statement. 

According to the University’s press release, the team found that these AI applications can be tricked into producing malicious code, which could be used to launch cyber attacks. The team was able to steal sensitive personal information, tamper with databases, and bring down services through Denial-of-Service attacks.

A Denial-of-Service attack is meant to shut down a machine or network, making it inaccessible to its intended users.

Pen explained that although many people use ChatGPT as a conversational tool, most users use it as a productivity tool.

“For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records. As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning,” added Peng.

The researchers also discovered that they could secretly put a harmful code, like a Trojan Horse, into text-to-SQL models while being trained. This code won’t be ‘visible’ at first but can be used to harm people who use it.

“Users of Text-to-SQL systems should be aware of the potential risks highlighted in this work,” said Dr Mark Stevenson, a senior lecturer at the University. “Large language models, like those used in Text-to-SQL systems, are extremely powerful but their behavior is complex and can be difficult to predict. At the University of Sheffield we are currently working to better understand these models and allow their full potential to be safely realized.”

The team shared their work with Baidu and OpenAI to warn them about the flaws in their AI apps. According to the researchers, both companies have fixed the issue.

More articles

Latest posts