18 C
Casper
Thursday, September 12, 2024

Microsoft Copilot AI Found Vulnerable to Hacking

Must read

A security researcher has discovered that Microsoft’s Copilot AI can be exploited to reveal sensitive information and launch phishing attacks, highlighting the risks of AI-powered tools.

Microsoft’s Copilot AI, a tool designed for organizations to customize chatbots according to their needs, has been found vulnerable to hacking. A security researcher named Michael Bargury, has demonstrated that this AI could be exploited to disclose an organization’s confidential information such as emails and bank transactions. The findings were presented at the Black Hat security conference in Las Vegas.

It can also be used for phishing attacks

Bargury also revealed that the Copilot AI could become a potent phishing tool. He stated, “I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf.”This discovery highlights the potential risks associated with AI chatbots like Copilot and ChatGPT when connected to sensitive data sets.

Also Read: AI Hype vs. Real-World Impact

How was sensitive data revealed?

Bargury demonstrated that without access to an organization’s account, he could trick the chatbot into altering the bank transfer recipient. He achieved this by sending a malicious email that didn’t require the targeted employee to open it. If a hacker accessed an employee’s compromised account, they could extract sensitive data from Copilot by asking straightforward questions.

Copilot AI’s vulnerabilities stem from data access

The vulnerabilities of Copilot AI arise from its need to access company data to function effectively. Bargury highlighted that many of these chatbots are discoverable online by default, making them easy targets for hackers. He told The Register, “We scanned the internet and found tens of thousands of these bots.”This discovery underscores the security risks of using AI tools in business settings.

More articles

Latest posts