Saturday, July 6, 2024

Apple’s AI Moves Will Impact Future Chip, Cloud Security Plans

Apple’s private hardware and software approach to device AI security contrasts with chipmakers’ scramble. Will Apple’s black box model become the gold standard for AI privacy?

Analysts say Apple’s measures to prevent the theft and misuse of customer data by artificial intelligence (AI) systems will have a marked impact on hardware security, especially as AI becomes more prevalent on customer devices.

Apple emphasized customer privacy in new AI initiatives announced during the Worldwide Developers Conference a few weeks ago. It has built an extensive private hardware and software infrastructure to support its AI portfolio.

Apple has full control over its AI infrastructure, which makes it harder for adversaries to break into systems. Analysts say the company’s black-box approach also provides a blueprint for rival chip makers and cloud providers for AI inferencing on devices and servers.

“Apple can bolster the abilities of an LLM [large language model] while not having any visibility into the data being processed, which is excellent from both customer privacy and corporate liability standpoints,” says James Sanders, an analyst at TechInsights.

Apple’s AI Approach

The AI back end includes new foundation models, servers, and Apple Silicon chips. AI queries originating from Apple devices are packaged in a secure lockbox, unpacked in Apple’s Private Cloud Compute, and verified as coming from an authorized user and device; answers are sent back to devices and are accessible only to authorized users. Data isn’t visible to Apple or other companies and is deleted once the query is complete.
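
That round trip follows a familiar sealed-request pattern. The sketch below illustrates the general idea in Swift using Apple’s CryptoKit framework; it is not Apple’s actual Private Cloud Compute protocol, and the key exchange, attestation, and server verification steps are omitted. The key handling and payloads here are hypothetical.

```swift
import CryptoKit
import Foundation

// A minimal sketch of the sealed-request round trip, NOT Apple's actual
// Private Cloud Compute protocol. All names and payloads are hypothetical.
do {
    // Assumed: device and server have already derived a shared per-request
    // key through an authenticated key exchange (not shown).
    let requestKey = SymmetricKey(size: .bits256)

    // Device side: seal the AI query so only a holder of requestKey can read it.
    let query = Data("What's on my calendar tomorrow?".utf8)
    let sealedQuery = try AES.GCM.seal(query, using: requestKey)

    // Server side: open the query, run inference, seal the answer.
    _ = try AES.GCM.open(sealedQuery, using: requestKey)
    let answer = Data("Two meetings: 9 a.m. and 2 p.m.".utf8) // placeholder result
    let sealedAnswer = try AES.GCM.seal(answer, using: requestKey)

    // Device side: only the authorized holder of requestKey can decrypt,
    // and the server retains nothing once the exchange completes.
    let opened = try AES.GCM.open(sealedAnswer, using: requestKey)
    print(String(decoding: opened, as: UTF8.self))
} catch {
    print("Crypto operation failed: \(error)")
}
```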

Apple has etched security features directly into its device and server chips, which authorize users and protect AI queries. Data remains secure on the device and in transit through features such as secure boot, file encryption, user authentication, and TLS (Transport Layer Security) for communications over the Internet.
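
On Apple devices, for instance, apps can create signing keys that are generated inside the Secure Enclave and never leave the chip, so a server can verify that a request really came from a specific, authorized device. A minimal sketch using Apple’s CryptoKit API (the request payload is hypothetical):

```swift
import CryptoKit
import Foundation

do {
    // Create a P-256 signing key generated inside the Secure Enclave;
    // the private key never leaves the chip.
    let deviceKey = try SecureEnclave.P256.Signing.PrivateKey()

    // Sign a (hypothetical) request payload with the hardware-bound key.
    let request = Data("ai-query-from-authorized-device".utf8)
    let signature = try deviceKey.signature(for: request)

    // A server holding only the exportable public key can verify that the
    // request was produced on this specific device.
    let valid = deviceKey.publicKey.isValidSignature(signature, for: request)
    print("Signature valid: \(valid)")
} catch {
    // The Secure Enclave is unavailable on some hardware and most simulators.
    print("Secure Enclave error: \(error)")
}
```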

Apple is its own customer, and its infrastructure is private, which is a big advantage. Rival cloud providers and chip makers, by contrast, work with partners using differing security, hardware, and software technologies, Sanders says.

“The implementations of that per cloud vary … there’s not a single way to do this, and not having a single way to do this adds complexity,” Sanders says. “I suspect that the difficulty of implementing this at scale becomes much harder when dealing with millions of client devices.”

Microsoft’s Pluton Approach

However, Apple’s main rival, Microsoft, is already on its way to end-to-end AI privacy with security features in its chips and Azure cloud. Last month, the company announced a class of AI PCs called Copilot+ PCs that require a Microsoft security chip called Pluton. The first of these PCs shipped this month with Qualcomm chips and Pluton switched on by default; Intel- and AMD-based models with Pluton will follow.

Pluton ensures data in secure enclaves is protected and accessible only to authorized users. The chip is now primed to protect AI customer data, says David Weston, vice president for enterprise and OS security at Microsoft.

“We have a vision for AI mobility between Azure and the client, and Pluton will be at the core of that,” he says.
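
Conceptually, Pluton plays the same role as other hardware key stores: data is sealed under keys that only the security processor can use, so it cannot be read anywhere else. A rough illustration of that sealed-storage idea, sketched here with Apple’s Secure Enclave API rather than Pluton’s own interface (the peer and payload are hypothetical):

```swift
import CryptoKit
import Foundation

do {
    // A key-agreement key whose private half lives only in the hardware enclave.
    let enclaveKey = try SecureEnclave.P256.KeyAgreement.PrivateKey()

    // Hypothetical peer (e.g., a cloud service) with its own key pair.
    let peerKey = P256.KeyAgreement.PrivateKey()

    // Both sides can derive the same symmetric key; our side requires the enclave.
    let sharedSecret = try enclaveKey.sharedSecretFromKeyAgreement(
        with: peerKey.publicKey)
    let sealingKey = sharedSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)

    // Data sealed under this key is unreadable without the hardware-held key.
    let sealed = try AES.GCM.seal(Data("user data for an AI query".utf8),
                                  using: sealingKey)
    print("Sealed \(sealed.ciphertext.count) bytes")
} catch {
    print("Hardware enclave unavailable: \(error)")
}
```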

Google declined to comment on its chip-to-cloud strategy.

Intel, AMD, and Nvidia also build hardware black boxes that protect AI data from hackers. Intel didn’t respond to requests for comment on its chip-to-cloud strategy, but in earlier interviews, the company said it is prioritizing securing chips for AI.

Security Through Obscurity May Work

However, analysts say a mass-market approach by chip makers could leave larger surfaces for attackers to intercept data or break into workflows.

Dylan Patel, founder of chip consulting firm SemiAnalysis, says Intel and AMD have a documented history of vulnerabilities, including Spectre, Meltdown, and their derivatives.

“Everyone can acquire Intel chips and try to find attack vectors,” he says. “That’s not the case with Apple chips and servers.”

In contrast, Apple is a relatively new chip designer and can take a clean-slate approach to chip design. A closed stack helps with “security through obscurity,” Patel says.

Microsoft has three confidential computing technologies in preview in the Azure cloud: AMD’s SEV-SNP, Intel’s TDX, and Nvidia’s confidential computing for GPUs. With AI’s growing popularity, Nvidia’s graphics processors have become a target for hackers, and the company recently issued patches for high-severity vulnerabilities.

Intel and AMD work with hardware and software partners to plug in their own technologies, creating a longer supply chain that must be secured, says Alex Matrosov, CEO of hardware security firm Binarly. This gives hackers more chances to poison or steal data used in AI, and it complicates patching security holes because hardware and software vendors operate on their own timelines, he says.

“The technology is not built from the perspective of seamless integration to focus on actually solving the problem,” Matrosov says. “This has introduced a lot of layers of complexity.”

Intel and AMD chips weren’t inherently designed for confidential computing, and firmware-based rootkits may intercept AI processes.

“The silicon stack includes layers of legacy … and then we want confidential computing. It’s not like it’s integrated,” Matrosov says.
