Apple says its privacy-focused system will try to fulfill AI tasks locally, on the device itself, whenever possible. When data does travel to cloud services, it will be encrypted and then deleted once the request is complete. The company also says the process, which it calls Private Cloud Compute, will be subject to verification by independent security researchers.
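Apple hasn't published client code for this pipeline, but the flow it describes can be sketched in a few lines of Swift. Everything below is illustrative: the types and function names are invented stand-ins, not Apple APIs.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of the flow Apple describes. None of these
// types are real Apple APIs; they are stand-ins for illustration.

struct AIRequest {
    let task: String
    let payload: Data
}

// Stand-in for an on-device model that supports a limited set of tasks.
struct OnDeviceModel {
    let supportedTasks: Set<String> = ["summarize", "classify"]

    func canHandle(_ task: String) -> Bool {
        supportedTasks.contains(task)
    }

    func run(_ request: AIRequest) -> Data {
        Data("local result for \(request.task)".utf8) // placeholder inference
    }
}

// Stand-in for a Private Cloud Compute session. Per Apple's claims, the real
// service would use the data only for this request and retain nothing after.
struct PrivateCloudSession {
    func process(_ sealed: Data, task: String) async throws -> Data {
        Data("cloud result for \(task)".utf8) // placeholder network call
    }
}

// Device-first routing as described: run locally when possible; otherwise
// encrypt the payload before it ever leaves the device.
func handle(_ request: AIRequest, key: SymmetricKey) async throws -> Data {
    let local = OnDeviceModel()
    if local.canHandle(request.task) {
        return local.run(request)
    }
    let sealed = try AES.GCM.seal(request.payload, using: key)
    // .combined is non-nil when using the default nonce size
    return try await PrivateCloudSession().process(sealed.combined!, task: request.task)
}
```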
The pitch offers an implicit contrast with companies such as Alphabet, Amazon, and Meta, which collect and store vast amounts of personal data. Apple says any personal data sent to the cloud will be used only for the AI task at hand and will not be retained or accessible to the company, even for debugging or quality control, once the model completes the request.
Simply put, Apple says people can trust it to analyze incredibly sensitive data—photos, messages, and emails that contain intimate details of our lives—and provide automated services based on what it finds there, without actually storing the data online or making any of it vulnerable.
Apple showed some examples of how this will work in upcoming versions of iOS. Instead of scrolling through your messages for that podcast your friend sent you, for example, you could simply ask Siri to find and play it. Craig Federighi, Apple’s senior vice president of software engineering, walked through another scenario: an email arrives that pushes back a work meeting, but his daughter is performing in a play that night. His phone can now find the PDF with details about the performance, predict the local traffic, and let him know whether he’ll make it on time. These capabilities will extend beyond Apple’s own apps; developers will be able to tap into Apple’s AI as well.
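Apple didn't spell out the developer hooks on stage, but its existing App Intents framework already works this way: an app declares an action, and Siri can invoke it. A minimal sketch, with an invented intent name for the podcast scenario above:

```swift
import AppIntents

// Sketch using Apple's existing App Intents framework, the established way
// for apps to expose actions to Siri. The intent and parameter names here
// are invented for illustration.
struct PlaySharedEpisodeIntent: AppIntent {
    static var title: LocalizedStringResource = "Play Shared Episode"

    @Parameter(title: "Episode Name")
    var episodeName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look up the episode and start playback here.
        return .result(dialog: "Playing \(episodeName)")
    }
}
```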
Because the company earns more from hardware and services than from advertising, Apple has less incentive than some other companies to collect personal data online, allowing it to position the iPhone as the most private device. Even so, Apple has found itself in the crosshairs of privacy advocates before. Security flaws led to explicit photos leaking from iCloud in 2014. In 2019, contractors were found to be listening to intimate Siri recordings for quality control. And controversy continues over how Apple handles data requests from law enforcement.
The first line of defense against privacy breaches, according to Apple, is to avoid cloud computing for AI tasks whenever possible. “The cornerstone of the personal intelligence system is on-device processing,” Federighi says, meaning that many of the AI models will run on iPhones and Macs rather than in the cloud. “It knows your personal data without collecting your personal data.”
This presents some technical hurdles. Two years into the AI boom, pinging models for even simple tasks still requires massive amounts of computing power. Pulling that off with the chips used in phones and laptops is difficult; Google, for example, can run only its smaller AI models on its own phones, with everything else handled through the cloud. Apple says its ability to handle AI computations on-device comes from years of research into chip design, which led to the M1 chips it began rolling out in 2020.
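The constraint comes down to simple arithmetic: a model's memory footprint is roughly its parameter count times the bytes stored per weight. The figures below are illustrative, not Apple's:

```swift
// Back-of-the-envelope memory math: parameters × bytes per weight.
// A 3B-parameter model quantized to 4 bits (0.5 bytes per weight):
let smallModelGB = 3_000_000_000 * 0.5 / 1_000_000_000   // ≈ 1.5 GB: fits in a phone's RAM
// A 70B-parameter model at 16-bit precision (2 bytes per weight):
let largeModelGB = 70_000_000_000 * 2.0 / 1_000_000_000  // ≈ 140 GB: server territory
print(smallModelGB, largeModelGB)
```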
But even Apple’s most advanced chips can’t handle the full range of tasks the company promises to carry out with AI. If you ask Siri to do something complex, it may need to pass the request, along with your data, to models that are available only on Apple’s servers. That step, security experts say, introduces a number of vulnerabilities that could expose your information to outside bad actors, or at least to Apple itself.
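One standard mitigation, in line with the verification Apple has promised to open to researchers, is for the device to check what it is talking to before anything sensitive is sent. A rough sketch of that idea follows, with invented types; this is not Apple's published design:

```swift
import Foundation

// Hypothetical sketch, not Apple API: one way a client can narrow that attack
// surface is to verify the server before any personal data leaves the device.
// The attestation type and trusted-hash check are stand-ins.

struct ServerAttestation {
    let softwareMeasurement: String // hash of the server's software image
}

// Measurements the device is willing to trust, e.g. software images that
// independent researchers have been able to inspect.
let trustedMeasurements: Set<String> = ["sha256:expected-image-hash"]

func shouldTransmit(_ payload: Data, given attestation: ServerAttestation) -> Bool {
    // Refuse to transmit unless the server proves it runs expected software.
    guard trustedMeasurements.contains(attestation.softwareMeasurement) else {
        return false
    }
    // ... send the already-encrypted payload here ...
    return true
}
```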