
Apple's OpenAI partnership threatens your online privacy and data security. Here's what we can do.

By Jurica Dujmovic

Apple and other tech giants must be pushed to adopt more secure and ethical AI practices


Apple (AAPL) has worked hard to create and maintain an image of being a champion of user privacy. Its refusal to create a backdoor for the FBI after the 2015 terrorist attack in San Bernardino, Calif., for example, earned widespread praise from privacy advocates and reinforced the tech giant's image as a protector of personal data.

But there is more to this picture: Critics point out that Apple's reliance on iCloud for data storage undermines its privacy claims, as backed-up data remains accessible to law enforcement via warrants. Additionally, Apple's compliance with local laws in countries like China, where it enables state oversight, further complicates its global privacy stance. These strategic shifts and controversies raise significant concerns about the true extent of Apple's commitment to safeguarding personal information.

Apple's latest AI strategies and their implications for users are also troubling. Traditionally, Apple has championed on-device data processing, a strategy that keeps user data securely on the device rather than transmitting it to external servers. This approach has been a fundamental part of Apple's commitment to user privacy and has helped protect against data breaches and unauthorized access. Features like Face ID and Touch ID, for example, process biometric data directly on the device; it never leaves the user's control.

Yet now, with the release of iOS 18 and other upcoming software updates, Apple is moving toward server-side AI processing. The shift is partly driven by the need for more powerful computing resources to handle advanced AI functionality that on-device hardware alone cannot support. By processing data on powerful servers, Apple can offer more sophisticated AI features, such as enhanced Siri capabilities and new machine-learning models for predictive text and image recognition.
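To make that trade-off concrete, here is a minimal Python sketch of the routing decision described above. Every name in it is hypothetical - this is not Apple's actual API, only an illustration of the on-device-first pattern and the server fallback that replaces it:

    # Hypothetical sketch: keep a request on-device when the local model
    # can handle it; send it to a server only when the workload demands it.
    from dataclasses import dataclass

    @dataclass
    class AIRequest:
        prompt: str
        needs_large_model: bool  # e.g., complex reasoning or image generation

    def run_on_device(prompt: str) -> str:
        # Privacy-preserving default: the prompt never leaves the device.
        return f"[on-device model output for: {prompt}]"

    def call_server(prompt: str) -> str:
        # The shift discussed above: the prompt is transmitted to a data
        # center, trading privacy guarantees for more model capacity.
        return f"[server model output for: {prompt}]"

    def handle(req: AIRequest) -> str:
        if req.needs_large_model:
            return call_server(req.prompt)
        return run_on_device(req.prompt)

    print(handle(AIRequest("set a timer", needs_large_model=False)))

The privacy question turns entirely on how often, and under what safeguards, requests take the second branch.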

To mitigate the privacy risks associated with server-side processing, Apple says it is implementing confidential computing techniques that aim to keep data encrypted not only at rest (when stored) but also during processing. Data is processed in secure enclaves within the server's hardware to avoid exposing it to the rest of the system. This way, even if a server is compromised, the data remains protected.
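In code terms, the pattern looks roughly like the following Python sketch, which uses the open-source "cryptography" library. The enclave and attestation steps here are simulated stand-ins - real confidential computing depends on hardware features and remote attestation that no user-level script can reproduce:

    from cryptography.fernet import Fernet

    def attest_enclave() -> bool:
        # Placeholder: a real deployment verifies hardware-signed evidence
        # that the enclave runs trusted code before releasing any keys.
        return True

    def client_encrypt(payload: bytes, key: bytes) -> bytes:
        # Encrypted on the device, before anything crosses the network.
        return Fernet(key).encrypt(payload)

    def enclave_process(ciphertext: bytes, key: bytes) -> bytes:
        # Simulated enclave: plaintext exists only inside this function.
        assert attest_enclave(), "enclave failed attestation"
        f = Fernet(key)
        plaintext = f.decrypt(ciphertext)  # decrypted only "inside" the enclave
        result = plaintext.upper()         # stand-in for the actual AI workload
        return f.encrypt(result)           # re-encrypted before leaving

    key = Fernet.generate_key()            # in practice, provisioned via attestation
    sealed = client_encrypt(b"user query", key)
    print(Fernet(key).decrypt(enclave_process(sealed, key)))

The point of the pattern is that plaintext never exists outside the enclave boundary; the question raised below is whether that boundary holds up in practice.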

However, despite advanced encryption measures, server-side processing introduces vulnerabilities that its on-device counterpart inherently avoids. Here's why it's particularly problematic:

Security threats

When data is processed on the server side, user information is transmitted to and stored in centralized data centers - attractive targets for cybercriminals despite being secured with advanced encryption techniques. If hackers physically infiltrate these centers, they can potentially breach secure enclaves - protected sections within the server hardware designed to process data securely. Such breaches could lead to unauthorized access to sensitive user data, a risk that on-device processing, where data never leaves the user's device, inherently avoids.

While gaining physical access to server hardware is complex and requires significant effort, the potential payoff for hackers can be enormous. Server-side data breaches can expose vast amounts of information from multiple users simultaneously, making such targets particularly lucrative. The complexity of executing such an attack does not mitigate the severity of the risk. High-profile breaches in recent years, like the 2017 Equifax breach that exposed the personal data of roughly 147 million people, illustrate how devastating such incidents can be. Needless to say, hackers will put in the work if the reward is enticing enough.

OpenAI concerns

Apple's potential collaboration with third-party companies such as OpenAI to integrate advanced AI features like chatbots into iOS 18 raises questions about users' data privacy and security. Outsourcing such functionality means that Apple has less control over how data is handled, stored, and protected. Although the company plans to make such features opt-in, integration with third-party systems introduces a layer of risk that users need to be aware of - especially if that third party is OpenAI.

OpenAI has been at the center of several controversies regarding data collection and privacy practices. Apple users expose themselves to potential risks when opting to use its chatbot:

1. Data scraping: OpenAI has faced several lawsuits over its data scraping practices. The company has been accused of systematically scraping vast amounts of data from the internet, including personal information, without obtaining proper consent. This includes data from minors, which raises significant ethical and legal concerns. Lawsuits from media organizations and individuals allege that their copyrighted content was used to train OpenAI's models without permission. Plaintiffs argue that OpenAI's business model relies on the unauthorized use of vast amounts of personal data, posing significant privacy risks.

2. Regulatory scrutiny: Regulators in Europe and Japan have raised concerns about OpenAI's compliance with data protection laws. The Italian data protection authority, for example, has investigated OpenAI for potential violations of the European Union's General Data Protection Regulation (GDPR). Similar concerns have been raised in Japan regarding compliance with local data protection standards. These regulatory challenges underscore the potential risks of Apple collaborating with a company that is under scrutiny for its data handling practices.

3. Sam Altman: Under Altman's leadership, OpenAI has faced criticism for not implementing robust ethical safeguards to govern the use of data. Critics argue that the existing policies and practices are inadequate to ensure data privacy and ethical AI usage. Many question OpenAI's commitment to ethical standards and user protection, especially after its entire superalignment team was dissolved. The decision was heavily criticized, as it seemingly undermined the importance of ethical oversight within the organization. (The superalignment team was responsible for ensuring that highly advanced AI systems, potentially smarter than humans, behaved in ways that were safe and aligned with human values.)

While the superalignment team was not primarily focused on privacy issues, their work had significant ethical implications, including the indirect safeguarding of user data and privacy through the development of robust and ethical AI practices. By ensuring that AI systems operate safely and ethically, the team contributed to a broader framework that supports user privacy and data protection.

So while Apple touts advanced security measures such as confidential computing, the inherent risks and vulnerabilities associated with centralizing data processing cannot be ignored. The partnership with OpenAI further exacerbates these concerns.

Read: A 'liar's dividend': AI deepfakes are a powerful tool for criminals and political control

Moreover, the regulatory landscape is increasingly unforgiving. Data protection laws in the EU and U.S. states like California demand rigorous compliance, and any misstep could result in significant legal and financial repercussions for Apple. Its decision to outsource AI functionalities to a third party with a checkered privacy record could complicate its ability to meet stringent regulatory requirements.

In an era where data is the new oil, and privacy breaches can have devastating consequences, Apple must reaffirm its commitment to user privacy. The company needs to prioritize on-device processing, where data remains under the user's control, and ensure that any third-party collaborations strictly adhere to the highest privacy standards. Currently, and especially when it comes to OpenAI, that is not the case.

What we can do


As users of this technology, we have a role to play as well. Staying informed and advocating for stronger privacy protections can push Apple and other tech giants toward more secure and ethical practices. Apple must reaffirm its commitment to safeguarding user data and always prioritize user privacy above convenience. Any third-party collaborations must adhere to the strictest privacy protocols, ensuring that user trust is never compromised.

In addition, opting out of services that jeopardize personal data and demanding greater transparency from tech giants can drive meaningful change. In today's digital age, where data breaches can inflict devastating consequences, it is imperative that we remain vigilant in our pursuit of digital privacy, lest we risk losing yet another fundamental pillar of our freedom.

More: What investors may be getting wrong about Apple's AI strategy

Also read: Apple's new AI-powered service is everything we'd hoped Siri, Google Assistant and Alexa would become

-Jurica Dujmovic

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

 

(END) Dow Jones Newswires

06-18-24 1721ET

Copyright (c) 2024 Dow Jones & Company, Inc.
