24 x 7 World News

Elon Musk claims Apple’s new AI tools are a privacy risk. How much of a concern are they?


On Monday, Apple revealed a suite of highly anticipated AI features, including ChatGPT, that it will soon integrate into its devices. But not everyone was thrilled at the news.

While some observers were excited at the prospect of, for example, drawing math equations on an iPad that could then be solved by AI, billionaire tech mogul Elon Musk called Apple's inclusion of ChatGPT (which is developed by OpenAI, not Apple) an "unacceptable security violation."

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies," he wrote in a post on X, formerly Twitter. Musk co-founded OpenAI, but stepped down from its board in 2018 and launched a competing AI company.

He said visitors to his companies "will have to check their Apple devices at the door, where they will be stored in a Faraday cage," which is a shield that blocks phones from sending or receiving signals.

“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” he wrote in a separate post. “They’re selling you down the river.”

But Musk's posts also contained inaccuracies (he claimed Apple was "not smart enough" to build its own AI models, when it in fact has) leading to a community fact-check on X. Still, his privacy concerns spread far and wide.

So are those concerns valid? When it comes to Apple's AI, do you need to worry about your privacy?

How privacy is built into Apple’s AI approach

Apple emphasized during Monday's announcement at its annual developer conference that its approach to AI is designed with privacy in mind.

Apple Intelligence is the company's name for its own AI models, which run on the devices themselves and don't send information over the internet to do things like generate images and predict text.

But some tasks need beefier AI, meaning some information must be sent over the internet to Apple's servers, where more powerful models exist. To make this process more private, Apple also introduced Private Cloud Compute.

When a device connects to one of Apple's AI servers, the connection will be encrypted, meaning nobody can listen in, and the server will delete any user data after the task is finished. The company says not even its own employees can see the data that is sent to its AI servers.

The servers are built on Apple's chips and use Secure Enclave, an isolated subsystem that handles sensitive material such as encryption keys, among other in-house privacy technologies.

Anticipating that people might not take it at its word, Apple also announced that it will release some of the code powering its servers for security researchers to pick apart.

In a thread on X, Johns Hopkins computer science professor Matthew Green praised the company's "very thoughtful design," but also raised some concerns. Researchers won't see the source code running on servers, for example, which Green wrote is "a little suboptimal" when it comes to investigating how the software behaves.

Importantly, users won't be able to choose when their device sends information to Apple's servers. "You won't opt into this, you won't necessarily even be told it's happening. It will just happen. Magically. I don't love that part," Green wrote.

He explained that there may be many other flaws and issues that would be hard for security researchers to detect, but that ultimately, it "represents a real commitment by Apple not to 'peek' at your data."

Could ChatGPT be a weak link?

Musk's main point of contention was Apple's upcoming integration of ChatGPT, the popular chatbot from OpenAI. While Apple's own models will power most of what happens on your device, users can also choose to let ChatGPT handle some tasks.

ChatGPT has been the focus of privacy concerns from experts and regulators. Research has found, for example, that an earlier iteration of ChatGPT could be forced to divulge personal information scraped from the internet, such as names, phone numbers and email addresses, and included in its training data.

Anything a user asks ChatGPT is also vacuumed up by OpenAI and used to train the chatbot, unless they opt out. This has prompted major companies, including Apple, to ban or restrict the use of ChatGPT by employees. ChatGPT is also the subject of multiple regulatory probes, including by the Office of the Privacy Commissioner of Canada.

When reached for comment via email, Apple said that ChatGPT is separate from Apple Intelligence and that it is not on by default.

Additionally, as the company showed during Monday's announcement, people who turn on the ChatGPT option are asked via pop-up notification every time if they're sure they want to use it. As an extra layer of privacy, Apple says it "obscures" users' IP addresses, and that OpenAI will delete user data and not use it to improve the chatbot.

Apple CEO Tim Cook attends the annual developer conference event at the company’s headquarters in Cupertino, Calif., on Monday, where he unveiled Apple’s long-awaited AI strategy to integrate ‘Apple Intelligence’ across its suite of apps and partner with OpenAI to bring ChatGPT to its devices. (Carlos Barria/Reuters)

Apple did not respond to questions about how it will verify that OpenAI is deleting user data sent to its servers.

In an emailed statement to CBC News, Apple said that people will be able to use the free version of ChatGPT "anonymously" and "without their requests being stored or trained on."

However, Apple said users can choose to link their ChatGPT account to access paid features, in which case their data is covered under OpenAI's policies, meaning requests will be stored by the company and used for training unless the user opts out.

"The data the AI receives is used to train the model," wrote Cat Coode, a Waterloo, Ont.-based data privacy expert who founded the cybersecurity firm BinaryTattoo, in an email. "If you are feeding it personal information then it will take it."

Coode noted that Apple also collects data from users, but "historically ChatGPT has been less secure."

When reached for comment, OpenAI spokesperson Niko Felix said that "customers are informed and in control of their data when using ChatGPT."

“IP addresses are obscured and we don’t store [data] without user permissions,” Felix said. “Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.”

ChatGPT users with an account can opt out of their data being used for training purposes.

Apple Intelligence and ChatGPT on Apple devices aren't just a test for AI tech, but also for new privacy approaches that are necessary to safely use large AI models over the internet.

Green, the computer science professor, wrote in his thread that this world of device-connected AI is the one we're moving toward.

“Your phone might seem to be in your pocket, but a part of it lives 2,000 miles away in a data center.”
