The battle between privacy and convenience in artificial intelligence (AI) has truly begun, as Google introduces its own Pixel take on AI smartphones, making a subtle (and unwise) dig at Apple for being open to working with others.
Because open beats closed, right?
The new Pixel 9 range ships with support for Gemini AI, Google's ChatGPT/Apple Intelligence competitor. That means Pixel owners will have access to that AI, so long as they accept the privacy risk of using cloud-based AI services.
The cost of that convenience is some sacrifice in privacy (see below).
Google Gemini v. Apple Intelligence
Strangely, in view of its own record on privacy, Google doesn't want the conversation to be about privacy. Instead, it focused on convenience, telling Pixel launch attendees that the device is "deeply integrated with Google apps and Android and can handle complex queries without hand-off to third-party AI providers you may not know or trust."
That “hand-off” remark seems to be an obvious dig at Apple Intelligence. Google knows that the Pixel is up against the iPhone and Apple Intelligence and needs to foster the perception that there are shortcomings to those products (and to the Macs and iPads that already ship with AI inside).
The problem is that when it comes to Apple Intelligence, Apple has developed an AI system with privacy at its core.
That means it can handle many tasks on-device, run some on Apple's own secured servers, and pass others to third-party AI services (currently OpenAI's ChatGPT, with Google Gemini perhaps to follow). Crucially, Apple prizes privacy: iPhone users are warned before a request is shared with a third-party AI provider, so any such hand-off is deliberate.
Apple has also designed its AI system to gather and store as little information about you, the device, or your request as it can, while offering its services. That’s privacy by design.
Privacy beats convenience
It is also why I feel Google is making a somewhat sophistic argument when it throws shade at Apple for directing complex requests to "third-party AI providers you may not know or trust." After all, Google itself collects a lot of information about you, and you don't know how it is used or who gets access to it.
This is the information Google Gemini collects when you make a request:
- Conversations.
- Usage information.
- Location information.
- Feedback.
Google says it needs this information to improve its product, but anyone who still recalls the outcry when it emerged that Apple's Siri teams had access to conversations recorded by HomePod will surely want to raise the same concern about Google's statement that human reviewers "read, annotate and process your Gemini Apps conversations."
Moreover, while Google promises to "take steps" to protect privacy during this process, including disconnecting conversations from your Google account, it clearly doesn't consider those steps foolproof, or it wouldn't also warn (in bold text, as Google published it on its own website): "Please don't enter confidential information in your conversations, or any data you wouldn't want a reviewer to see or Google to use to improve our products, services and machine learning technologies."
Who watches the watchmen?
Now, I don’t know who those human reviewers working for Google are, where they might be, how much they are paid, or the extent to which they may have been penetrated by surveillance operatives. But I suspect at least some teams will be working for third-party companies on Google’s behalf.
If that's the case, then when you use Gemini you are arguably also sharing your requests with outside providers you might not "know or trust." And while that information might be anonymized by removing names and telephone numbers, the request itself remains, which in some cases is already too much information to have shared in the first place.
Think about that. Then consider that conversations that have been reviewed by human reviewers are retained for up to three years, even if you delete your Gemini Apps Activity.
What you gain in exchange for these privacy risks is access to a sophisticated generative AI system that can help you complete challenging tasks.
Convenience has a cost, privacy has iPhone
This convenience comes at a price that will be far, far too high for any enterprise professional handling private or restricted data. Companies in regulated industries will almost certainly forbid employees from using these systems with company information.
Fortunately, there is an alternative: iPhone (and iPad and Mac) and Apple Intelligence.
While the combination might not (yet) provide everything ChatGPT or Gemini promise, what it does provide is built with privacy in mind, particularly when it comes to on-device AI. So, the stark choice Google tried to obfuscate during its Pixel launch is that convenience has a cost, while privacy has an iPhone.
And those complex queries you can solve at the cost of privacy? We’ve managed to resolve many of them most of the time for the last few thousand years, so perhaps it’s OK to wait until Apple Intelligence can match those features in an AI that’s private by design. Just putting it out there.
More from Jonny Evans
- Apple’s Patreon fee will hurt the wrong people
- Apple, this is the time to seize the moment
- Seeking DMA compliance, Apple gets to business
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.