Month: August 2024

9 hidden Google Pixel features for smarter calling

Pixel phones are filled with efficiency-enhancing Google intelligence, and one area that’s all too easy to overlook is the way the devices can improve the act of actually talking on your cellular telephone. That’s true for any Pixel model, no matter how old it may be, and it’s especially true for the new Pixel 9 series of phones that launched this past week.

Talking on your phone, you say? What is this, 1987?! Believe me, I get it, Vanilli. We’re all perpetually busy beings these days, and the timeless art of speaking to another human on your mobile device can seem both archaic and annoying.

But hear me out: Here in the real world, placing or accepting (or maybe even just avoiding) a good old-fashioned phone call is occasionally inescapable. That’s especially true in the world of business, but it’s also apparent in other areas of life — from dialing up a restaurant to confirm your carnitas are ready to dodging your Uncle Ned’s quarterly regards-sending check-ins. (No offense, Ned. I never dodge your calls. Really. Send my regards to Aunt Agnes.)

Whatever the case may be, your trusty Pixel can make the process of dealing with a call easier, more efficient, and infinitely less irksome, and you don’t need the shiny new Pixel 9 to appreciate any of these advantages. (The only exception is the new incoming Call Notes feature launching exclusively on the Pixel 9, but since it isn’t available to anyone just yet, we won’t get into it here.)

As a special supplement to my Pixel Academy e-course — a totally free seven-day email adventure that helps you uncover tons of next-level Pixel treasures — I’d like to share some of my favorite hidden Pixel calling possibilities with you, my fellow Pixel-appreciating platypus. Check ’em out, try ’em out, and then come sign up for Pixel Academy for even more super-practical Pixel awesomeness.

Pixel calling trick #1: The call sound sharpener

We’ll start with something critical to the act of communicating via voice from your favorite Pixel device — ’cause if you’re dialing digits and preparing to (gasp!) speak to another sentient creature, the last thing you want to do is struggle to hear the person clearly.

Every Pixel since 2022’s Pixel 7 series has offered a helpful option called Clear Calling that you’d be crazy not to enable. In short, it’s like noise cancellation for the human voice: It actively reduces the background noise when the person you’re talking to is in a loud environment (as people on cell phones always seem to be). And it genuinely does make a meaningful, noticeable difference.

Best of all? All you’ve gotta do is dig around a bit once to make sure the system is enabled:

  • First, open up your Pixel’s system settings.
  • Tap “Sound & vibration,” then scroll down and look for the line that says “Clear Calling.” Again, it should be present on any Pixel model from the Pixel 7 onward.
  • Tap that, then confirm that the toggle on the screen that comes up next is in the on and active position.

Google Pixel calling features: Clear Calling
One flip of a switch, and boom: All your calls will be noticeably clearer.

JR Raphael, IDG

And that’s it! All your compatible calls will now have Clear Calling enhancements in place, and you’ll be able to hear what any comrades, colleagues, and/or caribous you communicate with are saying more easily than ever.

Not bad, right? And we’re just getting started.

Pixel calling trick #2: The hassle-free holder

The worst part of phone calls, without a doubt, is having to hold. It’s the time-tested way for some of the world’s most sadistic companies to waste precious moments in your day, test what little patience you have left, and summon the pink-speckled rage-demon that lives deep within your brain.

Enter one of the Pixel’s best-kept secrets: With your Google-made gadget in hand (or fin — no offense intended to any aquatic readers out there), you’ll never have to hold again.

Open up the Phone app, tap the three-dot menu icon in its upper-right corner, and select “Settings” from the menu that pops up. Look for the section labeled “Hold for Me.” It’s present on the Pixel 3 and higher, and it’s currently available in the US, the UK, Canada, Australia, and Japan (sorry, other international pals!) — mostly for English-speaking Pixel owners, though also in Japanese in Japan.

If you meet those conditions, tap that line, then make sure the feature is actually active by flipping the toggle into the on position on the screen that comes up. It’s typically off by default on most new Pixels, so be sure to take this step anytime you reset your phone or move into a new device.

Then just call any crappy company you like (or don’t like, to be more accurate) and look for the handy new Hold for Me option on your screen once the call is underway and your brightly colored rage beast starts polka-dancing around your cranium. As long as the call involves a toll-free number, your fancy new sanity-saving button should show up and be ready for vigorous pressing.

Google Pixel calling features: Hold For Me
That Hold for Me button is one of the Pixel’s best calling-related tricks.

JR Raphael, IDG

The second the company places you on an eternal hold, just smash that button, utter a few choice curse words for good measure, and then go about your business without having to listen to the sounds of smooth jazz and endless reassurances that your call is, like, totally important to them (and will be answered — let’s all say it together now — in the order it was received).

Google Pixel calling features: Hold For Me in action
No more holding for you, thanks to your Pixel’s Hold for Me capability.

JR Raphael, IDG

As soon as an actual (alleged) human comes on the line, your Pixel will alert you. At that point, unfortunately, the displeasure of dealing with said company is back on your shoulders. But being able to skip the 42-minute build-up to that point is a pretty powerful perk and one you won’t find on any other type of device.

Pixel calling trick #3: The irritation estimation station

Having your Pixel hold for you is fantastic and all, but what if you could avoid putting yourself in a position where you even have to hold in the first place?

Well, hold the phone, my friend — ’cause your Pixel can help with that, too.

In what may be the most hidden of all hidden Pixel calling perks, the Pixel Phone app has the incredibly cool ability to tell you how busy any given business number is likely to be before you even place the call.

This one happens automatically, whenever relevant info is available, so you mostly just need to know it exists and then keep an eye out for it when the right situation arises.

The way it works is this: When you open up your Pixel’s Phone app and start typing out a number in the dialer, your Pixel will automatically match those digits against its knowledge of that business’s typical activity at different times of day, along with the wait you’re likely to face depending on when you call. As soon as you finish typing in the number, that info will appear on your screen if it’s available — like so:

Google Pixel calling features: Hold time estimates
See exactly how long waits are expected to be with your Pixel phone’s built-in intelligence.

JR Raphael, IDG

Now, that’s some intel I will happily accept!

What’s especially useful about this setup is that in addition to seeing the expected wait time for your call in the current moment, you can tap any other day or time to compare and see if things might be at least a little less bad at some point in the future.

Yes, please — and thank you.

Pixel calling trick #4: The menu maze skipper

Holding aside, one of the most irritating parts of calling a company is navigating your way through those blasted phone tree menus. You know the drill, right?

  • For store hours and information, press 1.
  • For directions to the nearest location, press 2.
  • For a test of your sanity, continue to listen to these choices.
  • For the sound of sea cucumbers, press 4.
  • For the option you actually want to select, prepare to wait through at least 14 more annoying options.

Yeaaaaaaaaah. Not a great use of anyone’s time (though, to be fair, a great way to test your ability to get angry).

Well, my Pixel-palmin’ pal, your Googley gadget’s got your back. But it’s on you to enable the associated feature and make sure it’s ready to save you from your next visit to the haunted phone tree forest.

Here’s how:

  • Open up that fancy Phone app of yours.
  • Once again, tap the three-dot menu icon and select “Settings.”
  • Look for the “Direct My Call” option and smack it with your favorite fingie.
  • Flip all the toggles on the screen that comes up into the on position. (Again, they’re usually off by default with any new or freshly reset Pixel phone.)
  • Flip a pancake in a griddle and then eat it with gusto.*

* Gusto-filled pancake consumption is optional but highly recommended.

This one’s available on the Pixel 3a and higher, only in the US and UK, and only in English, by the way. Insert the requisite grumbling on behalf of all other Pixel owners here.

So long as you can get the option enabled, though, just look for the Direct My Call button at the top of the screen after you’ve made a call to a company that clearly hates you. Tap it, and as soon as your Pixel detects number-based menu options being provided, it’ll kick the feature in and start showing you any available options — set apart in buttons alongside a transcription of everything else the obnoxious phone system is saying to you:

Google Pixel calling features: Direct My Call
Phone menus are far less obnoxious with the help of your Pixel’s Direct My Call system.

JR Raphael, IDG

Still mildly annoying? Of course. But a noticeable step up from actually having to tune in and listen to all that gobbledegook in real-time? Holy customer service nightmare, Batman — is it ever.

Pixel calling trick #5: The complete call transcriber

Our next buried Pixel calling treasure is technically an Android accessibility feature. But while it provides some pretty obvious (and pretty incredible) benefits for folks who actually have hearing issues, it can also be useful for just about anyone in the right sort of situation.

It’s part of the Pixel’s Live Caption system, and it has a couple of interesting ways it could make your life a little easier. The system itself is available in English on the Pixel 2 and later, and in English, French, German, Italian, Japanese, and Spanish on the Pixel 6 series and later (tan elegante!).

To use it, just press either of your phone’s physical volume buttons while you’re in the midst of a call — then look for the little rectangle-with-a-line-through-it icon at the bottom of the volume control panel that pops up on your screen.

Tap that icon, confirm that you want to turn on the Live Caption for calls system, and then just wait for the magic to begin.

Google Pixel calling features: Live Caption
You can see what anyone is saying with the Pixel’s Live Caption option.

JR Raphael, IDG

How ’bout them apples, eh? Everything the other person says will be transcribed into text and put on your screen in real-time all throughout the call.

If you’ve got the Pixel 6 or higher, you can even have your phone translate the jibber-jabber into a different language on the fly. Those more recent Pixel models can also allow you to type out responses back and then have the affable genie within your gadget read your words aloud to the other person on your behalf — a fine way to have an actual spoken conversation without making a single peep, for times when both silence and voice-based communication are required.

To activate that option and also configure the other Live Caption possibilities, head into the Sound & vibration section of your system settings and tap the “Live Caption” line. There, you can enable the ability to “Type responses during calls” and also tell your Pixel to caption your calls always or never — or to prompt you every time to check.

Google Pixel calling features: Live Caption options
The Pixel’s Live Captions settings hold a treasure trove of hidden helpers.

JR Raphael, IDG

Just be sure to hit your volume button again once the call is done and turn the Live Caption system back off via that same volume-panel icon. Otherwise, the system will stay on and continue to caption stuff indefinitely (and also consume needless battery power while it’s doing it).

Pixel calling trick #6: The automated screener

Perhaps my favorite tucked-away Pixel phone feature is one that brings us into the realm of incoming calls. It’s an extraordinarily sanity-saving system for screening calls as they come in to keep you from having to fritter away moments of your day talking to telemarketers, totally theoretical relatives named Ned, and anyone else you’d rather avoid.

This one’s available on all Pixel models, but only in English and only in the US, as of now. The feature sort of works in a bunch of other countries, too, though without the automated element.

To set it up, head back into your Pixel’s Phone app and once again tap that three-dot menu icon followed by “Settings.” This time, find and select the line labeled “Call Screen.”

It may take your phone a few minutes to activate the feature the first time you open it, but once it does, you can set up exactly how the system works and when it should kick in. The latest version of Google’s Pixel Call Screen gives you three simple paths to choose from, depending on how aggressively you want your Pixel to screen and filter calls for you:

  • Maximum protection automatically kicks all unknown numbers (i.e. anyone who isn’t in your Google Contacts on Android) into the screening processes and auto-declines any calls determined to be spam on your behalf
  • Medium protection takes things down a notch and screens only suspicious-seeming calls, while continuing to auto-decline spam
  • And Basic protection declines only known spam calls, without all the extra screening

Pick whichever path you prefer, and the next time a person and/or evil spirit on the other end of the line gets sent into screening, your phone will ring while showing a transcription of their response on the screen. That way, you can see what the call’s about before deciding if you want to pick up.

Google Pixel calling features: Call Screen
Call Screen lets your Pixel screen and filter calls on your behalf.

JR Raphael, IDG

And here’s the especially cool part: Once the system is set up and activated, you can always activate it manually, too — even when a call is coming in from a known number. Just look for the “Screen call” command on the incoming call screen. Whoever’s calling will be asked what they want, and you’ll see their responses transcribed in real-time. You can then opt to accept or reject the call or even select one-tap follow-up questions to get more info (and/or annoy your co-workers, friends, and family — an equally valid use for the function, if you ask me).

Google Pixel calling features: Call Screen questions
Call Screen can ask all sorts of questions for you before you decide to take a call.

JR Raphael, IDG

You can always find a full transcription and audio recording of those interactions in the Recents tab of your Phone app, too, in case you ever want to go back and review (and perhaps publish) ’em later (hi, honey!).

Google Pixel calling features: Call Screen transcription
Your Pixel’s Call Screen recordings and transcriptions are always available, if you ever want to revisit ’em.

JR Raphael, IDG

Ah…efficiency.

Pixel calling trick #7: The out-loud call announcer

Provided you work in a place where occasional noises aren’t a problem, another interesting way to stay on top of incoming calls is to tell your Pixel phone to read caller ID info out loud to you anytime a call comes in. That way, you can know who’s calling as soon as you hear the ringtone, without even having to find your phone or glance up from your midafternoon Wordle break (ahem, Very Important Work Business™).

This one’s super-simple to set up: Just gallop your way over into the Phone app, tap that three-dot menu icon and select “Settings,” then look for the “Caller ID announcement” line way down at the bottom of the screen.

Tap it, and you’ll be able to choose to have incoming call announcements made always, never, or with the lovely middle-ground option of only when you’ve got a headset connected.

Google Pixel calling features: Caller ID announce
Hear incoming call info out loud in whatever scenarios you want with the Pixel’s Caller ID announcement options.

JR Raphael, IDG

And here’s an extra little bonus to go along with that: If you’ve got a Pixel 6 or higher and use English, Japanese, or German as your system language, you can even accept or reject a call solely by speaking a command out loud.

You only have to configure it once: Provided you have one of those devices, head into your Pixel’s system settings and type “quick phrases” into the search box at the top of the screen.

Tap “Quick phrases” in the list of results, then turn the toggle next to “Incoming calls” into the on position — and the next time a call starts a-ring-a-ring-ringin’, simply say “Answer” or “Decline” to have your Pixel do your bidding without ever lifting a single sticky finger.

Pixel calling trick #8: The polite rejecter

All right, so what if you get a call you know you want to avoid — but in an extra-polite way that prevents you from having to respond to a voicemail later? Well, fear not, for your Pixel has a fantastic feature for that very noble purpose.

The next time such a call comes in, look for the “Message” button on the incoming call screen, right next to that “Screen call” command we were just going over a minute ago. (And if your Pixel’s screen was on when the call started and you’re seeing the small incoming call panel instead of the full-screen interface, press your phone’s power button once. That’ll bump you back out to the standard full-screen setup, where you’ll see the button you need.)

Tap that command, and how ’bout that? With one more tap, you’ll be able to decline the call while simultaneously sending the person a charming message explaining the reason for your rejection.

Google Pixel calling features: Quick Responses
One-tap rejection with the Pixel’s Quick Responses system — doesn’t get much easier than that.

JR Raphael, IDG

You can pick from a handful of prewritten texts or even opt to write your own message on the spot, if you’re feeling loquacious. You can also customize the default responses to make ’em more personalized and appropriate for your own friendly rejection needs. Just go back into the “Settings” area of your Pixel Phone app and look in the “Quick responses” section there to get started.

Google Pixel calling features: Quick Response custom
You can create your own custom Quick Responses within the Pixel Phone app’s settings.

JR Raphael, IDG

Cordial, no? And speaking of simple dismissals, we’ve got one more important Pixel calling trick to consider…

Pixel calling trick #9: The simple silencer

Let’s be honest: No matter what type of pleasant-seeming ringtone you pick out on your phone — the gentle tolling of chimes, the majestic fwap! of an octopus violently flapping its limbs, or maybe even the soothing bellows of that weird guy who for some reason shouted in every single song by the B-52’s — something about the sound of a phone ringing always manages to raise one’s hackles.

Well, three things: First, on the Pixel 2 or higher, you can configure your phone to vibrate only when calls first come in — and then to slowly bring in the ring sound and increase its volume as the seconds move on. That way, the actual ringing remains as minimally annoying as possible and only begins (and also only gets loud) when it’s actually needed. To activate that option, look in the Sound & vibration section of your system settings, tap either “Vibrate for calls” or “Vibration & haptics,” then make sure the toggle next to “Vibrate first then ring gradually” is in the on and active position.

Second, more recent Pixel models have a nifty “Adaptive alert vibration” feature that’ll intelligently turn down your device’s vibration strength anytime your phone is sitting face-up on a surface. That’s a nice way to make that buzzing a bit less bothersome when a call comes in in any such scenarios.

And finally, a super-simple possibility to remember: The next time your Pixel rings and you want to make it stop — whether you’re planning to answer the call or not — just press either of the phone’s physical volume buttons. It’s easy to do even with the phone in your pocket, and the second you do it, all sounds and vibrations will end. You’ll still be able to answer the call, ignore it, send a rejection message, scream obscenities at the caller with the knowledge that they’ll never hear you, or whatever feels right in that moment. But the hackle-raising sound will be silenced, and your sanity will be saved. Whew.

Remember: There’s lots more where this came from. Come join my completely free Pixel Academy e-course for seven full days of delightful Pixel knowledge — starting with some camera-centric smarts and moving from there to advanced image magic, next-level nuisance reducers, and oodles of other opportunities for advanced Pixel intelligence.

I’ll be waiting.

Researchers tackle AI fact-checking failures with new LLM training technique

As the excitement about the immense potential of large language models (LLMs) dies down, now comes the hard work of ironing out the things they don’t do well.  

The word “hallucination” is the most obvious example, but at least output that is crazily fictitious stands out as wrong. It’s the lesser mistakes – factual inaccuracies, bias, misleading references – that are more of a problem because they aren’t noticed.

It’s become a big enough issue that a paper by the Oxford Internet Institute argued last year that the technology is so inclined to sloppy output that it poses a risk to science, education, and perhaps democracy itself.

The digital era finds itself struggling with the issue of factual accuracy across multiple spheres, and LLMs in particular struggle with facts. This isn’t primarily the fault of the LLMs themselves; if the data used to train an LLM is inaccurate, the output will be too.

Now a team of researchers from IBM, MIT, Boston University, and Monash University in Indonesia has suggested techniques they believe could address the shortcomings in the way LLMs are trained. The paper’s abstract sums up the problem:

“Language models appear knowledgeable, but all they produce are predictions of words and phrases — an appearance of knowledge that doesn’t reflect a coherent grasp on the world. They don’t possess knowledge in the way that a person does.”

One solution is to deploy retrieval-augmented generation (RAG), which improves LLMs by feeding them high-quality specialist data.

The catch is that this requires a lot of computational resources and human labor, which renders the technique impractical for general LLMs.
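For readers unfamiliar with the mechanics, RAG’s core move is simple to sketch: retrieve documents relevant to the query, then prepend them to the model’s prompt so it answers from fresh, trusted context rather than from whatever its training data happened to contain. Here’s a deliberately toy illustration, with naive word-overlap scoring standing in for the embedding search a real system would use; all names and data here are invented for the example:

```python
def retrieve(query, docs, k=1):
    """Score documents by word overlap with the query and return the top k.

    A crude stand-in for the vector-embedding search a production RAG
    pipeline would actually perform.
    """
    def score(doc):
        q_words = set(query.lower().split())
        d_words = set(doc.lower().split())
        return len(q_words & d_words)

    return sorted(docs, key=score, reverse=True)[:k]


def augment_prompt(query, docs):
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


# Hypothetical specialist corpus.
docs = [
    "The BRCA1 gene is located on chromosome 17.",
    "Azure is Microsoft's cloud platform.",
]

print(augment_prompt("Which chromosome is BRCA1 on?", docs))
```

The “lot of computational resources and human labor” the researchers mention lives in the parts this sketch waves away: building and maintaining the high-quality corpus, embedding it, and keeping it current.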

Marking its own homework

The team’s alternative is something called deductive closure training (DCT), whereby the LLM assesses the accuracy of its own output.

In unsupervised mode, the LLM is given “seed” statements, which it uses to generate a cloud of inferred statements, some of them true and others not. The model then analyses the probability that each of these statements is true by plotting a graph of their consistency. When supervised by humans, the model can also be seeded with statements known to be true.

“Supervised DCT improves LM fact verification and text generation accuracy by 3-26%; on CREAK, fully unsupervised DCT improves verification accuracy by 12%,” reported the team’s research paper (PDF).
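The consistency idea is easier to see in miniature. Below is a heavily simplified, brute-force sketch (not the paper’s actual algorithm): given the model’s estimated probability for each statement and the logical implications among them, find the truth assignment that is both internally consistent and most probable overall. The statement names and numbers are invented for illustration:

```python
from itertools import product


def most_consistent_assignment(probs, implications):
    """Brute-force the most probable consistent truth assignment.

    probs: dict mapping statement -> model-estimated P(statement is true)
    implications: list of (a, b) pairs meaning "if a is true, b must be true"
    Feasible only for tiny statement sets (2^n assignments), but it captures
    the spirit: consistency constraints can override individual estimates.
    """
    statements = list(probs)
    best, best_score = None, -1.0
    for values in product([True, False], repeat=len(statements)):
        assign = dict(zip(statements, values))
        # Discard assignments that violate any implication (a true, b false).
        if any(assign[a] and not assign[b] for a, b in implications):
            continue
        score = 1.0
        for s in statements:
            score *= probs[s] if assign[s] else 1.0 - probs[s]
        if score > best_score:
            best, best_score = assign, score
    return best


# The model is confident in the seed but lukewarm on a statement it implies.
probs = {"seed": 0.9, "inferred": 0.4}
implications = [("seed", "inferred")]
print(most_consistent_assignment(probs, implications))
# The consistency constraint pulls "inferred" up to true alongside the seed.
```

In the actual DCT setup the model itself generates the statement cloud and the resulting consistent labels become training data; this sketch only illustrates the selection step.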

Meanwhile, a second team has suggested a way to refine this further using a technique called self-specialization, essentially a way of turning a generalist model into a specialist one by ingesting material from specific areas of knowledge.

“They could give the model a genetics dataset and ask the model to generate a report on the gene variants and mutations it contains,” IBM explained. “With a small number of these seeds planted, the model begins generating new instructions and responses, calling on the latent expertise in its training data and using RAG to pull facts from external databases when necessary to ensure accuracy.”

This might sound rather like a way of implementing RAG. The difference is that these specialist models are only called upon, via an API, when they are needed, the researchers said.

Still bad at facts

According to Mark Stockley, who co-presents The AI Fix podcast with Graham Cluley, the underlying problem is that LLMs are widely misunderstood. They are good at specific tasks but are not, nor were ever intended to be, uncomplicated fact- or truth-checking engines.

“The IBM research doesn’t seem to address the root cause of why LLMs are bad at facts, but it suggests there is a useful but unspectacular modification that might make them less bad at the things they’re currently bad at,” he said.

“You can look at that and say the route to a truly intelligent AI doesn’t go through LLMs and so improving them is a sideshow, or you can look at that and say LLMs are useful in their own right, and a more useful LLM is therefore a more useful tool, whether it’s enroute to artificial general intelligence (AGI) or ultimately a cul-de-sac.”

What is not in doubt, however, is that LLMs need to evolve rapidly or face either becoming specialized, expensive tools for the few or glorified grammar checkers for everyone else.

MIT delivers database containing 700+ risks associated with AI

A group of Massachusetts Institute of Technology (MIT) researchers has opted to not just discuss all of the ways artificial intelligence (AI) can go wrong, but to create what they described in an abstract released Wednesday as “a living database” of 777 risks extracted from 43 taxonomies.

According to an article in MIT Technology Review outlining the initiative, “adopting AI can be fraught with danger. Systems could be biased or parrot falsehoods, or even become addictive. And that’s before you consider the possibility AI could be used to create new biological or chemical weapons, or even one day somehow spin out of control. To manage these potential risks, we first need to know what they are.”

For IT, Jamf’s Microsoft Azure partnership means a lot

Jamf has removed yet another brick in the wall put up by Windows-centric IT staffers to fend off acceptance of Macs in the enterprise, revealing a new partnership with Microsoft that simplifies management of both Windows and Apple devices using Microsoft Azure.

The arrangement means Jamf device management solutions will be hosted on Microsoft Azure and made available for purchase on the Azure Marketplace. 

The Apple device management company has also joined the Microsoft ISV Partner Program and reached a five-year agreement to expand its existing collaboration with new and innovative Microsoft Cloud and AI-powered solutions.

Apple is in the enterprise tent

This builds on work both companies have been doing since at least 2017, as they responded to the realization that most enterprises now recognize the value of Apple products within their ecosystems.

This trend kick-started when the iPhone entered the workplace as an employee-owned device and grew to include employee-choice schemes across multiple platforms.

Of course, those in IT with a vested (and sometimes expensively qualified) interest in Microsoft’s hegemony continue to sit on their thrones before a restless ocean to deny the changing tides — and those are the ones most likely to benefit from the new partnership between Jamf and Microsoft.

That’s because the move to make Jamf Pro available via Azure (cloud and marketplace) means those accustomed to using Azure to help manage and secure Windows devices can now use Jamf to manage and secure Apple devices from within the same familiar, unified environment. 

More than Windows

This goes beyond just the PC. Many companies rely on Microsoft’s back-end technologies and services, so the move to bring Jamf into Azure will make life a little easier there too. 

To an extent, this reflects what current Jamf CEO John Strosahl told me last year: “Many companies still use Windows applications and services, and we do support some of those activities on network security and the like — things that are further from the device. But the closer you get to the device, the more we believe that Apple is the future.”

With Azure, it will be much easier to integrate iPhones, iPads, and Macs in complex IT workflows built on Microsoft’s enterprise cloud platform.

The direction of travel has been clear for a while, particularly as Jamf integrates with Microsoft Intune and Entra ID. In truth, Jamf and Microsoft have created a string of landmark partnerships in recent years, including integrations across Sentinel, Defender, and Copilot for Security. Jamf joined the Microsoft Intelligent Security Association (MISA) in 2023. 

The Apple enterprise

The news should also help Windows-based tech support take better control over the security of those Apple devices that are already deployed across their networks.

With as many as 75% of enterprise employees saying they’d choose a Mac if given the option, IT really should take security seriously. Earlier this year, Jamf reported that 40% of mobile users and 39% of organizations are running a device with known vulnerabilities. Apple itself has warned that the number of data breaches has at least tripled since 2013. (Though it is fair to say that Apple is not the platform most impacted, which is a story that speaks volumes on its own.) Timely updates on every platform should be in your supplier SLAs.

“It’s time for organizations to get their modern device estates in order by embracing industry best practices and building a defense-in-depth strategy for the hybrid workforce,” Michael Covington, vice president of portfolio strategy at Jamf, said earlier this year.

Soon, with Jamf and Azure, it will become a little easier to do just that. The multi-platform future of enterprise technology continues to emerge, and Apple will play a big part.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Google documents filed in antitrust case show efforts to push data collection limits

For almost as long as it has existed, Google has been at the center of controversies around its data strategy, ranging from privacy concerns, data retention and its cybersecurity implications, and compliance to the debate about what kind of limits there should be for leveraging data.

A series of Google internal documents, which were entered as exhibits in an ongoing United States prosecution of the company on antitrust issues, shines a light on the data giant’s strategy and positioning. The documents are roughly seven years old, so these memos may not reflect Google’s current thinking, but they do give IT leaders a peek into Google’s candid views on data strategies.

The Google documents are part of the United States v. Google litigation being heard in the US District Court for the Eastern District of Virginia, and were made public August 6.

The internal documents made clear Google’s enthusiasm for coordinating all possible data about users so that it could sell the most focused details to advertisers. Google said that it needs to “use a combination of advertiser data such as email subscription lists, Google signed-in data such as web traversal data, Gmail data such as receipts, and subscribed newsletters, to target users across multiple devices.” 

It also showed a fondness for various corporate-speak euphemisms for spying on users, such as “sharing of conversational corpus” and “being able to harvest the conversation signals that could improve ad timeliness and applicability will be important to stay competitive.” 

Google said that it needed to invest more heavily “to improve our understanding of the message that is being exchanged between the parties. To be used to better understand the funnel position of a user and as well as broad quality uplift.”

Google also wrote that it needed to “evaluate tradeoffs between user happiness and shorter-term revenue gains.”

The notes also revealed hesitation by some at Google to push data usage too far, saying, “The capabilities of Gmail ads format has remained a quite limited set over the last couple of years, mostly due to security concerns by the consumer Gmail team.”

One document did express corporate worries about privacy, but it did not involve the privacy of users. It involved the privacy of Google itself.

“Once again, the privacy protections here are key. We would never allow audiences generated with Google data to leave the Google ecosystem, nor impression level reports based on those media buys,” it said. “Ad tech vendors or agencies could then use these reports and the ability to activate media from them within their own systems. We suggest we require Google-branded, or alternatively white-labeled or otherwise branded by the partner.”

The documents also show that Google at the time was starting to see the need to focus more on what users were doing online and less on where they were doing it. Google said that it wanted to focus on “geo-targeting based on weather/travel searches, not IP address, auto make/model/year, e-commerce product catalogs, user profile/ transaction data, etc.”

Google strategists elaborated on these possibilities as they evaluated efforts by various companies that were luring away Google advertisers. 

“Services have enough data — typically location, logged-in users, intent data — to offer unique targeting aligned with their brand. Weather.com can command a premium with weather data, Pandora can optimize based on what type of music someone listens to, etc. TripAdvisor can target based on destination searches. Commerce companies can even expand into audience extension, buying third-party inventory on behalf of advertisers. We lost Wayfair because AppNexus is better at this than us,” the documents said.

“Audio services like Pandora and Spotify are heavily subscription-driven and many content companies are pursuing subscriptions with increasing success. NYT [New York Times] makes as much from subscriptions as ads and wants to emulate Netflix’s sophistication with upsells. Conde Nast is trying to build a universal subscriber ID to manage on-site subscription offers.”

The documents also included management discussions about Google’s strategic weaknesses, pointing out that some advertisers who had left Google fared significantly better.

“Weather.com ended exclusivity with Google and is seeing 30%+ revenue lift,” it said.

The documents also looked at Gmail’s global challenges at the time, under “coverage shortcomings,” noting:

“Gmail lacking strong penetration in Apple devices. No obvious differentiator from Apple Mail to merit standalone download, unlike data differentiator in Maps. Gmail lacking footprint in key countries/regions. China: no Google products. Japan: Yahoo mail is the leading provider. Russia: Mail.ru is the key player.”

The Virginia case, one of multiple antitrust actions involving Google at the moment, is heading to a jury trial. Many more documents, some of them much more recent, are expected to be published soon. Those are likely to shed even more light on Google’s data strategies.

Microsoft rolls out Face Check selfie verification system

Microsoft’s facial matching verification system, Face Check, is now available. The feature, part of Entra Verified ID, offers a new way to confirm a user’s identity and protect against unauthorized login attempts, Microsoft said.

Face Check works by comparing selfie footage taken on a user’s smartphone in real time with a verified photo held on Microsoft’s servers — a passport photo or driver’s license, for example. The real-time selfie footage won’t be stored after a verification attempt, Microsoft said.

A successful match will confirm a user’s identity and authorize a login to an account. This could be useful for purposes such as remote employee onboarding or password changes, the company said. 

Microsoft’s Azure AI Vision Face API is used to power the face detection and recognition. The software can also conduct a “liveness” check, which helps prevent the use of a static photo or 2D video to trick the verification system, Microsoft said, so deepfakes shouldn’t be effective.  

Customer organizations can choose the level of confidence required to accept a Face Check login attempt. The higher the confidence score threshold, the less likely Face Check will incorrectly verify an impersonator. The default score is a 50% match, which equates to a one in 100,000 chance of getting a false positive; at 90%, the chances are one in a billion, Microsoft said. (A higher confidence score requirement also increases the likelihood a legitimate login attempt will be rejected.)
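To make that tradeoff concrete, here’s a minimal sketch (not Microsoft’s actual API; the helper function and its name are hypothetical) that picks the lowest threshold meeting a target false-accept rate, using only the two figures Microsoft has disclosed:

```python
# Illustrative only: the threshold-to-odds figures below come from
# Microsoft's published numbers; the selection logic is a hypothetical
# example of how an admin might reason about the tradeoff.

# Published false-accept odds at two confidence thresholds:
FALSE_ACCEPT_ODDS = {
    50: 1 / 100_000,        # default threshold: ~1 in 100,000 false positives
    90: 1 / 1_000_000_000,  # strict threshold: ~1 in a billion
}

def pick_threshold(max_false_accept_rate: float) -> int:
    """Return the lowest listed threshold whose false-accept rate meets
    the target -- lower thresholds mean fewer rejected legitimate logins."""
    for threshold in sorted(FALSE_ACCEPT_ODDS):
        if FALSE_ACCEPT_ODDS[threshold] <= max_false_accept_rate:
            return threshold
    raise ValueError("no listed threshold meets the target rate")

print(pick_threshold(1e-5))  # the 50% default already meets a 1-in-100,000 goal
print(pick_threshold(1e-8))  # only the 90% threshold meets a stricter goal
```

The point of the sketch: tightening the target by a few orders of magnitude forces the stricter threshold, which in turn raises the odds of rejecting a genuine user.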

Changes in a user’s appearance compared to the verified photo (a different haircut, for example) could lower the match score, as could differences in surroundings, such as lighting.

Microsoft Entra ID customers can access Face Check as a standalone service (which costs 25 cents per verification) or with a subscription to the Entra Suite paid add-on ($12 per user each month).  

Hollywood unions OK AI-cloned voices in commercials

Hollywood actors’ union SAG-AFTRA said it has signed an agreement with talent marketplace Narrativ to let advertisers buy the rights from actors to recreate their voices using AI.

According to Reuters, the agreement allows the actors themselves to set the price for the digital voice copy, provided that it is at least equivalent to SAG-AFTRA’s minimum wage for audio-based advertising. Brands must also obtain consent from the actor for any ad that uses a digital voice copy.

“It is understandable that not all members will be interested in taking advantage of the opportunities that licensing their digital voice copies can offer. But for those who want to, you now have a safe alternative,” SAG-AFTRA official Duncan Crabtree-Ireland said in a statement.

BitLocker encryption becomes the default in Windows 11 24H2

It’s long been possible to encrypt the contents of a Windows PC using the included BitLocker encryption tool. Beginning this fall, with the newest update of Windows 11 (version 24H2), the encryption will be activated by default during re- or new installations, according to The Verge.

Microsoft also plans to lower the system requirements for BitLocker; for example, the computer no longer needs to support Hardware Security Test Interface (HSTI) or Modern Standby.

In a normal update, encryption will not be turned on automatically, meaning users shouldn’t run into trouble accessing files if they update from Windows 11 23H2 to 24H2, for example.


The irony of Google’s Pixel 9 AI gamble

If there were an ounce of doubt remaining, I think we can safely set it aside now and say that Google’s all in on AI.

And, let’s be honest: That’s probably putting it mildly.

At its earlier-than-usual Made by Google Pixel launch gala this week, El Googabond made it clear that its next generation of Android-based Pixel phones would be all about AI. And good golly, it sure ain’t kiddin’.

The Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL — along with the Pixel 9 Pro Fold, launching a little later in the year — represent a “rebuilding” of Android with “AI at the core,” as a Google executive put it in a prebriefing I attended ahead of Tuesday’s public event.

That kind of language is everywhere you look with these new Pixel products. Right off the bat, the official Google announcement proclaims that the devices “bring you the best of Google AI.” Another company blog post waxes poetic about how the now-present-on-Pixels-by-default Gemini Android assistant reshapes the Pixel experience. Everywhere you look and everything you see about the phones, it’s AI, AI, AI. And then AI some more.

It’s an interesting move that mirrors trends across the tech industry right now. Practically every app, service, and product you see is “AI-based” at this point, even if it does exactly the same stuff it did two years earlier — before those two lofty letters turned into an unavoidable buzzword. 

With Google’s Pixel phones, though, the singular-seeming emphasis on AI represents a puzzling duality with more than a sliver of irony attached. And it sure seems like going all in on the idea of “AI” as a central identity of the Pixel model is a bold and risky gamble for Google to make.

[Psst: Got a Pixel? Any Pixel? Check out my free Pixel Academy e-course to uncover all sorts of advanced intelligence lurking within whatever Pixel model you’re using!]

Google’s Pixel 9 phones, AI aside

Let’s get one thing out of the way: On the surface, at least, the new Pixel 9 phones look like spectacular products.

The Pixel 9 Pro XL, in its Rose Quartz color. [Image: Google]

The Pixel 9 series maintains all the unmatched qualities that have made Google’s Pixel devices stand out from the Android pack since the very first model:

  • The phones sport the same pristine, unmuddled Android software experience that’s defined Pixels from the get-go, along with the same seamless integration with standout Google services and cohesive-feeling consistency with the greater Google and Android ecosystems.
  • They have all sorts of genuinely useful features no one else offers, along with none of the patience-testing and privacy-compromising additions so many other Android device-makers dump into their devices.
  • Their cameras look to be every bit as phenomenal as what past Pixels have possessed, with all sorts of intelligent software-driven editing tools and opportunities for effortless enhancements.
  • And they continue Google’s industry-leading seven-year promise for timely and reliable OS updates, security patches, and quarterly feature drops — something even Apple can’t outclass.

Newly polished outer appearances aside, the Pixel 9 devices add some interesting and useful-seeming practical elements into the equation, too, including a Call Notes feature that creates a private (and locally processed) summary of every call you make, a Pixel Screenshots app that lets you search through saved screenshots with natural language inquiries, and a futuristic new camera feature that lets you capture a group photo including yourself without having to rely on any awkward arm-stretching maneuvers.

That’s all awesome, right? So what’s the gamble — and where’s the irony?

It’s two-fold. And it’s something I suspect a lot of potential Pixel purchasers — particularly folks who aren’t already familiar with the Pixel experience and who might be considering a switch from a different type of device — are gonna be chewing over closely in the weeks and months ahead.

Google’s Pixel 9 AI obsession

First things first, let’s not beat around the bush: AI, in the way the word is most commonly perceived and defined right now, hasn’t exactly made a winning impression on most ordinary mammals.

Even specific to Google, the advent of all this generative AI gobbledegook has been a bit of a mess. The company has been pushing us with increasing aggressiveness to adopt its next-gen Gemini AI assistant in place of the classic (now “legacy”) Google Assistant while simultaneously injecting AI-powered poppycock into all sorts of awkward places — and for most of us, that largely seems to be leaving a bad taste.

From the facepalm-inducing inaccuracies and inconsistent, unreliable info these sorts of systems serve up to their lack of critical functionality core to the Android assistant arena, it’s been a frustrating and forced-feeling rush into a type of technology that seems at best not quite ready for primetime and at worst ill-suited for its intended purpose. Plain and simple, Gemini just isn’t an effective Android assistant when it comes to the types of tasks we need such a service to handle. And the “creative” elements it adds into the mix aren’t particularly compelling or pertinent for that environment.

Most of this is just an inherent side effect of the large-language-model concept at the heart of Gemini and other such systems. LLMs, as they’re called, can’t really analyze or understand anything. At the simplest level, they just look at patterns in language and predict the most likely next word in an ongoing sequence. That’s why they get so much stuff wrong and churn out so much low-quality info while having a tough time understanding and accomplishing the types of tasks we’ve come to expect our Android assistants to manage.
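To illustrate that next-word-prediction idea, here’s a deliberately tiny toy sketch (a bigram counter, nothing remotely close to a real LLM’s scale or sophistication, with made-up training text) showing why pure pattern-matching can sound fluent without understanding anything:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus, purely for demonstration.
corpus = ("the assistant set a timer the assistant set a reminder "
          "the assistant missed the point").split()

# Count which word follows which -- this is all the "model" knows.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent next word. This is pattern
    matching over past text, not comprehension -- which is why such
    systems can be confidently, fluently wrong."""
    return successors[word].most_common(1)[0][0]

print(predict_next("assistant"))  # "set" -- it appeared twice vs. "missed" once
```

Real LLMs operate on vastly larger corpora with learned probabilities rather than raw counts, but the underlying move is the same: predict the likeliest continuation, not the true one.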

Plus, the systems’ most commonly touted new capabilities — being able to generate text (of questionable quality and originality) and create images (of questionable creepiness) — just aren’t things most of us actually need all that often or would be well-advised to use in most situations, particularly in the business universe.

And I’m far from the only one who’s been sensing that.

Google, Pixels, and the AI fixation frustration

A study published by Washington State University earlier this summer dug deep into people’s perceptions of AI at this point. And — yes, indeed — it found that using the term “artificial intelligence” in a product description actively turns off potential purchasers and “reduces purchase intentions”:

The findings consistently showed products described as using artificial intelligence were less popular, according to Mesut Cicek, clinical assistant professor of marketing and lead author of the study.

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions,” he said. “We found emotional trust plays a critical role in how consumers perceive AI-powered products.”

Oh, and also — emphasis mine:

Researchers also discovered that negative response to AI disclosure was even stronger for “high-risk” products and services, those which people commonly feel more uncertain or anxious about buying, such as expensive electronics. … Because failure carries more potential risk, which may include monetary loss or danger to physical safety, mentioning AI for these types of descriptions may make consumers more wary and less likely to purchase, according to Cicek.

So, yeah — there’s the gamble: By positioning the Pixels so firmly around the idea of “AI,” Google may actually repel potential purchasers who might otherwise be interested in the phones and all the legitimately great things they have to offer.

And now the irony: All AI placarding aside, Google’s Pixel phones have actually been delivering incredible real-world experiences with artificial intelligence at their core for years now — since the very earliest Pixel models. Most of that just happened before AI became a household term and took on its current generative AI association.

After all, much of the Pixels’ photography prowess is related to Google’s exceptional work with software-driven, AI-oriented processing. The same goes for the slew of post-capture editing and image-enhancing options offered on Pixels. The devices’ popular and extremely useful call-related features are all AI-provided advantages. The list just keeps going from there — and absolutely includes the most practical-seeming new additions from this year’s models, too, as mentioned a moment ago.

So it’s actually been AI that’s set Pixels apart and made them what they are all along. And it’s still AI that, in many ways, continues to make them commendable. That term has just taken on a new meaning now and brought with it a bunch of extra baggage and also-ran silliness most of us could happily (and arguably quite eagerly) do without.

The Pixel perception problem

For years now, I’ve been crowing about how Google has long struggled with figuring out how to market Pixel phones and present their very real, very compelling advantages in a way that both reaches and resonates with the general phone-buying public. As I wrote around this time last year:

Where the Pixel really stands out is in all the unique bits of Google intelligence it brings into the equation and the practical impact those elements add into your day-to-day life — things like the Pixel’s exceptional call spam screening system and its Call Screen feature, which can answer calls for you and lean on AI to interact with unknown callers so you’re barely even bothered by their interruptions. The Hold for Me feature, which takes over torturous holds for you and then notifies you when an actual (alleged) human comes back on the line, is another one of those things you never want to live without once you realize how helpful it can be.

And the updates — for the love of Goog, the updates. The Pixel’s long-standing advantage in the area of timely and reliable software update delivery has always been a difficult type of value to convey and get any “normal” phone owner to care about, important as those of us in these quarters may know it to be.

But [starting] with the Pixel 8 and Pixel 8 Pro, being able to say “Our phones will remain viable and safe for you to use for a full seven years, which is more than anyone else offers” — and to be able to break down a specific dollar figure of exactly how much money that’ll save you over the course of the device’s life — that’s the kind of information that can make an impact with anyone and emphasize what sets the Pixel apart.

All those statements apply even more now with the Pixel 9 series. But now more than ever, Google risks getting lost in an ocean of AI ambivalence by focusing on those two letters as a primary reason for why the phones are worth owning — both in a philosophical sense, given that “AI” actually means very little (beyond maybe “unreliable and not particularly important”) to most people, and in a practical sense, since that focus relies on overused buzzwords instead of the genuine bits of specific practical value Pixel phones provide.

Focusing on Gemini — which, again, by all counts remains an incomplete, inconsistent, and unreliable Android assistant — feels like another liability when it comes to the Pixels’ public perception. It’s clear why tech companies so desperately want us to go gaga over these AI chatbots (hint: Look to their investors’ reactions), but it isn’t clear that their presence offers any meaningful value for most of us human phone owners in our day-to-day lives. And more and more signs suggest even the non-tech-obsessed lay-folk among us are starting to pick up on that and see “AI” as being something between a “meh”-inducing neutral and a “nah”-inspiring turn-off.

Google’s got so much good stuff going on with its Pixel devices, and that’s never been more apparent than now — with the unveiling of this new Pixel 9 series. But by zooming in so heavily on the area of AI and making the phones seem all about that, it risks sending the wrong message and highlighting all the wrong elements of what makes these devices so special.

Don’t let yourself miss an ounce of Pixel magic. Sign up for my free Pixel Academy e-course to find tons of hidden features and time-saving tricks for your favorite Googley phone.