Windows 11 will soon be available on Meta Quest 3 headsets

Meta Quest 3 and Quest 3S headset owners will soon gain access to the “full capabilities” of Windows 11 in mixed reality, Microsoft announced at its Ignite conference this week. 

Users will be able to access a local Windows PC or Windows 365 Cloud PC “in seconds,” Microsoft said in a blog post, providing access to a “private, high-quality, multiple-monitor workstation.” 

Although it’s already possible to cast a PC desktop to a Quest device, the update should make the process simpler. 

Microsoft has been working with Meta to bring its apps to the mixed-reality headsets for a while. Last year, the company launched several Microsoft 365 apps on Quest devices, with web versions of Word, Excel and PowerPoint, as well as Mesh 3D environments in Microsoft Teams. At its Build conference in May, Microsoft also announced Windows “volumetric apps” in a developer preview that promise to bring 3D content from Windows apps into mixed reality.

Meta is the market leader, with Quest headsets accounting for 74% of global AR and VR headset shipments, according to data from Counterpoint Research. At the same time, Microsoft has rolled back its own virtual and mixed reality plans, recently announcing it will discontinue its HoloLens 2 headset, with no sign of a new version in the works. 

The number of AR and VR headsets sold globally fell 28% year on year in the second quarter of 2024, according to IDC analysts. However, IDC predicts the total number of devices sold will grow from 6.7 million units in 2024 to 22.9 million in 2028 as cheaper devices come to market. 

Using a Quest headset as a private, large or multi-monitor setup makes sense from a productivity perspective, said Avi Greengart, founder of research firm Techsponential. Access to all of Windows — rather than just a browser and select Windows 365 apps — adds “a lot of utility.” 

“Large virtual monitors are a key use case for investing in head-mounted displays, whether that’s a mainstream headset like the Quest 3, a high-end spatial computing platform like the Apple Vision Pro, or a pair of display glasses from XREAL that plug into your phone or laptop,” said Greengart.

Several hardware constraints limit the use of Quest devices for work tasks, including display resolution and field of view (the amount of the observable virtual world visible with the device), as well as the discomfort of wearing a headset for extended periods.

Meta’s Quest 3 and 3S devices are more comfortable than Apple’s Vision Pro, but lack the high resolution of the more expensive device. 

Greengart added that some people — particularly older users — might struggle to focus on small text at a headset’s fixed focal distance. Those who require vision-correction lenses inside the headset can find the edges of the display distorted, he said.

“I love working in VR, but compared to a physical multi-monitor setup, it isn’t quite as productive and it gives me a headache,” said Greengart. “That said, I’ve been covering this space for years, and each iteration gets better.” 

Apple plans for a smarter LLM-based Siri smart assistant

Once upon a time, we’d say software is eating the planet. It still is, but these days our world is being consumed by generative AI (genAI), which is seemingly being added to everything. Now, Apple’s Siri is on the cusp of bringing in its own form of genAI in a more conversational version Apple insiders are already calling “LLM Siri.”

What is LLM Siri?

Apple has already told us to expect a more contextually-aware version of Siri in 2025, part of the company’s soon-to-be-growing “Apple Intelligence” suite. This Siri will be able to, for example, respond to questions and requests concerning a website, contact, or anything else you happen to be looking at on your Mac, iPhone, or iPad. Think of it like an incredibly focused AI that works to understand what you are seeing and tries to give you relevant answers and actions that relate to it.

That’s what we knew already. What we’ve learned now (from Bloomberg) is that Apple’s AI teams are working to give Siri even more capabilities. The idea is to ensure Apple’s not-so-smart smart assistant can better compete against chatbots like ChatGPT, thanks to the addition of large language models (LLMs) like those OpenAI’s ChatGPT and Google’s Gemini already use. 

What will Smart Siri do?

This smarter Siri will be able to hold conversations, and drill into enquiries, just like those competing engines — particularly Advanced Voice Mode on ChatGPT. Siri’s responses will also become more human, enabling it to say, “I have a stimulating relationship with Dr. Poole,” and for you to believe that.

These conversations won’t only need to be the equivalent of a visit to the therapist on a rainy Wednesday; you’ll also be able to get into fact-based and research-focused conversations, with Siri dragging up answers and theories on command.

In theory, you’ll be able to access all the knowledge of the internet and a great deal of computationally-driven problem solving from your now-much-smarter smartphone. Apple’s ambition is to replace, at least partially, some of the features Apple Intelligence currently hands off to ChatGPT, though I suspect the iPhone maker will be highly selective in the tasks it does take on.

The company has already put some of the tools in place to handle this kind of on-the-fly task assignment; Apple Intelligence can already check a request to see whether it can be handled on the device, on Apple’s own highly secure servers, or needs to be handed over for processing by OpenAI or any other partners that might be in the mix.

When will LLM Siri leap into action?

Bloomberg speculates that this smarter assistant tech could be one of the highlight glimpses Apple offers at WWDC 2025. If that’s correct, it seems reasonable to anticipate the tech will eventually be introduced across the Apple ecosystem, just like Apple Intelligence.

You could be waiting a while for that introduction; the report suggests a spring 2026 launch for the service, which the company is already testing as a separate app across its devices.

In the run-up to these announcements, Siri continues to develop more features. As of iOS 18.3 it will begin to build a personal profile of users in order to provide better responses to queries. It will also be able to use App Intents, which let third-party developers make the features of their apps available across the system via Siri. ChatGPT integration will make its own debut next month.

Will it be enough?

Siri as a chatbot is one area in which Apple does appear to have fallen behind competitors. While it seems a positive — at least in competitive terms — that Apple is working to remedy that weakness, its current competitors will not be standing still (though unfolding AI regulation might place a glass ceiling on some of their global domination dreams).

Apple’s teams will also be aware of the work taking place in the background between former Apple designer Jony Ive and Sam Altman’s OpenAI, and will want to ensure it has a moat in place to protect itself against whatever the fruits of that labor turn out to be.

With that in mind, Apple’s current approach — to identify key areas in which it can make a difference and to work towards edge-based, private, secure AI — makes sense and is likely to remain the primary thrust of Apple’s future efforts.

Though if there’s one net positive every Apple user already enjoys out of the intense race to AI singularity, it is that the pre-installed memory inside all Apple devices has now increased. Which means that even those who never, ever, ever want to have a conversation with a machine can get more stuff done more quickly than before. Learn more about Apple Intelligence here.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

AI agents are unlike any technology ever

The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.

The biggest news in agentic AI happened this month when we learned that OpenAI’s agent, Operator, is expected to launch in January.

OpenAI Operator will function as a personal assistant that can take multi-step actions on its own. We can expect Operator to be put to work writing code, booking travel, and managing daily schedules. It will do all this by using the applications already installed on your PC and by using cloud services. 

It joins Anthropic, which recently unveiled a feature for its AI models called “Computer Use.” This allows Claude 3.5 Sonnet to perform complex tasks on computers autonomously. The AI can now move the mouse, click on specific areas, and type commands to complete intricate tasks without constant human intervention.

We don’t know exactly how these tools will work or even whether they’ll work. Both are in what you might call “beta” — aimed mainly at developers and early adopters.

But what they represent is the coming age of agentic AI. 

 

What are AI agents?  

A great way to understand agents is to compare them with something we’ve all used before: AI chatbots like ChatGPT. 

Existing, popular LLM-based chatbots are designed around the assumption that the user wants, expects, and will receive text output—words and numbers. No matter what the user types into the prompt, the tool is ready to respond with letters from the alphabet and numbers from the numeric system. The chatbot tries to make the output useful, of course. But no matter what, it’s designed for text in, text out. 

Agentic AI is different. An agent doesn’t dive straight away into the training data to find words to string together. Instead, it stops to understand the user’s objective and comes up with the component parts to achieve that goal for the user. It plans. And then it executes that plan, usually by reaching out and using other software and cloud services. 

AI agents have three abilities that ordinary AI chatbots don’t: 

1. Reasoning: At the core of an AI agent is an LLM responsible for planning and reasoning. The LLM breaks down complex problems, creates plans to solve them, and gives reasons for each step of the process.

2. Acting: AI agents have the ability to interact with external programs. These software tools can include web searches, database queries, calculators, code execution, or other AI models. The LLM determines when and how to use these tools to solve problems. 

3. Memory Access: Agents can access a “memory” of what has happened before, which includes both the internal logs of the agent’s thought process and the history of conversations with users. This allows for more personalized and context-aware interactions.

Here’s a step-by-step look at how AI agents work: 

  1. The user types or speaks something to the agent. 
  2. The LLM creates a plan to satisfy the user’s request.
  3. The agent tries to execute the plan, potentially using external tools.
  4. The LLM looks at the result and decides if the user’s objective has been met. If not, it starts over and tries again, repeating this process until the LLM is satisfied. 
  5. Once satisfied, the LLM delivers the results to the user. 
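
To make that loop concrete, here is a minimal, illustrative Python sketch of the plan-act-check cycle described above. It is a toy, not any vendor's actual agent: the call_llm method stands in for a real language model, and the two tools (calculator, web_search) are ordinary Python functions invented for the example.

    from typing import Callable

    # Tools the agent may use ("acting"). Both are toy stand-ins.
    TOOLS: dict[str, Callable[[str], str]] = {
        "calculator": lambda expr: str(eval(expr)),   # demo only; never eval untrusted input
        "web_search": lambda query: f"(pretend search results for: {query})",
    }

    class ToyAgent:
        def __init__(self) -> None:
            self.memory: list[str] = []               # "memory access"

        def call_llm(self, prompt: str) -> str:
            # Stand-in for the planning/reasoning model. A real agent would send
            # `prompt` plus self.memory to an LLM and parse its reply.
            if "Tool result:" in prompt:
                return "FINAL " + prompt.split("Tool result:")[-1].strip()
            if any(ch.isdigit() for ch in prompt):
                return "ACTION calculator 2+2"        # canned "plan" for the demo
            return "FINAL I can answer that without a tool."

        def run(self, request: str) -> str:
            self.memory.append(f"user: {request}")
            for _ in range(5):                        # bounded retry loop (step 4)
                decision = self.call_llm(request)     # plan and reason (step 2)
                self.memory.append(f"agent: {decision}")
                if decision.startswith("ACTION"):     # act via a tool (step 3)
                    _, tool, arg = decision.split(" ", 2)
                    result = TOOLS[tool](arg)
                    self.memory.append(f"tool({tool}): {result}")
                    request = f"{request}\nTool result: {result}"
                else:                                 # objective met (step 5)
                    return decision.removeprefix("FINAL ").strip()
            return "Gave up after too many attempts."

    print(ToyAgent().run("What is 2+2?"))             # prints: 4

The essential shift is in step 3: the model's output is treated as an instruction to run other software, not merely as text to display.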

Why AI agents are so different from any other software

“Reasoning” and “acting” (often implemented using the ReAct, or Reasoning and Acting, framework) are key differences between AI chatbots and AI agents. But what’s really different is the “acting” part. 

If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. It can even choose to use other AI models or chatbots.

Do you see the paradigm shift?

Since the dawn of computing, the users of software have been human beings. With agents, for the first time ever, the software itself is also a user of software.

Many of the software tools agents use are regular websites and applications designed for people. They’ll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic’s “Computer Use” feature. Other tools that the agent can access are designed exclusively for agent use. 

Because agents can access software tools, they’re more useful, modular, and adaptable. Instead of training an LLM from scratch, or cobbling together some automation process, you can instead provide the tools the agent needs and just let the LLM figure out how to achieve the task at hand. 

They’re also designed to handle complex problem-solving and work more autonomously. 

The oversized impact of the coming age of agents

When futurists and technology prognosticators talk about the likely impact of AI over the next decade, they’re mostly talking about agents. 

AI agents will take over many of the tasks in businesses that are currently automated, and, more impactfully, enable the automation of all kinds of things now done by employees looking to offload mundane, repetitive and complicated tasks to agents. 

Agents will also give rise to new jobs, roles, and specialties related to managing, training, and monitoring agentic systems. They will add another specialty to the cybersecurity field, which will need agents to defend against cyber attackers who are also using agents. 

As I’ve been saying for many years, I believe augmented reality AI glasses will grow so big they’ll replace the smartphone for most people. Agentic AI will make that possible. 

In fact, AI smart glasses and AI agents were made for each other. Using streaming video from the glasses’ camera as part of the multimodal input (other inputs being sound, spoken interaction, and more), AI agents will constantly work for the user through simple spoken requests. 

One trivial and perfectly predictable example: You see a sign advertising a concert, look directly at it (letting the camera in your glasses capture that information), and tell your agent you’d like to attend. The agent will book the tickets, add the event to your calendar, invite your spouse, hire a babysitter and arrange a self-driving car to pick you up and drop you off. 

Like so many technologies, AI will both improve and degrade human capability. Some users will lean on agentic AI like a crutch to never have to learn new skills or knowledge, outsourcing self-improvement to their agent assistants. Other users will rely on agents to push their professional and personal educations into overdrive, learning about everything they encounter all the time.

The key takeaway here is that while agentic AI sounds like futuristic sci-fi, it’s happening in a big way starting next year. 

How to bring Android 16’s Notification Cooldown brilliance to any phone today

Well, I’ll be: We’ve just barely finished welcoming Google’s Android 15 update into the world, and already, Android 16 is teasing us with a tiny early taste.

Yes, indeedly: Google has now officially launched the first developer preview of next year’s Android 16 software. It’s part of the company’s plan to shake up the Android release schedule and put out major new versions in the second quarter of the year with smaller updates to follow in the fourth quarter.

At this point, what we can see of Android 16 is still extremely rough and preliminary. Odds are, most of its more significant elements aren’t even publicly visible just yet. But one standout addition is already stepping into the spotlight and tempting those of us who follow such subjects closely.

The feature is called Notification Cooldown, and it’s something we actually first heard about around this year’s Android 15 release. Google tested the concept during the development of that Android version but ended up pulling it and holding it for Android 16 instead.

As a smart and savvy Android Intelligence reader, though, you don’t have to wait for Android 16 to enjoy this significant new annoyance-eliminator. You can implement something similar and even more versatile, customizable, and effective on any Android device this second — if you know where to look.

[Psst: Grant yourself even more noteworthy notification powers with my new Android Notification Power-Pack — six smart enhancements that’ll change how you use your phone.]

Notification Cooldown — no Android 16 required

First things first: Notification Cooldown, if the name doesn’t ring a bell, is a new Android option designed to minimize interruptions from back-to-back, rapid-fire notifications — like when your chatty colleague Kirstie sends you 7,000 short messages during a Zoom call or your kinda-sorta buddy Brad sends seven stupid sentences somehow split into 14 separate texts.

In Android 16, Notification Cooldown can turn down the volume and “minimize alerts” in any such repeat-interruption scenarios — automatically, on your behalf, when you activate a single simple toggle within your system settings.

Here’s a little secret, though: I’ve had a similar sort of system up and running on my own personal Android phone for ages now, since long before Android 16 existed. It’s even better, actually, ’cause I can decide exactly which notifications will trigger it — down to the specific app and even sender involved — and also decide for myself how long the “cooldown” period should last.
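
If you're curious what that kind of filter boils down to, it's essentially per-conversation rate limiting. Here is a minimal, platform-agnostic Python sketch of the idea, purely as an illustration of the concept rather than BuzzKill's or Android 16's actual code; the NotificationCooldown class and its names are made up for the example.

    import time

    class NotificationCooldown:
        """After one audible alert from a given app/conversation, silence matching
        alerts for a fixed window, then let the next one ring normally."""

        def __init__(self, window_seconds: float = 120) -> None:
            self.window = window_seconds
            self.last_alert: dict[tuple[str, str], float] = {}  # (app, conversation) -> last ring time

        def should_ring(self, app: str, conversation: str, now: float | None = None) -> bool:
            now = time.time() if now is None else now
            key = (app, conversation)
            last = self.last_alert.get(key)
            if last is not None and now - last < self.window:
                return False              # still cooling down: deliver silently
            self.last_alert[key] = now    # first alert in a while: ring
            return True

    # Brad's rapid-fire texts, one every 10 seconds: only the first rings,
    # then the next audible alert comes through after the 2-minute window.
    cooldown = NotificationCooldown(window_seconds=120)
    print([cooldown.should_ring("Messages", "Brad", now=t) for t in range(0, 140, 10)])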

The key is an incredible Android power-user app called BuzzKill. BuzzKill lets you create powerful filters for your phone’s notifications, with all sorts of eye-opening options. I have an in-depth guide to some of its more useful possibilities, but right now, I want to focus on the Notification-Cooldown-like wizardry it can bring to any Android phone this minute — with about 60 seconds of simple setup.

Ready?

60 seconds to smarter Android notifications

All right — here’s all you’ve gotta do to get Android-16-like Notification Cooldown powers on your favorite Android phone today:

  • First, go download BuzzKill from the Play Store. It’ll cost you four bucks, once, as a one-time up-front purchase (and it’ll be worth that much and then some over time!).
  • Open it up and follow the prompts to grant it the permissions it requires. These are all genuinely required for it to be able to view and interact with your notifications. The app is from a known and reputable Android developer, it doesn’t store or share any manner of personal info, and it doesn’t even have the ability to access the internet for transferring any sort of data if it wanted to (which, again, it doesn’t!).
  • Tap the “Create rule” button on the main BuzzKill screen.

Now, here’s where the fun begins: BuzzKill will show you a ready-to-be-filled-in rule for how it should process certain types of incoming notifications.

[Screenshot: BuzzKill’s still-blank starting point for creating your own Android-16-style Notification Cooldown rule.]

What we need to do next is tap each underlined term and fill in the blanks to configure our custom Notification Cooldown behavior.

So, first, tap the words any app and select the app or apps you want to watch for these purposes. Communication apps like Google Messages or Slack probably make the most sense, but you can pick any app or combination of apps you want (and you can always go back and create additional rules later, too).

Next, tap the words contains anything and think carefully about what specific sorts of notifications you want to include. If you want BuzzKill to stop rapid-fire back-to-back alerting for any and all incoming messages, you can just leave this blank and not change anything. But if you want to limit that behavior to messages from a specific contact, you could tap “Phrase” and then type in their name — exactly as it appears in your messaging app.

[Screenshot: You can include any name or other phrase you want, and BuzzKill will limit its cooling only to notifications that match.]

Once you’ve applied that and you’re back on the rule configuration screen, tap the words do nothing, then find and tap the option for “Cooldown” in the list and tap “Pick action” at the bottom of the screen to save it. (Yes, BuzzKill used the “Cooldown” term first!)

[Screenshot: Cooldown is just one of the notification-processing options BuzzKill presents for you.]

Now, you’ve got a couple quick choices to make before we wrap this puppy up:

  • See the words that app? Tap ’em, and you can select exactly how your cooldown will work — if BuzzKill will silence all subsequent alerts from the same app, limit it only to notifications within the same specific conversation, or limit it only to notifications that match whatever term you put in a minute ago. Assuming you put in a specific contact’s name, I’d suggest using the “that conversation” option here; otherwise, “that app” would probably make the most sense.
[Screenshot: You’ve got all sorts of options that Google’s official Android 16 Notification Cooldown feature won’t provide.]

  • By default, BuzzKill will silence all back-to-back notifications that match your conditions for five minutes. If you tap 5 mins, you can change that to any other time you like.
[Screenshot: The amount of time your notification cooling lasts is completely up to you.]

Personally, I’d start with a lower value — a minute or two — and then see what you think as you experience it in real-time. Generally speaking, a minute or two is plenty to shield yourself from the bothersome back-to-back dinging a rapid-fire texter creates but not so much that you’re likely to miss something unrelated and potentially important.

And with that, you’re all set! You should see your complete Cooldown rule scripted out in front of you, and all that’s left is to hit “Save rule” to make it active.

[Screenshot: An Android-16-style Notification Cooldown rule — ready to save and activate.]

You should then see the rule on your main BuzzKill screen, with the toggle flipped over to the right in the active position.

[Screenshot: Notification Cooldown, in action — no Android 16 required. How ’bout them apples?!]

And that’s it: You’ve officially set up your own version of Android 16’s Notification Cooldown, with even more flexibility and control and no restrictions on where it can run.

Take a minute to explore some of the other clever ways you can put BuzzKill to use, then keep the customization coming with my new Android Notification Power-Pack — six powerful enhancements for your phone’s notification panel, completely free from me to you.

Serenity now — interruptions later. Enjoy!

In the age of AI, what is a PC? Arm has its answer

Amid the uncertainty around what makes a Windows 11 PC a Copilot+ PC, and how that differs from an AI PC, Arm is bringing some clarity — or perhaps a new source of confusion — with its definition of what constitutes an Arm PC.

For decades, the heart of every PC running Windows was an x86 processor, designed by Intel and later expanded upon by AMD with the x64 architecture. But in 2017, Microsoft released a version of Windows 10 that ran on processors built on designs from Arm, prompting some manufacturers to introduce Arm-based PCs.

Initially they had little influence on the market, but now Microsoft has really thrown its weight behind the Arm architecture. The Arm version of Windows 11 is superficially indistinguishable from the x86/x64 version, with the same user interface and functions. However, behind the scenes, while Windows 11 on Arm will run applications compiled for x86, it runs them slowly, in an emulator. Only applications compiled for the Arm architecture get the full power of the processor.

Microsoft makes no distinction between x86 and Arm architectures in its definition of what qualifies as a “Windows 11 PC,” leaving buyers to find out for themselves whether their favorite software application will run well or not.

For the last year or so, we’ve also had to contend with “AI PCs.” Pretty much everyone agrees that these are PCs that run AI applications thanks to an additional “neural processing unit” (NPU) alongside their CPU and GPU. For Intel, that NPU has to be in one of its Core Ultra chips. In Microsoft’s definition, an AI PC — initially at least — also had to have a dedicated Copilot key to launch its Copilot software.

Microsoft then added to the confusion with a new category: Copilot+ PCs. These are Windows 11 PCs with a “compatible” processor and an NPU capable of 40 trillion operations per second (TOPS) or more. This requirement neatly excluded Intel’s first generation of AI chips, which only hit 37 TOPS. The only chips Microsoft deemed suitable for the Copilot+ PCs on sale at launch were the Arm-based Snapdragon X Series from Qualcomm. However, that’s changing as machines with AMD Ryzen AI 300 Series and Intel Core Ultra 200V Series chips that meet the spec are now hitting the market.

But wait: It takes more than just a processor to make a PC. For years, Intel and AMD created reference designs for PCs based on the chips they made, clarifying details of interconnects and security systems. Arm doesn’t make chips, though; it licenses its architecture to Qualcomm and other companies, who sell the chips used in Arm-based PCs. So who is responsible for defining how everything fits together in an Arm-based PC?

Into that vacuum comes Arm, with its Arm PC Base System Architecture 1.0 platform design document providing rules and guidelines for companies manufacturing PCs from chipsets based on its architecture. This is an important step towards CEO Rene Haas’ goal of winning half of the Windows PC market by 2029.

Critical requirements for Arm PCs

Arm’s new PC Base System Architecture (PC-BSA) document lays out the basic elements intended to make its architecture reliable for PC operating systems, hypervisors, and firmware.

At a high level, it stipulates that 64-bit processors must be built on Arm v8.1 (or newer) core designs and integrate a TPM 2.0 trusted platform module to support security. TPM may be implemented as firmware, a discrete chip, or in a secure enclave. Arm PCs must also adhere to PCI Express standards, and allow for virtualization through a System Memory Management Unit (SMMU).

“The PC Base System Architecture embeds the notion of levels of functionality,” Arm explains in the document. “Each level adds functionality better than the previous level, adding incremental features that software can rely on.” Technical specifications also cover memory maps, interrupt controllers, and device assignment.

Protection from supply chain attacks

Arm points out that PCs go through different stages as they progress along the supply chain, from manufacturing and provisioning through deployment, production, and finally decommissioning.

“To allow actors in the supply chain to determine the current security state of a system, the security-relevant state can be reflected in hardware through mechanisms such as fuses and one-time programmable (OTP) memory,” the document stipulates.

A software boost for Arm-based PCs

One of the challenges for owners of Arm-based Windows 11 PCs is that, apart from the operating system and the Microsoft 365 productivity suite, few applications have been optimized for the Arm architecture.

There were some significant new Arm-compatible software releases at Microsoft’s Ignite event this week, though, with Google releasing a beta version of its Drive for Desktop ARM64 cloud storage client, and the secure Signal Messenger app getting an update that supports the Arm-based Qualcomm Snapdragon X processors in Copilot+ PCs.

Microsoft also demonstrated new search functions powered by the NPU in Copilot+ PCs that it will release sometime in early 2025. Users will be able to find files, documents, and photos by describing their content to Copilot, even when they are offline. For instance, they may search for “modes of transport,” and the model will bring up documents that discuss cars, buses, and airplanes, Microsoft explained.

Another new Microsoft capability for Copilot+ PCs, now in preview, is Click to Do. Its purpose is to simplify workflows by making text and images selectable so that AI can provide relevant action suggestions, such as summarizing text or editing images.

Microsoft has also introduced a new API for its lightweight open multimodal model, Phi 3.5, custom-built for Copilot+ PCs with Snapdragon X Series chips. This will support text summarization, completion, and prediction.

Finally, the company rolled out new enterprise-grade controls for Recall, its controversial data snapshot tool. The AI-powered feature uses natural language to help people re-engage with content. It takes frequent snapshots of active screens, encrypting them and storing them on the PC where they can be searched by AI to make what Microsoft calls an “explorable timeline of your past on your PC.”

However, this feature has raised concerns about security and privacy, so Microsoft has turned it off by default for managed commercial devices. IT teams must choose to re-enable it to save screen snapshots.

New Windows 11 tool can remotely fix devices that won’t boot

Microsoft is working on a new Windows feature, “Quick Machine Recovery,” that will allow IT administrators to use Windows Update with “targeted fixes” to remotely fix systems that can’t boot, according to Bleeping Computer.

The new feature is part of the Windows Resiliency Initiative — Microsoft’s efforts to prevent a repeat of the outage that occurred in July 2024, when a buggy CrowdStrike update left hundreds of thousands of Windows computers unable to start, affecting hospitals, emergency services and airlines worldwide.

Microsoft plans to roll out the Quick Machine Recovery feature to the Windows 11 Insider Program in early 2025.

Will new Apple Pay oversight make Apple Bank a good idea?

As regulation threatens to tear Google apart and fundamentally damage both Android and Apple, yet another regulatory noose is tightening around Cupertino, as its Apple Pay service will in future be regulated like a bank.

All this comes as company lawyers attempt to get the insanely flawed US Department of Justice antitrust case against Apple quashed, and it piles on top of recent threats of further fines and challenges in Europe. You’d be forgiven for thinking some of the leaders at Apple feel a little as if they had been born in “interesting times.”

Apple Pay faces tougher regulation

The latest twist of the rope comes from the US Consumer Financial Protection Bureau (CFPB), which is about to introduce a new rule that puts Apple Pay and other digital wallet services under the same federal supervision as banks. That’s going to mean the CFPB can proactively examine Apple and other large companies in this space to ensure they are complying with consumer protection laws concerning privacy and surveillance, error and fraud, and maintaining service continuity in order to protect users against “debanking.”

The agency in 2022 warned some Big Tech firms providing such services about their obligations under consumer protection laws when using behavioral targeting for financial products. 

Announcing the regulation on X, CFPB Director Rohit Chopra explained his organization is also concerned about “how these apps can fuel surge pricing that jack up costs using your purchase history and personal data.”

You can read the new rules governing these companies here (PDF). But what is interesting is that elements of them that might have impacted crypto transactions appear to have been mitigated or removed.

Proactive, not reactive, oversight

Most of these matters were already regulated; what really changes is how rules around them are enforced. You see, while the previous regulation meant the CFPB could only react to consumer complaints as they arose, it can now proactively investigate compliance. That’s the same kind of oversight banks and credit unions already face, and it means Apple and other payment providers covered by the rules will face deeper and, presumably, more intrusive oversight. 

The new rules will only affect digital wallet providers whose tech is handling 50 million or more transactions per year. Apple’s system is now easily the most widely used digital wallet in America, so it will most certainly face this oversight. The company also participated in the consultation process that preceded the new rule’s introduction. Other providers likely swept up under the law will include Cash App, PayPal, Venmo, and Google Pay.

To some degree, the rules make sense, given that digital wallets are used to handle real money and consumer protection is vital. But what’s really interesting is the extent to which the new determination proves just how rapidly digital wallets have replaced real wallets across the last decade.

The rise and rise of digital payments

That’s certainly what the CFPB thinks. “Digital payments have gone from novelty to necessity and our oversight must reflect this reality,” said Chopra. “The rule will help to protect consumer privacy, guard against fraud, and prevent illegal account closures.”

If you think back, it wasn’t terribly long ago when the notion that Apple wanted to turn your iPhone into a wallet seemed impossibly extreme. That is no longer the case. Two years ago, researchers claimed Apple Pay had surpassed Mastercard in the dollar value of transactions made annually, making Apple Pay the world’s second most popular payment system, just behind Visa. Google’s G Pay system then stood in fifth place. 

The regulator explains that payment apps are now a “cornerstone” of daily commerce, with people using them daily as if they were cash. “What began as a convenient alternative to cash has evolved into a critical financial tool, processing over a trillion dollars in payments between consumers and their friends, families, and businesses,” the CFPB said.

What next? 

I think it’s pretty clear that Apple has learned a lot about this business since the introduction of Apple Pay. Not only has it been in, and then exited, the lucrative Buy Now Pay Later market with Apple Pay Later, but it has also experienced the slings and arrows of outrageous fortune with its wildly popular credit card operation, Apple Card, which has ended up in a tumultuous relationship with Goldman Sachs.

During all these adventures, the company will have learned a great deal about the sector — and now that it is being regulated as if it were a bank, I wouldn’t be terribly surprised if it decided to become one.

After all, if it’s getting regulated to the same extent as banks, why not get into more of the same business sectors banks now serve? I can’t help but imagine that Apple already has a weighty file of research documents in one of its Cupertino filing cabinets exploring how and where it might profitably extend Apple Pay into more traditional banking sectors.

The new CFPB oversight regime might well accelerate any such plans.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.