Troubleshooting Windows with Reliability Monitor

The built-in Windows Reliability Monitor remains an oft-overlooked troubleshooting gem. It’s actually a specialized part of Windows’ general-purpose Performance Monitor tool (perfmon.exe). While more limited in scope and capability, Reliability Monitor (a.k.a. ReliMon) is much, much easier to use.

Reliability Monitor zeroes in on and tracks a limited set of errors and changes on Windows 10 and 11 desktops (and earlier versions going back to Windows Vista), offering immediate diagnostic information to administrators and power users trying to puzzle their way through crashes, failures, hiccups, and more.

Launch Reliability Monitor

There are many ways to get to Reliability Monitor in Windows 10 and 11. Type reli into the Windows search box, and you’ll usually see an entry that reads View reliability history pop up on the Start menu in response. Click that to open the Reliability Monitor application window.

You can also click Start > Settings, then type reli into that search box for the same menu option, shown in Figure 1.

Figure 1: Type “reli” into the Start menu, and you’ll see “View reliability history” to the right.

Ed Tittel / IDG

To navigate to this item in the Control Panel hierarchy, follow this sequence of selections: Start > Control Panel > Security and Maintenance > View reliability history (under the Maintenance heading). Yet another method of launching ReliMon is to press Win key + R to open the Run box, then type in perfmon /rel. That same command works from any Windows command-line interface.

However you invoke Reliability Monitor, you’ll find it has useful things to tell you. Figure 2 shows the main ReliMon screen tracking the reliability of my Windows 11 production PC from May 9 through May 28. It shows a near-optimal stability index of 10 at the left-hand side, errors (denoted by a red circle with a white X) on May 14 and 18, and a climb back to a “perfect 10” at the right.

Figure 2: The main ReliMon window, which traces reliability on a scale of 1 (bottom) to 10 (top) from May 9 through May 28. Note the error markers on May 14 and 18.

Ed Tittel / IDG
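
You don’t have to open the GUI to read that index. ReliMon’s data is exposed through WMI, so a quick PowerShell query can return the most recent stability index. Here’s a minimal sketch, assuming the standard Win32_ReliabilityStabilityMetrics class present on client Windows builds:

    # Read the most recent system stability index (1 = worst, 10 = best):
    Get-CimInstance -ClassName Win32_ReliabilityStabilityMetrics |
        Sort-Object -Property TimeGenerated -Descending |
        Select-Object -First 1 -Property TimeGenerated, SystemStabilityIndex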

Look to the right of the timeline for the labels that name the rows in the table underneath the stability graph (a scripted way to pull the same records follows the list). They read:

  • Application failures: Provides timestamps and additional info about application or app crashes, hangs, and other issues, of which “stopped working” is most typical.
  • Windows failures: Indicates Windows OS or hardware errors that cause crashes, hangs, BSODs, and other issues. You can see a “Windows hardware error” in Figure 3 below (turns out to be graphics driver related).
  • Miscellaneous failures: Failures or crashes that fall outside the realm of apps, applications, and the OS — usually something bus- or peripheral-related. A “shut down unexpectedly” or “not properly shut down” item, recorded when Windows 10 or 11 hangs and you cycle power to restart the OS, also counts as a miscellaneous failure.
  • Warnings: Warnings and Information items both usually relate to updates applied to the current host, through Windows Update, the Microsoft Store, and so forth. Warnings usually document failed or incomplete updates.
  • Information: Related to updates from various sources successfully applied to the current host.
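
Each of those rows corresponds to reliability records with a particular event source. Here’s a sketch of how to pull them in PowerShell (the Win32_ReliabilityRecords class is standard, though the exact source names, such as “Application Error” and “Application Hang,” are assumptions that can vary by build):

    # List recent application failures, the records behind the
    # "Application failures" row:
    Get-CimInstance -ClassName Win32_ReliabilityRecords |
        Where-Object { $_.SourceName -in 'Application Error', 'Application Hang' } |
        Select-Object -First 10 -Property TimeGenerated, ProductName, Message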

Next up is a somewhat more error-prone PC, a Lenovo ThinkPad X380 Yoga, to demonstrate what such a sequence looks like in ReliMon (see Figure 3). You’ll see Windows errors occurring on five days in a 10-day stretch, from May 16 through May 25. Except for the hardware error shown on May 23, the rest of the errors come from the built-in Windows facilities and applications that include the AppX Distribution Service, Teams, Phone Link, and so forth.

Figure 3: The highlighted item (May 23) shows a Windows hardware error; other errors occurred on May 16, 19, 20, and 25. Ouch!

Ed Tittel / IDG

What Reliability Monitor can tell you about critical errors

By clicking on a specific day in the timeline (shown as a vertical blue bar for May 23 in Figure 3), you can see events reported for that day in a list below the timeline. Double-click any item in the list to pop up a detail pane with more information.

After clicking May 23 in Figure 3, I double-clicked the entry labeled “Windows” with a summary that reads “Hardware error,” which brought up the Problem Details screen shown in Figure 4.

Figure 4: For this somewhat generic “Hardware error,” the most useful info appears in the Bucket ID line. More often, the Problem Event Name and Code fields lead directly to good info.

Ed Tittel / IDG

The bucket ID from which the error originates (shown under “Extra information about the problem” in Figure 4) includes the string igdkmd64.sys. A quick Google search confirms this is the Intel graphics kernel-mode display driver for Windows. Thus, it’s pretty obvious that the built-in Intel UHD Graphics 620 on the ThinkPad X380 Yoga’s i7-8650U CPU experienced a hiccup. You can usually find fair-to-good guidance on problem information by visiting answers.microsoft.com and searching on problem event names, bucket IDs, and so forth.
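
That day-by-day drill-down can be scripted, too. Here’s a minimal sketch using the same Win32_ReliabilityRecords class noted earlier (the date is just an example); the Message property carries the logged description text for each record:

    # Show all reliability records logged on one day (here, May 23, 2024):
    $day = Get-Date -Year 2024 -Month 5 -Day 23
    Get-CimInstance -ClassName Win32_ReliabilityRecords |
        Where-Object { $_.TimeGenerated.Date -eq $day.Date } |
        Format-List -Property TimeGenerated, SourceName, ProductName, Message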

What kinds of problems can ReliMon diagnose?

Knowing the source of failures can help you take action to prevent them. For example, certain critical events show APPCRASH as the Problem Event Name. This signals that some Windows app or application has experienced a failure sufficient to make it shut itself down. Such events are typically internal to an app, often requiring a fix from its developer. Thus, if I see a Microsoft Store app that I seldom or never use throwing crashes, I’ll uninstall that app so it won’t crash any more. This keeps the Reliability Index up at no functional cost (since I don’t use the app anyway).
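
If you go that route, PowerShell makes quick work of it. A sketch follows; the package name is a placeholder, so substitute whatever app ReliMon’s crash entries actually point to:

    # Find and remove a seldom-used Store app that keeps crashing.
    # 'SomeCrashyApp' is hypothetical, not a real package name.
    Get-AppxPackage -Name '*SomeCrashyApp*' | Remove-AppxPackage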

The same approach works for update checkers of many kinds. (I prefer to update manually, or to use an update tool such as the Microsoft Winget package manager in PowerShell or at a Command Prompt.) As it turns out, Reliability Monitor is a great tool for catching and stopping updaters that one may have tried but failed to block through the Startup tab in Task Manager. I’ve used it to detect updaters for the Intel Driver & Support Assistant, CCleaner, MiniTool Partition Wizard, anti-malware packages, Office plug-ins, Java, and lots of other stuff. In many such cases, I decided to remove them (by uninstalling them or renaming their .exe files) because (a) I didn’t need or use them and (b) I wanted to remove a source of Windows errors.
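
With resident updaters disabled or removed, a periodic manual pass covers the same ground. Using Winget, for example:

    # Review which installed packages have updates available...
    winget upgrade
    # ...then apply them all in one pass:
    winget upgrade --all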

Recently, I also found myself facing a “black screen with cursor” on a Lenovo ThinkPad P16. By using two keyboard sequences (Win key + Ctrl + Shift + B to restart the graphics driver, then Ctrl + Alt + Del to access the Windows “master control menu”), I regained control of the PC. A quick trip into Reliability Monitor showed me an error with this telltale string in the bucket ID information: “CreateBlackScreenLiveDump.” That string clearly indicated something had gone wrong with the graphics driver, as did the system’s recovery after I entered the “restart graphics” key combo. I ended up reverting to the previous NVIDIA graphics driver to fix the problem.

Where ReliMon is less helpful

Sometimes, you’ll find that error sources are either applications you need or want to run, or they originate from OS components and executables. Uninstalling such things isn’t really an option: at best it’s unproductive, and at worst it can render the OS inoperable. When that kind of thing pops up — and it often does — all you can do is report the issue via the Microsoft Feedback Hub, include the Reliability Monitor detail as an attachment, and hope that Microsoft gets around to fixing whatever’s broken sooner rather than later.

Here are some examples from across my dozen or so Windows 11 PCs and VMs over the last 30 days (all show “Stopped working” or “Stopped responding and was closed” in the summary field in ReliMon):

  • AppX Deployment Service
  • Intel System Usage Report
  • Microsoft Phone Link
  • Microsoft Teams Updater
  • Windows Biometric Service
  • Windows Camera Frame Server
  • Windows Explorer

Every item except the one labeled “Intel…” is an OS element required to keep Windows working. Thus, getting rid of them is not an option. Reporting them via Feedback Hub is the only responsible thing to do.

Add ReliMon to your troubleshooting toolkit

I hope I’ve shown that Reliability Monitor can be a useful and informative member of any Windows professional’s troubleshooting toolkit, both for Windows 10 and Windows 11. I check in on it no less than monthly on the machines that I manage, even when nobody’s complaining about something odd, slow, or broken. And when such complaints do come in, it’s one of the first tools I check to try to figure out and fix what’s causing trouble. I recommend you do likewise.

This article was originally published in October 2020 and most recently updated in June 2024.

IT pros find generative AI doesn’t always play well with others

While nine out of 10 IT professionals say they want to implement generative artificial intelligence (genAI) in their organization, more than half have integration, security, and privacy concerns, according to a recent survey released Wednesday by SolarWinds, an infrastructure management software firm.

The SolarWinds 2024 IT Trends report, AI: Friend or Foe?, found that very few IT pros are confident in their organization’s readiness to integrate genAI. The company surveyed about 7,000 IT professionals online regarding their views of the fast-evolving technology; despite a near-unanimous desire to adopt genAI and other AI-based tools, less than half of respondents feel their infrastructure can work with the new technology.

Only 43% said they are confident that their company’s databases can meet the increased needs of AI, and even fewer (38%) trust the quality of data or training used in developing the technology. “Because of this, today’s IT teams see AI as an advisor (33%) and a sidekick (20%) rather than a solo decision-maker,” SolarWinds said in its report.

Privacy and security worries were cited as the top barriers to genAI integration, and IT pros specifically called for increased government regulations to address security (72%) and privacy (64%) issues. When asked about challenges with AI, 41% said they’ve had negative experiences; of those, privacy concerns (48%) and security risks (43%) were most often cited.

More than half of respondents also believe government regulation should play a role in combating misinformation. “To ensure successful and secure AI adoption, IT pros recognize that organizations must develop thorough policies on ethics, data privacy, and compliance, pointing to ethical considerations and concerns about job displacement as other significant barriers to AI adoption,” the report said.

SolarWinds found that more than a third of organizations still lack ethics, privacy and compliance policies in place to guide proper genAI implementation. “While talk of AI has dominated the industry, IT leaders and teams recognize the outsize risks of the still-developing technology, heightened by the rush to build AI quickly rather than smartly,” said Krishna Sai, senior vice president, technology and engineering, at SolarWinds.

Indeed, leading security experts are predicting hackers will increasingly target genAI systems and attempt to poison them by corrupting data or the models themselves. Earlier this year, the US National Institute of Standards and Technology (NIST) published a paper warning that “poisoning attacks are very powerful and can cause either an availability violation or an integrity violation.

“In particular, availability poisoning attacks cause indiscriminate degradation of the machine learning model on all samples, while targeted and backdoor poisoning attacks are stealthier and induce integrity violations on a small set of target samples,” NIST said.

Overall, the IT industry’s sentiment reflects “cautious optimism about AI despite the obstacles,” SolarWinds reported. Almost half of IT professionals (46%) want their company to move faster in implementing the technology, despite costs, challenges, and concerns, but only 43% are confident that their company’s databases can meet the increased needs of AI. Moreover, even fewer (38%) trust the quality of data or training used in developing AI technologies.

IT pros cited AIOps (Artificial Intelligence for IT Operations) as the technology that will have the most significant positive impact on their role (31%), ranking above large language models and machine learning. More than a third of respondents (38%) said their companies already use AI to make IT operations more efficient and effective.  

WWDC: What’s new for Apple and the enterprise?

WWDC’s biggest news was, obviously, Apple Intelligence. But beyond what might become the world’s most trusted generative AI (genAI) platform, WWDC 2024 has produced a variety of enhancements aimed at enterprise IT.

Let’s start with device management. 

For this, Apple has built a range of new management capabilities for iPads, iPhones, Macs, and visionOS devices. These include changes to Activation Lock, software update, and Safari management. (Apple Business and School Manager also see changes.)

The intention behind most of these is to make it easier for IT to do what it does. That means easier adoption of Managed Apple IDs and Activation Lock and new tools to manage Safari extensions. Apple Vision Pro gets Zero-Touch deployments for IT with Automated Device Enrollment, along with more management controls, commands and restrictions.

A vision for business

There are new enterprise APIs for visionOS, too. These provide enhanced sensor access and better control; the intention is to give enterprise developers more tools to build solutions for their businesses.

The new APIs include:

  • Main camera access.
  • Passthrough in-screen capture.
  • Spatial barcode and QR code scanning.
  • Access to the Neural Engine and object tracking parameter adjustment.

The company is quite specific about where it thinks those APIs will make a difference, citing collaboration, medical, and engineering implementations to illustrate what these enhancements can deliver. Apple discussed some of these improvements and their applications at WWDC 2024; one way I like to see them is as steps toward truly realized team support for remote agents. You can also see how these technologies might become powerful in future development of autonomous drones and robotics.

Xcode Code Completion

The big highlight in Xcode 16, Code Completion, is Apple Intelligence that runs locally on your Mac, even offline. It will produce the code developers need to get their projects done — think GitHub Copilot without the vulnerability.

Swift Assist

This is a natural-language model that works with servers in the cloud and lets developers use natural language to ask for help with their projects. The tool can generate prototype code and iterate on it, and it knows Apple’s latest SDKs and features. This helps developers experiment with new ideas and source the code they need to put those ideas into place, and over time it should improve the quality of apps. It’s private, too.

Private Cloud Compute

We looked at this in depth here. Enterprise pros will want to figure out the extent to which Apple’s provision of its own server-based, ultra-private LLM support in the cloud makes the service suitable for use across the company, but the promise of writing tools might help motivate a move to cross that line. It’s also interesting that Apple now lets developers use Core ML to run LLMs (such as Mistral) trained using most common frameworks in their apps on an Apple device. While not much is being made of that quite yet, it suggests new ways through which enterprise developers might be able to make their own business data highly actionable, while maintaining data protection guards.

I can’t help but think that Apple’s new cloud service could be the seed that grows into a full-fledged private enterprise compute cloud. 

App Intents and App Entities

Available in iOS, App Intents will let developers make specific content and actions within their apps available via Siri, Spotlight, Shortcuts, and Widgets. With Apple Intelligence, this means your app actions could be suggested to people when they make relevant requests from Siri, Shortcuts, or Spotlight. That’s good, because it brings your app to the surface, and evidently has potential in B2C and B2B communications. I also think this is a powerful step toward making Siri an agent-based assistant, capable of working on tasks over time, but that’s a story for another day. It does now have deeper and more integrated access to your apps than before.

Passkeys and Passwords

Apple’s decision to turn its existing iCloud Keychain password management tool into a full-fledged password manager is bad news for password management firms, but could be a powerful addition for enterprise IT, who I imagine will be able to fully provision staff with all relevant sign-ins as part of the deployment process.

For iPhones and iPads, a new registration API has been introduced. It creates passkeys automatically for eligible users the next time they sign into your app. Adding support requires a single parameter.

Elon Musk

While Elon Musk is quite evidently not an Apple product, his threat to ban Apple devices at his companies over Apple Intelligence betrays a lack of understanding. Not only is any Apple AI running on the Apple device or in its secure cloud an Apple LLM with privacy by design, but I’d be surprised if device management (MDM) tools can’t be used to block access to external genAI machines. (If that’s not in the first beta, it should certainly show up by the time the operating systems ship this fall.)

Satellite Messaging

While not exactly an enterprise feature, Messages by Satellite comes across as a quietly spoken big deal. It means you should be able to send and receive texts, emoji, and Tapbacks over iMessage and SMS, so long as you can pick up a satellite connection. Apple didn’t share too much about this — we don’t know how widely available the service will be — but this does suggest the ambitions Apple had when it introduced Emergency SOS by satellite haven’t been fully realized yet.

There’s lots more coming out of WWDC, so stay tuned for more.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

With new APIs, Apple gears up Vision Pro for frontline workers

With a heavy focus on artificial intelligence during the WWDC keynote on Monday, Apple executives spent just a short time outlining features coming to the company’s Vision Pro “spatial computing” headset. There was little in the visionOS 2.0 update to boost the device’s office productivity credentials, save for a larger virtual monitor.

There were, however, signs of how the device could become more useful for frontline workers with the addition of “enterprise APIs” for visionOS 2.0. 

These APIs let developers create custom apps with greater control over aspects of the Vision Pro’s sensors and other systems, and enable businesses to create visionOS apps for those in frontline roles, such as warehouse or production line workers, Apple said in a video on its developer site this week. This includes the ability to build see-what-I-see remote assistance apps, for example.

The enterprise APIs address “known functionality requirements” for the Vision Pro and show Apple’s intention to meet its customers “where they’re at,” said Tuong Nguyen, director analyst at Gartner. “It’s noteworthy that Apple has been this responsive in terms of opening up access/functionality to further empower developers and encourage enterprise adoption. I expect this will be a theme across future announcements regarding Vision Pro.”

With lagging sales of virtual and mixed reality headsets, and question marks about key use cases, frontline workers present an early route to adoption for Apple and others. “The near- and mid-term benefit [of AR/VR devices] will primarily be for frontline workers — usually in asset-intensive industries,” said Nguyen. “The value for information/knowledge workers, as well as consumers, will come much later.”

Apple outlined ways developers can create a wider range of visionOS apps for frontline staff in the developer video, including three APIs aimed at improving access to Vision Pro sensors.

One API provides access to the main forward-facing camera. This could be used in conjunction with a computer vision algorithm to create an anomaly detection app that helps a production worker detect faulty components, Apple said. 

Another enables recording of a user’s entire passthrough video feed — previously it was only possible to record the apps a user was looking at — which could provide remote assistance for field technicians. 

There’s also a QR code scanning API that enables custom app functionality to detect a QR code and receive its content. It could be used, for example, by a warehouse worker to scan package bar codes to verify they have the correct item without the need for a hand-held scanner, Apple said.

Three other APIs are focused more on background processes. 

One lets machine learning tasks run on the Vision Pro’s neural engine in addition to the CPU. 

Another, an “object tracking” API, can track multiple dynamic and static objects that appear in a user’s field of view. This could be used to track tools and parts in a complex repair environment —  providing a technician with guidance on how to fix a machine, for instance, Apple said. 

There’s also an API that lets users increase demands on computing resources beyond the Vision Pro’s default limits, essentially compromising battery life and increasing fan noise for a demanding scenario such as rendering a high-fidelity, mixed-reality display of a race car.

These “behind the scenes” capabilities add flexibility to the Vision Pro and could be useful in a variety of scenarios, said Ramon Llamas, research director with IDC’s devices and displays team.

“That’s absolutely key, especially if you’re in the business of looking at a lot of objects in a quick amount of time, because the computing power for that may sometimes go beyond what the Vision Pro can offer out-of-the-box,” he said. “Giving developers and enterprise users the power to spin up or down [compute resources] can be the difference between the Vision Pro being nice-to-have and must-have.”

Llamas said the new APIs enable Apple to catch up with others in the market when it comes to enterprise functions. “That’s where the market is right now and it’s important for Apple to have these kinds of functionalities built in so that they are part of the enterprise solution conversation,” said Llamas.

The additional workplace functionality reflects broad potential use cases for mixed reality, said Nguyen, and should help Apple maximize adoption of the Vision Pro, “because this early in the market, no one — including Apple — will get massive adoption volume off a single use case, or functionality.

“Similar to the smartphone era, there’s no killer app,” he said. “It’s a collection of applications and use cases that will make Vision Pro (and other head-mounted displays) a valuable device.”

The enterprise APIs are currently in beta, according to Apple.

Microsoft’s Copilot+ AI PCs: Still a privacy disaster waiting to happen

Imagine that your Windows PC took screenshots of everything you do on it, including of personal data, credit card and other financial information, passwords, web sign-ins, emails, a list of web sites that you don’t want anyone to know you’ve been visiting, business information and more. Imagine your PC creates a searchable database of it all — and imagine how valuable that information would be if accessed by someone other than you.

No security or privacy issues there — after all, what could go wrong? 

The answer is plenty. In a world in which Windows has been successfully hacked for decades, and continues to be hacked, in which Windows has allowed information to be regularly stolen from top tech companies, including Microsoft itself, and from high-ranking government officials as well as countless individual users, the screenshot scenario seems as if it’s the ultimate security-and-privacy nightmare.

And yet many security pros said that’s exactly the Pandora’s Box Microsoft is about to open with the new line of AI-powered Copilot+ Windows PCs. Microsoft argues those new PCs, available beginning on June 18, will make it easy for you to find files and remember things you’ve done on your computer using the new Recall feature, which takes screenshots, stores them in a database, and uses AI to help you find and use whatever you want.

Microsoft claims there’s nothing to fear, that rock-solid security is baked directly into the new feature (though it did announce on Friday the feature would be opt-in and its data better secured — a nod to the backlash that emerged after Recall was unveiled). 

Who to believe? To find out, let’s take a look at how Recall works.

Recall: the AI-driven memory machine

Microsoft’s Copilot+ PCs, to be released by manufacturers including HP, Dell, Samsung, Asus, Acer, Lenovo and Microsoft itself, are “the most significant change to the Windows platform in decades” and “the fastest, most intelligent Windows PCs ever built,” claims Microsoft Executive Vice President, Consumer Chief Marketing Officer Yusuf Mehdi in a blog post.

The machines are powered by a system architecture that connects a PC’s CPU, GPU, and a new high-performance Neural Processing Unit (NPU) to AI large language models (LLMs) running on Microsoft’s Azure Cloud and AI small language models (SLMs) running on the PCs themselves.

Microsoft touts a variety of benefits offered by the new line of machines, including dramatically faster speeds, improved battery life, and better overall performance. However, the core benefits are AI-related: turbo-driven AI processing; sped-up AI image creation and photo- and image-editing; accelerated AI for applications such as Adobe Photoshop, Lightroom, and Express; and increased performance of Microsoft’s AI Copilot software.

The benefit Microsoft touts the most — the first one Mehdi points to in his blog post — is the Recall feature. Recall, he says, solves “one of the most frustrating problems we encounter daily — finding something we know we have seen before on our PC.”  He claims it will let you “access virtually what you have seen or done on your PC in a way that feels like having photographic memory.”

To do that, Recall takes screenshots of your PC every five seconds, and stores them all in a searchable database. AI does the heavy lifting of analyzing those screenshots, extracting information from them, creating the database and searching through them. Microsoft claims the processing is done on the machine itself rather than in the cloud, and that the screenshots and database are safe because they’re encrypted, so users don’t have to worry about privacy or security issues.

The privacy problem?

Many security experts beg to differ. “I think a built-in keylogger and screen-shotter that perfectly captures everything you do on the machine within a certain time frame is a tremendous privacy nightmare for users,” Jeff Pollard, vice president and principal analyst at Forrester, told Computerworld shortly after Recall was announced.

Another potential issue: Even if the database and data are encrypted, a hacker with access to the machine might still do damage. “Initial access is all that is needed to potentially steal sensitive information such as passwords or company trade secrets,” said Douglas McKee, executive director of threat research at security firm SonicWall.

Security expert Kevin Beaumont, who worked for Microsoft for a short time in 2020, also weighed in, noting in a blog post that hackers gain access to devices “every day on home PCs, and corporate systems…. In essence, a keylogger is being baked into Windows as a feature.”

His research uncovered an even bigger problem. When Beaumont got his hands on the new Copilot+ software he found that Recall’s data is “written into an SQLite database in the user’s folder. This database file has a record of everything you’ve ever viewed on your PC in plain text.”

That means a hacker doesn’t even need to gain control over someone’s PC to get at their Recall data. The hacker only needs to get at the database file, something that is straightforward and simple to do remotely. Beaumont even posted a video of Microsoft employees doing it.

The criticism was sharp in Europe, as well. Kris Shrishak, who advises European legislators on AI governance, echoed Pollard in warning that Recall is a potential “privacy nightmare.” And the UK’s Information Commissioner’s Office is concerned enough about the issue that it’s already gotten in touch with Microsoft about the privacy implications.

Faced with those very public concerns, Microsoft did at least shift gears last week to try and alleviate security fears. In addition to making Recall an opt-in feature for users, the company now requires Windows Hello biometric authentication to enable the feature — requiring a “proof of presence” to search in Recall or view a timeline. Going further, Microsoft will add “just in time” decryption protected by Windows Hello Enhanced Sign-in Security (ESS). That means Recall snapshots will “only be decrypted and accessible when the user authenticates,” Pavan Davuluri, corporate vice president for Windows and Devices, said in a blog post.

Beaumont pointed to the Friday announcement in an online post, but remained skeptical: “Turns out speaking out works. Microsoft [is] making significant changes to Recall, including making it specifically opt in, requiring Windows Hello face scanning to activate and use it, and actually encrypting the database. There are obviously going to be devils in the details — potentially big ones. Microsoft needs to commit to not trying to sneak users to enable it in the future, and it needs turning off by default in Group Policy and Intune for enterprise orgs.”

He continued: “There are obviously serious governance and security failures at Microsoft around how this played out that need to be investigated, and suggests they are not serious about AI safety.”

What should you do about Recall?

Anyone who cares about privacy should think seriously about whether the benefits of the new feature are worth the dangers. Since Recall will now be turned off by default on the new Copilot+ PCs when they ship, you’ll have to think long and hard about whether you should turn it on. If you’re worried about any privacy implications, leave it off. 

As for enterprises, Recall is big trouble just waiting to happen. The feature has the potential to expose corporate data and secrets, not just files and data from individual users. Businesses should carefully consider how to protect themselves should they buy into the Copilot+ line, including making sure Recall is off on every device they buy — and that it stays turned off. 
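
For admins who want to enforce that in script rather than trust defaults, Microsoft documents a policy value for turning off Recall’s snapshot saving. Here’s a sketch, assuming the DisableAIDataAnalysis value under the WindowsAI policy key as documented at the time of writing; confirm the names against current Microsoft documentation before deploying broadly:

    # Turn off Recall snapshot saving via per-user policy, then verify.
    # Key and value names are assumptions based on Microsoft's Recall
    # documentation; double-check before wide deployment.
    $key = 'HKCU:\Software\Policies\Microsoft\Windows\WindowsAI'
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name 'DisableAIDataAnalysis' -Value 1 -Type DWord
    Get-ItemProperty -Path $key | Select-Object -Property DisableAIDataAnalysis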

Why Microsoft keeps adding new features to Windows 10

Microsoft’s approach to Windows 10 has been chaotic lately. The company looks like it’s ramping up development of new Windows 10 features this summer — all while insisting it will end support for the OS in October 2025.

Whether you’re a business user or a consumer, it’s natural to wonder what exactly Microsoft is doing, why it’s happening, and what it all means for Windows 10’s support timeline.

I’ve got some answers to why Windows 10 looks like it’s getting another burst of life — and why it will be short-lived.

Want tips for making the most of your PC? Sign up for my free Windows Intelligence newsletter — I’ll send you three things to try every Friday. Plus, get free copies of Paul Thurrott’s Windows 11 and Windows 10 Field Guides (a $10 value) for signing up.

It all started with Copilot

When Microsoft announced Copilot back in September, the message was clear: Copilot on the taskbar was a Windows 11 feature for modern Windows 11 PCs. Why would Microsoft add its generative AI software to Windows 10 PCs? After all, those PCs wouldn’t last much longer.

In fact, the lack of updates was part of Windows 10’s new advantage! For businesses and users who just wanted a stable version of Windows that didn’t change much, Windows 10 was just the ticket. All the new features would be added to Windows 11, which made Windows 10 ideal — especially in business.

A few months later, Microsoft changed its mind and announced Copilot would be coming to Windows 10’s taskbar, too. It was a shrewd business move, using Windows 10 as a platform to deliver Microsoft’s web-based AI assistant to most Windows PCs in the world.

After all, Copilot-on-the-taskbar is entirely dependent on Microsoft’s cloud servers. You can use Copilot just as well on a Windows 10 PC as you can on the Copilot website or a Copilot Android or iPhone app. As I put it months ago, Microsoft is using Windows PCs as a platform to push its web-based services.

At the time it changed tracks, Microsoft explained that it was ‘maximizing value in Windows 10.’ As the company put it: “We are revisiting our approach to Windows 10 and will be making additional investments to make sure everyone can get the maximum value from their Windows PC including Copilot in Windows.”

(You could just as well say Microsoft was getting the maximum value from its Windows 10 users — delivering Copilot to as many people as possible.)

Windows 10 has gotten other recent updates

That Copilot taskbar icon is just the most obvious update to arrive on Windows 10 PCs recently. Microsoft recently launched an expanded lock screen weather widget that now also displays sports scores and stock price updates on the lock screen. (Here’s how to hide that widget from your PC’s lock screen.)

That particular update arrived for both Windows 11 and Windows 10 PCs around the same time. As with Copilot, it’s all about using PCs as a platform to push Microsoft’s web-based services, getting them in front of the maximum number of people and getting extra clicks to Microsoft Start.

Both Windows 10 and Windows 11 now have “Weather and more” on their lock screens.

Chris Hoffman, IDG

Microsoft is also rolling out an update that will offer “Spotlight” as a desktop background option on Windows 10, letting your PC download fresh new desktop backgrounds every day — another feature previously only found in Windows 11. That’s good news, and it means Windows 10 users who want this sort of thing will be able to uninstall the Bing Wallpaper tool and select “Spotlight” under Settings > Personalization > Background — just like on Windows 11.

There have also been app updates. Windows 10’s Photos app has gotten a variety of AI-powered features, just like Windows 11’s version. You can erase objects from photos with generative AI in your PC’s Photos app — whether you’re using Windows 10 or 11. And Windows 10 got a new “Windows Backup” app that encourages you to back up your data with OneDrive.

Now, the Windows 10 beta channel reopens

After all of this, you would think that Windows 10 feature development might be winding down. But Microsoft appears to be ramping up. On June 4, the company announced it was re-opening the beta channel for Windows Insiders who want to test new features on Windows 10. That seems like a sign that even more Windows 10 features are on the way — and they need testing.

Just in case there was any confusion, Microsoft has reiterated that the October 14, 2025, end-of-support date hasn’t changed.

Clearly, Microsoft isn’t done. Expect to see more new features and changes. For example, Microsoft is transforming the Copilot experience in Windows from a sidebar to a more traditional windowed app — something that looks very similar to the new ChatGPT desktop app that OpenAI recently showed off. That announcement was about Windows 11, but it’s easy to imagine Copilot’s modern new interface might come to Windows 10 as well.

I also imagine you can expect more new features and tweaks that tie in with Microsoft’s online services.

Microsoft showed off a new, ChatGPT-style windowed experience for Copilot at its Copilot+ PC launch event.

Chris Hoffman, IDG

Microsoft keeps putting its Windows division under new management

You can’t really understand what’s going on here without turning your eye to the corporate machinations inside Microsoft. If it seems as if the company’s Windows 10 strategy keeps shifting as the management in charge of Windows keeps changing, that’s because it has: Microsoft has had three different people in charge of Windows over the past year.

Until September 2023 — shortly before the announcement of Copilot — Panos Panay was in charge of Windows (and Surface hardware) at Microsoft. Panay left just before the Copilot launch. After that, Microsoft split up Surface and Windows, putting Mikhail Parakhin in charge of “Windows and web experiences.” Then, in March 2024, Microsoft put Pavan Davuluri in charge of Windows (and Surface hardware).

It’s interesting how these dates line up with shifts in the Windows 10 strategy. Under Panay, Windows 10 wasn’t getting new features and the focus was on Windows 11. Then, under Parakhin — who was in charge of both Windows and web experiences like Bing — Windows 10 feature development began once again, with more web-based features added to Windows 10.

Now, with a new person overseeing Windows, we get what seems like a strategy change once again, with Windows 10’s beta channel coming back to life. It’s a “glass half full” type of strategy: You could say that Windows 10 only has 16 months of support left, so why add new features? Or, you could say that Windows 10 has 16 months of support left — that’s 16 months Windows 10 users could be using these new features that help Microsoft’s bottom line!

Microsoft doesn’t plan on offering the Windows 11 upgrade to more PCs, either.

Chris Hoffman, IDG

Windows 10’s end of support deadline remains

Ultimately, most PCs still use Windows 10 and Microsoft wants to deliver its web-based features on those PCs to get more people using them. It’s as simple as that.

Still, Microsoft continues repeating that it’s not going to extend Windows 10’s support. Windows 10 will be done as of Oct. 14, 2025 — never mind new features!

But Microsoft has never been in a position quite like this. There’s never been a version of Windows that was this popular right up to its end-of-support deadline. I imagine the company hopes that between new Copilot+ PC hardware enabling more powerful genAI features and other upgrades, enough people will move on to Windows 11 that Microsoft can put its predecessor out to pasture on schedule.

Still, the company has done one big thing: Microsoft has announced it will, for the first time, offer paid Extended Security Updates (ESUs) to home users, not just businesses. You’ll be able to pay for another three years of security updates for your Windows 10 PC after the deadline, whether you have a business contract with Microsoft or you’re just an individual user.

Microsoft hasn’t finished announcing pricing for home users yet. Business users will pay $61 per device for the first year, and schools will pay just $1 per device for the first year. That’s as far as Microsoft has gone with the specifics.

The company is likely waiting to see just how many people will still be using Windows 10 next year. There’s a good chance the extended security update pricing will be on the inexpensive side for home users, just as it is for educational institutions.

Just don’t expect Microsoft to keep adding new features to Windows 10 during that extended support period, which is all about security updates.

Of course, never say never; who knows what a future manager may decide to do with Windows 10!?

Sign up for my Windows Intelligence newsletter to get three things to try in your inbox each Friday and free copies of Paul Thurrott’s Windows 11 and Windows 10 Field Guides.

Afraid AI will steal your job? You’re not alone

More than four in 10 workers in a massive, global survey said they’re threatened by the increasing presence of artificial intelligence (AI) in their workplace.

Responses to an ADP survey from nearly 35,000 workers in 18 countries explored new challenges and technologies reshaping the labor market. Among the top concerns: AI, with 42% citing it as a job threat.

Data and an evaluation of the results by ADP focused on how workers’ priorities, expectations and feelings have shifted, including their sense of job security and how the implementation of AI in the workplace impacts that shift. The details are included in ADP’s recently released “People At Work 2024: A Global Workforce View” annual report.

Eighty-five percent of workers believe AI will impact their job in the next two to three years, ADP’s survey found. Among workers who say AI will help them every day, 70% say they have the skills they need to advance their career to the next level within three years. Of those who say AI will replace most of their existing functions, only 45% think they have the skills they need.

AI job automation

ADP

Workers in some ways have remained constant in their priorities; they still put great value on financial compensation and job security, for example. In other ways, however, they feel under siege from technology, stress, and changing workplace norms. ADP’s report this year focused on a “Great Transition” from a “troubled, pandemic-driven economy to a post-pandemic world.”

Overall, people feel better about their job security than they did a year ago, but they’re now worried by the increasing presence of AI.

The survey results are backed by financial services market predictions that AI could replace the equivalent of 300 million full-time jobs. According to a 2023 report by investment bank Goldman Sachs, two-thirds of all jobs could be partially automated by AI. “If generative AI delivers on its promised capabilities, the labor market could face significant disruption,” Goldman reported. “And… generative AI could substitute up to one-fourth of current work.”

In the US, office and administrative support jobs face the highest risks, with 46% of those jobs facing automation by AI, according to Goldman Sachs. That figure is 44% for legal work and 37% for tasks within architecture and engineering.

“The good news is that worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth,” Goldman Sachs noted. “The combination of significant labor cost savings, new job creation, and higher productivity for non-displaced workers raises the possibility of a productivity boom that raises economic growth substantially, although the timing of such a boom is hard to predict.”

AI Automation

Goldman Sachs

Goldman’s report suggested genAI could raise annual US labor productivity growth by just under 1.5% over a 10-year period after widespread adoption, although the boost could be much smaller or larger depending on the difficulty level of tasks AI will be able to perform and how many jobs are automated.

Of the ADP survey respondents, 17% said AI has the power to make their work easier and were less likely to feel insecure about losing their jobs. And 43% think having AI on the job will be a good thing.

While most (60%) agree they have the skills needed to advance their career in the next three years, less than half (47%) feel their employer invests in the skills they need to grow professionally. As workers grow less confident in their employers’ willingness or ability to invest in them, however, they grow more concerned about AI muscling in on their jobs. Workers who most fear AI have the least confidence (45%) that they have the skills they’ll need.

A widening IT skills shortage is dogging organizations in all industries and across all regions, IDC research shows. But far from eliminating most jobs, AI is expected to boost the productivity of the existing workforce and help to create new roles.

IDC AI Automation

IDC

“For all the concern about job loss…, many AI-based changes to work have led to new job opportunities, including enabling workers to focus on more engaging and innovative tasks,” IDC said. “AI use cases are creating new opportunities for content creation, education, entertainment, and content generation, provoking a shift in thinking about the role of AI on jobs and roles.”

IDC data indicates that 32% of business and IT leaders now expect advancing AI constructs such as genAI to save time and improve productivity. For example, genAI tools support greater access to diverse knowledge resources by enabling employees to conversationally access them using natural language queries.

“GenAI and skills are increasingly tightly related: Organizations spanning all industries and geographies face a widening shortage of all IT tech skills, regardless of those skills relating to security, cloud, IT service management, or AI itself,” IDC said. “GenAI tools used in conjunction with or inside of tech training platforms can and do accelerate training.”

Nearly two-thirds (62%) of about 1,100 IT leaders surveyed earlier this year told IDC that a lack of skills has resulted in missed revenue growth objectives — and more than 60% say it has led to quality problems and a loss of customer satisfaction. IDC predicts that by 2026, more than 90% of organizations worldwide will feel similar pain, amounting to some $5.5 trillion in losses caused by product delays, impaired competitiveness, and loss of business.

“There is no escaping it: The time to plan for AI skills and roles is now. …The question is not whether enterprises must skill up employees for the age of AI, but when and how they will do it,” IDC wrote in its report.

At the same time, many organizations are still asking whether the rapidly evolving technology will profoundly change jobs and skills development at a fundamental level. “Automation has long impacted human activities, most notably during the industrial era when machines rapidly replaced the manual work of humans,” IDC said.

The impact of AI and genAI varies by profession and industry, with creative and engagement-based skill sets and work being affected first. Marketers, customer support teams, web designers, professional services groups, healthcare roles, and many other positions are already seeing changes in their work practices from AI.

The same IDC data shows that 29% of business and IT leaders expect AI and genAI to enable faster decision timing for back-office centers of leadership like HR, operations, and finance. As compared to two years ago, businesses are changing their narratives around AI enablement to see what it can do for the business and its workforce rather than worrying about what uncontrollable challenges it could pose.

WWDC: Apple’s Private Cloud Compute is what all cloud services should be

Apple Intelligence is the big buzz at WWDC. But when it comes to AI and the cloud, if you aren’t a huge enterprise or well-funded government, privacy and data security have always been a challenge when using any cloud service. With the introduction of Private Cloud Compute (PCC), Apple just did cloud services right — and put a real competitive moat in place.

Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding additional layers of insecurity. It had to do so: Apple needed to create a cloud infrastructure on which to run generative AI (genAI) models that demand more processing power than its devices can supply, while also protecting user privacy.

While you can also use ChatGPT with Apple Intelligence, you do not need to (and OpenAI has promised not to store your data under the Apple deal, I think); PCC helps you run Apple’s own genAI models instead.

The Apple achievement explained

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior Vice President of Software Engineering Craig Federighi said when announcing the service Monday at WWDC.

To achieve this, Apple has poured what I imagine is more or less a nation-state level security budget into creating a highly secure cloud-based system that provides the computational power some problems will require to be resolved.

The introduction comes at a time when providers are rolling out a range of trusted cloud and data sovereignty solutions to answer similar challenges across enterprise IT; Apple’s Private Cloud Compute service represents the best attempt yet to provide trusted cloud access to the mass market. It comes as security experts warn against unconstrained use of cloud-based genAI services in the absence of a privacy guarantee. 

(Remarkably, Elon Musk’s first reaction on hearing of Apple Intelligence was to label it a security threat, when that is precisely what it has been built not to be — you don’t have to use OpenAI at all, and I expect device management tools will be able to close off access to doing so. Perhaps a PCC-style service will eventually form part of the ecosystem for autonomous vehicles that are actually safe?)

What is Private Cloud Compute?

Private Cloud Compute consists of a network of renewable-energy powered Apple Silicon servers Apple is deploying across US data centers. These servers run Apple’s own genAI models remotely when a query demands more computational power than is available on an Apple device. (We don’t expect some of the newly introduced Apple Intelligence services to be available outside the US until 2025, likely reflecting the time it will take to deploy servers locally to support them.)

The idea is that while many Apple Intelligence tasks will run quite happily at the edge, on your device, some queries will require more computational power — and that’s where the PCC kicks in. 

But what about the data you share when making a query? Apple says you don’t need to worry, promising that the information you provide isn’t accessible to anyone other than the user, not even to Apple. 

This has been achieved through a combination of hardware, software, and an all-new operating system. The latter has been specially tailored to support Large Language Model (LLM) workloads, while presenting a very limited potential attack surface. 

This is the power of what is, at its core, server-side Unix, coupled with Apple’s own proprietary system security software and a range of on-device, on-system hardened and highly secure components.

How does this all fit together?

The hardware itself is built around Apple Silicon, which means the company has been able to protect the servers with built-in security protections such as Secure Enclave and Secure Boot. These systems are also protected by iOS security tools, such as Code Signing and sandboxing.

To provide additional protection, Apple has closed down traditional tools such as remote shells and replaced them with purpose-built proprietary tools. The company’s cloud services are also built on what Apple calls Swift on Server. The use of Swift, Apple says, ensures memory safety, which helps further limit any attack surface.

This is what happens when you make an Apple Intelligence request:

  • Your device figures out if it can process the request itself.
  • If it needs more computational power, it will get help from PCC.
  • In doing so, the request is routed through an Oblivious HTTP (OHTTP) relay operated by an independent third party, which helps conceal the IP address from which the request came.
  • It will only send data relevant to your task to the PCC servers.
  • Your data is not stored at any point, including in server metrics or error logs; is not accessible; and is destroyed once the request is fulfilled.
  • That also means no data retention (unlike any other cloud provider), no privileged access, and masked user identity.

Where Apple really seems to have made big steps is in how it protects its users against being targeted. Attackers cannot compromise data that belongs to a specific Private Cloud user without compromising the entire PCC system. That doesn’t just extend to remote attacks, but also to attempts made on site, such as when an attacker has gained access to the data center. This makes it impossible to grab database credentials to mount an attack.

What about the hardware?

Apple has also made the entire system open to independent security and privacy review — indeed, unless the server identifies itself as being open to such review, the information will not be transmitted — so no spoof PCC for you. 

The company didn’t stop there. “We supplement the built-in protections of Apple Silicon with a hardened supply chain for PCC hardware, so that performing a hardware attack at scale would be both prohibitively expensive and likely to be discovered,” the company said. “Private Cloud Compute hardware security starts at manufacturing, where we inventory and perform high-resolution imaging of the components of the PCC node before each server is sealed and its tamper switch is activated.”

What that means: Apple has put protections in place to maintain server security that extend all the way from the factory where those servers are made. That’s a huge step on its own account.

What about Apple’s genAI software?

Apple also maintained a focus on user security while developing the tools it makes available within Apple Intelligence. These follow what it calls its “Responsible AI principles,” as explained on the company site. These are:

  • “Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.
  • “Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.
  • “Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.
  • “Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.”

Apple has also thought about whose data it uses to train its models and promises that its Apple Intelligence LLMs are trained on licensed data, “as well as publicly available data collected by our web-crawler, AppleBot.” (If you don’t want your content crawled by AppleBot for use in training models you can opt out, as explained here.)
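
That opt-out works through the standard robots.txt mechanism. A minimal example follows; Applebot-Extended is the user-agent token Apple has described for excluding content from AI training, while Applebot itself continues to power features such as Siri and Spotlight suggestions:

    # robots.txt at your site's root: keep content out of Apple's
    # AI training corpus while remaining visible to Applebot search.
    User-agent: Applebot-Extended
    Disallow: /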

While researchers will kick Apple’s systems around, it looks very much like the company has crafted a highly secure approach to genAI from the device used to request a service all the way to the cloud, with software and hardware protections in place every step of the way. 

What Apple has achieved

There is a lot more to consider — take a look at this white paper — but Apple has achieved something potentially very good here: an ecosystem that provides private genAI services, and can be extended over time. 

I’m seeing positive reaction from across the security community to Apple’s news.

“If you gave an excellent team a huge pile of money and told them to build the best ‘private’ cloud in the world, it would probably look like this,” Johns Hopkins cryptography lecturer Matthew Green said. He also warned that the right to opt out of using Apple Intelligence should be more visible, and suggested that the impact of Apple’s move will effectively lead toward more use of cloud services.

“We believe this is the most advanced security architecture ever deployed for cloud AI compute at scale,” said Apple Head of Security Engineering and Architecture Ivan Krstić.

Is that really the case? Perhaps. Apple has promised that additional transparency to confirm its security promise is on the way. Though I do wonder how this service will gel with those bandit nations (such as the UK) that legislate for pretty much constant data surveillance in order to protect nothing much at all.

But in terms of end user protection and a fully figured out system to support cloud services, Apple’s new offering shows what every cloud service should aspire to exceed. 

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.