
Apple MDM vendor Mosyle gets into asset management

Service providers who cater to Apple in the enterprise space continue to explore new ways to differentiate their offerings in the fast-expanding sector. This might be why Mosyle has acquired asset management software provider Assetbots.

The acquired firm will continue to operate as an independent company led by founder Chad Burggraf, bolstered by its strengthened relationship with its new owner, Mosyle.

Bringing asset management into Apple MDM

The deal, for an undisclosed amount, sees Mosyle’s Apple device MDM solutions expanded with the future introduction of cost-effective asset management tools, which can help users track and monitor assets such as IT equipment, printers, users, and more. The solution apparently has strong automation to help track item lifecycles, giving IT a good bird’s-eye view of the condition of its equipment; that should allow for more accurate purchasing and management.

The focus, according to statements from both firms, will be to become a leading asset management platform across small and medium-sized businesses (SMBs) and schools, which is where Assetbots has focused its efforts.

Making complex things simple

Mosyle CEO Alcyr Araujo says he pursued the deal because he recognized the quality of the software, the simplicity of the user interface, and the feature set. “Their obsessive goal of creating extremely high-quality software with unparalleled simplicity and affordability, …immediately made me want to be part of their mission and growth,” he said in a statement.

“The total alignment on the vision of creating the highest quality tools on the market, while also achieving simplicity and affordability made Mosyle a perfect home for Assetbots,” said Burggraf, also in a statement. “Their proven success in achieving that combined with their scale and resources will allow Assetbots to do the same for Asset Management software for schools and SMBs.”

It’s worth noting that the acquisition seems to reflect an evolution in the relationship between the two firms. In May, Assetbots published a report explaining how to sync assets through Mosyle. Two more reports detailing further integrations between the services followed.

Some of the benefits include fast two-way sync between the two services, along with robust security, compliance, and scalability. Those reports also explain that Mosyle customers will be able to subscribe to Assetbots’ services at a highly preferential rate. “Mosyle and Assetbots will continue to work together to create even more benefits for common customers,” the companies said.

Apple’s enterprise ecosystem continues to mature

For many, the world of software-driven asset management might seem a little, well, niche. But the evolution taking place across Apple’s third-party enterprise ecosystem is noteworthy. 

Vendors in the market already recognize the benefits of the platform they work within, both in terms of Apple’s expanding market share in global business and in terms of resilience, security, and privacy. All these factors mean they and their clients can expect better business resilience, which is never a bad thing.

While many across Apple’s developer ecosystem bemoan some of the impacts of the company’s walled-garden approach, all of them also recognize the benefits of playing within a rock-solid, highly secure environment already suited to the emerging needs of digitally transformed businesses.

In this multi-platform, multi-device, distributed world, it makes sense that every endpoint be secure by design, while the ability to effectively manage devices using Apple’s existing APIs and the software built by device management firms is a highly sellable commodity. 

Bigger fish in a growing pond

While it is true that vendors in the Apple enterprise space are competing against each other, it also means the market in which they compete is expanding, not shrinking. At this point, the fish get to continue getting bigger while swimming in a larger pond.

And while many who are deeply invested in the Windows enterprise ecosystem might want to ignore it, opportunity knocks for Apple as CIOs and CISOs come under intense pressure to explain away the damage done by the recent Microsoft/CrowdStrike disaster. “Windows is the most fragile platform” is not a headline designed to instill confidence in any business leader making a purchasing decision. It’s the kind of lede that leaves those fish gasping.

Some will consider switching ponds

Is Apple the future of business? 

This really is a continuation of the trend. As Araujo put it earlier this year: “While Apple devices have always been the device of choice for modern companies, very large industrial and traditional service companies are now embracing the technology. This should continue for the coming years, and Macs will ultimately become the leader in the enterprise for all businesses.” 

Step by step, the Apple enterprise ecosystem is evolving to serve a diversity of business needs and to replace existing suppliers across the fading Windows-centric enterprise IT space.

Please follow me on LinkedIn and Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

How to set up Windows 11 Hyper-V virtual machines

Though it may seem that virtual machines, a.k.a. VMs, have been around forever, Microsoft introduced its first version of the Hyper-V virtualization environment as part of Windows Server 2008. Hyper-V and its VMs didn’t appear in a Windows desktop operating system until May 2012 with the Windows 8 technical preview.

Since then, however, Hyper-V VMs have come to represent approximately 1 in every 10 VMs in global use. This is no small number, given that the cloud more or less runs on VMs of one kind or another. We’re talking billions and billions (and not hamburgers, either).

That said, Windows 11 users and admins have plenty of good reasons to run VMs on PCs on a vastly smaller scale than in the cloud. Among them are:

  • The ability to run multiple OSes, including various versions of Windows 10 and 11 (or older, unsupported versions), Linux, and more, side by side. Among other benefits, this lets users run legacy applications not compatible with Windows 11.
  • Support for virtual networks, complete with virtualized gateways, routers, servers, and network nodes on one or more private or public networks.
  • Easy isolation and testing, useful for testing software, experimenting with specific configurations, and avoiding device or software conflicts with the host PC.
  • Superior backup and disaster recovery capabilities, thanks to checkpoint and snapshot mechanisms, with easy recovery to other PCs or into the cloud.
  • Remote access and management, so that VMs can be maintained across multiple devices and locations, with secure boot, live migration, and lots of options for network storage.

Microsoft provides two tools for creating Hyper-V VMs in Windows 11. In this guide I’ll provide some background, discuss how to use each tool, and detail the drawbacks each entails.

In this article:

  • Understanding VMs and hypervisors
  • Requirements for a Windows 11 VM
  • Creating VMs with Hyper-V Manager
  • Creating VMs with Dev Home (Preview)
  • Net-Net: It really could be easier

Understanding VMs and hypervisors

Hyper-V is a kind of hypervisor: a program that can create, run, and manage one or more virtual machines on some kind of physical computer. Essentially, a hypervisor creates a runtime environment in which an administrator can define the properties of one or more VMs that run in its embrace.

Such properties include virtual processors and cores, virtual memory, virtual storage, virtual networking connections, and a virtualized operating system. VMs use such resources to run commands, programs, and more, each inside its own independent and isolated runtime environment.

From inside a VM, Windows looks and runs as it would on any computer. From outside the VM, the hypervisor handles interactions with the host OS and translates between virtual resources allocated to the VM and physical resources made available to the hypervisor.

PCs that run Hyper-V (the hypervisor built into Windows, managed through Hyper-V Manager) or other hypervisors, such as VMware Workstation, are called host PCs; individual VMs function as hypervisor clients, or guests.

Requirements for a Windows 11 VM

Because a Windows VM represents an instance of a virtualized Windows operating system, Windows 11 VMs inherit the system requirements for Windows 11 itself. Hyper-V adds a few items of its own to this list (they’re preceded by an asterisk):

  • * Windows Edition: Windows 11 Enterprise, Education, or Pro. (Windows 11 Home does not support Hyper-V.)
  • CPU: A 1.0GHz (or faster), 64-bit ARM or x86 CPU that supports second-level address translation (SLAT is Microsoft’s umbrella term; Intel implements it as EPT, AMD as RVI, and ARM as Stage-2 page tables) is needed to run Windows 11 inside Hyper-V. Windows 11 also imposes CPU requirements for SSE4.2 instruction support for 24H2 or newer OS versions (specifically, the POPCNT instruction must be available).
  • RAM: Windows 11 VMs need at least 4GB of RAM to operate, ideally on a physical PC equipped with 8GB of RAM (or more). For Windows PCs that will run Hyper-V, you’ll want at least 4GB of RAM for the base OS and at least 4GB more for each VM you want to run in parallel. Thus, a Windows 11 PC with 32GB of RAM could handle up to seven Windows 11 VMs, but no more than that; fewer is better, in fact. (See the sizing sketch after this list.)
  • Graphics: The physical PC should support DirectX 12 or higher as well as a WDDM 2.0 (or higher) driver. Display resolution to accommodate a VM window should be 720p (1280 x 720 pixels) or better. This is the same as for the Windows 11 OS itself.
  • * VM generation: VMs come in two forms: Generation 1 and Generation 2. A Gen 1 VM uses BIOS firmware and supports only limited hardware devices and features. A Gen 2 VM uses UEFI firmware and supports a much wider range of hardware devices and features. Microsoft recommends that admins always create generation 2 VMs if they can. Indeed, Windows 11 works only on a Gen 2 VM.
  • Other noteworthy requirements: The physical PC should support the Trusted Platform Module (TPM) version 2.0 and UEFI with Secure Boot. Nearly all PCs built in mid-2018 or later should meet these requirements with ease, although TPM 2.0 must be enabled on some PCs.
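
Since host RAM is usually the binding constraint, it can help to run the arithmetic before you start creating VMs. Here’s a minimal Python sketch of the sizing rule from the RAM item above (4GB reserved for the host OS, at least 4GB per concurrent VM); the constants are assumptions, so adjust them to match your own hardware and workloads.

    # Minimal sizing sketch: how many Windows 11 VMs fit in a host's RAM,
    # assuming 4GB reserved for the host OS and at least 4GB per VM.
    HOST_RESERVED_GB = 4   # RAM kept back for the Windows 11 host itself
    PER_VM_GB = 4          # minimum RAM per Windows 11 VM

    def max_parallel_vms(total_ram_gb: int) -> int:
        # Whole VMs that fit alongside the host OS; never negative.
        return max((total_ram_gb - HOST_RESERVED_GB) // PER_VM_GB, 0)

    for ram in (8, 16, 32, 64):
        print(f"{ram}GB host RAM -> up to {max_parallel_vms(ram)} VM(s) in parallel")
    # 32GB works out to seven VMs, matching the example above; treat these as
    # minimums, not targets, since fewer, better-provisioned VMs behave better.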

Today, there are two ways to set up a Hyper-V VM in Windows 11: the standard way, and a developer-oriented way. The standard way uses Hyper-V Manager to create and configure VMs. The developer-oriented way, which made its debut in April 2024, uses a Microsoft utility named Dev Home (Preview). It’s available from the Microsoft Store.

As you will see, the developer-oriented way is simpler and less fraught with obstacles than the traditional one. But there’s a catch, as I’ll explain in the Dev Home section below.

Creating VMs with Hyper-V Manager

Although Hyper-V is included with modern Windows versions, it is not enabled by default. Thus, you must first turn Hyper-V on using the Windows Features element from Control Panel (Control Panel > Programs and Features > Turn Windows features on or off).

Turn on Hyper-V Manager

Click the top-level Hyper-V item. When you do so, you’re turning on both the Hyper-V platform itself (Hyper-V Platform) and Hyper-V Management Tools to run VMs through the Hyper-V Manager or through some remote access toolset (e.g., the Remote Desktop Connection, a.k.a. mstsc.exe, or the Remote Desktop app, ID=9WZDNCRFJ3PS).

I usually turn on the item labeled Windows Hypervisor Platform as well, as shown in Figure 1. Once you’ve selected these items, click OK and Windows will install and enable them for you.

Figure 1: Turn on top-level Hyper-V and the Windows Hypervisor Platform elements.

Windows Features will find, install and enable the elements necessary to run Hyper-V Manager — and along with it, the support necessary to run Hyper-V VMs inside its embrace. When Windows Features is done, it reports: “Windows completed the requested changes.” It also informs you that a reboot is needed to finish the installation process.

Thus, you must restart the PC before you can run or use the Hyper-V platform in any way. The easiest way to do that is simply to click the “Restart now” button at the lower right in Figure 2.

Figure 2: Click “Restart now” to get your PC ready for Hyper-V VM action!
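
Incidentally, if you’d rather script this step than click through Control Panel, the same optional features can be enabled from an elevated prompt with DISM. The Python sketch below simply shells out to that tool; the feature names it uses are the commonly documented ones for Hyper-V and the Windows Hypervisor Platform, so treat them as assumptions and confirm them on your own system with dism /online /get-features before relying on it.

    # Hedged sketch: enable the Hyper-V features from a script instead of the
    # Windows Features dialog. Run from an elevated Python session on the host PC.
    import subprocess

    # Commonly documented optional-feature names; verify with: dism /online /get-features
    FEATURES = ["Microsoft-Hyper-V-All", "HypervisorPlatform"]

    for feature in FEATURES:
        result = subprocess.run(
            ["dism", "/online", "/enable-feature",
             f"/featurename:{feature}", "/all", "/norestart"]
        )
        # DISM returns 0 on success and 3010 when a reboot is still required.
        if result.returncode not in (0, 3010):
            raise SystemExit(f"Enabling {feature} failed with code {result.returncode}")

    print("Features enabled; restart the PC to finish the installation.")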

After the reboot, typing “hyper” into the Start menu search box brings Hyper-V Manager up as its first choice, as shown in Figure 3.

Figure 3: Once rebooted, Hyper-V Manager is ready to run.

Run VM Quick Create using a provided OS

Accessing Hyper-V Manager from its host desktop — also known as running local, or local invocation — turns out to be the easiest way to set up a Hyper-V VM. The first time Hyper-V Manager is run, there’s literally nothing to see because no VMs are known to it yet. As you can see in Figure 4, each of the center panes for Hyper-V Manager reports “No VMs” or “No item.”

Figure 4: When run for the first time, Hyper-V Manager shows precisely nothing.

If you click Quick Create, the top item in the right-hand menu, you can soon remedy this situation. By default, Microsoft provides pointers to various predefined runtime environments, including evaluation versions of Windows 10 and 11 aimed at developers and (until early July) three different Long Term Support (LTS) releases of Ubuntu Linux (which are freely available to anyone who’s interested).

The right-hand side of Figure 5 shows the details for the “Windows 11 dev environment” item, an evaluation copy worth installing as a short-lived example. Select the OS you want to install and click Create Virtual Machine.

Figure 5: Select Windows 11 dev environment (left), then click Create Virtual Machine (lower right).

This downloads a ~21GB Windows image file. (On my Lenovo P16 Mobile Workstation, a 12th-Gen x86 platform, that took about 5 minutes to complete, including image verification.) Then, Hyper-V Manager creates a VM, extracts the virtual boot/system disk, verifies the image again, and completes (this takes another 5 minutes or so), as announced in Figure 6. Click Connect to hook up the new test VM inside Hyper-V Manager.

Figure 6: VM successfully created. Next step: click Connect.

What happens next is that Hyper-V Manager opens a virtual machine window. You must start the VM to turn it on, as shown in Figure 7. If you were using an ISO (covered in a moment), this is when the process of installing Windows inside the VM would occur.

Figure 7: Before you can interact with a VM, you must start it up: click Start!

But because Microsoft has so kindly provided us with a predefined, ready-to-run VM for this development environment, clicking Start launches a boot-up screen (“Getting ready”) that grinds away for a minute or so. After that you’ll be able to log in to a Windows 11 desktop (account name User, no password). There’s your first Hyper-V VM!

Though this takes at least 15 minutes to work through, it’s about as fast and easy as creating a VM ever gets. Figure 8 shows the VM running inside a Hyper-V Virtual Machine Connection window (see the legend on the title bar for confirmation).

Figure 8: The Windows 11 Dev Environment VM start menu shows a developer focus.

Setting up a VM from a predefined virtual hard disk (a .vhdx file plus configuration information) is incredibly simple. Basically, it means turning on an already-configured VM that’s more or less ready to run.
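
For those who prefer scripting, the same “predefined disk” idea can be expressed with the Hyper-V PowerShell module’s New-VM cmdlet and its -VHDPath parameter, which attaches an existing virtual hard disk rather than creating a new one. The Python sketch below just shells out to PowerShell; the VM name and .vhdx path are hypothetical placeholders, and it assumes the Hyper-V management tools enabled earlier are present.

    # Hedged sketch: build a VM around an existing, ready-to-run .vhdx file.
    # Run elevated on the host PC; the name and path below are placeholders.
    import subprocess

    vm_name = "DevEnvClone"                           # hypothetical VM name
    vhdx_path = r"D:\VMs\win11-dev-environment.vhdx"  # hypothetical existing disk

    ps_command = (
        f'New-VM -Name "{vm_name}" -Generation 2 -MemoryStartupBytes 4GB '
        f'-VHDPath "{vhdx_path}" -SwitchName "Default Switch"'
    )
    subprocess.run(["powershell", "-NoProfile", "-Command", ps_command], check=True)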

VM Quick Create using an ISO

You can see another option for creating VMs if you look back to Figure 5, where you can navigate into a “Local installation source” by clicking that button at the lower left corner of the “Create Virtual Machine” window. This lets you navigate into local drives and pick an ISO — a disk image format used for Windows installations — from which to run VM setup. (See “The best places to find Windows ISOs” for some reliable sources.)

That approach offers the ability to pick any Windows version you might like, but because it requires OS installation into a new VM, it takes longer to work through. There’s more effort involved in getting configuration right, too.

By clicking the Local installation source button shown in Figure 5 and navigating to the root of my E:\ drive, where a May 30 Windows 11 Insider Preview ISO resides, I made it a target for a Hyper-V VM. You can see the results of this selection in Figure 9; from there, you must click Create Virtual Machine (lower right) to proceed further with a local ISO.

Figure 9: By targeting an ISO through File Explorer, you open VM creation options to whatever the file system can see.

This time when you click Create Virtual Machine, the Hyper-V wizard works from the ISO. Because I accepted all defaults, this VM shows up as “New Virtual Machine” in Hyper-V Manager. I right-clicked that string in the top center pane (“Virtual Machines”) and selected Rename from the pop-up menu to call it “Win11.26100.” It appears as such in Figure 10.

Figure 10: Renamed to Win11.26100 (for its primary Windows 11 build number), the new VM must now be installed.

Because we’re working from an ISO file, we must now install this OS image before it can run as a VM. This is where timing becomes sensitive: once you click Connect, then click the Start button, you must be ready to hit the proverbial “any key” (because any key will do) to start the installer running. If you wait too long, you’ll see a PXE boot message instead, and the install will go no further. Be prepared and act fast!

If all goes well, you’ll see the Windows 11 setup screen “Select language settings.” After that, it’s just a matter of marching through the Windows install process, so I won’t track that further except to note that you will need a valid product key for whatever version of Windows you’re installing.

By default, you will encounter a problem: At some point, you’ll get the error message “This PC doesn’t currently meet Windows 11 system requirements.” That’s because Hyper-V Manager Quick Create does not enable the Trusted Platform Module (TPM: special, secure chip-level storage used for hardware keys and other highly sensitive system data) by default.

Turn off the VM (click Action in the top menu, then click Turn off). In the main Hyper-V Manager screen, select the VM you’re installing (in this case, Win11.26100), then click the Settings option at the lower right. This opens the Settings page for the Win11.26100 VM. Click on Security. In the Security pane, you will see that while Secure Boot is enabled, TPM is not. Check the Enable Trusted Platform Module checkbox, as shown in Figure 11. Then click Apply and then OK.

Figure 11: TPM must be enabled before Windows 11 may be installed. Hyper-V Manager neglects to do this by default, so you must do so manually.
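
If you’d rather script the change just described, the Hyper-V PowerShell module exposes the same setting through Set-VMKeyProtector and Enable-VMTPM (a key protector must exist before the virtual TPM can be switched on). The Python sketch below shells out to those cmdlets; run it while the VM is turned off, and substitute your own VM name.

    # Hedged sketch: enable the virtual TPM from a script instead of the
    # Settings > Security pane. Run elevated on the host PC with the VM off.
    import subprocess

    vm_name = "Win11.26100"  # replace with your VM's name

    ps_command = (
        # A key protector must be in place before the vTPM can be enabled.
        f'Set-VMKeyProtector -VMName "{vm_name}" -NewLocalKeyProtector; '
        f'Enable-VMTPM -VMName "{vm_name}"'
    )
    subprocess.run(["powershell", "-NoProfile", "-Command", ps_command], check=True)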

Now when you click Connect and get into the installer, it should run to the end.

Once the process is complete (it took another 40 minutes or so on my beefy P16 Mobile Workstation), you’ll see a typical Windows in a VM window. There you are: another VM for your Hyper-V collection.

Full-blown VM creation

In addition to the Quick Create option, Hyper-V Manager also offers a New > Virtual Machine option, as shown in Figure 12. This provides more direct access to the various settings and selections possible when creating a Hyper-V VM.

Figure 12: The New > Virtual Machine action provides more control over details when creating a new VM.

Using this option opens a New Virtual Machine Wizard that walks you through the entire VM specification process, as shown in Figure 13. As you can see, it lets you specify a name, choose a file system location, select a VM generation, assign memory, and handle networking, virtual hard disk, and installation details. (Alas, you must still manage the TPM setting yourself manually, as described in the preceding section.)

Figure 13: The New Virtual Machine Wizard takes users through VM creation and settings, step-by-step.

In the screenshots that follow, I’ll create a new Windows 11 VM for version 24H2 (Build 26100.863) downloaded from the Insider Preview downloads page. Figure 14 shows the Specify Name and Location screen, where I’ve named the VM Win11.24H2. It uses the default storage location for a virtual hard disk file.

Figure 14: The VM name is set to Win11.24H2, after which you click Next to continue.

By default, all Hyper-V VMs are designated Generation 1. Since Generation 1 doesn’t support Windows 11 VM requirements, you must select the Generation 2 radio button to install a Windows 11 VM, as shown in Figure 15.

Figure 15: Click the Generation 2 radio button to meet Windows 11 VM requirements.

The next step is to allocate 8GB (8,192MB) of RAM (double the default) for the Win11.24H2 VM, as shown in Figure 16.

Figure 16: Double the defaults (8GB instead of 4GB) this time around, please.

The next step is to select the Default Switch option for the Connection field under the Configure Networking heading, as shown in Figure 17. Here again, it’s important to note the default is “Not connected,” which means the VM cannot access any networks. Default Switch enables the VM to connect to the networks to which the host PC has access. If you have other switches defined in your host configuration, they should appear in the pull-down menu for this VM setup item (and you’ll be able to use them, if you like).

Figure 17: Use Default Switch if you want the VM to have network (and internet) access.

The next step is to connect a virtual hard disk for the VM to use. Here again we’ll use the default location mentioned earlier. Other options include “Use an existing virtual hard disk” (this is how the dev environment described earlier gets its contents) and “Attach a virtual hard disk later” (allows users to otherwise finish configuring a VM without allocating or linking to a virtual hard disk). See Figure 18 for the details for Win11.24H2.vhdx.

Figure 18: This represents the default allocation (127GB) for Hyper-V VM virtual hard disks.

Next comes the fun part: providing a file system link to an ISO and electing how (or if) to install that image. This reads “Installation Options” on the left-hand side. In this case, we’ll link to the ISO I downloaded from the Insider Preview downloads page, and tell it to install the OS from that file, as shown in Figure 19.

Figure 19: The selected radio button instructs the installer to find a specific Windows 11 ISO file.

At this point, the wizard is finished, so click Summary on the left to show your work so far. It will show all the settings you’ve made. Click Finish to complete the VM creation process. Then, you’ll return to Hyper-V Manager, where you now see a VM named Win11.24H2 in the upper center “Virtual Machines” pane, as shown in Figure 20.

Figure 20: The new VM, Win11.24H2, is turned off. That’s good!
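
As a point of comparison, everything the wizard just captured can also be expressed as a handful of Hyper-V PowerShell calls. The Python sketch below is a rough, hedged equivalent of the choices made above (Generation 2, 8GB of RAM, the Default Switch, a new 127GB virtual disk, and a local ISO set as the first boot device); the paths are placeholders, the cmdlets are the standard Hyper-V module ones, and the virtual TPM still has to be enabled separately, as noted next.

    # Hedged sketch: a scripted rough equivalent of the New Virtual Machine Wizard
    # settings described above. Run elevated on the host PC; paths are placeholders.
    import subprocess

    vm_name = "Win11.24H2"
    vhd_path = r"D:\VMs\Win11.24H2.vhdx"   # placeholder location for the new disk
    iso_path = r"E:\Windows11-24H2.iso"    # placeholder local installation ISO

    ps_command = "; ".join([
        # Generation 2 VM, 8GB of RAM, Default Switch, new 127GB virtual hard disk
        f'New-VM -Name "{vm_name}" -Generation 2 -MemoryStartupBytes 8GB '
        f'-NewVHDPath "{vhd_path}" -NewVHDSizeBytes 127GB -SwitchName "Default Switch"',
        # Attach the installation ISO and make it the first boot device
        f'Add-VMDvdDrive -VMName "{vm_name}" -Path "{iso_path}"',
        f'Set-VMFirmware -VMName "{vm_name}" '
        f'-FirstBootDevice (Get-VMDvdDrive -VMName "{vm_name}")',
    ])
    subprocess.run(["powershell", "-NoProfile", "-Command", ps_command], check=True)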

You could try to connect to and start the install process for Win11.24H2 now, but we know one more change is needed — namely, to enable TPM under the new VM’s Security settings, as detailed before Figure 11 above. Once you’ve done so, you can get going on the install for this Windows 11 OS image, as described earlier in this story as well.

We’re done with the introduction to Hyper-V Manager and creating VMs. Now it’s time to dig into some down and dirty details.

A big Hyper-V Manager gotcha: remote VM access

By definition, all VM access is remote — that is, there’s no physical mouse, keyboard, or monitor attached to any VM. To interact with a VM, you must map virtual stuff onto real stuff — including the aforementioned peripherals but also CPUs, RAM, storage, networking, and yadda yadda yadda. Remember further that remote access is one of the benefits of VMs: indeed, they should readily support access across any network connection to a parent hypervisor.

Alas, when a remote connection uses the Windows remote desktop protocol (RDP) through Remote Desktop Connection or the Remote Desktop app, and that hypervisor is Hyper-V, things can get interesting. Let me explain how this can present obstacles above and beyond the gotchas I’ve already mentioned (issues getting the Windows Installer to start up for an ISO-based install, and the need to enable TPM for that install to actually work).

For starters, you can’t start a new Windows 11 VM from inside an RDP session, as it seeks to read and mount the targeted Windows 11 ISO to run its Setup.exe. For whatever odd reason, this works only from a local login on the host PC (not in an RDP session). If you click the Start button shown in Figure 7 in an RDP session, the VM won’t boot to run Setup.exe.

Indeed, if you click Start, you’ll get a black screen in the VM window, instead of running the Windows 11 installer. You must turn off the VM (click Action in the top menu, then click Turn off). Then click the Start button shown in Figure 7 from the physical host PC, using the local mouse or keyboard. Once Setup.exe is running, however, an RDP session shows the VM as you’d expect to see it, with the initial Windows 11 installer screen (see Figure 21).

Figure 21: Once Setup.exe is running, you can RDP into the VM, if you wish.

The next gotcha makes itself felt after you select the Install now option that appears next. You will find you cannot copy and paste a Windows activation key into the Activate Windows prompt shown in Figure 22. Why? Because this only works in an enhanced session inside RDP, and you can’t select that option until after Windows 11 is installed. Indeed, fixing this requires some fiddling with the Windows Hello login options. (Turned on by default, they don’t work with an enhanced session that permits copying and pasting from outside the RDP session into that session.)

Figure 22: When you get to the key prompt in Activate Windows, you’ll discover you cannot copy and paste a text key. Manual entry only!

You’ll have to enter the 25-character (letters and numbers) string for your chosen Windows key manually. Or you can use the 30-day eval for the Windows 11 developer environment instead (no key required but access doesn’t last very long).

But there’s one more RDP gotcha to surmount: you can’t log in to your new desktop until you uncheck the “Enhanced session” option in the View menu for the RDP session. Once you do that and log in, click Settings > Accounts > Sign-in options, then turn off the toggle under “Additional settings” that reads For improved security…, as shown in Figure 23. Then you can switch back to an enhanced session and log in using a password or a PIN.

Figure 23: Note that “Enhanced session” in the View menu up top is unchecked. Turn off the toggle under “Additional settings.” Then you can re-enable that option so that cut and paste will work in RDP.

Getting past plain-vanilla VMs

Thanks to our earlier efforts in this story, you’ve got yourself some working Windows 11 VMs set up mostly using Hyper-V Manager’s defaults. By setting up the plain-vanilla, all-default Win11.26100 VM via Quick Create and the slightly modified Win11.24H2 VM that follows it through the New Virtual Machine Wizard, you can learn a lot about what makes a VM tick, as well as provisioning defaults.

Those defaults will change according to the configuration of the host physical PC on which Hyper-V Manager runs. That is, machines with fewer cores, less RAM, and less storage will produce default VMs with fewer cores, less RAM, and less storage than those machines with more cores, more RAM and more storage, like the formidable Lenovo P16 Mobile Workstation I used as my test machine, with 24 cores, 64GB RAM, and ~4 TB total storage, 2 TB on the boot/system drive.

In most cases, the defaults that Hyper-V Manager chooses for the VMs it creates on your behalf work reasonably well. If you’re already familiar with Hyper-V, feel free to change values up or down. IMO, reducing values from their defaults doesn’t make much sense except perhaps for special cases (or underwhelming physical PCs).

For more info on Hyper-V and VMs: Microsoft Learn offers a free 45-minute module titled “Configure and Manage Hyper-V virtual machines” for those who want more details. Also, there’s a series of tutorials at Windows 11 Forum under its Virtualization heading (54 in all, across a range of VM topics) for those who really want all the minutiae.

Creating VMs with Dev Home (Preview)

When Microsoft released v0.13 of its Dev Home (Preview) developer toolbox on April 23, 2024, I noticed they added support for “Environments” as something new. Microsoft explains that environments provide “… the ability to create, manage, and configure Hyper-V VMs and Microsoft Dev Boxes” (see the GitHub Dev Home Preview v0.13 release notes).

Many readers may be indifferent to Dev Boxes (an Azure service that aims to enhance developer productivity through self-service access to preconfigured, project-oriented development environments in the cloud; an Azure subscription is required). If those readers want to use Hyper-V VMs, however, they should NOT be indifferent to Dev Home’s VM capabilities, which require Windows 11 22H2 or later.

Because I’m only too familiar with the gotchas outlined in the previous section that can impede creation (and use) of VMs through Hyper-V Manager, I wanted to see if Dev Home (Preview) could do any better. I deliberately used an RDP session to run Dev Home on a remote PC. Inside Dev Home, I opened the Environments option, shown with a small blue highlight bar to its left in Figure 24.

Figure 24: Dev Home’s Environments view shows existing VMs along with “New Virtual Machine.”

Notably, Dev Home brought up all VMs already defined on the P16, and their status (Stopped, Running, Saved); I guess that means they count as “Environments” from the tool’s perspective. More notably, clicking the Create Environment button at the upper right sped me through the steps to create a new Hyper-V VM:

  1. Select Microsoft Hyper-V as the “environment provider.”
  2. Enter NewVM2 in the field tagged “New virtual machine name.”
  3. Select the 30-day evaluation for the “Windows 11 dev environment” as the Windows OS image source (it’s labeled an “Environment” below), purely as a test, as shown in Figure 25.
  4. Click Create Environment (see lower right, Figure 25).

Figure 25: This screen shows you’ve chosen the Microsoft-supplied “Windows 11 dev environment” as your image source for a new VM.

Once you’ve clicked the Create Environment button, be prepared to wait a while. Dev Home must download the Windows 11 dev environment (over 20GB in size), then extract its interior files. On the P16 Mobile Workstation, that took about 15 minutes. Dev Home does report progress during this process, as you can see in Figure 26, which shows the download 76% complete.

Figure 26: Progress in downloading the ISO for the Windows 11 dev environment stands at 76%.

When the extraction process ends, the ISO is mounted and the VM ready to launch. You’ll see Environment information for your new VM (NewVM2, in this case) like that shown in Figure 27. You must click the Launch button (far right) to start the VM installation process.

Figure 27: Click the Launch button to fire off the Windows installer for the VM’s OS.

When you do that, a small VM window opens to present you with a Start button to fire things off. Figure 28 depicts that VM window: click that Start button!

Figure 28: Click the Start button to put Setup.exe to work to install the VM’s OS.

This starts the VM, which fires off the ISO image’s Setup.exe, at which point you’ll see a larger VM window labelled Hyper-V and a circular progress indicator. Then, you’ll be asked to size the VM window for further display (I recommend at least 1680 x 1050). At this point, a login window for a generic “User” (no password) appears, as shown in Figure 29. Remarkably, this took two minutes or less to complete. Click Sign in to get to the desktop.

Figure 29: Because the predefined User account requires no password, click “Sign in” and you’re done.

The next thing you’ll see is the NewVM2 desktop, a mostly bare-bones Windows 11 install that also includes Visual Studio 2022 and Ubuntu on Windows. As I write this, it’s running the current Enterprise Evaluation build (22621.3447). I also checked: you cannot use a valid Windows 11 Enterprise key to activate this install (it rejects all keys).

But here we are, having installed a working Hyper-V VM for Windows 11 from start to finish inside an RDP session! Thus, the Dev Home approach completely sidesteps all the gotchas one encounters when using Hyper-V Manager, to wit:

  • There’s no need to stop the VM after its first start, visit its Security settings, and check the Enable TPM option as in Hyper-V Manager. Dev Home is smart enough to handle this in the background.
  • It starts, installs and boots from inside an enhanced RDP session. No local login is required to start up the VM to run setup.exe for the first time.
  • Built-in support for enhanced sessions also fixes the missing login prompt problem once the VM is up and running. There’s no need to tweak sign-in options, either. In fact, the OS doesn’t even show the “Only allow Hello logins” entry under Additional settings in Settings > Accounts > Sign-in options, as shown in Figure 23 earlier.
  • Because enhanced sessions are turned on by default, you can cut and paste strings from outside the RDP session into the RDP session. That’s how I determined that a valid Windows 11 Enterprise key did not work to re-key the Windows 11 Dev image that Dev Home downloads and uses.

There’s just one problem: Dev Home environments don’t let you grab an arbitrary local ISO on a drive. You can only use Environments that Microsoft makes available (these are essentially the same as the “gallery images” shown in Figure 9 earlier in this story).

For any other Windows images you might want to run as VMs, you must use Hyper-V Manager and its quick or slow create processes — that is, unless Microsoft responds to my feature request to add access to local ISOs to Dev Home’s existing image options when creating a VM.

Net-Net: It really could be easier

What I learned from digging into Dev Home and its capabilities — especially when using RDP — is that it’s entirely possible for Microsoft to update and rationalize its Hyper-V VM creation process. Whether or not they choose to do so is up to them. I certainly hope they’ll figure this out, and do just that.

Ideally, Microsoft would fix Hyper-V Manager to make it Windows 11-aware (and friendly). And then they might add local ISO access to image selection options in Dev Home. Frankly, I’d be happy with either of these approaches (you can always tweak a VM created in Dev Home in Hyper-V Manager through its many Settings categories and options). Although it would be great to see both happen, I’m not holding my breath…

Intel is fighting a perception battle

Intel’s plunging stock price, which as of noon New York time on Tuesday was the lowest it has been since 2010, could cost the chip giant its coveted spot on the Dow Jones Industrial Average (DJIA).

The news comes at a very difficult time for Intel, which is trying to maintain its enterprise relevance in the face of more effective generative artificial intelligence (genAI) campaigns from the likes of Nvidia.

Reuters reported that Intel, which was the second technology company to join the DJIA in the late 1990s, was “likely to be removed from the Dow” because of a “near 60% decline in the company’s shares this year that has made it the worst performer on the index and left it with the lowest stock price on the price-weighted Dow.”

Analysts and financial observers were mixed on the ultimate implications for enterprise IT executives. On the one hand, Intel’s installed enterprise base is so huge that it is not likely to face any imminent danger. That gives Intel a couple of years to turn things around.

But genAI is the perception problem. If Intel is seen as lagging in that space, that perception could hurt the company severely.

However, Ryan Shrout, president of Signal65, thinks Intel’s huge installed base will provide a buffer. He spent almost five years at Intel before departing in September 2023; his final role was senior director for client segment strategy in the graphics and AI group.

“Even though Intel appears to be so far behind in the world of technology based on their earnings report and the race versus Nvidia in the AI space, you have to keep in mind that something like 80% of the client market — laptops and PCs — use Intel chips,” Shrout said. “Even in the data center CPU space, 70% or so are using Intel Xeon processors. If Intel disappeared tomorrow, nobody has the capacity to fill that gap.”

But Shrout echoed analysts and pointed to AI strategy, or at least the perception of that strategy, as the overwhelming cause of Intel’s current difficulties. 

“The competition that’s come into the market was allowed to come in because Intel didn’t see the writing on the wall for the AI movement. That’s a self-inflicted blind spot,” Shrout said. 

Intel has taken various steps to try to strengthen its financial numbers, such as recently suspending its dividend, laying off about 15% of its employees, and splitting its foundry operations from its design teams.

“Intel CEO Pat Gelsinger and key executives are expected to present a plan later this month to the company’s board of directors to slice off unnecessary businesses and revamp capital spending,” said a Reuters report. “The plan will include ideas on how to shave overall costs by selling businesses, including its programmable chip unit Altera, that Intel can no longer afford to fund from the company’s once-sizeable profit.”

Forrester senior analyst Alvin Nguyen, who oversees the firm’s Intel coverage, said that he is still a fan of Intel’s long-term strategy, but he sees various problems with its execution.

“Foundry is very expensive. It’s capital intensive,” Nguyen said. “They have made a big bet on the foundry business. If it works, they will have the best semiconductor fab process [in the industry]. If they win the foundry battle, people will look at them differently.”

Some have questioned whether Intel was wrong to decline to invest in OpenAI, but Nguyen said that he thinks it might have been the right decision for Intel. Indeed, he said, “I am wondering if Microsoft today is questioning the wisdom of their decision [to invest in OpenAI].”

Nguyen added that Intel’s “push towards AI everywhere seems like a smart bet,” though he noted that Intel’s lack of position within mobile and IoT devices is a problem.

As for the prospect of Intel being removed from the DJIA, Nguyen doubted it would make much of an impact. “It’s just a status symbol. If they lose their Dow status, it’s more of a reputational hit than anything else,” he said.

Nguyen agreed with Shrout that Intel’s massive current installed base will insulate the company for at least a couple of years, giving them time to turn things around. 

“Intel is still in danger and the more hits they take, the worse their position,” Nguyen said.

Another Intel industry analyst is Mario Morales, the IDC group vice president for semiconductors and enabling technologies. 

“There is an ongoing battle for survival at Intel,” Morales said, adding that he thinks that splitting the company and selling off divisions may be the best move. “The parts of Intel are more valuable as pieces than as a whole.”

Morales’ sources have reported that Intel is “actively talking with more than 100 customers, but none of them have yet committed” to more major purchases, he said.

A critical problem for Intel in the perception realm is that it has been outsourcing too much; the manufacturing of both its Lunar Lake and Arrow Lake CPUs was almost entirely outsourced to Taiwan Semiconductor Manufacturing Company Limited (TSMC).

“Even Intel’s own products are being built somewhere else,” Morales said, suggesting that such a move is sending the wrong message to enterprise CIOs. This is happening just as those executives are thinking about creating their own on-prem operations for genAI deployments, in an attempt to gain more control than they now have in the cloud.

“Intel has always had a lot of technology that can enable genAI. They simply had the wrong product mix,” Morales said. As the industry moved from CPUs to GPUs, Intel didn’t move quickly enough, he said.

On an optimistic note, Morales said that there is industry precedent for exactly such a turnaround. Some ten years ago, AMD faced similar issues and overcame them.

“In 2014, AMD was a month or two away from bankruptcy,” Morales said, stressing that “because AMD was so close to death,” its CEO halted a wide range of side projects that were not central to their customers. 

“Intel has to suffer the tough pill [and decide that] ‘If we can’t lead (in a segment), then we can’t be in those spaces,'” Morales said. “It is well beyond a wakeup call. They are already late.”

OpenAI might use Apple’s TSMC for chips

In another interesting move that hints at a symbiotic relationship, ChatGPT maker OpenAI has reportedly followed Apple to become a lead customer for TSMC processors. Given the industry lead Apple has achieved with Apple Silicon, the move could be seen as tacit enthusiasm rather than symbiosis, but it follows reports that Apple might take a stake in OpenAI.

These moves by the biggest names in tech underscore the profound difference generative AI (genAI) has made in artificial intelligence, taking what has been part of the industry for decades and placing it at the forefront of the zeitgeist. That OpenAI plans to work with TSMC can also be seen as validation of Apple’s approach to silicon design, implicitly conceding the computational power these processors deliver while meeting real-world constraints on energy supply.

The first OpenAI chips under the purported deal are set to roll off the lines sometime in 2026.

A new platform battle?

As Apple stands on the cusp of becoming the world’s biggest multi-platform AI ecosystem, the move also hints at new competition down the road. After all, it was only earlier this year that OpenAI CEO Sam Altman was reported to be getting into chip manufacturing. Now, the company has reportedly booked early production capacity for chips built on TSMC’s A16 process, which is expected to enter production in 2026.

Despite using the same foundry, the processors won’t be the same as Apple’s; they will apparently be designed by Broadcom and Marvell.

While it is very possible that OpenAI wants to use its chips inside its own servers, it is also plausible that it has plans to introduce its own devices, or to offer its AI inside chips as an option for other computer hardware manufacturers.

It takes energy to make things happen

Everyone with a passing interest in genAI recognizes that the scale of energy consumption required to deliver server-based services using the tech is very, very high. Even at this point in genAI deployment, the energy being used is higher than that required by some smaller nations — and those demands will only increase.

With that in mind, Apple’s M-series chip message around computational performance per watt turns out to be even more prescient than earlier believed. After all, if genAI is to be woven into global use, it must meet those needs without using all the world’s energy; reducing energy demands is mandatory. This also implies tech firms will continue to make major investments in renewable energy supply to drive those server farms, and suggests the carbon offset market will be forced to prove its legitimacy, rather than continuing to be a kind of 21st century equivalent of Papal Indulgences (as George Monbiot once described it).

Power, profit, people

The chips Apple makes deliver excellent computational performance at significantly less power than rival processors. Once Apple’s production moves to TSMC’s A16 process, you’ll see another 8-10% spike in performance for up to 20% less power, a report claims.

That’s great for Mac, iPad, and iPhone users — who doesn’t want more powerful devices that use less energy? But for server-based services handling millions of requests daily, that power difference affects both environmental performance and operational costs in terms of energy bills.

With that in mind, OpenAI doesn’t need to be looking to become a hardware competitor to unlock value from chip design; its own running costs will be reduced dramatically through the introduction of more efficient chips — particularly as the number of people it serves grows from millions to billions.

While people in tech might see AI everywhere, most people haven’t begun using genAI tools and services just yet — something which is going to change within the next few weeks as Apple ships its AI-ready devices, starting with the next iPhone.

But if the direction of travel is anything to go by — a trajectory in which Apple and Microsoft seem set on investing in a company that could yet compete with both of them — it seems the people at the summit of Tech Power Mountain don’t merely see OpenAI as a service provider, but as a peer player in the future of IT. We just have to hope that neither they, nor the AI, are hallucinating.

Please follow me on LinkedIn and Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

European semiconductor group urges accelerated support and policy overhaul

The European Semiconductor Industry Association (ESIA) has urged the European Union to expedite financial support, develop a revised “Chips Act 2.0” package, and appoint a dedicated envoy to advocate for the semiconductor sector.

In a statement, the organization emphasized the need for policies that prioritize competitiveness, enabling the sector to expand and invest further in Europe.

“The adoption of the [current] EU Chips Act has been a fundamental building block,” ESIA said in the statement. “Its implementation and further development will be decisive for the EU’s success in championing the global race for technology leadership. To not lose momentum, ESIA advocates for an immediate ‘Chips Act 2.0’ process.”

ESIA represents major chipmakers like Infineon, STMicroelectronics, and NXP, as well as top equipment producer ASML and research institutions such as imec, Fraunhofer, and CEA-Leti.

Challenges in the current Chips Act

Europe’s existing Chips Act, which came into effect in 2023, aims to secure 20% of the global semiconductor market by 2030.

However, achieving this will require speeding up the approval process for “first-of-a-kind” manufacturing facilities, according to the ESIA.  

Manish Rawat, a semiconductor analyst at TechInsights, emphasized that any new Chips Act should prioritize streamlining aid processes to speed up approval and disbursement. Simplifying bureaucratic procedures and setting clear timelines for funding decisions could help minimize project delays.

“Secondly, the act should focus investment on niche areas where Europe has a competitive edge, such as advanced semiconductor equipment and power semiconductors,” Rawat said. “By concentrating resources in these areas, the EU can optimize investments and enhance its market position.”

Strengthening public-private partnerships and incentivizing local supply chain development are also essential steps to reduce reliance on external sources.

“Finally, the ‘Chips Act 2.0’ should include mechanisms for flexibility and adaptability to rapidly respond to industry shifts and geopolitical changes, ensuring that the EU’s strategy remains relevant and effective in the evolving semiconductor landscape,” Rawat added.

Overcoming export restrictions

A significant challenge has been the trade restrictions placed on companies like ASML regarding exports to China. The ESIA has urged a more constructive approach, advocating for incentives over protectionism.

“By curbing sales of advanced semiconductor manufacturing equipment, European companies risk losing substantial revenue streams, which could weaken Europe’s position as a leader in this high-tech industry,” Rawat said. “Moreover, the reduced market size might lead to a slowdown in research and development investments, ultimately hampering innovation within Europe’s semiconductor ecosystem.”

Moreover, these restrictions could trigger retaliatory actions from affected nations, potentially disrupting global supply chains and driving up operational costs for European companies.

“To mitigate these risks, the EU could consider implementing targeted restrictions that allow the sale of certain technologies while safeguarding the most sensitive advancements,” Rawat said. “Another approach could involve shifting from purely restrictive measures to incentivizing the development of secure, exportable technologies.”

PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

Multinational consultancy PricewaterhouseCoopers (PwC) expects to spend billions of dollars to build out its use of artificial intelligence (AI) and focus more of its client services on the technology.

Last year, for example, PwC announced it would spend $1 billion over three years to expand and scale its AI offerings, and another $2.3 billion on “modernizing” its internal platforms to embed new generative AI (genAI) tools. One initiative is called My+, which is focused on using the technology to “personalize careers and give employees more agency in how and where they work.”

As part of that program, PwC US is upskilling its 65,000 employees on AI tools and capabilities to improve efficiency and productivity. The initiative also trains employees how to advise clients on the benefits of AI.

Two years ago, PwC named Yolanda Seals-Coffield as the firm’s chief people and inclusion officer; her duties included upskilling tens of thousands of workers on the use of AI. Then, in July, as part of the larger initiative, PwC announced it had appointed its first chief AI officer, Dan Priest.

Over the last five years, the number of chief AI officers (CAIOs) has almost tripled, according to LinkedIn data. And the Biden administration has required many federal agencies to name CAIOs to promote the use of AI and manage its risks.

Though it’s a relatively new title, the CAIO role is gaining prominence at organizations deploying genAI. The position requires someone who can handle a myriad of overlapping responsibilities, not the least of which is extracting corporate value from rapidly evolving technology.

Priest has been tasked with leading PwC’s US operations and helping clients navigate the complexities and opportunities of AI. Computerworld recently asked Priest and Seals-Coffield to describe their roles and how AI has affected the organization’s business strategy and workforce.

CAIO Dan Priest:

PwC CAIO Dan Priest

What was your role before becoming PwC’s CAIO, and why do you believe organizations need a CAIO today? Can those duties also be performed by existing IT leaders? “Prior to becoming PwC’s US CAIO, I was a cloud and digital leader at the firm, helping clients across industries transform business models and create essential advantages using data, tech and modern architectures. The need for a CAIO shows us our workforce is changing. Realizing the full potential of [AI] demands new skills, knowledge and ways of working from everyone across an organization. And for the C-suite in particular, there’s a critical need to close a vision and skills gap to establish AI-native operations and business models.

“It’s a big job, and it’s the reason why more and more companies, including PwC, have added — or are considering whether they need to add — a CAIO to their executive team.

“For most companies today, there’s no single existing role in the C-suite with a clear, natural mandate to oversee AI, and in many organizations the responsibility has fallen to the chief technology officer or chief information officer. But as organizations look to both drive growth and transform operations with AI, a dedicated CAIO can steer these initiatives to success.

“The CAIO also serves as a connection point between leaders across the entire organization, bridging the gap between AI capabilities and business objectives. By collaborating with leaders from various functions, the CAIO can gain a comprehensive understanding of their needs and align AI initiatives accordingly.”

What are your primary responsibilities? “In this new role, my top priority is to continue leveraging AI to help our firm and our clients reinvent business and strengthen the workforce. We are focused on unlocking the transformative power of AI to help our clients achieve an advantage in an increasingly competitive market; I will help drive that new strategy.

“To do this, I am focused on these five key areas with clients: 

  • Helping leaders assess the impact of AI on their business function — Understand what level of efficiencies are made possible by AI, function by function, and the degree to which those efficiencies will erode with AI-driven pricing pressures. This will also help determine what new revenue opportunities are possible. 
  • Updating their strategies to win with AI — Depending on the degree to which AI will impact our clients’ business, their business strategy will need to be updated. Knowing how to win with AI will be a critical success factor.  
  • Training and activating their people — AI’s value is unlocked when it becomes intrinsic, embedded in everything we make and do. A company’s workforce will make that possible if they know how to work with AI and are motivated to adopt it. 
  • Changing their architecture and [operating] model — New AI-driven efficiencies and savings … will be available to every business, making it possible to harvest savings to remain cost competitive. Modern, AI-powered architectures will enable a wave of innovation. To avoid being disrupted by being a disruptor, it’s important to invest in your AI architecture.  
  • Running a responsible AI agenda — People will remain the centerpiece of your business strategy; they will need to know their role with AI to trust it, and customers will need to know you’re using their data responsibly. Responsible AI is essential.”

When did PwC begin embracing AI (or genAI), and for what purposes? Where is it primarily used today — marketing, software development, customer service? “PwC has been at the forefront of AI for many years. Our technologists know how to harness the technology for both the firm and clients’ benefit. And we’re not just talking about AI; we’re showing clients how to use this tech to transform their businesses and workforces with our unique ecosystem approach and tech alliances. 

“Just last year, we made a three-year, $1 billion commitment to expand and scale our AI capabilities and help our clients reimagine their businesses through the power of genAI. Building upon this commitment, in May we signed an agreement with OpenAI, making PwC OpenAI’s first reseller for ChatGPT Enterprise and largest user of the product, further enhancing our leadership position in AI.

“We also have strategic alliances with all major AI technology vendors, including foundation model providers AWS, Anthropic, Google, Meta, and Microsoft.  We leverage alliance relationships with leading enterprise application vendors that are integrating genAI capabilities into their products, including Adobe, Google, Microsoft, Oracle, Salesforce, SAP and Workday.”

How has AI increased productivity and/or efficiency at PwC? “Our firm has already experienced significant benefits from using AI tools. Those who regularly utilize our genAI tools have observed efficiency gains of 20% to 30%. This has allowed our employees to focus on more strategic work and deliver greater value to our clients.

“There are various ways to measure the efficiencies brought about by AI. These can include increased revenue or price margins, cost savings, time saved, improved quality and more, depending on the specific application of AI. It is important to note that if the sole objective is cost savings, it may be challenging to fully make AI a core of a business. However, by considering and targeting multiple areas of impact, a higher return on investment can be achieved.”

Are you primarily using SaaS-based AI from Microsoft, Amazon, Google and the like, or do you use your own open-source language models to create task-specific AI? What models are you using? “We are both buying and building the AI tools we use. Last year, we built ChatPwC, an internal generative AI tool integrated with Azure OpenAI services along with internal innovation and services to help our workforce accelerate the use of AI. Most recently, we also became OpenAI’s first reseller for ChatGPT Enterprise and the largest user of the product.

“We follow an ecosystem approach at PwC … as well as key alliances with major tech vendors, which gives us early access to their AI technologies and a head start on responsibly developing the solutions our clients need.”
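
For readers less familiar with that kind of build-versus-buy integration, the sketch below shows, in general terms, how an internal assistant might call an Azure OpenAI chat deployment using the openai Python SDK. The endpoint, deployment name, and prompts are hypothetical placeholders for illustration; they are not details of PwC’s ChatPwC implementation.

```python
# Minimal sketch of an internal assistant calling an Azure OpenAI chat deployment.
# Endpoint, deployment name, and prompts are hypothetical placeholders.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="internal-assistant-gpt4o",  # the Azure *deployment* name, not the base model name
    messages=[
        {"role": "system", "content": "You are an internal assistant. Flag anything requiring human review."},
        {"role": "user", "content": "Summarize the key risks described in this memo: ..."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

In practice, an internal tool of this kind would typically layer authentication, logging, and data-handling controls on top of the raw API call.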

Where do you see the future of AI: small, in-house, task-specific models or a continuation of cloud-based models? “AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech; it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value.

“But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value.   

“In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.”

What were the greatest challenges to deploying AI, and how did you address security and privacy risks? “Deploying AI can pose a few challenges specific to one’s organization, with some of these hurdles including:

  • Data quality and availability: Getting access to clean and relevant data in sufficient quantities.
  • Talent and expertise: Finding skilled AI experts and data scientists.
  • Integration with existing systems and infrastructure: Making sure that AI systems work seamlessly with current systems and infrastructure.
  • Scalability and performance: Handling increased workloads and keeping up with efficient performance as AI applications grow.
  • Cost and resource requirements: Developing and maintaining AI systems can be costly, requiring investments in infrastructure and resources.

“With any technology we deploy at PwC, we consider what could go wrong and what safeguards are in place. According to our recent Responsible AI survey, when it comes to assessing the risks of organizations’ AI and genAI efforts, only 58% of respondents have completed a preliminary assessment of AI risks in their organization. Responsible AI (RAI), however, can enable business objectives far beyond risk management — and many executives report that their organizations are targeting this value.

“At PwC, we take [these] actions to help manage AI risk and also encourage our clients to do the same, including

  • Create ownership. Today, ownership of RAI is varied and often fragmented. It needs to be owned by a single individual who can then assemble a multi-disciplinary team to support the business. 
  • Think beyond AI. You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.  
  • Act end-to-end. Responsible AI needs to start at the start — assessing and prioritizing potential use cases based on both value and risk — and go through the entire AI life cycle, including output validation and performance monitoring. 
  • Move beyond the theoretical. While many companies have done the paper exercise of setting up policies, governance structures and committees, this is just the start. RAI should become operational, scaling across the business. 
  • Focus on ROI. While it has been challenging to date to quantify RAI’s value, that’s changing quickly. Forthcoming regulations, the need for AI to be audited and rising societal expectations will all contribute to the ROI equation. Companies that are already advancing their RAI efforts will be better prepared to respond and will be least burdened by changing expectations and requirements.
  • Assess impact on trust. Develop a plan for transparency and ongoing reporting to stakeholders to monitor whether your RAI programs have in fact earned trust.”

If there were one piece of advice you’d offer other organizations on deploying AI, what would that be? “We all know navigating genAI is complex. …My biggest piece of advice is having a detailed roadmap, as it really is critical to scaling AI. At PwC, we follow a leadership guide and have found great success in being able to provide clients with the knowledge and insights necessary to navigate these complexities and make informed decisions about incorporating AI into their organizations.”

Chief People and Inclusion Officer Yolanda Seals-Coffield:

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?quality=50&strip=all 1200w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=150%2C150&quality=50&strip=all 150w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=300%2C300&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=768%2C768&quality=50&strip=all 768w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=1024%2C1024&quality=50&strip=all 1024w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=697%2C697&quality=50&strip=all 697w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=168%2C168&quality=50&strip=all 168w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=84%2C84&quality=50&strip=all 84w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=480%2C480&quality=50&strip=all 480w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=360%2C360&quality=50&strip=all 360w, https://b2b-contenthub.com/wp-content/uploads/2024/08/Yolanda-Seals-Coffield-chief-people-and-inclusion-officer-at-PwC-.jpeg?resize=250%2C250&quality=50&strip=all 250w" width="1024" height="1024" sizes="(max-width: 1024px) 100vw, 1024px">

Chief People and Inclusion Officer Yolanda Seals-Coffield

PwC

What were your greatest challenges in retraining and/or upskilling PwC’s workforce to take advantage of genAI capabilities? “Employee buy-in is crucial for any new program or initiative. We want them to be excited and engaged in what we’re doing as a firm.

“Last year, we launched our AI upskilling strategy, My AI, as part of our My+ people experience. It has been successful because we’ve brought key leaders and teams along from the very beginning. Our [learning and development] team, our technology team and senior leadership teams, to name just a few, have been working closely together to deliver a high-quality program [that] 95% of our employees engaged in last year to learn.”

How did PwC go about deciding who needed what training (whether it was those in IT with tech skills versus those in business roles)? “We believe that everyone at the firm, myself included, should be a savvy and responsible user of genAI technology. It is rapidly transforming the business landscape, and we feel everyone can benefit from continuous learning to effectively collaborate with AI systems.

“That’s why My AI aims to upskill and train all 75,000 of our employees in genAI, no matter their level or work function. Examples of what our genAI Foundations learning bundle includes:

  • A module that teaches business leaders how genAI can produce key insights that will help reinvent the future of their business.
  • A module called “A non-techie’s 10-minute guide to genAI,” which focuses on high-level, foundational basics on what AI and genAI are and their prevalence in our daily lives.”

How did you execute the upskilling and retraining of PwC’s workforce and how is that ongoing today or expanding into different training? “A one-size-fits-all methodology does not work when upskilling an entire organization. No two individuals are the same when it comes to their preferred learning style. For example, a manager in our Tax practice may be a visual learner. She prefers watching videos and participating in demonstrations. Another manager in the Tax practice may like to read and listen to really understand and reinforce what he’s learning.

“Therefore, since the launch of My AI, the learning modalities have expanded to not only include the traditional e-learning courses but also podcasts, videos, thought leadership, in-person trainings and gamification. We’ve also held firm-wide prompting parties, where our people came together in-person and virtually to get hands-on experience with our genAI tools. Our people had a great time, and these events really helped drive the adoption of our tools.

“The multitude of learning opportunities we are offering our employees reflects vast possibilities of what it means to become a savvy, responsible user of genAI technology. We’ve said: ‘First, let’s help you understand how to use the technology. Second, let’s help you reimagine how to use genAI to enable the work you do every day.’ This approach helps demystify AI and show our people that the technology is designed to work with them — not against them — and positively impact their careers.”

Did the way you train employees on security and privacy change due to the rollout of AI? “A key component of My AI is teaching our people about responsible use of genAI. Our learning courses are packed full of the latest leading practices, industry standards and other content that we may not think of when it comes to genAI, such as inclusivity and other possible biases.

“We also encourage our people to dive into the tools and put what they learn into practice, frequently reinforcing genAI prompting strategies. And, in training our people to use genAI responsibly, we emphasize the need to review all genAI output and apply human oversight at every stage.”

What advice would you offer organizations hoping to reskill/upskill their workforce for AI? “Implementing a genAI upskilling program can be a big undertaking! I believe the most important thing is to realize that this should be a test-and-learn process. AI is evolving so quickly that it requires us to be agile and adjust along the way. We’re all learning together what’s going to change or shift in the next six months and year.

“When we designed My AI, we did it in chunk-sized sprints. We slowly rolled out the different learning courses, and along the way incorporated different modalities of learning based on feedback we were getting from our people to make sure what we were sharing resonated with employees.

“Make it fun! Think of ways to get your people excited and curious about AI. Help them realize how genAI can make them more valuable and future-proof their skills.”

China is a mere three years behind TSMC in some chip technology

China’s sophistication in some of its chip technology is closing to within about three years of top chip manufacturer Taiwan Semiconductor Manufacturing Co. (TSMC), despite the best efforts of the US to delay its advancement through a broad strategy of trade restrictions.

Analysis by Tokyo-based TechanaLye, a company that makes its business disassembling electronic devices and analyzing their component technology, found that a processor in a new Huawei smartphone released in April rivals TSMC chips in processing capability, according to findings reported in Nikkei Asia.

TechanaLye CEO Hiroharu Shimizu showed Nikkei semiconductor circuit diagrams for two application processors used in Huawei smartphones: one from Huawei Technologies’ Pura 70 Pro, released in April, and one from a top Huawei model released in 2021, according to the report.

Huawei subsidiary HiSilicon designed the Kirin 9010 chip from the Pura 70 Pro; it was mass-produced by Semiconductor Manufacturing International Corp. (SMIC), a major Chinese contract chipmaker. The other chip design analyzed and presented was a Kirin 9000 chip, also designed by HiSilicon but produced by TSMC.

SMIC’s 7-nanometer (nm) mass-produced chip is 118.4 square millimeters, while TSMC’s 5-nm chip is 107.8 sq. mm, according to the report. In general, a smaller nanometer size means higher performance and a smaller chip. However, TechanaLye found that TSMC’s Kirin 9000 chip and SMIC’s Kirin 9010 chip were nearly comparable in performance, though a difference in yield still exists.
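
As a rough, back-of-the-envelope illustration using only the figures reported above, the SMIC-made die works out to roughly 10% larger in area than the TSMC-made die, one visible cost of the older process node even when performance is comparable:

```python
# Back-of-the-envelope comparison using only the die areas cited above.
smic_7nm_area_mm2 = 118.4  # Kirin 9010, SMIC 7 nm
tsmc_5nm_area_mm2 = 107.8  # Kirin 9000, TSMC 5 nm

ratio = smic_7nm_area_mm2 / tsmc_5nm_area_mm2
print(f"SMIC die is {ratio:.3f}x the TSMC die, i.e. about {(ratio - 1) * 100:.0f}% larger.")
# -> SMIC die is 1.098x the TSMC die, i.e. about 10% larger.
```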

Are US trade restrictions failing in intent?

The findings demonstrate that, despite the Biden administration’s ban on exporting certain chip technology to China (an effort to stymie development there, driven by fears of the nation’s growing geopolitical power), China continues to evolve its processor technology, buoyed by a surge of activity among in-country manufacturers, Shimizu noted.

“The US regulations so far have only slightly delayed Chinese innovation, while sparking efforts by the Chinese chip industry to boost domestic production,” he told Nikkei Asia, according to the report.

Indeed, HiSilicon, which designed about 14 of the 37 semiconductors in the Pura 70 Pro, is also demonstrating improvements that reflect Chinese progress, according to Shimizu. The phone’s other chips, including those for memory, sensors, power supply, and display, came from other Chinese and foreign manufacturers, with the bulk of them (86%) produced in China.

Last October, the Biden administration issued new export controls that block US companies from selling advanced semiconductors as well as equipment used to make them to certain Chinese manufacturers unless they receive a special license.

Then in mid-December, the administration expanded those restrictions, blocking 36 additional Chinese chip makers from accessing US chip technology, including Yangtze Memory Technologies Corp. (YMTC), China’s largest memory chipmaker. The purpose behind the regulations, according to officials, is to deny China access to advanced technology that could be used for military modernization and human rights abuses.

The results of TechanaLye’s analysis suggest that US restrictions may end up affecting only cutting-edge processors for servers aimed at advancing technologies such as artificial intelligence (AI), without trickling down to technology such as smartphones, according to Shimizu.

“As long as the chips do not pose a military threat, the US is probably allowing their development,” he told Nikkei.

Further advancement would cause a ripple effect

Though it’s too soon to know whether, or when, China will catch up to TSMC and other top manufacturers in processor development, doing so would “represent a significant shift in the global semiconductor landscape,” noted Akshat Vaid, a partner at Everest Group. Such a shift would likely have a ripple effect on global competition, geopolitics, technology, and economics.

“Such a development would diversify the semiconductor supply chain, reducing reliance on a few vendors and lessening the impact of regional disruptions,” he told Computerworld.

China’s advancement in the space also could tip the geopolitical balance in technology and trade, and create even more competition and conflict between China and Western nations, “given the strategic importance of the semiconductor industry and its broader implications for other sectors,” Vaid said.

This ultimately could spur disruptive changes in semiconductor supply-chain strategies and new policies to support domestic semiconductor industries or regulate technology transfer, security, and trade concerns, he added.