Intel’s CHIPS Act grant reduced as production delays and losses mount

The US government has scaled back Intel’s preliminary CHIPS Act grant from $8.5 billion to under $8 billion, reflecting concerns over the company’s delayed investments and financial woes, The New York Times reported. The funding was part of the government’s effort to boost domestic semiconductor manufacturing amid growing global competition.

Intel, originally seen as the largest beneficiary of the CHIPS Act, has struggled to meet expectations following the biggest quarterly loss in its 56-year history. The cut coincides with a $3 billion military contract offered to Intel to produce chips for the Department of Defense, the report said, citing sources who did not wish to be identified.

In March 2024, the Biden administration and Intel signed a preliminary memorandum of terms (PMT) for an $8.5 billion funding package. This support was part of Intel’s broader plan to invest over $100 billion in expanding its US manufacturing operations, including the construction of new chip facilities in Arizona, Ohio, Oregon, and New Mexico.

The agreement also included up to $11 billion in additional loans from the US government, aimed at strengthening Intel’s position as a key player in the evolving AI-driven semiconductor landscape.

The decision to reduce the grant underscores the challenges Intel faces as it attempts to reclaim technological leadership while fulfilling the US administration’s vision of revitalizing domestic chip manufacturing.

However, there is no clarity on the other terms and conditions of the reduced grant package.

Investment delays and strategic setbacks

The funding reduction comes as Intel pushes back the timeline for completing its Ohio chip manufacturing project from 2025 to the end of the decade. The delay, coupled with persistent challenges in matching the technological advancements of rivals like Taiwan Semiconductor Manufacturing Company (TSMC), has dampened confidence in the company’s ability to deliver on its commitments.

“The delay in Intel’s investment is especially concerning given the current surge in demand for chips, driven by the rise of AI,” said Rachita Rao, senior analyst at Everest Group. “With AI transforming the industry, the existing IT infrastructure is becoming insufficient to handle its processing requirements.”

Intel’s difficulties come as the Biden administration seeks to reduce US reliance on Asian supply chains through the CHIPS Act, a $39 billion initiative aimed at boosting domestic chip production. In March, President Joe Biden highlighted Intel’s role in transforming the semiconductor industry during a high-profile visit to Arizona.

However, Intel’s setbacks now present significant hurdles to achieving that vision, the report noted.

Oversight and milestones

Commerce Department officials, tasked with ensuring accountability for CHIPS Act funding, have set stringent performance benchmarks, such as building plants, producing chips, and securing customer commitments for domestically made products.

Intel’s struggles to meet these goals complicated its negotiations for the final grant terms, according to the report.

Meanwhile, TSMC has secured a $6.6 billion grant under the program while committing over $65 billion of its own funds to US factory construction.

“Additionally, Intel is pursuing riskier strategies at a time when TSMC is focusing on a low-risk, high-production model that appears to be yielding strong results,” Rao noted. “Given Intel’s inability to effectively compete in the current market, the reduction in funding seems justified to some extent.”

This is certainly not good news for Intel, which has been grappling with significant financial challenges. The company recently reported an 85% year-on-year decline in profits and announced plans to cut 15,000 jobs. The financial downturn has also prompted Intel to suspend dividend payments.

The path ahead for US chip manufacturing

The Biden administration viewed the funding as a strategic initiative to lessen reliance on foreign semiconductor supply chains. The US has highlighted the program’s success in driving factory construction, pointing out that the country will soon host facilities from all five of the world’s leading chipmakers.

“Intel is struggling to keep pace with its competitors, particularly TSMC, which dominates the market with its competitive pricing and significant market share,” Rao said.

Intel’s success is vital not just for the company, but for the broader US semiconductor ecosystem. As AI is poised to drive future demand for advanced chips, Intel’s manufacturing capabilities and technological innovations will be crucial in ensuring the US remains competitive in the global market.

However, the reduction in Intel’s grant underscores the challenges of balancing federal investments with corporate accountability. A query to Intel remains unanswered.

Just what the heck does an ‘AI PC’ do?

Virtually every PC manufacturer has announced, or is already producing, machines with embedded artificial intelligence (AI) functionality. The question is: Why?

Generative AI (genAI) for consumer use already exists through any number of cloud-based services, from OpenAI’s ChatGPT to Google’s Gemini and others.

Even so, next year will be “the year of the AI PC,” according to Forrester Research.

The research firm defines an AI PC as one that has an embedded AI processor and algorithms specifically designed to improve the experience of AI workloads across the central processing unit (CPU), graphics processing unit (GPU), and neural processing unit, or NPU. (NPUs allow the PCs to run AI algorithms at lightning-fast speeds by offloading specific functions.)
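
What that offloading looks like in practice can be made concrete. Below is a minimal sketch, assuming ONNX Runtime as the inference stack; the model path is a placeholder, and the QNNExecutionProvider shown targets Qualcomm NPUs and is only present in builds that include it. Providers are tried in order, so anything the NPU can’t run falls back to the CPU.

```python
# Minimal sketch: running an ONNX model with the NPU preferred and the CPU as
# fallback. "model.onnx" is a placeholder for any locally exported model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image shape
outputs = session.run(None, {input_name: dummy})

print(session.get_providers())  # shows which providers actually loaded
```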

“While employees have run AI on client operating systems (OS) for years — think background blur or noise cancellation — most AI processing still happens within cloud services such as Microsoft Teams,” Forrester explained in a report. “AI PCs are now disrupting the cloud-only AI model to bring that processing to local devices running any OS.”

AMD, Dell, HP, Intel, Apple, Nvidia, and Lenovo have all been touting AI PC innovations to come over the next year or so. Those announcements come during a crucial timeframe for Windows users: Windows 10 will hit its support end of life next October, giving them a real reason to upgrade to Windows 11 — and buy new hardware.

Gartner’s latest worldwide AI PC shipment forecast projects a total of 114 million units in 2025, an increase of 165.5% from this year. Key findings in the forecast include:

  • AI PCs will represent 43% of all PC shipments by 2025, up from just 17% in 2024. 
  • The demand for AI laptops is projected to be higher than that of AI desktops, with shipments of AI laptops accounting for 51% of all laptops in 2025.
  • By 2026, AI laptops will be the only choice of laptop available to large businesses, up from less than 5% in 2023.

“The debate has moved from speculating which PCs might include AI functionality, to the expectation that most PCs will eventually integrate AI NPU capabilities,” said Ranjit Atwal, senior director analyst at Gartner. “As a result, NPU will become a standard feature for PC vendors.”

As the PC market moves to AI PCs, x86 processor dominance will lessen over time, especially in the consumer AI laptop market, as Arm-based AI devices grab more share from Windows x86 AI and non-AI laptops, according to Atwal. “However, in 2025, Windows x86-based AI laptops will lead the business segment,” Atwal said.

But why bother embedding AI algorithms in a computer’s firmware or software — thus, requiring more expensive processors to power them — when you can access those same tools on the web? According to Tom Butler, Lenovo’s executive director of worldwide commercial product management, AI will fundamentally transform PCs, making them not only smarter but also more responsive and secure.

“We see AI-enabled PCs evolving to provide more personalized, adaptive experiences that are tailored to each user’s needs,” Butler said. “The rise of generative AI was a pivotal moment, yet reliance on cloud processing raises concerns around data privacy.”

Each component of a PC plays a unique role in making AI tasks efficient, but the NPU is key for accelerating AI computations with minimal power consumption, according to Butler. In general, he said, AI PCs assist in or handle routine tasks to be more efficient and intuitive for users without the need to access an external website or service.

Apple, for example, last month announced an updated iMac powered by its new M4 chip with an NPU core and Apple Intelligence, an AI-powered assistant that can help users write emails or other content. (More intensive or complex tasks can still be handed off to OpenAI’s ChatGPT.) Apple also unveiled M4-powered MacBook Pro laptops and Mac minis — all while touting their strength in handling AI-related tasks.

AI PCs can also boost productivity by handling routine tasks such as scheduling and organizing emails, and by enhancing collaboration with real-time translation and transcription features, according to Butler.

Stuff AI does on PCs (Image: Intel Corp.)

Depending on the device, AI technology can also seamlessly integrate with cloud and edge computing for real-time data processing, enabling faster and more informed decision-making. AI-enabled PCs also increase device security by automating threat detection and adapting to new threats as they arise.

For example, Butler said, Lenovo’s Smart Connect enhances device synergy, allowing users to transition seamlessly between Lenovo devices, while ThinkShield provides security across the ecosystem, protecting users in real time.

AI-powered PCs, however, generally require more RAM to handle advanced tasks. Apple, for example, is moving from a minimum of 8GB of RAM to 16GB.

Lenovo took a slightly different approach in its RAM support of AI tasks. The company’s “Smarter AI for All” tries to match the complexity of tasks to processing needs. For example, 16GB is suitable for lighter AI tasks when combined with a more powerful NPU, while 32GB or more is suited for users handling complex applications, large language models, or deep learning.

“Users working within AI development spheres will most likely require more RAM, combined with powerful GPU and CPU to ensure low latency and AI model fine-tuning capabilities,” Butler said.

Could AI make things harder?

Ironically, though, a new survey and study conducted by Intel found that current AI PC owners spend more time on tasks than people who use PCs without AI technology. The survey of 6,000 consumers in Germany, the UK, and France found that about 53% believe AI-enabled PCs are only good for “creatives or technical professionals,” and 44% see AI PCs mainly as “a gimmick or futuristic technology.”

In all, the survey showed that users spend a cumulative 899 minutes, nearly 15 hours, on computer-related chores weekly. Intel’s study showed that current AI PC owners spend longer on tasks than their counterparts using traditional PCs because many spend “a long time identifying how best to communicate with AI tools to get the desired answers or response.”

“Organizations providing AI-assisted products must offer greater education in order to truly showcase the potential of ‘everyday AI,’” Intel argued.

What saps time on a PC? (Image: Intel Corp.)

When its uses are understood, leveraging AI tools to handle repetitive tasks, streamline workflows, or even assist in research can greatly boost productivity, according to a 2023 study by AI safety and research company Anthropic.

While only 32% of respondents who aren’t familiar with AI PCs would consider purchasing one for their next upgrade, that percentage jumps to 64% among respondents who have used one before. The survey and study also stated that “early data” suggests AI-enabled PCs can save users about 240 minutes a week on routine tasks.

The problem is that many AI PC owners simply aren’t aware of the benefits of AI or don’t know how to access the tools, Anthropic argued. “Despite AI PCs becoming more available to people, 86% of respondents have either never heard of or used an AI PC.” Meanwhile, those respondents who already own an AI PC are actually spending longer on digital chores than those using a traditional PC.

The study concluded that “greater consumer education is needed to bridge the gap between the promise and reality of AI PCs.”

For business-to-business (B2B) purposes, AI PCs offer a promising solution, according to Mike Crosby, executive director at industry advisory service Circana.

Just three of the 20 US business sectors defined by the federal government (professional and scientific, finance, and health care) represent nearly 50% of total AI PC unit sales, Crosby said. “Companies are evaluating these new technologies carefully, weighing the benefits of innovation against the risks to their established environments.”

The upcoming October 2025 sunset of Windows 10 support further amplifies the urgency for AI PCs, with an estimated 60% to 70% of the installed base still on older versions. Microsoft’s Extended Security Updates (ESU) program offers a temporary reprieve, but Circana expects modernization to ramp up quickly as the deadline approaches.

AWS and Anthropic ink deal to accelerate model development, enhance AI chips

The announcement that Amazon Web Services (AWS) will be Anthropic’s primary training partner confirms rumors of an even tighter partnership between the two companies.

They announced Friday that Anthropic will use AWS Trainium processors to train and deploy its Claude family of models. Further, as predicted earlier this month, Amazon will invest an additional $4 billion in the startup, making its total investment $8 billion.

AWS is already Anthropic’s primary cloud provider, and the OpenAI rival will now also primarily use Trainium and Inferentia chips to train and deploy its foundation models. Anthropic will also contribute to Trainium development in what the companies call a “hardware-software development approach.”

While it’s unclear whether the agreement requires Anthropic to exclusively use AWS chips, it is a move by Amazon to challenge the likes of Nvidia and other dominant players as the AI chip race accelerates.

“This is a first step in broadening the accessibility of generative AI and AI models,” Alvin Nguyen, Forrester senior analyst, told Computerworld.

Accelerating Claude development

Anthropic, which launched in 2021, has made significant progress with its Claude large language models (LLMs) this year as it takes on OpenAI. Its Claude 3 family comprises three LLMs: Sonnet, Haiku (its fastest and most compact), and Opus (for more complex tasks), which are all available on Amazon Bedrock. The models have vision capabilities and a 200,000 token context window, meaning they support large volumes of data, equal to roughly 150,000 words, or 500 pages of material.
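
As a concrete illustration of what “available on Amazon Bedrock” means for developers, here is a minimal sketch of calling Claude 3 Sonnet through Bedrock with boto3. It assumes AWS credentials and Bedrock model access are already configured; the model ID is one of Anthropic’s published Claude 3 identifiers, and availability varies by region.

```python
# Minimal sketch: one Claude 3 request via Amazon Bedrock's runtime API.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": "In one sentence, what is Amazon Bedrock?"}
        ],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])  # the model's text reply
```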

Notably, last month Anthropic introduced “Computer Use” to Claude 3.5 Sonnet. This capability allows the model to use computers as people do; it can quickly move cursors, toggle between tabs, navigate websites, click buttons, type, and compile research documents in addition to its generative capabilities. All told, the company claims that Sonnet outperforms all other available models on agentic coding tasks.
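
Computer Use is exposed as a beta tool type in Anthropic’s API. The sketch below reflects the October 2024 beta flags; note that the model only returns the actions it wants taken (clicks, keystrokes, screenshots), and executing them against a real screen is left to a host-side loop you write yourself.

```python
# Hedged sketch: requesting the Computer Use beta tool from Claude 3.5 Sonnet.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open a browser and check the news."}],
    betas=["computer-use-2024-10-22"],
)

# The reply contains tool_use blocks describing mouse/keyboard actions that
# your own code must perform and report back on in the next request.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```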

Claude has experienced rapid adoption since its addition to Amazon Bedrock, AWS’ fully managed service for building generative AI applications, in April 2023, and now supports “tens of thousands” of companies across numerous industries, according to AWS. The foundation models are used to build a range of functions, including chatbots, coding assistants, and complex business processes.

“This has been a year of breakout growth for Claude, and our collaboration with Amazon has been instrumental in bringing Claude’s capabilities to millions of end users on Amazon Bedrock,” Dario Amodei, co-founder and CEO of Anthropic, said in an announcement.

The expanded partnership between the two companies is a strategic one for both sides, signaling that Anthropic’s models are performant and versatile, and that AWS’ infrastructure can handle intense generative AI workloads in a way that rivals Nvidia and other chip players.

From an Anthropic point of view, the benefit is “guaranteed infrastructure, the ability to keep expanding models’ capabilities, and showcase them,” said Nguyen, noting that it also expands their footprint and access.

“It’s showing that they can work well with multiple others,” he said. “That increases comfort levels in their ability to get training done, to produce models, to get them utilized.”

AWS, meanwhile, has “a premiere client, one of the faces of AI” in Anthropic, said Nguyen.

From silicon through the full stack

As part of the expanded partnership, Anthropic will also help to develop and optimize future versions of AWS’s purpose-built Trainium chip. The machine learning (ML) chip supports deep learning training for 100 billion-plus parameter models.

Anthropic said it is working closely with AWS’ Annapurna Labs to write low-level kernels that allow it to interact with Trainium silicon. It is also contributing to the AWS Neuron software stack to help strengthen Trainium, and is collaborating with the chip design team around hardware computational efficiency.

“This close hardware-software development approach, combined with the strong price-performance and massive scalability of Trainium platforms, enables us to optimize every aspect of model training from the silicon up through the full stack,” Anthropic wrote in a blog post published Friday.
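
To make “from the silicon up through the full stack” concrete, here is a hedged sketch of the developer-facing end of that stack: compiling a PyTorch model for Trainium or Inferentia with AWS’ torch-neuronx library. It runs only on an AWS Trn/Inf instance with the Neuron SDK installed, and the toy model is a placeholder.

```python
# Hedged sketch: compiling a PyTorch model for AWS NeuronCores.
import torch
import torch_neuronx  # part of the AWS Neuron SDK; Trn/Inf instances only

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example = torch.rand(1, 128)

# trace() invokes the Neuron compiler and returns a module that executes
# on the accelerator rather than the CPU/GPU.
neuron_model = torch_neuronx.trace(model, example)
print(neuron_model(example).shape)  # torch.Size([1, 10])
```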

This approach provides an advantage over more general-purpose hardware, such as Nvidia’s GPUs, which does more than what is “absolutely necessary,” Nguyen pointed out. The companies’ long partnership also means they may have mitigated the performance optimization advantages that Nvidia has with its CUDA platform.

“This type of deep collaboration between the software and hardware engineers/developers allows for optimizations in both the hardware and software that is not always possible to find when working independently,” said Nguyen.

OpenAI is thinking about building its own browser

OpenAI is reportedly considering developing its own browser with the aim of challenging Google’s dominance in the market, according to The Information. The new browser would have built-in support for ChatGPT and OpenAI’s search engine, SearchGPT.

OpenAI representatives have apparently held talks with developers from Conde Nast, Redfin, Eventbrite, and Priceline, but so far no agreements have been signed.

Shares of Google’s parent company Alphabet declined on the Nasdaq exchange after the browser plans became public, Reuters reported.

Windows 11 will soon be available on Meta Quest 3 headsets

Meta Quest 3 and Quest 3S headset owners will soon gain access to the “full capabilities” of Windows 11 in mixed reality, Microsoft announced at its Ignite conference this week. 

Users will be able to access a local Windows PC or Windows 365 Cloud PC “in seconds,” Microsoft said in a blog post, providing access to a “private, high-quality, multiple-monitor workstation.” 

Although it’s already possible to cast a PC desktop to a Quest device, the update should make the process simpler. 

Microsoft has been working with Meta to bring its apps to the mixed-reality headsets for a while. Last year, the company launched several Microsoft 365 apps on Quest devices, with web versions of Word, Excel, and PowerPoint, as well as Mesh 3D environments in Microsoft Teams. At its Build conference in May, Microsoft also announced Windows “volumetric apps” in a developer preview that promise to bring 3D content from Windows apps into mixed reality.

Meta is the market leader, with Quest headsets accounting for 74% of global AR and VR headset shipments, according to data from Counterpoint Research. At the same time, Microsoft has rolled back its own virtual and mixed reality plans, recently announcing it will discontinue its HoloLens 2 headset, with no sign of a new version in the works.

The number of AR and VR headsets sold globally fell in the second quarter of 2024, according to IDC analysts, down 28% year on year. However, IDC predicts total device sales will grow from 6.7 million units in 2024 to 22.9 million in 2028 as cheaper devices come to market.

Using a Quest headset as a private, large or multi-monitor setup makes sense from a productivity perspective, said Avi Greengart, founder of research firm Techsponential. Access to all of Windows — rather than just a browser and select Windows 365 apps — adds “a lot of utility.”

“Large virtual monitors are a key use case for investing in head-mounted displays, whether that’s a mainstream headset like the Quest 3, a high-end spatial computing platform like the Apple Vision Pro, or a pair of display glasses from XREAL that plug into your phone or laptop,” said Greengart.

Several hardware constraints limit the use of Quest devices for work tasks, including display resolution and field of view (the amount of the observable virtual world visible with the device), as well as the discomfort of wearing a headset for extended periods.

Meta’s Quest 3 and 3S devices are more comfortable than Apple’s Vision Pro, but lack the high resolution of the more expensive device. 

Greengart added that some people — particularly older users — might struggle to focus on small text at a headset’s fixed focal distance. Those who require vision-correction lenses inside the headset can find the edges of the display distorted, he said.

“I love working in VR, but compared to a physical multi-monitor setup, it isn’t quite as productive and it gives me a headache,” said Greengart. “That said, I’ve been covering this space for years, and each iteration gets better.” 

Apple plans for a smarter LLM-based Siri smart assistant

Once upon a time, we’d say software is eating the planet. It still is, but these days our world is being consumed by generative AI (genAI), which is seemingly being added to everything. Now, Apple’s Siri is on the cusp of bringing in its own form of genAI in a more conversational version Apple insiders are already calling “LLM Siri.”

What is LLM Siri?

Apple has already told us to expect a more contextually-aware version of Siri in 2025, part of the company’s soon-to-be-growing “Apple Intelligence” suite. This Siri will be able to, for example, respond to questions and requests concerning a website, contact, or anything else you happen to be looking at on your Mac, iPhone, or iPad. Think of it like an incredibly focused AI that works to understand what you are seeing and tries to give you relevant answers and actions that relate to it.

That’s what we knew already. What we learn now (from Bloomberg) is that Apple’s AI teams are working to give Siri even more capabilities. The idea is to ensure Apple’s not-so-smart smart assistant can better compete against chatbots like ChatGPT, thanks to the addition of the kind of large language models (LLMs) that OpenAI’s ChatGPT and Google’s Gemini already use.

What will Smart Siri do?

This smarter Siri will be able to hold conversations and drill into inquiries, just like those competing engines — particularly ChatGPT’s Advanced Voice Mode. Siri’s responses will also become more human, enabling it to say, “I have a stimulating relationship with Dr. Poole,” and for you to believe that.

These conversations won’t only need to be the equivalent of a visit to the therapist on a rainy Wednesday; you’ll also be able to get into fact-based and research-focused conversations, with Siri dragging up answers and theories on command.

In theory, you’ll be able to access all the knowledge of the internet and a great deal of computationally-driven problem solving from your now-much-smarter smartphone. Apple’s ambition is to replace, at least partially, some of the features Apple Intelligence currently hands off to ChatGPT, though I suspect the iPhone maker will be highly selective in the tasks it does take on.

The company has already put some of the tools in place to handle this kind of on-the-fly task assignment; Apple Intelligence can already check a request to see whether it can be handled on the device, on Apple’s own highly secure servers, or needs to be handed over for processing by OpenAI or any other partners that might be in the mix.
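
Apple has not published that routing logic, so the following is a purely hypothetical sketch of the three-way decision the paragraph describes (on device, Apple’s own secure servers, or a partner such as OpenAI). None of the names below are real Apple APIs; they only illustrate the decision order.

```python
# Purely hypothetical sketch of on-device / private-cloud / partner routing.
from enum import Enum, auto

class Route(Enum):
    ON_DEVICE = auto()      # small, personal-context tasks
    PRIVATE_CLOUD = auto()  # larger models on first-party secure servers
    THIRD_PARTY = auto()    # handed off, with consent, to a partner model

def route_request(fits_on_device: bool, needs_world_knowledge: bool) -> Route:
    if fits_on_device:
        return Route.ON_DEVICE
    if not needs_world_knowledge:
        return Route.PRIVATE_CLOUD
    return Route.THIRD_PARTY

print(route_request(fits_on_device=False, needs_world_knowledge=True))
# Route.THIRD_PARTY
```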

When will LLM Siri leap into action?

Bloomberg speculates that this smarter assistant tech could be one of the highlight glimpses Apple offers at WWDC 2025. If that’s correct, it seems reasonable to anticipate the tech will eventually be introduced across the Apple ecosystem, just like Apple Intelligence.

You could be waiting a while for that introduction; the report suggests a spring 2026 launch for the service, which the company is already testing as a separate app across its devices.

In the run-up to these announcements, Siri continues to gain features. As of iOS 18.3, it will begin to build a personal profile of users in order to provide better responses to queries. It will also be able to use App Intents, which let third-party developers make the features of their apps available across the system via Siri. ChatGPT integration will make its own debut next month.

Will it be enough?

Siri as a chatbot is one area in which Apple does appear to have fallen behind competitors. While it seems a positive — at least in competitive terms — that Apple is working to remedy that weakness, its current competitors will not be standing still (though unfolding AI regulation might place a glass ceiling on some of their global domination dreams).

Apple’s teams will also be aware of the work taking place in the background between former Apple designer Jony Ive and Sam Altman’s OpenAI, and will want to ensure the company has a moat in place to protect itself against whatever the fruits of that labor turn out to be.

With that in mind, Apple’s current approach — to identify key areas in which it can make a difference and to work towards edge-based, private, secure AI — makes sense and is likely to remain the primary thrust of Apple’s future efforts.

Though if there’s one net positive every Apple user already enjoys from the intense race to the AI singularity, it is that the memory pre-installed in all Apple devices has now increased. That means even those who never, ever want to have a conversation with a machine can get more done, faster, than before. Learn more about Apple Intelligence here.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

AI agents are unlike any technology ever

The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.

The biggest news in agentic AI happened this month when we learned that OpenAI’s agent, Operator, is expected to launch in January.

OpenAI Operator will function as a personal assistant that can take multi-step actions on its own. We can expect Operator to be put to work writing code, booking travel, and managing daily schedules. It will do all this by using the applications already installed on your PC and by using cloud services. 

It joins Anthropic, which recently unveiled a feature for its AI models called “Computer Use.” This allows Claude 3.5 Sonnet to perform complex tasks on computers autonomously. The AI can now move the mouse, click on specific areas, and type commands to complete intricate tasks without constant human intervention.

We don’t know exactly how these tools will work or even whether they’ll work. Both are in what you might call “beta” — aimed mainly at developers and early adopters.

But what they represent is the coming age of agentic AI. 

What are AI agents?  

A great way to understand agents is to compare them with something we’ve all used before: AI chatbots like ChatGPT. 

Existing, popular LLM-based chatbots are designed around the assumption that the user wants, expects, and will receive text output—words and numbers. No matter what the user types into the prompt, the tool is ready to respond with letters from the alphabet and numbers from the numeric system. The chatbot tries to make the output useful, of course. But no matter what, it’s designed for text in, text out. 

Agentic AI is different. An agent doesn’t dive straight away into the training data to find words to string together. Instead, it stops to understand the user’s objective and comes up with the component parts to achieve that goal for the user. It plans. And then it executes that plan, usually by reaching out and using other software and cloud services. 

AI agents have three abilities that ordinary AI chatbots don’t: 

1. Reasoning: At the core of an AI agent is an LLM responsible for planning and reasoning. The LLM breaks down complex problems, creates plans to solve them, and gives reasons for each step of the process.

2. Acting: AI agents have the ability to interact with external programs. These software tools can include web searches, database queries, calculators, code execution, or other AI models. The LLM determines when and how to use these tools to solve problems. 

3. Memory Access: Agents can access a “memory” of what has happened before, which includes both the internal logs of the agent’s thought process and the history of conversations with users. This allows for more personalized and context-aware interactions.

Here’s a step-by-step look at how AI agents work (a minimal code sketch follows the list):

  1. The user types or speaks something to the agent. 
  2. The LLM creates a plan to satisfy the user’s request.
  3. The agent tries to execute the plan, potentially using external tools.
  4. The LLM looks at the result and decides if the user’s objective has been met. If not, it starts over and tries again, repeating this process until the LLM is satisfied. 
  5. Once satisfied, the LLM delivers the results to the user. 
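
Here is a minimal, self-contained sketch of that loop. The llm_plan and llm_is_done functions are placeholders standing in for real model calls, and the two “tools” are toys; the point is the plan-act-check cycle itself.

```python
# Minimal sketch of an agent's plan-act-check loop with a toy tool registry.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr)),        # demo only: eval is unsafe
    "web_search": lambda q: f"top results for {q!r}",  # stand-in for real search
}

def llm_plan(goal: str, history: list[str]) -> tuple[str, str]:
    """Placeholder for the LLM's reasoning step: choose a tool and its input."""
    return ("calculator", "2 + 2") if "2 + 2" in goal else ("web_search", goal)

def llm_is_done(goal: str, history: list[str]) -> bool:
    """Placeholder for the LLM judging whether the goal has been met."""
    return len(history) > 0

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []                           # the agent's "memory"
    for _ in range(max_steps):
        tool, tool_input = llm_plan(goal, history)    # step 2: plan
        observation = TOOLS[tool](tool_input)         # step 3: act via a tool
        history.append(f"{tool}({tool_input}) -> {observation}")
        if llm_is_done(goal, history):                # step 4: check, else retry
            break
    return history                                    # step 5: deliver results

print(run_agent("What is 2 + 2?"))  # ['calculator(2 + 2) -> 4']
```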

Why AI agents are so different from any other software

“Reasoning” and “acting” (often implemented using the ReAct — Reasoning and Acting — framework) are key differences between AI chatbots and AI agents. But what’s really different is the “acting” part.

If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. It can even choose to use other AI models or chatbots.

Do you see the paradigm shift?

Since the dawn of computing, the users of software were human beings. With agents, for the first time ever, the software is also a user who uses software.

Many of the software tools agents use are regular websites and applications designed for people. They’ll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic’s “Computer Use” feature. Other tools that the agent can access are designed exclusively for agent use. 

Because agents can access software tools, they’re more useful, modular, and adaptable. Instead of training an LLM from scratch, or cobbling together some automation process, you can instead provide the tools the agent needs and just let the LLM figure out how to achieve the task at hand. 

They’re also designed to handle complex problem-solving and work more autonomously. 

The oversized impact of the coming age of agents

When futurists and technology prognosticators talk about the likely impact of AI over the next decade, they’re mostly talking about agents. 

AI agents will take over many of the tasks in businesses that are currently automated, and, more impactfully, enable the automation of all kinds of things now done by employees looking to offload mundane, repetitive and complicated tasks to agents. 

Agents will also give rise to new jobs, roles, and specialties related to managing, training, and monitoring agentic systems. They will add another specialty to the cybersecurity field, which will need agents to defend against cyber attackers who are also using agents. 

As I’ve been saying for many years, I believe augmented reality AI glasses will grow so big they’ll replace the smartphone for most people. Agentic AI will make that possible. 

In fact, AI smart glasses and AI agents were made for each other. Using streaming video from the glasses’ camera as part of the multimodal input (other inputs being sound, spoken interaction, and more), AI agents will constantly work for the user through simple spoken requests. 

One trivial and perfectly predictable example: You see a sign advertising a concert, look directly at it (enabling the camera in your glasses to capture that information), and tell your agent you’d like to attend. The agent will book the tickets, add the event to your calendar, invite your spouse, hire a babysitter, and arrange a self-driving car to pick you up and drop you off.

Like so many technologies, AI will both improve and degrade human capability. Some users will lean on agentic AI like a crutch to never have to learn new skills or knowledge, outsourcing self-improvement to their agent assistants. Other users will rely on agents to push their professional and personal educations into overdrive, learning about everything they encounter all the time.

The key takeaway here is that while agentic AI sounds like futuristic sci-fi, it’s happening in a big way starting next year.