
Google faces new labor board complaint over contractor union bargaining

The US National Labor Relations Board (NLRB) has filed a fresh complaint against Google, alleging that the company acts as the employer of certain contract workers and must negotiate with their union, Reuters reports.

The Board has said Google is a “joint employer” for roughly 50 San Francisco-based content creators hired through IT contractor Accenture Flex.

These workers, who joined the Alphabet Workers Union in 2023, fall under the tech giant’s purview, according to the agency, the report said.

An administrative judge will now hear the complaint, with the decision subject to review by the NLRB’s five-member panel.

If the Board confirms Google’s status as a joint employer for the Accenture Flex contractors, the tech giant would be compelled to engage in collective bargaining and could be held accountable for breaches of federal labor law.

The NLRB is also looking into a separate complaint from October, which accuses Google and Accenture Flex of altering working conditions without consulting the union first, according to the report.

This follows the NLRB’s January 2024 ruling requiring Google to negotiate with employees at YouTube Music — an Alphabet subsidiary — hired through a different staffing firm. Google has appealed the decision, and a US federal court is scheduled to review the case later this month.

Google has faced growing labor challenges, marked by worker protests and layoffs. Last year, the company removed a $15-an-hour minimum wage for contractors and implemented changes aimed at sidestepping union negotiations.

Implications for the industry

Google has stated that it does not have sufficient control over contract workers to qualify as their joint employer, according to the report.

The outcome of the case could set a precedent for how contract workers are treated across the tech industry, where companies frequently rely on third-party staffing firms.

“Companies may need to rethink their mix of employment types and how they engage contract and gig workers,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “In a worst-case scenario, this work could be moved to locations where such regulations don’t exist. Alternatively, companies might face additional compliance requirements, costs, and audits if the NLRB wins against major corporations.”

Meanwhile, large corporations may need to adopt a more flexible stance on the issue, as the number of contract and gig workers is expected to grow, Gogia added.

A decision against Google could also energize unionization efforts within the tech sector, offering a roadmap for organizing workers in an industry that has traditionally resisted union activity. “The topic is also profoundly interlinked with the country’s political climate,” Gogia said. “If one were to consider the past stand that the Trump administration had on the subject, it is clear that the concept of joint employer may not see the light of day after all.”

Trump tariffs could raise laptop, tablet prices by 46%, cut sales by 68%

A new report from the Consumer Technology Association (CTA) indicates the tariffs President-elect Donald Trump has threatened to impose on foreign shipments of technology into the US could jeopardize the products consumers rely on.

Trump, who re-takes office on Jan. 20, recently threatened to impose significant tariffs on technology and other imports from Canada, Mexico, and China. On Nov. 25, for example, he unveiled plans to implement a 25% tariff on all goods from Canada and Mexico, using the measure to pressure the two nations to address illegal immigration and drug trafficking.

Additionally, he proposed a 10% tariff on Chinese imports, citing concerns over trade imbalances and unfair practices.

Specifically, levies on technology product and parts imports could reduce US consumer purchasing power by between $90 billion and $143 billion. That, in turn, could force laptop and tablet prices up by as much as 46% — and potentially cut laptop and tablet purchases by 68%, gaming consoles by 58%, and smartphones by 37%, according to the CTA report.
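As a rough illustration of how the CTA’s headline figures compound, consider what a 46% price rise combined with a 68% drop in unit sales does to total laptop revenue. The base price and unit volume below are hypothetical, chosen only for illustration; the percentages are the report’s upper-bound estimates:

```python
# Hypothetical back-of-envelope sketch of tariff pass-through on laptops.
# The $800 base price and 1M-unit volume are assumptions for illustration;
# the 46% price rise and 68% unit decline are the CTA's upper-bound figures.
def tariff_impact(base_price, base_units, price_rise, demand_drop):
    """Return (new_price, new_units, revenue ratio vs. pre-tariff)."""
    new_price = base_price * (1 + price_rise)
    new_units = base_units * (1 - demand_drop)
    revenue_ratio = (new_price * new_units) / (base_price * base_units)
    return new_price, new_units, revenue_ratio

price, units, ratio = tariff_impact(800, 1_000_000, 0.46, 0.68)
# A 46% price rise paired with a 68% sales drop leaves total laptop
# revenue at roughly 47% of its pre-tariff level.
```

The point of the sketch is that the two effects multiply: higher prices on far fewer units shrink the market even though each sale brings in more.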

The prospect of new tariffs has raised concerns among economists and trading partners. Maurice Obstfeld, former chief economist at the International Monetary Fund, warned in an interview with MarketWatch that such measures could lead to the formation of hostile trading blocs and a potential global economic downturn.

Trump tariff impacts

Consumer Technology Association

In response to the Trump threats, the Canadian government has indicated it would retaliate against any tariffs on Canadian goods; for its part, Mexico has said any such tariffs would not effectively address immigration issues.

“The incoming administration must address how tariffs impact American businesses and consumers,” said CTA Vice President of Trade Ed Brzytwa. “Retaliation from our trading partners raises costs, disrupts supply chains, and hurts the competitiveness of US industries. US trade policy should protect consumers and help American businesses succeed globally.”

Without tariffs in place, the CTA expects robust growth for the US consumer tech industry in 2025, projecting record retail revenue of $537 billion this year, up 3.2% from 2024.

Stephen Minton, IDC vice president of data and analytics research, said the impact of tariffs on PC, tablet, and smartphone prices and sales will depend on tariff size, exemptions, timing, and the inclusion of PCs and components.

“So, it’s too early to get specific, but what we do know is that a large share of PCs are currently still manufactured in China — almost 90% of the global market — which makes PCs more exposed to some of the proposed tariffs than most other IT segments,” Minton said.

US vendors like Apple, HP, and Dell still manufacture most of their PCs in China, but some of those companies have begun shifting production to countries like Vietnam and Thailand, Minton noted. (Apple has also made a push to move manufacturing to India.)

Even so, any large new tariffs on imports from China would almost certainly lead to PC price increases, Minton said.

“This could force enterprises to purchase fewer PC upgrades in order to stay within their allocated 2025 budgets,” Minton said. “This would be especially true at the lower end of the market, where there’s very little margin for vendors to absorb the impact of any new tariffs. …It’s likely that any significant new tariffs would be passed on to all customers.”

Additionally, in reaction to the possibility of tariffs, tech suppliers could stockpile inventory in early 2025 to avoid future price hikes, according to Greg Davis, an analyst with market research firm Canalys.

Commercial demand for PCs and tablets remained strong in late 2024, with 12% shipment growth in Q3. The Windows 11 refresh is ongoing — especially with the end of support for Windows 10 coming in October — and commercial strength is expected to persist into early 2025, according to Davis. Total PC shipments to the US are expected to rise 6% to just under 70 million units in 2024, followed by modest 2% growth in both 2025 and 2026.

Consumer purchases drove growth earlier this year, but the commercial market now leads US PC sales, according to Davis. Businesses large and small are upgrading to Windows 11 PCs more actively in the second half of the year.

Even so, macroeconomic conditions in the US are not expected to be as stable in the near-term as they have been over the last year or two, Davis said. “With reports of import tariffs seemingly on the horizon, the PC market will likely be impacted in a noticeable way,” he said.

Nvidia’s Project DIGITS puts AI supercomputing chips on the desktop

Nvidia has built its generative AI (genAI) business on delivering massive computing capacity to data centers where it can be used to train and refine large language models (LLMs). 

Now, the company is readying a diminutive desktop device called Project DIGITS, a “personal AI supercomputer” with a lightweight version of the Grace Blackwell platform found in its most powerful servers; it’s aimed at data scientists, researchers, and students who will be able to prototype, tune, and run large genAI models.

Nvidia CEO Jensen Huang unveiled Project DIGITS in a keynote speech on the eve of the CES 2025 electronics show in Las Vegas.

Project DIGITS is similar in size to the Windows 365 Link thin client Microsoft unveiled in November. Microsoft’s Link measures 120mm (4.72 inches) square and is 30mm (1.18 inches) high. 

Nvidia hasn’t given precise dimensions for Project DIGITS, with Allen Bourgoyne, director of product marketing, saying only that the square device will be about as wide as a coffee mug, including the handle, and about half as high. No international standard exists for mugs, but they are typically about 120mm across, including the handle, and around 90mm high, making Project DIGITS as wide as the Link but half as thick again. There the resemblance ends.

The philosophies behind the two devices are quite different: Where the Link pushes almost all the computing capacity to the cloud, Nvidia’s hardware is moving it down to the desktop. 

Microsoft’s Link has 8GB of RAM, no local data storage, and an unspecified Intel processor with no special AI capabilities: If you want to use Windows’ Copilot features they — like everything else — will run in the cloud. Link will sell for around $350 when it goes on sale in April.

One wall outlet, one petaflop

Project DIGITS, on the other hand, will cost upwards of $3,000 when it arrives in May. For that money, buyers will get 4TB of NVMe storage, 128GB of unified Low-Power DDR5X system memory, and a new GB10 Grace Blackwell Superchip; it comes with 20 ARM cores in the Grace CPU and a mix of CUDA cores, RT cores and fifth-generation tensor cores in the Blackwell GPU. 

Together, those cores offer up to 1 petaflop of AI processing capability — enough, said Bourgoyne, to work with a 200-billion-parameter model at “FP4” accuracy locally, with no need for the cloud. By connecting two Project DIGITS devices together via their built-in ConnectX networking chips, it’s possible to work with 400-billion-parameter models, he said.
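The 200-billion-parameter claim is consistent with simple memory arithmetic: at FP4, each weight occupies half a byte, so a 200B-parameter model’s weights alone fit within a single device’s 128GB of unified memory, and a 400B-parameter model needs the paired configuration. A quick sketch (sizes are nominal; real deployments also need headroom for activations and KV caches):

```python
# Back-of-envelope memory arithmetic behind Project DIGITS' model-size claims.
# FP4 stores each weight in 4 bits = 0.5 bytes.
BYTES_PER_PARAM_FP4 = 0.5

def weights_gb(params_billions):
    """Approximate weight storage in GB (using 1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * BYTES_PER_PARAM_FP4 / 1e9

single = weights_gb(200)   # one device: ~100GB, under its 128GB of memory
paired = weights_gb(400)   # two linked devices: ~200GB, under 256GB combined
```

At higher precisions the picture changes quickly: the same 200B model at FP16 would need roughly 400GB for weights alone, which is why the low-precision FP4 figure is the one quoted.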

The GB10 was co-developed with Mediatek, a company known for its power-efficient mobile chips. Compared to the GB200 processors used in data centers, an NV72 rack full of which can draw as much as 120kW, Project DIGITS is more power efficient. “So, you can plug it into a standard wall outlet,” Bourgoyne said. “It doesn’t require any additional power than what you have at your desktop.”

Project DIGITS won’t run Windows: Instead, it will run DGX OS, a version of Ubuntu Linux customized with additional drivers and tools for developing and running AI applications. That’s the same software that runs on Nvidia DGX systems in the data center, meaning models built and tested locally on Project DIGITS can be deployed straight to the cloud or data center, the company said. 

Other Nvidia AI tools the device can run include orchestration tools, frameworks, and models on the Nvidia Developer portal and in its NGC catalog. That includes the NeMo framework for fine-tuning models and the RAPIDS libraries for data science.

Nvidia Blueprints and NIM microservices are available under lightweight licenses via its developer program for building agentic AI applications, with an AI Enterprise license needed only when they are moved to production environments, the company said. 

More generative AI on the desktop

Recognizing that you don’t need a GB10 processor to accelerate AI development on the desktop, Nvidia is also introducing a range of NIM microservices and AI Blueprints for building applications on PCs containing its GeForce RTX GPUs — what it calls RTX AI PCs.

Nvidia is introducing a range of AI foundation models — both its own and those of other developers — containerized as NIM microservices that can be downloaded and connected together. Using low-code and no-code tools such as AnythingLLM, ComfyUI, Langflow and LM Studio, developers will be able to build and deploy workflows using these NIM microservices, with its AI Blueprints providing preconfigured workflows for particular tasks such as converting between media formats. One of the newest Blueprints can convert PDFs to podcasts.

With o3 having reached AGI, OpenAI turns its sights toward superintelligence

OpenAI CEO Sam Altman has reinvigorated discussion of artificial general intelligence (AGI), boldly claiming that his company’s newest model has reached that milestone.

In an interview with Bloomberg, he noted that OpenAI’s o3, which was announced in December and is currently being safety tested, has passed the ARC-AGI challenge, the leading benchmark for AGI. Now, Altman said, the company is setting its sights on superintelligence, which is as far beyond AGI as AGI is beyond conventional AI.

According to ARC-AGI, “OpenAI’s new o3 system — trained on the ARC-AGI-1 Public Training set — has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.”

Planes, trains and third-party risks — a tale of two IT-related shutdowns

Christmas Eve and Christmas Day are arguably the most important time frame for transportation companies. So it was a big deal when an American Airlines system glitch forced the airline to ask the government for a full shutdown on Christmas Eve. And it was an even bigger deal the next day for Bane NOR, which runs the Norwegian rail system and had to shut down all trains in Norway.

Both involved IT issues and both were mostly — if not entirely — caused by third-party firms. Now, third-party risks are nothing new. But few CIOs truly internalize that one error from a vendor can shut down all enterprise operations. That’s a lot of trust to offer an outside company that typically undergoes minor due diligence, assuming it was subjected to any meaningful due diligence at all.

What happened with these Christmas nightmares? Let’s drill into each and note how the two transportation giants differed in their approach.

The more interesting of the two was the Norwegian train shutdown, which lasted 13 hours on Christmas Day, from roughly 8 a.m. until 9 p.m. The problem: trains couldn’t communicate with any traffic control centers, which meant they couldn’t operate safely. The cause: a bad firewall setting.

Let that sink in. Because systems today overwhelmingly run through the internet, firewalls can and will block anything. Until this incident, how many IT managers at Bane NOR realized a firewall setting could shut down every train everywhere?
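One common way a single firewall setting causes this kind of outage is rule shadowing: most firewalls evaluate rules in order and apply the first match, so one overbroad deny placed too early silently blocks traffic a later rule was meant to allow. The toy evaluator below illustrates the mechanism; the rules and port numbers are entirely invented and have no connection to Bane NOR’s actual configuration:

```python
# Hypothetical illustration of first-match firewall rule evaluation,
# where an overbroad deny rule shadows a later, more specific allow.
# All rules and port numbers here are invented for the example.

def evaluate(rules, port):
    """Return the action of the first rule whose port set matches."""
    for action, ports in rules:
        if port in ports:
            return action
    return "deny"  # default-deny policy

# Intended policy: allow the control-center link on port 2000.
rules = [
    ("deny", range(1024, 49152)),   # overbroad deny added by mistake
    ("allow", {2000}),              # the specific allow it shadows
]

result = evaluate(rules, 2000)
# The allow rule never fires: the broad deny matches first, so the
# control-center traffic is blocked even though it is "allowed" below.
```

Because the blocking rule never announces itself — traffic simply stops — teams tend to suspect the communications gear first, which is exactly the misdirection Bane NOR described.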

That was a key reason for the long delay in getting the trains back online. When communications stop, managers think the communications gear is somehow failing.

“It took us a while before we could trace it to a firewall issue. It was not one of the obvious causes to look at,” Strachan Stine Smemo, the Bane external communications manager, said in an email to Computerworld. “It was tricky to find the problem.”

Bane’s team opted against changing any firewall settings and instead — as a temporary measure — switched communications to a different firewall. (They later changed the impacted components, Smemo said.)

Arild Nybrodahl, Bane’s information and communications technology director, said his team detected “system instability” on Christmas Eve, which is when “troubleshooting efforts were initiated.” Things didn’t get bad enough to shut down operations until 8 a.m. the next day, he said.

“The fault affected the railway’s closed mobile network (GSM-R) and other critical communication systems,” Nybrodahl said. “When any emergency calls and other communication between the train and the train conductor do not work, we cannot operate trains. We have located where the error lies in our own nationwide IT infrastructure and we are now working on a solution to correct the error. We have not yet corrected the root cause, but have taken measures so that the part of the network where the error was located is isolated from the rest of the infrastructure.”

Unlike American Airlines, Bane did not identify the relevant third-party and even praised that vendor’s efforts. Bane received “good help from our supplier,” Smemo said. 

American Airlines, however, not only identified the vendor at issue as DXC, but went out of its way to tell reporters that the problems it ran into were that vendor’s fault. This is known as throwing a partner under the bus.

It’s not clear precisely what happened between the two companies, as neither has discussed the particulars. But American made those comments shortly after the one-hour outage ended. That means emotions were at play, and someone at the airline was very unhappy.

(DXC is likely unhappy, too, since its stock price has taken a hit.)

DXC has been a longtime supplier to American — the DXC website says “more than 20 years” — but it’s not precisely clear what role it had in the shutdown. The company has some role in the airline’s flight operations systems and has been working to modernize American’s systems, including moving legacy code to the cloud.

The airline blamed an unspecified network hardware issue that forced it to ask the US Federal Aviation Administration for a nationwide ground stop, which ended up lasting about an hour.

According to a report on MSN, the incident delayed more than 900 flights, affecting “around 900,000 passengers across 200 US airports, leaving many stranded and sleeping in terminals.”

Given that both of these incidents happened on major holidays, one obvious factor is that the companies had only skeleton crews on duty. Though it’s unlikely that holiday staffing caused either situation, it likely slowed down the responses.

One other wrinkle in the DXC situation: on Christmas Eve, the company was already in the middle of an IT leadership change. CIO Kristie Grinnell had given notice about her move to a new job as CIO of TD SYNNEX. That was announced on Dec. 19; two weeks later, DXC announced its new CIO would be Brad Novak.

The problem with throwing a vendor partner under the bus — aside from the fact that you haven’t done a full investigation or determined who’s at fault — is that it leaves important questions unanswered. Did this third-party firm have the appropriate skills and personnel to deliver what it was supposed to deliver? If not, then shouldn’t the fault lie with whoever hired that firm?

Let’s say the selection process was appropriate. The question then becomes, “Who was supposed to oversee that vendor?” And was the vendor given everything needed to do the job?

From the perspective of shareholders, the fault is more often going to lie with the people overseeing and bringing in the outside firm. Unless the third-party company ignored instructions or engaged in bad behavior, most mishaps are going to be blamed on the enterprise.

Put bluntly, an enterprise that is quick to blame a contractor is likely trying to change the subject before its own failings are examined.

Apple Intelligence: Is AI an opportunity or a curse?

Does the rise of artificial intelligence represent more of an opportunity for the world — or a curse? Because for all the clamor about boosted productivity and enhanced human potential, there’s also the rising demand for energy, processor power, memory requirements and ever more bloat on the machine.

Just look at Apple Intelligence, which now demands almost twice as much data storage on your devices as was originally advertised.

I fear that’s the thin end of this cursed wedge. And it’s not as though storage is the only demand AI makes. 

AI is a greedy beast

Apple has been forced to roll out major hardware changes to support Apple Intelligence:

  • Memory: Apple has increased base memory across all of its machines. Macs, iPhones, and iPads all now ship with much more memory than before, boosting manufacturing costs.
  • Processor: Apple has really pushed the boat out on processors in its latest hardware. The company effectively raised everyone up an extra grade during the last 18 months as it primed its ecosystem for Apple Intelligence with new, faster, more energy-efficient processors.
  • Energy efficiency: Not only is Apple Silicon more energy efficient, but the company wants to give its devices more energy capacity. To do so, it is expected to shift to silicon-anode cells over the next 12 months. These hold around 15% more energy, which will be useful for the energy demands of edge AI.
  • Server infrastructure: Reflecting its realization that not every task can be accomplished on edge devices, Apple has now re-entered the server market, introducing its own take on secure server-based cloud computing services, Private Cloud Compute.

Apple isn’t alone in any of this, but its actions highlight the extent of the hundreds of billions being spent on the sector today — costs that extend into essential infrastructure resources such as water, rare resources, and energy supply. All of this costs enterprises money, focus, and time. The rewards? Even OpenAI, arguably the doyen of AI tech, is shedding cash faster than it makes it, even on its priciest $200-per-month ChatGPT Pro plan.

What need does the greed feed?

Right now, all we really seem to be experiencing is more targeted ads placement, email and website summaries, stupid pictures in messages, deep employment insecurity, rising energy costs, and an increasingly homogenized trade in optimized job resumes, press releases, and student exam papers. Oh, and don’t forget the fake video influencers hawking their wares on heavily AI-SEO’d social media.

We’re sold on potential, but may yet wind up with little more than a smarter search engine and a deeply intrusive invasion of privacy. Fantasia or dystopia? Even Elon Musk seems unsure, warning of the perils of AI at one point, only to introduce his own AI model later on. 

The hype is unavoidable at this week’s Consumer Electronic Show (CES), where AI is going to appear in some form across all the exhibit stands. Everyone and anyone who can link their product up to some form of AI service will do so. 

As is usual, some of the claims will turn out to be vaporware, while other combinations won’t really deliver much tangible benefit. To invent an example, do we really need an AI tool to order groceries toward personalized dinner plans it builds based on what it knows about our plans that week? Or do we just need a recipe book and a takeaway menu? 

What about the consequences of this kind of data being weaponized by AI? How is the information that AI gathers stored, who else can access it, and what control over it do we have? Do we really need dodgy surveillance-as-a-service firms to be able to identify information about us that they can then use to send convincingly authentic AI-targeted and developed phishing attacks to gain access to our digital lives? How well thought through are the solutions rapidly appearing on the table, and how much consideration has gone into weighing the potential consequences?

Behind the hype

Am I being unfair? 

I’m certain there are AI proponents who think the potential of what we are investing in far outweighs the risks. But there are always people prepared to make such claims. Right now, for most of us (even with Apple Intelligence), the hype, hoop-la, and costs haven’t yet delivered on the clamor. The rest of us watching tech bros snicker and smile on their shiny AI cavalcade remain to be convinced.

With that in mind, it seems a slow and steady approach to AI deployment could end up being the King’s Gambit in the game. Rather than chasing the evangelists, the industry should focus on putting solutions together that deliver genuine benefit, rather than simply looking good in headlines, (whoever writes them). We need to see true and tangible improvements to foster trust, and if the people behind them genuinely believe AI will drive future hardware sales, they’ll make sure their AI solutions do just that.

Or fail. 

You can follow me on social media! Join me on BlueSky,  LinkedInMastodon, and MeWe

Apple Intelligence: Is AI an opportunity or a curse?

Does the rise of artificial intelligence represent more of an opportunity for the world — or a curse? Because for all the clamor about boosted productivity and enhanced human potential, there’s also the rising demand for energy, processor power, memory requirements and ever more bloat on the machine.

Just look at Apple Intelligence, which now demands almost twice as much data storage on your devices as was originally advertised.

I fear that’s the thin end of this cursed wedge. And it’s not as though storage is the only demand AI makes. 

AI is a greedy beast

Apple has been forced to roll out major hardware changes to support Apple Intelligence:

  • Memory: Apple has increased base memory across all of its machines. Macs, iPhones, and iPads all now ship with much more memory than before, boosting manufacturing costs.
  • Processor: Apple has really pushed the boat out on processors in its latest hardware. The company effectively raised everyone up an extra grade during the last 18 months as it primed its ecosystem for Apple Intelligence with new, faster, more energy-efficient processors.
  • Energy efficiency: Not only is Apple Silicon more energy efficient, but the company wants to give its devices more energy capacity. To do so, it is expected to shift to silicon-anode cells over the next 12 months. These hold around 15% more energy, which will be useful for the energy demands of edge AI.
  • Server infrastructure: Reflecting its realization that not every task can be accomplished on edge devices, Apple has now re-entered the server market, introducing its own take on secure server-based cloud computing services, Private Cloud Compute.
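The battery claim above is easy to sanity-check. Here is a quick sketch using hypothetical figures (Apple has not published its cell capacities; the 3,500 mAh baseline is an assumption for illustration only):

```python
# Hypothetical baseline; Apple has not published its cell capacities.
current_mah = 3500          # assumed capacity of a conventional graphite-anode cell
silicon_anode_gain = 0.15   # ~15% more energy, per the report

new_mah = current_mah * (1 + silicon_anode_gain)
print(f"Estimated silicon-anode capacity: {new_mah:.0f} mAh")  # → 4025 mAh
```

Roughly an extra 500 mAh in the same physical volume, which is the headroom on-device AI workloads would consume.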

Apple isn’t alone in any of this, but its actions highlight the extent of the hundreds of billions being spent on the sector today — costs that extend into essential infrastructure resources such as water, rare resources, and energy supply. All of this costs enterprises money, focus, and time. The rewards? Even OpenAI, arguably the doyen of AI tech, is shedding cash faster than it makes it, even on its priciest $200-per-month ChatGPT Pro plan.

What need does the greed feed?

Right now, all we really seem to be experiencing is more targeted ads placement, email and website summaries, stupid pictures in messages, deep employment insecurity, rising energy costs, and an increasingly homogenized trade in optimized job resumes, press releases, and student exam papers. Oh, and don’t forget the fake video influencers hawking their wares on heavily AI-SEO’d social media.

We’re sold on potential, but may yet wind up with little more than a smarter search engine and a deeply intrusive invasion of privacy. Fantasia or dystopia? Even Elon Musk seems unsure, warning of the perils of AI at one point, only to introduce his own AI model later on. 

The hype is unavoidable at this week’s Consumer Electronics Show (CES), where AI is going to appear in some form across all the exhibit stands. Everyone and anyone who can link their product up to some form of AI service will do so. 

As usual, some of the claims will turn out to be vaporware, while others won’t deliver much tangible benefit. To invent an example: do we really need an AI tool that orders groceries for personalized dinner plans built around what it knows of our week? Or do we just need a recipe book and a takeaway menu? 

What about the consequences of this kind of data being weaponized by AI? How is the information that AI gathers stored, who else can access it, and what control over it do we have? Do we really need dodgy surveillance-as-a-service firms to be able to identify information about us that they can then use to send convincingly authentic AI-targeted and developed phishing attacks to gain access to our digital lives? How well thought through are the solutions rapidly appearing on the table, and how much consideration has gone into weighing the potential consequences?

Behind the hype

Am I being unfair? 

I’m certain there are AI proponents who think the potential of what we are investing in far outweighs the risks. But there are always people prepared to make such claims. Right now, for most of us (even with Apple Intelligence), the hype, hoop-la, and costs haven’t yet delivered on the clamor. The rest of us watching tech bros snicker and smile on their shiny AI cavalcade remain to be convinced.

With that in mind, a slow and steady approach to AI deployment could turn out to be the winning gambit. Rather than chasing the evangelists, the industry should focus on putting together solutions that deliver genuine benefit, rather than simply looking good in headlines (whoever writes them). We need to see true and tangible improvements to foster trust, and if the people behind these products genuinely believe AI will drive future hardware sales, they’ll make sure their AI solutions do just that.

Or fail. 

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe

AI revolution drives demand for specialized chips, reshaping global markets

Artificial Intelligence (AI) has rapidly transformed the chip industry since its mainstream arrival over the past two years, driving demand for specialized processors, accelerating design innovation, and reshaping global supply chains and markets.

The generative AI (genAI) revolution that began with OpenAI’s release of ChatGPT in late 2022 continues to push the limits of AI inference, large language models (LLMs) and semiconductor technologies. In short order, traditional CPUs, insufficient for AI’s parallel processing needs, have given way to specialized chips: GPUs, TPUs, NPUs, and AI accelerators.
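A glimpse of why that shift happened: the workload at the heart of neural networks is matrix multiplication, in which every output cell is independent of the others. A minimal pure-Python sketch (illustrative only; real inference spreads this work across thousands of GPU or TPU cores at once):

```python
def matmul(a, b):
    """Multiply two matrices the naive way."""
    rows, inner, cols = len(a), len(b), len(b[0])
    # Each (i, j) output cell depends only on row i of a and column j of b,
    # so all of them could run on separate cores simultaneously -- the
    # parallelism GPUs, TPUs, and NPUs are built to exploit.
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

A CPU with a handful of cores works through those cells a few at a time; an accelerator computes thousands of them in one step.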

That prompted companies such as Nvidia, AMD, and Intel to expand their portfolios to include AI-optimized products, with Nvidia leading in GPUs for AI training and inference. And because AI workloads prioritize throughput, energy efficiency, and scalability, the larger tech industry has seen massive investments in data centers, with AI-focused chips like Nvidia’s H100 and AMD’s MI300 now powering the backbone of AI cloud computing.

At the same time, companies such as Amazon, Microsoft, and Google have developed custom chips (such as AWS Graviton and Google TPU) to reduce dependency on external suppliers and enhance AI performance.

In particular, the AI revolution has propelled growth at Nvidia, making it a dominant force in the data center marketplace. Once focused on producing chips for gaming systems, the company now sees its AI-driven hardware and software outpace those efforts, a shift that has led to remarkable financial gains. The company’s market capitalization topped $1 trillion in May 2023 — and passed $3.3 trillion in June 2024, making it the world’s most valuable company at that time.

The AI-chip industry, however, is about to change dramatically. Over the past several years, semiconductor developers and manufacturers have focused on supplying the data center needs of hyperscale cloud service providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure; organizations have relied heavily on those industry stalwarts for their internal AI development.

There’s now a shift toward smaller AI models that only use internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, Edge AI is taking hold, because it allows AI processing to happen on devices (including PCs, smartphones, vehicles and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips.
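One technique behind that shift is weight quantization: storing a model’s parameters as 8-bit integers instead of 32-bit floats, cutting memory four-fold at a small accuracy cost. A minimal pure-Python sketch with made-up weights (production toolchains quantize per tensor or per channel, using calibration data):

```python
def quantize_int8(weights):
    """Symmetric quantization: map floats onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.81, -0.52, 0.33, -1.27, 0.05]  # made-up example weights
q, scale = quantize_int8(weights)
print(q)  # → [81, -52, 33, -127, 5] -- one byte each instead of four
restored = dequantize(q, scale)             # close to the originals
```

Shrinking weights this way is what lets a tuned model fit in the memory and power budget of a phone or PC instead of a rack of data center GPUs.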

“The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it.

“This is where the next frontier is for AI – the edge,” Morales said.

Turbulence in the market for some chip makers

Though global semiconductor chip sales declined in 2023 by about 11%, dropping from the previous year’s record of $574.1 billion to around $534 billion, that downturn did not last. Sales are expected to increase by 22% in 2025, according to Morales, driven by AI adoption and a stabilization in PC and smartphone sales.

“If you’re making memory or making an AI accelerator, like Nvidia, Broadcom, AMD or even Marvell now, you’re doing very well,” Morales said. “But if you’re a semiconductor company like an ST Micro, Infineon, Renesas or Texas Instruments, you’ve been hit hard by excess inventory and a macroeconomy that’s been uncertain for the industrial and automotive sectors. Those two markets outperformed last year, but this year they were hit very hard.”

Most LLMs used today rely on public data, but more than 80% of the world’s data is held by enterprises that won’t share it with platforms like OpenAI or Anthropic, according to Morales. That trend benefits processor companies, especially Nvidia, Qualcomm, and AMD. Highly specialized System on a Chip (SoC) technology with lower price points and more energy efficiency will begin to dominate the market as organizations bring the tech in-house.

“I think it’s definitely going to change the dynamics in the market,” Morales said. “That’s why you’re seeing a lot of companies aligning themselves to address the edge and end points with their technology. I think that’s the next wave of growth you’re going to see along with the enterprise; the enterprise is adopting their own data center approach.”

Intel will continue to find a safe haven for its processors in PCs, and its decision to outsource manufacturing to TSMC has kept it competitive with rival AMD. But Intel is likely to struggle to keep pace with other chip makers in emerging markets.

“Outside of that, if you look at their data center business, it’s still losing share to AMD and they have no answer for Nvidia,” Morales said.

While Intel’s latest line of x86 chips and Gaudi AI accelerators is designed to compete with Nvidia’s H100 and Blackwell GPUs, Morales sees them more as a “stopgap” effort — not what the market is seeking.

“I do believe on the client side there’s an opportunity for Intel to take advantage of a replacement cycle with AI working its way into PCs,” he said. “They just received an endorsement from Microsoft for Copilot, so that gives their x86 line an opening; that’s where Intel can continue to fight until they recover from their transformation and all the changes that have happened at the company.”

To stay relevant in modern data centers — where Nvidia’s chips are driving growth — Intel and AMD will need to invest in GPUs, according to Andrew Chang, technology director at S&P Global Ratings.

“While CPUs remain essential, Nvidia dominates the AI chip market, leaving AMD and Intel struggling to compete,” Chang said. “AMD aims for $5 billion in AI chip sales by 2025, while Intel’s AI efforts, centered on its Gaudi platform, are minimal. Both companies will continue investing in GPUs and AI accelerators, showing some incremental revenue growth, but their share of the data center market will likely keep declining.”

Politics, the CHIPS Act and what happens after Jan. 20

Geopolitical and economic factors, such as export restrictions, supply chain disruptions, and government policies, could also reshape the chip industry. President-elect Donald J. Trump, who takes office Jan. 20, has signaled he plans to impose heavy tariffs on chip imports.

The CHIPS and Science Act is also promising billions of dollars to semiconductor developers and manufacturers who locate operations in the US. Under the Act, $39 billion in funding has been earmarked for several companies, including TSMC, Intel, Samsung and Micron — all of whom have developed plans for, or are already building, new fabrication or research facilities.

But in order for those tax dollars to be doled out, each company must meet specific milestones; until then, the money remains unspent. While the promise of billions of dollars in incentives is unquestionably helping reshore US chip production, Morales pointed to the CHIPS Act’s 25% tax break as a greater benefit.

“Even a company like Intel…is getting about $50 billion dollars [in tax breaks], which is unheard of. That’s where the winning payouts are,” he said.

Though Trump has signaled that government funding to encourage reshoring is the wrong tactic, industry experts do not believe the CHIPS Act will be drastically cut when he regains office. “We expect modest revisions to the CHIPS Act, but nothing as drastic as cutting funding yet to be disbursed,” Morales said. “The CHIPS Act received bipartisan support, and any attempt to revise it would face pushback from states that stand to benefit, such as Arizona and Ohio.”

Though high-end processors to power energy-sucking cloud data centers have dominated the market to date, energy-efficient AI processors for edge devices will likely continue to gain traction.

“Think about an AI PC this year or a smartphone that incorporates AI as well, or even a wearable device that has a smaller, more well-tuned model that can leverage AI inferencing,” Morales said. “This is where we’re going next, and I think it’s going to be very big over the coming years.

“And, I think AI inferencing, as a percentage of the companies, will be as big if not bigger than what we’ve seen in the data center, so far,” he added.

From LLMs to SLMs and edge devices

Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. The input from diverse resources creates a more comprehensive understanding of that data and enhances performance across tasks.

Over 80% of organizations expect their AI workflows to increase in the next two years, while about two-thirds expect pressure to upgrade IT infrastructure, according to a report by S&P Global.

Sudeep Kesh, chief innovation officer at S&P Global Ratings, noted that AI is evolving towards smaller, task-specific models, but larger, general-purpose models will still be essential. “Both types will coexist, creating opportunities in each space,” he said.

A key challenge will be developing computationally and energy-efficient models, which will influence chip design and implementation. Chip makers will also need to address scalability, interoperability, and system integration — all of which are expected to drive technological advances across industries, improve autonomous systems, and enable future developments like edge AI, Kesh said.

In particular, as companies move away from cloud-based LLMs and embrace smaller language models that can be deployed on edge devices and endpoints, the industry will see increased interest in AI inferencing.

“It’s an environment where it’s feast or famine for the industry,” IDC’s Morales said. “What’s in store for the coming year? I think the growth we’ve seen in the data center has been phenomenal, and it will continue into 2025. What I’m excited about is that enterprises are beginning to look at prioritizing IT spending dollars in AI, and that will break a second wave of demand for processors.”