Month: February 2025

Tech layoffs this year: A timeline

2025 began in turmoil, with layoffs at some of the largest tech companies despite the support shown by the new US administration. 2024 had been a year of recovery, with the pace of layoffs slowing and IT employment at its highest in years, following two years of massive IT layoffs in 2022 and 2023.

According to data compiled by Layoffs.fyi, the online tracker keeping tabs on job losses in the technology sector, 1,193 tech companies laid off 264,220 staff in 2023, dropping to “just” 152,104 employees laid off by 547 companies in 2024. In 2025, it has already logged 7,003 staff laid off by 31 companies.

Here is a list — to be updated regularly — of some of the most prominent technology layoffs the industry has experienced recently.

Tech layoffs in 2025

  • Salesforce
  • Meta

Feb. 4, 2025: Salesforce lays off over 1,000

Even as it hires sales staff for its new artificial intelligence products, Salesforce is laying off over 1,000 workers across the company, according to Bloomberg. As of June 2024, the company had over 72,000 employees, according to its website. Salesforce did not comment on the report. In 2024 the company also reportedly laid off around 1,000 staff, in two waves: one in January and one in July.

Jan. 14, 2025: Meta will lay off 5% of workforce

Mark Zuckerberg told Meta employees he intended to “move out the low performers faster” in an internal memo reported by Bloomberg. The memo announced that the company will lay off 5% of its staff, or around 3,600 people, beginning Feb. 10. The company had already reduced its headcount by 5% in 2024 through natural attrition, the memo said. Among those leaving the company will be staff previously responsible for fact-checking posts on its social media platforms in the US, as the company begins relying on its users to police content.

Tech layoffs in 2024

  • Equinix
  • AMD
  • Freshworks
  • Cisco
  • General Motors
  • Intel
  • OpenText
  • Microsoft
  • AWS
  • Dell
  • Cisco

Nov. 26, 2024: Equinix to cut 3% of staff

Despite intense demand for its data center capacity, Equinix is planning to lay off 3% of its workforce, or around 400 employees. The announcement followed the appointment of Adaire Fox-Martin to replace Charles Meyers as CEO and the departures of two other senior executives, CIO Milind Wagle and CISO Michael Montoya.

Nov. 13, 2024: AMD to cut 4% of workforce

AMD will lay off around 1,000 employees as it pivots towards developing AI-focused chips, it said. The move came as a surprise to staff, as the company also reported strong quarterly earnings.

Nov. 7, 2024: Freshworks lays off 660

Enterprise software vendor Freshworks laid off around 660 staff, or around 13% of its headcount, despite reporting increased revenue and profits in its fourth fiscal quarter. The company described the layoffs as a realignment of its global workforce.

Sept. 17, 2024: Cisco lays off 6,000

After laying off around 4,200 staff in February, Cisco is at it again, laying off another 6,000, or around 7% of its workforce. Among the divisions affected was its threat intelligence unit, Talos Security.

Aug. 20, 2024: General Motors lays off 1,000 software staff

More than 1,000 software and services staff are on the way out at General Motors, signaling that it could be rethinking its digital transformation strategy. In an internal memo, the company said that it was moving resources to its highest-priority work and flattening hierarchies.

August 1, 2024: Intel removes 15,000 roles

Intel plans to cut its workforce by around 15% to reduce costs after a disastrous second quarter. Revenue for the three months to June 29 stagnated at around $12.8 billion, but net income fell 85% to $83 million, prompting CEO Pat Gelsinger to bring forward a company-wide meeting in order to announce that 15,000 staff would lose their jobs. “This is an incredibly hard day for Intel as we are making some of the most consequential changes in our company’s history,” Gelsinger wrote in an email to staff, continuing: “Our revenues have not grown as expected — and we’ve yet to fully benefit from powerful trends, like AI. Our costs are too high, our margins are too low. We need bolder actions to address both — particularly given our financial results and outlook for the second half of 2024, which is tougher than previously expected.”

July 4, 2024: OpenText to lay off 1,200

OpenText said it will lay off 1,200 staff, or about 1.7% of its workforce, in a bid to save around $100 million annually. It plans to hire new sales and engineering staff in other areas in 2025, it said.

June 4, 2024: Microsoft lays off staff in Azure division

Microsoft laid off staff in several teams supporting its cloud services, including Azure for Operations and Mission Engineering. The company didn’t say exactly how many staff were leaving.

April 4, 2024: Amazon downsizes AWS in a fresh cost-cutting round

Amazon announced hundreds of layoffs in the sales and marketing teams of its AWS cloud services division — and also in the technology development teams for its physical retail stores, as it stepped back from efforts to generalize the “Just Walk Out” technology built for its Amazon Fresh grocery stores.

April 1, 2024: Dell acknowledges 13,000 job cuts

Dell Technologies’ latest 10-K filing with the US Securities and Exchange Commission disclosed that the company had laid off 13,000 employees over the course of the 2023 fiscal year; it characterized the layoffs and other reorganizational moves as cost-cutting measures. “These actions resulted in a reduction in our overall headcount,” the company said. A comparison to the previous year’s 10-K filing, performed by The Register, found that Dell employed 133,000 people at that point, compared to 120,000 as of February 2024. Dell announced layoffs of 6,650 staffers on Feb. 6, but it is unclear whether those cuts were reflected in the numbers from this year’s 10-K statement.

Feb. 14, 2024: Cisco cuts 5% of workforce

Cisco will shed 4,200 of its 84,900 employees as it refocuses on more profitable areas of its business, including AI and security. The company’s last major round of layoffs was in November 2022. Cisco’s sales of telecommunications equipment have been hit by delays at telcos in rolling out equipment they have already purchased. AI, on the other hand, is a growing business for Cisco, with AI-related sales in the billions, and that’s before it announced its recent partnership with Nvidia, which is making bank on sales of chips for AI applications.

See news of earlier layoffs.

Meta promises it won’t release dangerous AI systems

According to a new policy document from Meta, the Frontier AI Framework, the company might not release AI systems developed in-house in certain risky scenarios.

The document defines two tiers of AI systems: those classified as “high risk” and those classified as “critical risk.” In both cases, these are systems that could help carry out cyber, chemical, or biological attacks.

Systems classified as “high risk” might facilitate such an attack, though not to the same extent as a “critical risk” system, which could result in catastrophic outcomes. Those outcomes could include, for example, the takeover of a corporate environment or the deployment of powerful biological weapons.

In the document, Meta states that if a system is “high risk,” the company will restrict internal access to it and will not release it until measures have been taken to reduce the risk to “moderate levels.” If, instead, the system is “critical risk,” security protections will be put in place to prevent it from spreading and development will stop until the system can be made safer.
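
The framework’s decision logic, as the document describes it, reduces to a simple mapping from risk tier to release posture. Here is a minimal sketch in Python; the tier names come from the document, while the function and the action strings are illustrative, not Meta’s implementation:

    # Schematic sketch of the release policy described in Meta's Frontier AI
    # Framework. Tier names come from the document; everything else is hypothetical.
    def release_decision(risk_tier: str) -> str:
        if risk_tier == "critical risk":
            # Halt development and lock the system down until it can be made safer.
            return "stop development; apply protections to keep the system from spreading"
        if risk_tier == "high risk":
            # Keep the system internal until mitigations reduce risk to moderate levels.
            return "restrict internal access; withhold release pending mitigations"
        return "eligible for release under standard review"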

Europe’s DMA gives another big boost to iOS platform decay

One of a handful of “independent” iOS app stores in Europe has begun distributing a porn app, advertising it as “Apple Approved.” The app isn’t approved by Apple, and the biggest app available on the AltStore is Epic’s child-focused Fortnite game. Fortnite publisher Epic Games also invested in the AltStore, which now seems on track to become Europe’s place for iOS porn.

What an achievement.

It’s all thanks to the Digital Markets Act (DMA). 

Tempers boil in Europe’s Hot Tub

The app (Hot Tub) is now available on the AltStore in Europe. The AltStore is one of the handful of independent stores to have appeared in the EU since implementation of the DMA. 

Originally a subscription-based service, the store became freely available once it received major funding from Epic Games, which has been a noisy critic of Apple’s App Store model. One thing the company hasn’t done with that funding is build an age-verification process, which means the porn app it provides is now easily available to young people in the region. 

Apple has no way to prevent this because the DMA both forces Apple to open up its app store ecosystem to third-party developers and removed its power to curate apps sold outside its store. And while the porn app is the first manifestation of the kind of content you’ll have to avoid when using third-party stores, and will want your kids to avoid, it won’t be the last such threat.

Apple has warned for years that enabling app side-loading on iOS will open the gates to dangerous, deceptive, and dubious apps. Despite these warnings, the European Commission made this threat a reality. You should see this as a sign of what’s to come, thanks to the actions of former European competition chief Margrethe Vestager, who seems pleased to have forced Apple to open up. 

Features of the app include a “teen” channel and content from PornHub, which recently admitted to unlawful monetary transactions involving sex trafficking proceeds. Shortly after the app appeared, AltStore also said it would donate its February Patreon earnings to organizations supporting sex workers and the LGBTQ community, which seems incongruous, given PornHub’s recent admission.

A regulation for decay

Europe seems to think that forcing Apple’s platforms to become worse will in some way promote competition and enable innovative European businesses to thrive. But it seems to do so at the cost of platform security and the acceleration of what author Cory Doctorow describes as “enshittification.” In other words, it’s regulation-forced platform decay. 

In a statement provided to Computerworld, Apple said: “We are deeply concerned about the safety risks that hardcore porn apps of this type create for EU users, especially kids. This app and others like it will undermine consumer trust and confidence in our ecosystem that we have worked for more than a decade to make the best in the world.”

Apple has always argued that the DMA damages the company by removing its ability to prevent such content from being published on its platforms. While there is an app notarization process, it covers technical verification and security rather than content. An app being notarized does not mean Apple has approved it, as is explicitly stated in the App Store guidelines.

Silence is complicity

What makes matters worse is that the AltStore claims the app has been approved by Apple.

“Contrary to the false statements made by the marketplace developer, we certainly do not approve of this app and would never offer it in our App Store,” Apple said. “The truth is that we are required by the European Commission to allow it to be distributed by marketplace operators like AltStore and Epic who may not share our concerns for user safety.”

Apple was concerned about the app before it hit the store and approached the European Commission in December to express its concerns. The Commission expressed no opposition to the app, I’m told. In other words, the people behind the app would be more truthful if they said their app was approved by Vestager, rather than Apple. I’m not convinced that would be how she sees it, but actions speak louder than words, and by not taking any action the Commission she leads gives tacit approval.

While perhaps some European millionaires will make a couple of bucks off this platform decay, it is doubtful the participants in the adult videos now easily available to European schoolchildren will see much of that largesse.

How to protect yourself and your kids

Despite the EU’s efforts, parents do have some choice. Apple has built Parental Controls that can block access to third-party stores. First, set up Parental Controls on your child’s device, then follow these steps:

  • Open Settings > Screen Time > Content & Privacy Restrictions.
  • Tap App Installations & Purchases.
  • Tap App Marketplaces and change the setting to Don’t Allow.
  • Optionally, tap Web and change it to Don’t Allow as well.

While this is likely to lead to your children protesting that they can no longer access Fortnite in order to spend your money on digital game items (for the benefit of Epic Games), this does at least mean you can restrict your children to a curated and trusted marketplace. While the DMA’s goal is to foster competition, its impact on platform security remains contentious, and incidents like this one absolutely illustrate the risks.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Adobe enhances Acrobat AI with contract intelligence to streamline enterprise workflows

Adobe has introduced contract intelligence features in Acrobat AI Assistant, targeting enterprises and professionals dealing with complex agreements. The new capabilities, announced on Tuesday, enable users to analyze legal terms, compare multiple contracts, and verify key details with AI-generated summaries while maintaining data security.

The expansion comes as businesses struggle with contract management inefficiencies, which can lead to financial and operational risks. According to an Adobe Acrobat survey, 64% of small business owners have avoided signing contracts due to uncertainty over terms, while nearly 70% of consumers admitted to signing agreements without fully understanding them.

“The new generative AI features can help customers grasp complex terms and spot differences between multiple agreements so they can better understand and verify the information in these important documents — faster and easier,” Adobe said in a statement.

Adobe claimed that the Acrobat AI Assistant “supplements LLM technologies with the same AI and machine learning models behind Liquid Mode” to provide a highly accurate understanding of document structure and content, which enhances the quality and reliability of AI Assistant’s outputs.

For enterprises handling high volumes of vendor contracts, purchase orders, and legal documents, these challenges could translate into financial and operational risks.

“This transformation will significantly impact several key areas,” said Kartikey Kaushal, senior analyst at Everest Group. “Automated contract drafting will evolve to create complex agreements with enhanced accuracy and consistency. AI-powered negotiation assistance will streamline the redlining process by automatically suggesting revisions aligned with organizational policies and historical data. Compliance monitoring will shift to a more proactive approach, with AI continuously scanning contracts for potential risks and regulatory concerns.”

“Additionally,” Kaushal added, “legal review cycles will be reduced from weeks to hours through advanced document analysis and risk assessment.”

The new contract intelligence features will be available on desktop, web, and mobile platforms. Both free Adobe Reader and paid Acrobat users can access Acrobat AI Assistant as an add-on subscription for $4.99 per month, Adobe announced in the statement.

AI-driven contract intelligence for business efficiency

With AI integration becoming a critical component of enterprise workflows, Adobe’s Acrobat AI Assistant will now offer advanced contract intelligence capabilities, allowing businesses to extract key terms, detect discrepancies across multiple contracts, and verify information with AI-generated summaries backed by citations.

“Customers open billions of contracts in Adobe Acrobat each month, and AI can be a game changer in helping simplify their experience,” Abhigyan Modi, senior vice president of Adobe Document Cloud, said in the statement. “We are introducing new capabilities to deliver contract intelligence in Adobe AI Assistant, making it easier for customers to understand and compare these complex documents and providing citations to help them verify responses, all while keeping their data safe.”

These enhancements are particularly relevant for finance teams reviewing sales contracts, legal teams managing compliance, and procurement departments assessing vendor agreements. Marketers and business executives can also use the AI assistant to analyze partnership contracts and service-level agreements without requiring legal expertise.

Addressing key business challenges

Adobe’s contract intelligence features aim to solve several long-standing challenges in enterprise contract management. One of the primary advantages is reducing contract review time.

With AI-powered summaries and automatic extraction of key terms, businesses can quickly assess obligations, deadlines, and potential risks without manually sifting through lengthy agreements, the statement added.
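
Adobe hasn’t published the mechanics behind these features, but the general pattern of prompt-based key-term extraction with verifiable citations is easy to illustrate. A minimal sketch using the OpenAI Python SDK as a generic stand-in LLM client; the model name and the prompt are assumptions, and none of this is Adobe’s API:

    # Hypothetical sketch of LLM-based contract key-term extraction; not Adobe's API.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def extract_key_terms(contract_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Extract the parties, effective date, term, payment "
                        "obligations, and termination clauses from this contract. "
                        "Quote the clause you relied on for each item so the "
                        "answer can be verified."
                    ),
                },
                {"role": "user", "content": contract_text},
            ],
        )
        return response.choices[0].message.content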

“For a manpower and consulting firm, contract management is a critical yet time-consuming process, involving agreements with corporate clients, independent consultants, and regulatory compliance bodies,” said Shalu Bindlish, director at Advaita Bedanta Consultants. “AI-powered solutions like Adobe’s Acrobat AI Assistant could streamline this by summarizing key clauses, identifying discrepancies, and ensuring consistency across multiple contracts. This could enhance efficiency in negotiating terms, managing service-level agreements, and ensuring compliance with labor laws across different regions.”

Ensuring compliance and accuracy is another critical aspect.

The AI-driven comparison tool, Adobe claimed, enables organizations to track changes between contract versions, helping to detect inconsistencies and discrepancies before finalizing agreements. This minimizes legal risks and prevents costly errors.

“However,” Bindlish cautioned, “while AI can accelerate contract analysis, it may not fully capture industry-specific nuances or evolving legal frameworks. A human review layer remains crucial to validate AI-driven insights and mitigate risks in high-stakes workforce and consulting agreements.”

Additionally, Adobe claims its solution enhances collaboration and workflow integration: enterprises can securely share contracts with stakeholders, request electronic signatures, and complete the approval process within the Acrobat platform.

This integration eliminates the need for multiple tools, making contract management more efficient and streamlined for teams handling high volumes of agreements, Adobe said in the statement.

Enterprise-grade AI with strong data security

As concerns over data privacy in AI applications continue to rise, Adobe has emphasized its commitment to security. Acrobat AI Assistant, the company claimed, operates within Adobe’s AI Ethics framework, ensuring that customer data is not used for AI training and remains protected from third-party access.

Experts, however, note that AI-powered contract management introduces compliance risks, particularly in highly regulated industries.

“AI-powered Contract Lifecycle Management (CLM) systems raise compliance and legal concerns related to data privacy, regulatory adherence, bias, and liability, particularly in regulated industries such as banking, financial services, healthcare, life sciences, and legal,” added Kaushal. “These systems process sensitive contractual information, heightening risks under laws like GDPR, CCPA, HIPAA, and SOX, which impose stringent data protection and access control requirements.”

Kaushal also pointed out that AI-generated contract recommendations could introduce legal uncertainties. “AI-generated contract recommendations may also lack legal accountability, leading to disputes if incorrect terms are applied. Additionally, AI models trained on biased or outdated data can introduce contractual inconsistencies and unfair terms, increasing litigation risks. Enterprises, therefore, emphasize transparency and explainability to gauge the effectiveness of AI-generated responses.”

Balancing AI efficiency with caution

Adobe’s AI-driven contract intelligence is positioned as a transformative tool for businesses looking to streamline workflows and mitigate legal risks. However, as AI’s role in contract management expands, organizations must balance automation with human oversight to ensure compliance, accuracy, and accountability.

With contract complexity increasing across industries, AI-powered solutions offer a potential path to efficiency — but experts warn that their effectiveness will depend on how well enterprises integrate AI with existing legal and compliance frameworks.

CIOs grapple with subpar global genAI models

With the number of generative AI trials soaring in the enterprise, it is typical for the CIO to purchase numerous large language models from various model makers, tweaked for different geographies and languages. But CIOs are discovering that non-English models are faring far more poorly than English ones, even when purchased from the same vendor.

There is nothing nefarious about that fact. It is simply because there is a lot less data available to train non-English models.

“It is almost guaranteed that all LLM implementations in languages other than English will perform with less accuracy and less relevance than implementations in English because of the vast disparity in training sample size,” said Akhil Seth, head of AI business development at consulting firm UST.

Less data delivers less comprehensiveness, less accuracy, and much more frequent hallucinations. (Hallucinations typically happen when the model has no information to answer the query, so it makes something up. Proud algorithms these LLMs can be.)

Nefarious or not, IT leaders at global companies need to deal with this situation or suffer subpar results for customers and employees who speak languages other than English.

The major model makers — OpenAI, Microsoft, Amazon/AWS, IBM, Google, Anthropic, and Perplexity, among others — do not typically divulge the volume of data each model is trained on, and certainly not the quality or nature of that data. Enterprises usually deal with this lack of transparency about training data via extensive testing, but that testing is often focused on the English language model, not those in other languages.

“There are concerns that this [imbalance of training data] would put applications leveraging non-English languages at an informational and computational disadvantage,” said Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions.

“The volume, richness, and variability in the underlying training data is key to obtaining high-quality runtime performance of the model.” Inquiries in languages that are underrepresented in the training data are likely to yield poor performance, he said.

The size difference can be extreme

How much smaller are the datasets used in non-English models? That varies widely depending on the language. It’s not so much a matter of the number of people who speak that language as it is the volume of data in that language available for training. 

Vasi Philomin, the VP and general manager for generative AI at Amazon Web Services (AWS), one of the leading AI as a Service vendors, estimated that the training datasets for non-English models are roughly “10 to 100 times smaller” than their English counterparts.

Although there is no precise way to predetermine how much data is available for training in a given language, Hans Florian, a distinguished research scientist for multilingual natural language processing at IBM, has a trick. “You can look at the number of Wikipedia pages in that language. That correlates quite well with the amount of data available in that language,” he said.
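
Florian’s heuristic is easy to automate: every Wikipedia language edition exposes its article count through the standard MediaWiki siteinfo API. A minimal sketch in Python (the language codes are examples):

    # Compare Wikipedia article counts across language editions as a rough proxy
    # for how much training data exists in each language (Florian's heuristic).
    import requests

    def wikipedia_article_count(lang: str) -> int:
        resp = requests.get(
            f"https://{lang}.wikipedia.org/w/api.php",
            params={"action": "query", "meta": "siteinfo",
                    "siprop": "statistics", "format": "json"},
            headers={"User-Agent": "data-availability-check/0.1"},  # Wikimedia asks for a UA
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["query"]["statistics"]["articles"]

    for lang in ("en", "fr", "ja", "cr"):  # "cr" is Cree, discussed below
        print(lang, wikipedia_article_count(lang))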

Training data availability also varies by industry, topic, and use case.

“If you want your language model to be multilingual, the best thing you can do is have parallel data in the languages you want to support,” said Mary Osborne, the senior product manager of AI and natural language processing at SAS. “That’s an easy proposition in places like Quebec, for example, where all their government data is created in both English and French. If you wanted to have an LLM that did a great job of answering questions about the Canadian government in both English and French, you’d have a good supply of data to pull that off,” Osborne said.

“But if you wanted to add an obscure indigenous language like Cree or Micmac, those languages would be vastly underrepresented in the sample. They would yield poor results compared to English and French, because the model wouldn’t have seen enough data in those indigenous languages to do well,” she said.

Although dataset size is extremely important in a genAI model, data quality is also critical. Even though there are no objective benchmarks for assessing data quality, experts in various topics have a rough sense of what good and bad content looks like. In healthcare, for example, it might be the difference between using the New England Journal of Medicine or Lancet versus scraping the personal website of a chiropractor in Milwaukee.

Like dataset size, data quality often varies by geography, according to Jürgen Bross, a senior research scientist and manager for multilingual NLP at IBM. In Japan, for example, IBM needed to apply its own quality filtering, partly because so many high-quality websites in Japan are behind strict paywalls. That meant that, on average, the available Japanese data was of lower quality. “Fewer newspapers and more product pages,” Bross said.

Quick fixes bring limited success

UST’s Seth said the dataset challenges with non-English genAI models are not going to be easy to overcome. Some of the more obvious mechanisms to address the smaller training datasets for non-English models — including automated translation and more aggressive fine-tuning — come with their own negatives.

“Putting a [software] translator somewhere in the inference pipeline is an obvious quick fix, but it will no doubt introduce idiomatic inconsistencies in the generated output and potentially even in the interpretation of the input. Even multilingual models suffer from this,” Seth said.
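
Structurally, the quick fix Seth describes looks like this. A minimal sketch, where translate() and llm_complete() are hypothetical placeholders for whatever machine-translation service and LLM endpoint an enterprise actually uses:

    # Sketch of the "translator in the inference pipeline" quick fix.
    # translate() and llm_complete() are hypothetical placeholders, not real
    # library calls; plug in your own MT service and LLM client.

    def translate(text: str, source: str, target: str) -> str:
        raise NotImplementedError("plug in your machine-translation service here")

    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client here")

    def answer_in_language(query: str, lang: str) -> str:
        english_query = translate(query, source=lang, target="en")   # idiom can be lost here
        english_answer = llm_complete(english_query)                 # strongest (English) model path
        return translate(english_answer, source="en", target=lang)   # and lost again on the way out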

Another popular countermeasure for non-English genAI models is using synthetic data to supplement the actual data. Synthetic data is typically generated by machine learning, which extrapolates patterns from real data to create likely data. The problem is that if the original data has even a hint of bias — which is common — synthetic data is likely to perpetuate and magnify that bias. Forgive the cliché, but it’s the genAI version of three steps forward, two steps back.

Indeed, LexisNexis’ Villanustre worries that this problem could get worse, hurting the accuracy and credibility of genAI-produced global analysis.

“There is an increasing portion of unstructured content on the internet that is currently created by generative AI models. If not careful, future models could be increasingly trained on output from other models, potentially amplifying biases and inaccuracies,” Villanustre said.

Practical (and sometimes expensive) approaches

So how can tech leaders better address the problem?

It starts during the procurement process. Although IT operations folks typically ask excellent questions about LLMs before they purchase, they tend to be overwhelmingly focused on the English version. It doesn’t occur to them that the quality delivered in the non-English models may be dramatically lower.

Jason Andersen, a VP and principal analyst with Moor Insights & Strategy, said CIOs need to do everything they can to get model makers to share more information about training data for every model being purchased or licensed. “There has to be much more transparency of data provenance,” he said. 

Alternatively, CIOs can consider sourcing their non-English models from regional/local genAI firms that are native to that language. Although that approach might solve the problem for many geographies, it is going to meet strong resistance from many enterprise CIOs, said Rowan Curren, a senior analyst for genAI strategies at Forrester.

“Most enterprises are far more interested in sourcing their foundation models from their trusted providers,” which are generally the major hyperscalers, Curren said. “Enterprises really want to acquire those [model training] capabilities via their deployments on AWS, Google, or Microsoft. That gives [CIOs] a higher comfort level. They are hesitant to work with a startup.”

AWS’s Philomin said his team is trying to split the difference for IT customers by using a genAI marketplace approach, borrowing the technique from the AWS Marketplace — which in turn had borrowed the concept from its Amazon parent company. Amazon’s retail approach allows users to purchase from small merchants through Amazon, with Amazon taking a cut.

Amazon’s genAI marketplace — called Bedrock — does something similar, providing access to a large number of genAI model makers globally. Although it certainly doesn’t mitigate all of the downsides of using a little-known provider in various geographies, Philomin argues that it addresses some of them.

“We are removing some of the risks, [such as] the resilience of the service and the support,” Philomin said. But he also stressed that those smaller players “are the seller of record, not AWS.” That caveat raises the question of how much help the AWS reseller role will be if something later blows up.
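
In practice, the marketplace pattern means a single client interface in front of many providers’ models. A minimal sketch using the boto3 Bedrock runtime Converse API; the model IDs are examples, and availability varies by region and account entitlements:

    # Calling two different providers' models through the single Bedrock interface.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def ask(model_id: str, prompt: str) -> str:
        response = bedrock.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        return response["output"]["message"]["content"][0]["text"]

    # Example model IDs; what you can call depends on region and account access.
    for model_id in ("anthropic.claude-3-haiku-20240307-v1:0",
                     "mistral.mistral-large-2402-v1:0"):
        print(ask(model_id, "Summarize our vendor contract obligations."))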

Another approach to address the training data disparity? Bypass the non-English models (for now) by employing bilingual humans who can comfortably interact with the English model.

“As a German native who works primarily in English, I’ve found that while LLMs are competent in German, they don’t quite reach native-level proficiency,” said Vincent Schmalbach, an independent AI engineer in Munich.

“For critical German-language content, I’ve developed a practical workflow. I interact with the LLM in English to get the highest quality output, then translate the final result to German. This approach consistently produces better results than working directly in German.”

The tactic that most genAI specialists agree on is that CIOs need to budget more money to test and fine-tune every non-English model they want to use. That money also needs to cover the additional processing and verification needed for non-English models. 
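
What that testing budget buys is, at a minimum, a per-language acceptance suite run before and after fine-tuning. A minimal sketch, assuming a small labeled test set per language and the same hypothetical llm_complete() client as in the earlier sketch:

    # Rough per-language accuracy check; llm_complete() is a hypothetical client,
    # and substring matching is a crude stand-in for a real grading rubric.
    def per_language_accuracy(
        test_sets: dict[str, list[tuple[str, str]]],  # lang -> [(prompt, expected), ...]
    ) -> dict[str, float]:
        scores = {}
        for lang, examples in test_sets.items():
            correct = sum(
                1 for prompt, expected in examples
                if expected.lower() in llm_complete(prompt).lower()
            )
            scores[lang] = correct / len(examples)
        return scores  # flag any language scoring far below the English baseline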

That said, fine-tuning can only help so much. The training data is the heart of the genAI brain. If that is inadequate, more fine-tuning can be akin to trying to save a salad with rotting spinach by pouring on more salad dressing.

And allocating additional budget to fine-tuning models can be difficult because the number of variables — such as the specific languages, topics, and industry in question — is too numerous to offer any realistic guidance. But IBM’s Florian does offer a tiny bit of optimism: “You don’t need a permanent budget increase. It’s just a one-time budget increase, a one-time expense that you take.”

In other words, once the non-English model is fully integrated and supplemented, little to no funding is needed beyond whatever the English model needs.

Looking ahead

There’s reason to hope that the disparity in the quality of output from models in various languages may be lessened or even negated in the coming years. That’s because a model based on a smaller dataset may not suffer from lower accuracy if the underlying data is of a higher quality.

One factor now coming into play lies in the difference between public and private data. An executive at one of the largest model makers — who asked to not be identified by name or employer — said the major LLM makers have pretty much captured as much of the data on the public internet as they can. They are continuing to harvest new data from the internet every day, of course, but those firms are shifting much of their data-gathering efforts to private sources such as corporations and universities. 

“We have found a lot of super high-quality data, but we cannot get access to it because it’s not on the internet. We need to get agreements with the owners of this data to get access,” he said.

Tapping into private sources of information — including those in various countries around the world — will potentially improve the data quality for some topics and industries, and at the same time increase the amount of good training data available for non-English models. As the total universe of training data expands, the imbalance in the amount of training data across languages may matter less and less. However, this shift is also likely to raise prices as the model makers cut deals with third parties to license their private information.

Another factor that could minimize the dataset size problem in the next few years is an anticipated increase in unstructured data. Indeed, highly unstructured data — such as that collected by video drones watching businesses and their customers — could potentially sidestep language issues entirely, as the video analysis could be captured directly and saved in many different languages. 

Until the volume of high-quality data for non-English languages gets much stronger — something that might slowly happen with more unstructured, private, and language-agnostic data in the next few years — CIOs need to demand better answers from model vendors on the training data for all non-English models.

Let’s say a global CIO is buying 118 models from an LLM vendor, in a wide range of languages. The CIO pays maybe $2 billion for the package. The vendor doesn’t tell the CIO how little training was done on all of those non-English models, and certainly not where that training data came from. If the vendors were fully transparent on both of those points, CIOs would push back on pricing for everything other than the English model. 

In response, the model makers would likely not charge CIOs less for the non-English models but instead ramp up their efforts to find more training data to improve the accuracy of those models.

Given the massive amount of money enterprises are spending on genAI, the carrot is obvious. The stick? Maybe CIOs need to get out of their comfort zone and start buying their non-English models from regional vendors in every language they need. 

If that starts to happen on a large scale, the major model makers may suddenly see the value of data-training transparency.

How would a potential ban on DeepSeek impact enterprises?

Chinese AI startup DeepSeek has been facing scrutiny from governments and private entities worldwide, but that hasn’t stopped enterprises from investing in the OpenAI competitor.

European regulators joined Microsoft, OpenAI, and the US government last week in independent efforts to determine if DeepSeek infringed on any copyrighted data from any US technology vendor. The investigations could potentially lead to a ban on DeepSeek in the US and EU, impacting millions of dollars that enterprises are already pouring into deploying DeepSeek AI models.

Anthropic unveils new framework to block harmful content from AI models

Anthropic has showcased a new security framework designed to reduce the risk of harmful content generated by its large language models (LLMs), a move that could have far-reaching implications for enterprise tech companies.

Large language models undergo extensive safety training to prevent harmful outputs but remain vulnerable to jailbreaks – inputs designed to bypass safety guardrails and elicit harmful responses, Anthropic said in a statement.