
3 reasons Microsoft needn’t fear DeepSeek

The release of the latest version of the Chinese genAI bot DeepSeek last month upended the tech world when its creators claimed it was built for only $6 million — far less than the hundreds of billions of dollars Microsoft, Google, OpenAI, Meta, and others have poured into genAI development. 

The shockwaves were immediate. GenAI-related stocks took a nosedive, losing hundreds of billions of dollars in value overnight. Many prognosticators said DeepSeek would undermine America’s genAI dominance — and threaten the country’s big AI companies, notably Microsoft.

Microsoft, which became a $3-trillion company based on its AI leadership, has perhaps the most to lose from DeepSeek’s arrival. It’s invested billions of dollars in AI already, and has said this year alone it will invest another $80 billion. Given that DeepSeek said it built its newest chatbot so cheaply, is Microsoft throwing billions of dollars away? Can it compete with a company that can build genAI at such a low cost?

Microsoft has nothing to fear from DeepSeek. Here are three reasons the Chinese upstart won’t hurt Microsoft — and might even help it.

DeepSeek’s savings aren’t as large as it claims

DeepSeek’s claim that it developed the latest version of its chatbot for $6 million was eye-popping, given the amount of money being poured into AI development and related infrastructure by so many other companies. It was even more eye-popping because the chatbot appears to be technically on par with OpenAI’s ChatGPT, which underlies Microsoft’s Copilot.

But DeepSeek’s claim was extremely misleading. The semiconductor research and consulting firm SemiAnalysis took a deep dive into the true costs of developing DeepSeek, based on information publicly provided by the Chinese company. SemiAnalysis found that the $6 million was “just the GPU cost of the pre-training run, which is only a portion of the total cost of the model. Excluded are important pieces of the puzzle like R&D and TCO of the hardware itself.”

Hardware costs, SemiAnalysis found, were likely well over a half billion dollars. It estimates that the total capital expenditure costs for the hardware, including the costs of operating it, were approximately $1.6 billion.

Beyond that, OpenAI claims that DeepSeek may have illegally used data created by OpenAI to train its model. The cost of obtaining training data can be billions of dollars, so we don’t know how much money DeepSeek would have had to spend if it didn’t use OpenAI’s data.

Although it’s still likely that DeepSeek spent much less than OpenAI, Microsoft, and other competitors to build its model, its costs are likely in the billions of dollars, not a mere $6 million. And it’s not at all clear that DeepSeek can generate enough revenue to keep up with its burn rate.

Businesses fear privacy and security breaches — and Chinese censorship

Cost savings are good. But even more important to most enterprises is the privacy and security of their data and business, and the privacy and security of their customers’ data.

Congress passed a law banning TikTok from the US based on fears that data is being gathered about users of the app and sent back to China. (US President Donald J. Trump has put a temporary hold on that ban.) But the kind of data that TikTok might gather and report back to China pales in comparison with the kinds of data DeepSeek might send. TikTok merely lets people post and watch videos. DeepSeek’s genAI chatbot has access to the sensitive personal, business, and financial data of enterprises and individuals that use it.

DeepSeek’s privacy policy admits upfront that it sends business and personal data to China, noting, “We store the information we collect in secure servers located in the People’s Republic of China.” The policy adds, “We may collect your text or audio input, prompt, uploaded files, feedback, chat history, or other content that you provide to our model and Services.”

Beyond that, Wired magazine adds: “DeepSeek says it will collect information about what device you are using, your operating system, IP address, and information such as crash reports. It can also record your ‘keystroke patterns or rhythms.’”

What might DeepSeek do with that data? Chinese companies are required by Chinese law to turn over any information to the Chinese government when requested. American businesses are unlikely to want to expose their data to the Chinese government in that way.

In addition, DeepSeek heavily censors its answers to requests, refusing to answer some questions, and providing Chinese propaganda for others, according to The New York Times. Businesses certainly don’t want to become arms of the Chinese government’s propaganda efforts.

Enterprises want off-the-shelf AI integration with business tools

What businesses want from genAI tools, above all, is to increase their productivity. Doing that requires integration with their applications, tools, and infrastructure. That’s exactly what Microsoft does with its entire Copilot product line, including Microsoft 365, OneDrive, SharePoint, Teams, GitHub, Microsoft’s CRM and ERP platform Dynamics 365, and others. 

DeepSeek offers nothing like that kind of integration. And without that, DeepSeek isn’t likely to make much progress against Microsoft — even if it can sell its chatbot more cheaply.

Microsoft itself doesn’t seem to be concerned, at least publicly. Microsoft CEO Satya Nadella even believes that the efficiencies DeepSeek has found in building AI will ultimately help his company’s bottom line.

“That type of optimization means AI will be much more ubiquitous,” he told Yahoo Finance. “And so, therefore, for a hyperscaler like us, a PC platform provider like us, this is all good news as far as I’m concerned.”

Yes, you can still upgrade Windows 10 PCs to Windows 11

Windows 10 has less than a year left before it hits its end-of-support deadline. Starting in October 2025, you’ll have to pay for security updates if you want to keep using Microsoft’s nearly nine-year-old operating system. That means now is the time to think about upgrading any Windows 10 PCs you’re still working with to the current Windows 11 OS.

If you believe the viral headlines, things are getting messy: Microsoft, the rumors say, is actually trying to stop people from grabbing free upgrades to Windows 11, and the company is even eliminating a workaround that made that path possible. Could that really be true?

I’ll make it easy for you: That isn’t actually the case. You can absolutely still upgrade old and officially “unsupported” Windows 10 PCs to Windows 11, just as you could years ago when Windows 11 was released. Not much has changed.

So let’s look at what’s actually going on with Windows 11 upgrades in 2025. I’ll show you how you can still upgrade to Windows 11 — even if Windows Update says a system isn’t compatible and Microsoft doesn’t want to help. I’ll even explain why Windows 11 might not be the right fit for your PC.

That’s right: Even if you can, you might not want to upgrade after all — and that last part is what the controversy is really about.

Want to stay on top of what’s happening with Windows? Sign up for my free Windows Intelligence newsletter. I’ll send you free Windows Field Guide downloads as a special welcome bonus!

Windows 11 upgrade workarounds, explained

First things first: The newest Windows 10 PCs can easily upgrade to Windows 11 with no workarounds needed. If your PC is officially eligible for an easy upgrade, just open the Windows Update settings page on your Windows 10 PC. You’ll see a big message encouraging you to upgrade with a few clicks.

The oldest Windows 10 PCs, on the other hand, genuinely can’t upgrade to Windows 11 at all. They just don’t have the required hardware. For one example, Windows 11 needs Trusted Platform Module (TPM) hardware in order to operate, as it relies on that for certain hardware-based security functions. If your PC doesn’t have it, Windows 11 can’t run.

But there’s a mysterious third category of PCs in the middle. These PCs aren’t “officially” eligible for a supported upgrade, and Windows Update will never offer it. But they can run Windows 11. All you have to do is use a special registry hack while installing the software.

Consider the TPM hardware situation:

  • A PC without a TPM can’t upgrade to Windows 11.
  • A PC with TPM 2.0 hardware can upgrade to Windows 11 in the normal way.
  • But a PC with TPM 1.2 hardware? That PC can upgrade to Windows 11 — but only with the “AllowUpgradesWithUnsupportedTPMOrCPU” registry hack.

Microsoft has always warned that PCs upgraded using this registry hack workaround are technically “unsupported.” Microsoft says your PC may not work properly if you take that route and that it may one day stop offering Windows 11 updates to PCs that used the hack to upgrade. These warning messages date all the way back to the release of Windows 11. They’re nothing new.

Meanwhile, it’s worth noting that Microsoft is the one that made this registry hack workaround in the first place! It’s an “officially unofficial” way to get many Windows 10 PCs onto Windows 11 — without Microsoft’s guaranteed support and with a “your mileage may vary” warning — but with Microsoft’s help, in a roundabout way.
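
For the technically inclined, here’s roughly what that workaround boils down to in script form. This is a minimal sketch, assuming the commonly documented registry location (HKLM\SYSTEM\Setup\MoSetup) and using Python’s built-in winreg module; it needs to run from an elevated session on the Windows 10 PC before you launch the Windows 11 installer, and it carries the same “unsupported” caveats as making the edit by hand.

  # Minimal sketch: set the Microsoft-created bypass value that lets PCs with
  # TPM 1.2 or unsupported CPUs upgrade to Windows 11. The key path below is
  # the commonly documented one (an assumption, not taken from this article).
  # Run from an elevated (administrator) Python session on Windows.
  import winreg

  KEY_PATH = r"SYSTEM\Setup\MoSetup"
  VALUE_NAME = "AllowUpgradesWithUnsupportedTPMOrCPU"

  # Create (or open) the key and set the DWORD value to 1 to enable the bypass.
  with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                          winreg.KEY_SET_VALUE) as key:
      winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

  print("Bypass value set; Windows 11 setup should now skip the TPM/CPU check.")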

Microsoft’s hack-breaking mix-up

To be clear, Microsoft never encouraged average PC users to use the registry hack trick and upgrade their Windows 10 PCs to Windows 11. That path was intended more for Windows geeks and other technically inclined people. But, again, the company did create the registry hack, and it even provided instructions for following the procedure on its website — complete with warnings, naturally.

Even so, Microsoft doesn’t want to help people follow this path anymore. As spotted by Neowin earlier this month, Microsoft removed instructions for using the registry hack from its website. That’s it!

To be crystal clear, the registry hack still works. If you want to upgrade a Windows 10 PC to Windows 11, you can use the same registry hack you could’ve used two years ago. You’ll just have to find it from another source — not Microsoft.com.

Is it possible Microsoft might get rid of the workaround entirely? Sure. But there’s no indication that will happen. Instead, it just doesn’t want to encourage average PC users to try this tactic.

If you’re an average person looking to keep getting security updates for your Windows 10 PC after October, Microsoft would much prefer you buy a new Windows 11 PC — or pay $30 for another year of security updates.

There was also a recent story about Microsoft’s Defender antivirus blocking a tool that helps bypass these Windows 11 system requirements. For a few days, the “Flyby11” tool was flagged as malware. That’s changed: Defender doesn’t block this application anymore. And, even if it did, this tool is only one of many ways to upgrade an unsupported Windows 10 PC to Windows 11.

The reality of Microsoft’s Windows 10 upgrade warnings

In an update to the official support page in question, Microsoft explains its position:

“This support article was originally published on September 30, 2021, when Windows 11 was first released to the public. At the time of publication and still today, the intention behind this support page is to detail ways of installing Windows 11 on devices that meet system requirements for Windows 11. If you installed Windows 11 on a device not meeting Windows 11 system requirements, Microsoft recommends you roll back to Windows 10 immediately.

“Windows 11 minimum system requirements remain unchanged….”

See? There’s nothing really new here. Microsoft hasn’t changed anything about Windows; all it did was modify a web page. Yes, it recommends you roll back to Windows 10 if you’ve upgraded a PC with the registry hack. It’s always recommended you do so and, as the official guidance goes, avoid this registry hack.

Even so, countless people upgraded their old Windows 10 PCs to Windows 11 with this workaround. And I’ve yet to hear from a single person who’s experienced a major issue after doing so.

If Microsoft were to change things in the future, the move would instantly break lots of existing Windows 11 PCs. That’d be a huge deal and the kind of controversy the company would likely rather avoid.

All this being said, you might want to at least think twice before rolling the dice on an unsupported upgrade. There’s a strong argument to be made for shielding yourself from even a potential mess, especially when it comes to a work-connected system. You could instead consider getting a new Windows 11 PC, sticking with Windows 10 and paying for security updates, or installing Linux or ChromeOS Flex to keep your PC running.

How to upgrade a Windows 10 PC to Windows 11

If, in spite of Microsoft’s warnings, you do want to upgrade an unsupported Windows 10 PC to Windows 11, the simplest way is to use the convenient Rufus tool to create a USB drive that’ll handle the installation and use the registry hack to skip the compatibility check at the same time.

Rufus offers a user-friendly way to use the Microsoft-created upgrade workaround. (Credit: Chris Hoffman, IDG)

This won’t work with all Windows 10 PCs, but it will work with many of them — even if Windows Update tells you otherwise.

At the end of the day, remember: Microsoft may warn you that you’re on your own if you do this, but it’s always issued that warning. It’s up to you to decide which path you want to take, just as it has been since the start of this situation.

Let’s stay in touch! Sign up for my free Windows Intelligence newsletter. I’ll send you three new things to try each Friday and free Windows Field Guides as a special welcome gift.

The irritating but amusing irony of Google’s Gemini interface

Look, if you’ve read this column for any length of time, you know I’m extremely guarded with my enthusiasm for Gemini and other similar large-language-model AI answer-bots.

Plain and simple, they just aren’t reliable as on-demand answer genies — despite being positioned as exactly that — and they’ve got a nasty habit of coughing up inaccurate info with an astonishing amount of confidence.

I’ve said it before, and I’ll say it again: If something is inaccurate or unreliable even 10% of the time (and that’s being generous, in this instance), it’s useful precisely 0% of the time.

But that foundational flaw isn’t what I want to talk about today — ’cause the truth is that for all of their weaknesses as information-surfacing systems, Gemini and its brethren do offer some genuine utility when it comes to other, more clerical functions. And plenty of folks are finding ways to work ’em into their workflow with lower-level tasks such as sorting through data and formatting spreadsheets (to name just a couple quick examples).

Clearly, Google wants Gemini to become an indispensable part of our lives both professionally and personally, as evidenced by the way it’s Google+’ing the service into our beaks at every possible opportunity. For as useful as it can be in certain limited scenarios, though, I can’t help but think Google is shooting itself in the foot with the way it’s presenting Gemini — in what’s an almost shockingly obvious-seeming miss, especially when it comes to the kind of more mainstream, not-just-early-adopter embrace the company is clearly aiming to achieve.

Let me show ya what I mean.

[Get level-headed knowledge in your inbox with my free Android Intelligence newsletter. Tips, insights, and other tasty treats await!]

Google’s Gemini interface puzzle

Right now, when you go to open up Gemini on Android on one of Google’s own Pixel devices, you’re greeted by a screen that looks a little somethin’ like this:

The Google Gemini full-screen interface, as seen on Android. (Credit: JR Raphael, IDG)

See that little tool-tip at the top? “More models available” — “choose the one that best fits my needs,” you say? Okay, cool. I can get on board with that. Let’s see what’s available.

A quick tap on that top part of the screen, and….

Google’s Gemini model list, from the app’s Android interface. (Credit: JR Raphael, IDG)

What. The. Schmidt. Is. This.

To clarify, this is the standard Google Gemini experience — not any sort of beta or early access setup. This is what anyone who buys a new Pixel phone or Samsung Galaxy gadget gets when they activate what’s now their device’s default and prominently promoted assistant service. It’s also what anyone with any other Android device is now being pushed to use in place of the classic Google Assistant, with ever-increasing aggressiveness.

And I don’t think I can emphasize just how overwhelming of an interface is sitting there, smacking you square in the peepers as soon as you take that step.

It’s not just on Android, either. You see the same selection when using Gemini on the web, too, maybe even with a more comically over-the-top appearance:

The Google Gemini interface in a desktop web browser. (Credit: JR Raphael, IDG)

Seriously — what reasonably normal person who doesn’t work within Google’s engineering department could possibly parse this? And who would want to?

Making sense of the Gemini model mess

For context, what we’re seeing here is a list of every different version of Gemini Google’s released over recent months. Gemini 2.0 is the current version, launched just last week. Within that 2.0 framework, you’ve got four different possibilities to ponder:

  • “2.0 Flash” — “for everyday tasks, more features”
  • “2.0 Flash Thinking Experimental” — “best for multi-step reasoning”
  • “2.0 Flash Thinking Experimental with apps” — “reasoning across YouTube, Maps & Search”
  • And “2.0 Pro Experimental” — “best for complex tasks”

Erm, right.

Beyond that muddled mess, you can also choose to go back to the older “1.5 Pro with Deep Research” version of Gemini for “in-depth answers” as well as the “1.5 Pro” or “1.5 Flash” model. Sure — why not, right?

Let me be as blunt as I can be about this: Mushy-brained of a mammal as I may be, I’m someone who closely follows Google and studies its services as part of my job. I’m more tech-savvy than most average animals (which isn’t saying much, I realize, but even so). And I’ve been immersed in this particular part of the tech universe for something like 7,947 years now.

And yet, I couldn’t even begin to tell you what all that stuff means, in plain English, or why you might want to pick one Gemini model over another. Heck, even after reading Google’s 4,000-character oeuvre about all the ins and outs of this latest Gemini 2.0 edition, I couldn’t explain to you what, exactly, makes it any different to use on a practical level compared to the earlier versions — nor, after spending quite a bit of time testing it, could I identify to you how it’s done things any better for me in any meaningful, measurable, and specific way.

And that, m’dear, is absolutely hilarious to me — because the entire point of Gemini is that it’s supposed to help us understand stuff and make our lives easier. But somehow, its very interface is so frickin’ complex and convoluted that we practically need another version of Gemini just to decipher it and help us understand which version of Gemini we’re supposed to use for what and why.

The irony is delightful. But all bemused chuckling aside, it’s also a pretty serious problem.

I mean, really: Imagine one of your less savvy co-workers — or maybe even your wacky cousin Winslow from West Virginia — following Google’s prompts to try out Gemini and then encountering this monstrous menu of mumbo-jumbo. There’s no way they’d be able to make heads or tails of it, and I’d be willing to wager they’d just close the thing once and for all the second they saw it.

And that’s to say nothing of what happens if they actually make it past that first impression and then realize they’ve gotta keep wading through that labyrinth and figuring out the appropriate Gemini version every single time they come up with a new task or question.

It just isn’t a good experience by any measure — but especially not for a service that promises to save you time and simplify your life. And that, suffice it to say, doesn’t exactly jibe with Google’s goal of getting everyone in the habit of using Gemini constantly across all of their devices.

Here’s the bottom line: Gemini isn’t a beta-level experimental feature anymore. It’s a prominent public service — perhaps even Google’s most prominent product at the moment. It’s now a core part of the company’s enterprise-focused Workspace offering. For Goog’s sake, there was even a Super Bowl ad about it. For individuals and companies alike, it’s clearly meant to be serious business — and yet, it still feels like a clunky developer play-space.

If Google really wants people to accept Gemini as an everyday tool for workplace productivity and beyond, they’ve gotta make it more accessible and aimed at actual regular-human use. It needs to be intuitive, approachable, and easy for anyone to understand. And you don’t need an accuracy-challenged AI assistant to tell you this isn’t the way to achieve that.

Get practical tips, personal recommendations, and plain-English perspective on all the latest Googley twists with my free Android Intelligence newsletter — three new things to know and try each Friday.

Enterprise tech spending to hit $4.9 trillion in 2025, driven by AI, cloud, and cybersecurity

Global enterprise technology spending is set to grow by 5.6% in 2025, reaching $4.9 trillion, as enterprises continue to prioritize investments in cybersecurity, cloud computing, generative AI, and digital transformation.

North America and the Asia-Pacific region are projected to be the fastest-growing markets, while software and IT services are expected to account for 70% of all global technology spending by 2029, according to a Forrester report.

“Despite geopolitical instability and a softening IT and telecom services market in 2024, technology investments remain resilient,” said the report titled Global Tech Market Forecast, 2024-2029.

While certain sectors of the IT and telecom services market are showing signs of slowing down, businesses are accelerating their adoption of AI-driven tools and cloud-based solutions to enhance productivity and efficiency.

“Over the next five years, technology investments will reshape industries at an unprecedented pace,” Michael O’Grady, principal forecast analyst at Forrester, said in the report. “GenAI, cloud technologies, and cybersecurity will take center stage, transforming how businesses operate and deliver value.”

O’Grady further said that companies that prioritize these investments “will not only strengthen their competitive edge but also achieve sustainable growth, but it’s important that they also balance their rapid tech investments with ongoing efforts to manage legacy systems and reduce technical debt.”

Software and AI investments fuel growth

Forrester projects that software spending will grow by 10.5% in 2025, making it the fastest-growing category within the global tech market. Enterprise investments in AI, cloud computing, and cybersecurity are expected to drive long-term expansion, with businesses increasingly shifting toward SaaS-based models.

Software will comprise 37% of global technology spending by 2029, nearly doubling its share from 2016, the report noted.

“The balance between AI hype and enterprise adoption is stabilizing as businesses focus on practical, ROI-driven applications,” said Charlie Dai, VP and principal analyst at Forrester.

He noted that while AI spending continues to grow, its measurable benefits are now evident in areas such as document automation, customer service, and employee augmentation. “Success depends on clear use cases, integration, and managing expectations, ensuring investments align with tangible business outcomes,” Dai said.

The report further added that with the demand for AI-driven infrastructure rising rapidly, AI server and storage markets are expected to see a 13% annual growth rate through 2030. OpenAI’s annualized revenue has already surged to $3.4 billion, up from $1 billion in mid-2023, highlighting the increasing adoption of generative AI solutions in enterprise environments.

This trend reflects broader corporate interest in AI-powered automation, which is transforming industries ranging from healthcare and finance to manufacturing and retail.

Cloud transformation on the rise

The IT services sector is expected to grow by 3.6% in 2025, as businesses continue to rely on consulting, IT outsourcing, and infrastructure-as-a-service (IaaS) to modernize their operations. The shift from traditional capital expenditures to operating expenditures through cloud-based services is accelerating, as enterprises seek more scalable and cost-effective solutions.

Dai noted that while many industries are shifting to an opex model for flexibility and scalability, a hybrid approach will persist. He explained that sectors such as manufacturing and utilities will likely continue investing in capex for critical, long-term infrastructure, whereas tech-driven industries will favor opex-based cloud solutions.

“Cost control, regulatory requirements, and strategic asset ownership will drive this decision,” he said.

IaaS is poised for substantial growth, with a projected compound annual growth rate of 16% through 2028. This expansion is being driven by enterprises migrating workloads to major cloud providers, including Microsoft Azure, AWS, and Google Cloud, the report added.

These investments are expected to improve operational agility, reduce infrastructure costs, and enhance security resilience in an increasingly digital business landscape.

Europe faces challenges

According to the report, technology spending trends vary significantly by region, with North America and Asia-Pacific expected to see the strongest growth.

North America is projected to experience a 6.1% increase in tech spending, with AI investments in financial services, retail, and media leading the way. Businesses in the US and Canada are accelerating cloud migration and cybersecurity initiatives, positioning the region at the forefront of enterprise IT innovation.

The Asia-Pacific region is forecasted to grow by 5.6%, with China, India, Japan, and Malaysia emerging as key drivers of expansion. India, in particular, is expected to have the region’s fastest-growing tech spending CAGR of 9.6% from 2024 to 2029, fueled by investments in AI, cloud, and digital transformation initiatives. The region’s investments in AI and semiconductor technologies continue to support enterprise adoption of next-generation computing solutions.
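
To put that compound rate in perspective, here’s a quick back-of-the-envelope calculation (my own illustration, not a figure from the Forrester report) of what a 9.6% CAGR adds up to across the 2024-2029 window:

  # Cumulative growth implied by a compound annual growth rate (CAGR).
  # The 9.6% figure is Forrester's projection for India, 2024-2029; the math
  # itself is generic.
  cagr = 0.096
  years = 5  # 2024 through 2029

  growth_multiple = (1 + cagr) ** years
  print(f"Total growth over {years} years: {growth_multiple:.2f}x, "
        f"about {100 * (growth_multiple - 1):.0f}% higher than the 2024 base")
  # Prints roughly 1.58x, i.e. spending ends the period about 58% higher.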

Meanwhile, Europe’s tech market is expected to grow at a slower rate of 5%, as economic challenges in Germany and Italy dampen enterprise spending.

Charlie Dai pointed to fragmented regulations, stricter data privacy laws, and higher operational costs as key barriers to faster enterprise IT growth in Europe.

He explained that cultural diversity and varying levels of digital maturity across countries further complicate scaling efforts for technology providers. “Europe faces slower enterprise IT growth due to fragmented regulations, stricter data privacy laws, and higher operational costs,” he said.

Latin America and the Middle East are also witnessing steady growth, with tech spending in these regions projected to rise between 5.2% and 5.4%. Governments and telecom operators are leading digital transformation efforts, with cloud adoption and AI integration playing a crucial role in modernizing public services and business operations.

Enterprise takeaways and strategic considerations

For enterprises, the forecast highlights several key takeaways that will shape IT investment strategies in the coming years. The growing dominance of cloud and AI-driven technologies is compelling organizations to rethink their approach to IT spending. Cybersecurity remains a top priority, with leading firms such as Palo Alto Networks forecasting a 16% revenue increase in 2024, reflecting heightened enterprise demand for advanced security solutions.

“Enterprises should prioritize a zero-trust strategy, integrating cybersecurity and compliance into every stage of AI and cloud adoption,” Dai said, adding that “enterprises must rethink their IT investment strategies by integrating cybersecurity and compliance into every stage of AI and cloud adoption.”

At the same time, organizations must navigate the challenges of modernizing legacy IT infrastructure. While cloud adoption is accelerating, two-thirds of global IT budgets are still allocated to maintaining existing systems. This underscores the complexity of balancing innovation with managing costs, ensuring compliance, and mitigating security risks.

The evolving regulatory landscape around AI and data protection is further influencing how enterprises deploy new technologies while safeguarding customer trust and operational integrity. With enterprises expected to spend nearly $5 trillion on technology in 2025, the decisions made today will have a lasting impact on business resilience and digital competitiveness.

Will the non-English genAI problem lead to data transparency and lower costs?

It’s become increasingly clear that quality plunges when moving from English to non-English-based large language models (LLMs). They’re less accurate, and there’s a serious lack of transparency around data training, both in terms of data volume and data quality.

The latter has long been a problem for generative AI (genAI) tools and platforms.

But enterprises aren’t paying less for less-productive models, even though the value they offer is diminished. So, why aren’t CIOs getting a price break for non-English models? Because without any data transparency, they rarely know they’re paying more for less. 

There are a variety of reasons why model makers don’t disclose their data training particulars. (Let’s not even get into the issue of whether they have legal rights to do whatever training they did — though it’s tempting to do so, if only to explore the hypocrisy of OpenAI complaining about DeepSeek not getting permission before training on much of its data.) 

Speaking of DeepSeek, don’t read too much into the lower cost of its underlying models. Yes, its builders cleverly leveraged open source to find efficiencies and lower pricing, but there’s been little disclosure of how much the Chinese government helped with DeepSeek’s funding, either directly or indirectly. 

That said, if DeepSeek is the cudgel that puts downward pressure on genAI pricing, I’m all for it — and IT execs should be, too. But until we see evidence of meaningful price cuts, they should use the lack of data transparency in non-English models to try to get model makers’ price tags out of the stratosphere.

The non-English issue isn’t really about the language, per se. It’s more about the training data that is available within that language. (By some estimates, the training datasets for non-English models could be just 1/10 or even 1/100 the size of their English counterparts.)

Hans Florian, a distinguished research scientist for multilingual natural language processing at IBM, said he uses a trick to guesstimate how much data is available in various languages. “You can look at the number of Wikipedia pages in that language. That correlates quite well with the amount of data available in that language,” he said.
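
As a rough sketch of that heuristic (my own illustration, not IBM’s methodology), the public MediaWiki API exposes per-language article counts that can be compared directly:

  # Rough sketch of the "count Wikipedia pages per language" heuristic.
  # Uses the standard MediaWiki siteinfo statistics endpoint; purely
  # illustrative, not IBM's actual method.
  import json
  import urllib.request

  def article_count(lang: str) -> int:
      url = (f"https://{lang}.wikipedia.org/w/api.php"
             "?action=query&meta=siteinfo&siprop=statistics&format=json")
      with urllib.request.urlopen(url) as resp:
          return json.load(resp)["query"]["statistics"]["articles"]

  english = article_count("en")
  for lang in ("de", "hi", "sw"):  # German, Hindi, Swahili as examples
      print(f"{lang}: {article_count(lang) / english:.1%} "
            "of English Wikipedia's article count")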

To further complicate the issue, sometimes it’s not about the language or the available data in that language. It can — logically enough — be about data related to activities in the region where a particular language is dominant.

If model makers start seeing meaningful pricing pushback from a lot of enterprises concerned about model quality, they have only a couple of options. They can selectively — and secretly — negotiate lower prices for non-English models for some of their customers — or they can get serious about data transparency.

Because LLM makers have invested billions of dollars in genAI, they aren’t going to like the idea of lower pricing. That leads to the second option: deliver full transparency to all customers about all models — both in terms of quantity and quality — and price their wares accordingly.

Given that quality is almost impossible to represent numerically, that will mean disclosing all training data details so each customer can make their own determination of quality for the topics, verticals and geographies they care about.

The disparity between what a model can deliver and what an enterprise is forced to pay is at the heart of why CIOs are still struggling to deliver genAI ROI.

Obviously, lower pricing would be the best way to improve the ROI for genAI investments. But if that’s not going to happen anytime soon, full data transparency is the next best thing.

There is a catch: Model makers almost certainly realize that full data-training transparency would likely force them to lower prices, since it would showcase how low-quality their data is.

Note: I say that their data is low-quality as if it’s a given; it is absolutely a given. If model makers believed they were using lots of high-quality data, far from resisting transparency, they would embrace it. It would be a selling point. It might even be useful for propping up prices. High quality usually sells itself.

Their refusal to deliver any kind of data-training transparency tells you everything you need to know about their quality beliefs, and about the state of the market at the moment. 


Monday.com aims to be an ‘AI-First’ platform with latest enhancements

Monday.com has accelerated its push into artificial intelligence (AI) with the announcement this week of its AI vision, which includes three areas of focus: AI Blocks, embedded Product Power-ups, and a forthcoming Digital Workforce of AI agents.

The new features, the company said in its announcement, “will give SMBs and mid-market companies a competitive advantage to scale and shift business dynamics without increasing resources, and enable enterprise and Fortune 500 companies to accelerate processes often slowed by scale.”

“Our ambition is to make monday.com an AI-first platform, where AI isn’t just an add-on but a core part of how businesses operate,” Or Fridman, AI product group lead, told Computerworld. “We believe AI has the potential to solve some of the most demanding business challenges, whether it’s making projects more predictable, improving decision-making, or automating complex workflows.”

AI Blocks in action

With AI Blocks, available in Monday Pro and Enterprise plans, users can set up their Monday board to automatically use AI on their behalf. Actions are customizable to specific projects and support both new and existing workflows without the need for technical expertise.

“It doesn’t require any prompting knowledge, and in less than 30 seconds, customers can set up their first AI Block,” said Fridman. “Ease of use is a major differentiating factor for us. Our platform is built on building blocks, helping people use technology easily.”

Each AI Block wraps an AI capability with context from a user’s work data. Customers choose the input (emails, documents, board column data), identify what they want the AI to do, and dictate where the output should be located (in a document or on a board column). The AI can be instructed, for instance, to prioritize tasks, assign workers to projects, extract information, assign labels, translate and improve text, and summarize updates.
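
Conceptually, that’s a simple input, action, output pipeline. The snippet below is a hypothetical illustration of that pattern in plain Python; the names and structure are mine, not monday.com’s actual API, and call_model() is a stand-in for whatever AI capability a block wraps.

  # Hypothetical sketch of the input -> AI action -> output pattern described
  # above. Not monday.com's API; call_model() is a placeholder for a real model.
  from dataclasses import dataclass
  from typing import Callable, List

  @dataclass
  class AIBlock:
      source_columns: List[str]      # where the block reads its context
      action: Callable[[str], str]   # the wrapped AI capability
      target_column: str             # where the result is written

  def run_block(block: AIBlock, board_row: dict) -> dict:
      context = " ".join(str(board_row[c]) for c in block.source_columns)
      board_row[block.target_column] = block.action(context)
      return board_row

  def call_model(prompt: str) -> str:
      # Placeholder for a real model call; returns a canned label here.
      return "High priority" if "urgent" in prompt.lower() else "Normal"

  row = {"task": "Fix urgent billing bug", "notes": "Customer escalation"}
  block = AIBlock(source_columns=["task", "notes"], action=call_model,
                  target_column="priority")
  print(run_block(block, row))  # adds 'priority': 'High priority' to the row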

“Monday.com’s AI Blocks are designed to integrate AI capabilities into existing processes without causing a lot of disruption,” said Melody Brue, VP and principal analyst at Moor Insights and Strategy. “The goal is to make AI more accessible to users who may not have specialized technical knowledge.”

Some use cases for AI Blocks, the company said, include lead categorization and file extraction (such as in HR). 

For example, a hiring manager could create a board to identify qualified candidates who should be moved on to the interview phase. She can add an AI-powered column with the task “assign labels,” which, based on her specific criteria, such as how many years of experience, skills and competencies a candidate must have, can automatically detect hireability. AI will then autofill columns with information from applicable candidate resumés, including email, phone number, and current employment, so that the recruiter can start making phone calls.

“AI Blocks provide the most help in aggregating information for reporting, reducing the time spent looking for relevant information, and then offering suggestions for the end user to communicate,” said Margo S. Visitacion, Forrester VP and principal analyst.

She emphasized, “users still need to verify the information is correct, but it can trim a lot of time from those tasks.”

Product Power-ups and Digital Workforce

Along with AI Blocks, Monday’s portfolio now has AI-generated Product Power-ups embedded throughout, which can automatically, for instance, rank best skill matches for tasks and reassign them accordingly, or identify project issues that require attention, such as resource scheduling or conflicts that might be causing delays.

“Our AI can proactively surface risks in large projects, making them more predictable and manageable,” said Fridman.

Finally, Monday’s upcoming “Digital Workforce” is a group of autonomously-operating AI agents that can perform tasks for users. The company will roll out its first agent, a “monday expert,” in March. This agent is designed to assist with onboarding new users and provide guidance around Monday features. Other planned agents include “deal facilitator” and “service analyzer.”

Fridman explained that customers can choose the relevant digital worker from the Monday marketplace and interact with it through chat and other in-product experiences. “It’s important to us to give value as quickly as possible without a long onboarding experience,” he said, pointing out that the AI workers will quickly learn, and adjust to, user preferences.

For instance, in customer support, an “AI-first digital service worker” could automatically resolve tickets so that human agents can move on to something more complex.

“If they work the way they are intended, they’ll make automating repeatable activities an integrated part of how an information worker executes their day job,” said Forrester’s Visitacion, “which can cut down on duplication of activities, task switching or potentially incorrect aggregation.”

Addressing concerns around predictability, reliability, safety

All told, the potential for AI agents is “enormous” when it comes to worker productivity, said Visitacion. “Companies can ask more complex questions and get meatier answers to support decision-making or automate the right workflows,” she said.

But she emphasized: “To get there, however, companies really must focus on structuring data and ensuring security and trust to ensure reliability.”

That’s a big, overarching concern: Addressing AI predictability, reliability and safety. Fridman said that Monday’s AI features follow the same data residency standards that exist across its portfolio, including multi-region support and encryption to ensure the privacy and security of customer data.

Further, “we fine-tune and optimize each AI engine and AI Block using proprietary techniques to ensure high quality, accuracy and built-in safeguards.”

Ultimately, agents will enable businesses to operate at a “previously impossible level, regardless of size,” Fridman said. These new capabilities are a “game-changer,” he said, particularly for SMBs that often lack the resources to scale like their larger enterprise counterparts. For instance, instead of hiring additional staff, SMBs can ‘hire’ digital workers to complete time-consuming, repetitive tasks such as managing operations, automating client communications, or streamlining order fulfillment.

“This allows them to scale faster, serve more customers, and focus on growth, all without the overhead of expanding their teams too quickly,” said Fridman. He added: “Our mission is to democratize AI, making it accessible and impactful for every business, not just the big players.”

Fridman emphasized that in 2025 and beyond, Monday is doubling down on AI-driven automation and intelligence.

“We want businesses to use AI not just for small tasks, but to redefine how work gets done,” he said. “That means evolving our digital workforce to handle increasingly sophisticated workflows. Our vision is a future where any company can build, automate, and optimize their operations using AI without needing data scientists or complex setups.”

In its announcement, Monday said, “To ensure AI remains accessible, monday.com offers a flexible and transparent pricing model for AI Blocks. Every plan includes 500 free AI Credits per month, providing teams with a simple way to explore the power of AI. For organizations with more significant needs, additional credits are available through buckets that scale with usage. Options range from a starter pack of 2,500 credits, geared towards lower usage, to enterprise buckets of 250,000 credits, providing flexibility for businesses of all sizes.”

AI ESM now GA

Monday also announced that its AI-first enterprise service management (ESM) platform is now generally available to all customers.

Just out of beta, the ESM offers AI-powered ticket resolution and automatic ticket classification and routing. It also features comprehensive service team dashboards intended to provide real-time insights into ticket trends, service performance, and organization needs. Finally, a customizable customer portal allows users to access self-service options, submit tickets, track status, and communicate with the service team.

Paris AI Action Summit: US and UK refuse to sign accord

The escalating electricity demands of artificial intelligence systems are raising concerns about the technology’s sustainability — but that’s apparently of little concern to the governments of the US and the UK.

They were among the invitees at the Paris AI Action Summit that refused to sign the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet,” the summit’s final declaration. The statement did win the approval of 58 countries, including China and India, and two supranational groups, the 27-member European Union (EU) and the 55-member African Union.

That’s more signatories than the Bletchley Declaration attracted at the AI Safety Summit organized by the UK in November 2023. The US and UK did sign that one, as did the EU, China, and India, among others.

Signatories of the Paris summit statement agreed on six priorities:

  • Promoting AI accessibility to reduce digital divides
  • Ensuring AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy, taking into account international frameworks for all
  • Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration driving industrial recovery and development
  • Encouraging AI deployment that positively shapes the future of work and labor markets and delivers opportunity for sustainable growth
  • Making AI sustainable for people and the planet
  • Reinforcing international cooperation to promote coordination in international governance

Inclusion excluded

The US refusal to sign was likely triggered by the second priority of making AI inclusive: President Trump has ordered his administration to eliminate any reference to diversity, equity, and inclusion (DEI) from government websites.

But safety and sustainability are also not acceptable goals for the US, according to Vice President JD Vance, who addressed the summit on Tuesday morning.

“We stand now at the frontier of an AI industry that is hungry for reliable power and high-quality semiconductors,” Vance said. “If too many of our friends are deindustrializing on the one hand and chasing reliable power out of their nations and off their grids with the other, the AI future is not going to be won by handwringing about safety.”

Vance’s remarks about chasing out reliable power are likely a reference to moves in Europe to reduce reliance on electricity generated by burning oil and gas, European supplies of which have been disrupted by Russia’s invasion of Ukraine, in favor of renewable but weather-dependent sources such as solar- or wind-powered systems.

Coordination in AI governance is also going to be a point of contention. Even as the EU AI Act’s provisions begin to enter force, Vance warned summit attendees that “Excessive regulation in the AI sector could kill a transformative industry just as it’s taking off.” The US, he said, “will make every effort to encourage pro-growth AI policies, and I’d like to see that deregulatory flavor making its way into a lot of the conversations at this conference.”

According to the BBC, the UK government also cited “global governance,” along with national security concerns, as reasons it refused to sign the Paris summit’s declaration.

America first

Vance was clear that his top priority is not accessibility or inclusion, but the US.

“This administration will ensure that American AI technology continues to be the gold standard worldwide, and that we are the partner of choice for others, foreign countries and certainly businesses as they expand their own use of AI,” he said.

But access to that technology will not be open to all.

“Some authoritarian regimes have stolen and used AI to strengthen their military, intelligence, and surveillance capabilities; capture foreign data; and create propaganda to undermine other nations’ national security,” Vance told summit attendees, adding, “This administration will block such efforts. We will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections, and close pathways to adversaries attaining AI capabilities that threaten all of our people.”

Billions in funding

Shortly after his inauguration, Trump announced that US AI companies would invest $500 billion in Project Stargate, designed to ramp up AI infrastructure in the US — although even with support from investors in Japan and the United Arab Emirates, barely a quarter of that sum is committed so far.

Vance predicted that investment would continue apace: “Of the $700 billion, give or take, that is estimated to be spent on AI in 2028, over half of it will likely be invested in the US,” he said.

But the US doesn’t have a monopoly on big projects. At the Paris summit, European Commission President Ursula Von der Leyen announced the EU’s intention to mobilize €200 billion ($207 billion) in investment in AI.

There’s some sleight of hand going on there too: While Von der Leyen talks of “mobilizing” €200 billion, only €20 billion of that is public money, and she’s expecting private enterprise to make up the rest.

An AI agent could help you buy your next car

Capital One has launched an AI agent designed to help customers with one of the more difficult and confusing purchase decisions: buying a car.

The new chatbot, called Chat Concierge, will help customers with everything from researching vehicles and scheduling test drives to exploring financing options. The generative AI-powered assistant, one of many such projects at the financial institution, simplifies car buying by answering basic questions online, with no dealership visit needed, and then directs customers to existing online services.

Although Capital One’s auto loans are its smallest loan business, they still account for about 28% of its business, or $75 billion.

Chat Concierge is considered a customer service chatbot — a generative AI (genAI) automation tool that can handle simple user questions. The new service stands in contrast to Capital One’s own study last fall that found the in-person dealership experience remains vital for car buyers, even when they use digital tools to streamline early stages of the process. The report showed 88% of car buyers conduct at least half of the car buying process in person; 60% of buyers said sales reps contribute to trust.

“Car buyers’ trust in dealers is a key indicator of how transparent they perceive the car buying process — even with access to digital tools to complete key elements of their purchase,” the study concluded.

Even so, Sanjiv Yajnik, president of Financial Services at Capital One, said Chat Concierge will drive the future of car buying. “By leveraging our own internally developed AI tools to provide personalized, efficient, and transparent interactions, Capital One is reimagining car buying and setting a new standard for customer experience in the automotive industry,” Yajnik said in a statement.

Capital One’s AI assistant is part of a larger trend of companies deploying AI agents to tackle tasks often performed by entry-level employees, or to create efficiencies for high-level workers.

In the simplest sense, an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task. The most basic AI agents include chatbots such as OpenAI’s ChatGPT, Microsoft’s Copilot, and Google Bard; they can answer user questions on a myriad of topics. AI agents can also act as spam filters, such as email spam detectors that use keyword matching, or power smart devices such as thermostats that follow set rules for raising or lowering the temperature based on environmental conditions.
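
To make that definition concrete, here’s a bare-bones, generic sketch of an agent loop. It is not Capital One’s implementation, and call_llm() is a placeholder for any real model API.

  # Generic agent loop: a model decides which tool to call, the surrounding
  # program executes it. Purely illustrative; call_llm() stands in for a real
  # LLM API and the tools are toy functions.
  def call_llm(prompt: str) -> str:
      # Placeholder: a real agent would send the prompt to a hosted model.
      if "truck" in prompt.lower():
          return "TOOL:list_inventory"
      return "TOOL:schedule_test_drive"

  TOOLS = {
      "list_inventory": lambda: ["Truck A ($28,000)", "Truck B ($35,000)"],
      "schedule_test_drive": lambda: "Test drive booked for Saturday at 10am",
  }

  def run_agent(user_request: str):
      decision = call_llm(user_request)           # the LLM picks an action
      tool_name = decision.removeprefix("TOOL:")  # parse the model's choice
      return TOOLS[tool_name]()                   # the program acts on it

  print(run_agent("Show me the trucks you have"))
  print(run_agent("Book the cheapest one for a test drive"))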

As AI-powered agents improve, they enable more personalized and effective customer service than early chatbots. Banks are using the genAI tools to resolve complex issues, setting new standards for efficiency. By leveraging customer data, AI assistants provide 24/7 support, handling thousands of inquiries at once, according to Arthur O’Connor, PhD, academic director of data science at the City University of New York (CUNY) School of Professional Studies.

“One of the most interesting developments is emotion recognition (ER), an emerging technology enabling chat bots to detect and respond to customer emotions, allowing for more empathetic and effective interactions, and thus engender customer satisfaction and loyalty,” O’Connor said.

Last month, Google DeepMind announced Project Astra, a research initiative aimed at developing a universal AI assistant that can process text, images, video, and audio inputs, enabling more natural and context-aware interactions. A key feature of Project Astra is its multimodal capabilities, allowing users to engage through various means such as speaking, showing images, or sharing videos. The assistant can remember details from past conversations and utilize tools such as Google Search, Maps, and Lens to provide informed responses.

The US Air Force recently announced it’s experimenting with a chatbot called NIPRGPT that will allow service members to engage in human-like conversations to complete various tasks, including drafting correspondence, preparing background papers, and assisting with coding.

Many AI agents will be integrated into existing software applications without users even knowing it. For example, Google Maps Navigation uses an AI model combined with traffic data and predicted conditions to provide the best route for drivers. Virtual Personal Assistants, such as Apple’s Siri, Amazon’s Alexa, or Google Assistant, use agents to predict user needs.

There are also learning AI agents whose algorithms are sophisticated enough to improve performance based on past experiences. Those systems include consumer recommendation services used on Netflix, Spotify, and YouTube, which all rely on AI to learn user preferences.

Agents that can become “smarter” include DeepMind’s AlphaGo, which learns and adapts to play the boardgame Go at a superhuman level.

Capital One’s Chat Concierge uses multiple AI agents that collaborate to mimic human reasoning. Instead of just providing information, the agents take action based on the user’s requests. They understand natural language, create action plans, validate them to avoid mistakes, and explain everything to the user, according to the bank.

For example, if a buyer asks for a list of trucks and then requests a test drive of the least expensive option, Chat Concierge can handle both tasks seamlessly. Concierge will also:

  • Simulate and validate plans to ensure they meet the car buyer’s needs and business policies.
  • Generate and deliver clear, natural language explanations of all the steps to the car buyer.
  • Let car buyers explore financing without leaving the dealer’s website.
  • Connect buyers directly to dealers through dealer websites, a navigator platform, and customer relationship management (CRM) apps, integrating customer info into the dealer’s CRM.
  • Work seamlessly with both Capital One and non-Capital One products.

“Capital One has a long history of using data, technology, and analytics to deliver superior financial services products and services for millions of customers,” said Prem Natarajan, chief scientist and head of enterprise AI at Capital One. “The launch of Chat Concierge is a key milestone in our customer-centered AI journey as we continue to focus on solving some of the most challenging problems in finance with technology.”

EU seeks to invest €200 billion in AI

The European Commission announced the mobilization of €200 billion (about $207 billion) for the InvestAI plan at the AI Action Summit in Paris on Tuesday, with the aim of enabling “open and collaborative development” of artificial intelligence in Europe. The announcement came from Commission President Ursula Von der Leyen, who also opened a new €20 billion EU fund for AI gigafactories.

The strategy will finance four future AI gigafactories in the European Union (EU), which will specialize in training the largest and most complex models. These facilities will house around 100,000 state-of-the-art chips, approximately four times more than the centers currently under construction.

Companies of all sizes are intended to have access to this computing power, with a focus on complex industrial and “mission critical” applications. Initial funding will come from existing schemes such as the Digital Europe Program, Horizon Europe, and InvestEU.

The Commission already announced the first seven AI factories in December and will soon follow with the next five, which together will represent the largest public investment in AI in the world and, it hopes, unlock more than 10 times that amount in private investment.

A European AI Research Council will also be set up. Von der Leyen said, “We want AI to be a positive and growth force. We are doing this through our European approach, based on openness, cooperation and excellent talent. But we still need to leverage it. That’s why this unique public-private partnership, similar to a CERN for AI, will enable all our scientists and companies, not just the biggest ones, to develop the cutting-edge large-scale models needed to make Europe an AI continent.”

The Brave browser gets built-in functionality to run custom scripts

It’s been possible for a while now to modify web pages using popular extensions such as Tampermonkey and Greasemonkey, which can be useful for avoiding annoying ads or tracking attempts.

Now, starting with version 1.75 of the Brave browser, you don’t have to download this kind of add-on — because the feature is already built in. According to Bleeping Computer, the new feature can be used for everything from adding support for keyboard shortcuts to stopping the automatic playback of videos.

Information on how to write your own scripts is available on the Brave website.