Microsoft AI CEO Mustafa Suleyman writes on LinkedIn that the company is now making OpenAI’s reasoning model o1 free to use for all users of Microsoft’s AI assistant Copilot.
Microsoft calls the functionality itself “Think Deeper.” The o1 model spends more time (about 30 seconds) considering the instructions it receives from several different angles and perspectives, then delivers a more comprehensive response than most genAI tools.
Previously, interested users had to pay at least $20 per month to gain access to the o1 model via one of OpenAI’s ChatGPT subscriptions. Think Deeper had also been available as a preview in Copilot Labs, but only for paying Copilot Pro users.
Italy’s data protection authority, Garante, has ordered the DeepSeek chatbot blocked in the country. The decision comes after the Chinese companies providing the chatbot service failed to provide the authority with sufficient information about how users’ personal data is used.
Reuters writes that Garante wants to know, among other things, what personal data DeepSeek collects, from what sources, for what purposes, on what legal basis, and whether the data is stored in China.
As a result, DeepSeek is no longer available through Apple’s or Google’s app stores in Italy. Garante has also launched an investigation. DeepSeek has not commented on the matter.
If you’re like about 70% of computer users worldwide, you use Google’s Chrome browser as your gateway to the web, from conducting research and catching up on news to emailing and interacting with cloud apps. There are several tools built into Chrome that you might not know about, but should. They can improve your browsing experience significantly, enhancing productivity, organization, security, search, and more.
Even if you have already heard about some of these tools, consider this guide a refresher and encouragement to use them.
1. Chrome profiles: Keep work and personal browsing separate
You can add more than one user profile to Chrome. Each profile will have its own set of bookmarks, browsing history, website logins, and other data. For example, you can create one profile specifically for your work-related browsing, so that bookmarks and websites associated with your job are kept separate from your personal activity online.
To create another profile: Click your headshot or current profile icon toward the upper right in Chrome. On the profile panel that opens, click Add new profile.
Click your profile icon, then select Add new profile.
Howard Wen / IDG
A large panel will open over the screen. You can create a new profile by signing in with another Google account. If this account already has Chrome profile data (bookmarks, browsing history, logins) associated with it, these will be synced to your PC.
You can sign into an existing Google account or create a profile that’s not connected to a Google account.
Howard Wen / IDG
Or you can choose to create a new profile without signing in with another Google account. Browsing information that’s created in Chrome while using this new profile will be saved only on your PC.
Naming a new Chrome profile and choosing a color scheme.
Howard Wen / IDG
After you create the new profile, it’ll appear on the panel of your first profile. Click the name of this new profile; this will launch another instance of Chrome that will let you browse under that profile. You can run two (or more) instances of Chrome on your PC, each with a different user profile.
2. Password checkup: Review (and fix) your website logins
By default, Chrome automatically saves your usernames and passwords for websites that require a login in a service called Google Password Manager. If you don’t use a dedicated password manager app, GPM is a convenient tool for storing and managing login info. (See our separate guide to Google Password Manager.) It’s easy to “set and forget” passwords, so it’s a good idea to periodically check the health of your logins, updating usernames or passwords as needed.
Click the three-dot icon at Chrome’s upper right. On the menu that opens, select Passwords and autofill and then Google Password Manager. GPM will open in a new browser tab, where you’ll see the login information for the websites you’ve saved to GPM. You can click a website name to change or delete your username or password for it.
An important feature to use is the Checkup tool. Along the left, click Checkup. Chrome will analyze all of your website passwords, rating which have weak security and notifying you if any have been compromised or if you’ve reused any across websites. You can click to see a list of the offending passwords, and the password manager’s interface will guide you through changing them.
Check for compromised, reused, or weak passwords, then change them as needed.
Howard Wen / IDG
If you’d like, you can use the password manager as a standalone app on your PC. When Google Password Manager is open in a tab, click the Install Google Password Manager icon at the right end of the address bar. After it’s installed on your PC, you can click the desktop shortcut to launch Google Password Manager on its own, apart from Chrome.
3. Print to PDF: Turn a web page into a PDF
“Printing” a web page to a PDF is useful for archiving the page as its contents appeared when you viewed it, or for sharing it with someone when sending a link isn’t convenient or possible.
The fastest way to do this: With the web page open, hold the Ctrl key and type p on a Windows PC (or the Cmd key and p on a Mac). Alternatively, click the three-dot icon at the upper right of Chrome, and on the menu that opens, select Print.
A large panel opens. To the right of “Destination,” see if “Save as PDF” is listed inside the selection box. If it’s not, click this box to open a dropdown menu and select Save as PDF.
Set the Destination field to Save as PDF.
Howard Wen / IDG
The rest of this panel lists settings for formatting the PDF that you can change. (If you don’t see them, click More settings.) When you’ve set everything the way you want, click Save. You’ll be prompted to select a location on your PC’s storage where you want to save the PDF. Make your choice, and then Chrome will output the entire web page as a PDF and save it to your PC.
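For power users, the same conversion can be scripted: Chrome’s headless mode can print a page to PDF straight from the command line. A minimal sketch, assuming the Chrome binary is on your PATH as google-chrome (the exact name and path vary by platform and installation):

```shell
# Render a web page to a PDF file without opening the Chrome UI.
# "google-chrome" is the typical Linux binary name; on macOS use the full
# path inside Google Chrome.app, and on Windows the path to chrome.exe.
google-chrome --headless --disable-gpu \
  --print-to-pdf=page.pdf \
  "https://example.com"
```

This produces a PDF with Chrome’s default page settings; the interactive Print dialog remains the way to adjust margins, scale, and other formatting.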
4. Reading list: Curate a list of web pages to read later
Chrome offers a nifty feature that lets you gather web pages that you want to remember to read later. The difference between saving a web page to Chrome’s reading list and saving it as a bookmark is that the reading list is meant to motivate you to follow through, such as finishing important material you’ve gathered for research. You can chart your progress by marking a page as read when you’re finished with it.
With the web page open, click the three-dot icon at the upper right of Chrome. On the menu that opens along the right, click Bookmarks and lists and then select Reading list. Then click Add tab to reading list at the bottom of the panel. Repeat this process to add more web pages to the reading list.
To open your reading list, click the three-dot icon at the upper right, then select Bookmarks and lists > Reading list > Show reading list. The list will open in a panel on the right.
Gather web pages you want to read in Chrome’s reading list.
Howard Wen / IDG
On the reading list, clicking the title of a web page opens it in the browser tab to the left. When you’re finished reading it, move the pointer over the page’s title in the list and select the checkmark to mark the page as read or the x to remove it from the reading list.
5. Reading mode: Make lengthy content easier to read
You may come across an article that you want to concentrate on without other elements of the page’s layout (such as ads, images, videos, or sidebars) distracting you. Or maybe you’re struggling to read the text as it appears on the page. Reading mode can help, and it works very well for long articles.
With the web page open, click the three-dot icon at the upper right, then select More tools > Reading mode. Chrome will extract the main article from the page and format it for easier reading in the reading mode panel that appears on the right.
Try reading mode for a distraction-free environment to read long articles.
Howard Wen / IDG
You can widen the reading mode panel by clicking-and-holding the double-bar icon on its left frame. Drag this icon toward the left, and the margins for the text in the reader mode panel will automatically adjust themselves.
Along the top of the reading mode panel is a toolbar that lets you adjust the text font and size, and the spacing between text characters and lines of text. You can also change the background color.
6. Tab groups: Organize and name tab collections
Chrome’s tab groups feature lets you organize tabs of related web pages into a collection that has a title. When you click the group title, all the web pages that you organized under it will open in the browser. This can be useful if you want to open multiple web pages that you frequently visit with a single click. You can create several different tab groups — say, one group for the core web apps you use every day for work, another for research related to a specific project, and so on.
To create a new tab group: At the left end of the Bookmarks toolbar, click the grid icon and select Create new tab group. Alternatively, click the three-dot icon at the upper right of Chrome, and on the menu that opens, select Tab groups > Create new tab group.
Or you can create a new tab group starting from an existing tab: Simply right-click the tab and select Add tab to group > New group from the menu that appears.
A special tab will open that prompts you to type in a name for your new tab group. You can optionally select a highlight color for the new tab group.
Creating a new tab group.
Howard Wen / IDG
Press the Enter key, and your new tab group will appear among the tabs in Chrome. If your Bookmarks toolbar is open, the group will also appear to the left of the grid icon.
To add a web page to a tab group: Simply drag a tab that’s already open in Chrome to the right of the tab group name and let it go.
Adding a tab to a group via drag-and-drop.
Howard Wen / IDG
To close the tabs in a tab group: Click the tab group name. The tabs that are opened to the right of it will close.
To open the tabs in a tab group: Click the tab group name, and the tabs that you organized under it will open to its right. Or, if you have the Bookmarks toolbar open, you can click the tab group name there or click the grid icon and select the group you want to open.
Navigating to a tab group via the Bookmarks toolbar.
Howard Wen / IDG
Finally, you can click the three-dot icon at the upper right of Chrome, then select Tab groups, the name of the tab group that you want, and Open group.
To manage a tab group: Right-click on the tab group name. On the menu that opens, you can click the following:
New tab in group: Opens a new, blank tab to the right of the tab group name. The web page you navigate to in this tab will be added to the tab group.
Move group to new window: Opens all the web pages organized in this group tab in a new browser window.
Ungroup: The web pages in this tab group will be opened, but the tab group (and its name) will be removed. This action essentially “frees” the web pages that you put into this tab group.
Close group: Closes a tab group, which removes it from the browser’s tabs toolbar. You can reopen a closed group via the Bookmarks toolbar or by clicking the three-dot icon at Chrome’s upper right, selecting Tab groups, and choosing the group you want.
Delete group: Deletes both the tab group name and all the web pages that you organized in it.
7. Google Lens: Search the web with images
Google Lens is a visual search feature built into Chrome. It lets you search for the source of an image on a web page, find variants of the image, or find similar-looking images. You can also use it to translate foreign words that appear in a photo or other image.
It can also be used to find an item for sale online. For example, if you have Google Lens search a photo of a laptop, it might find an online store where you can buy it.
To use Google Lens in Chrome, right-click on a photo or image on a web page. On the menu that opens, select Search with Google Lens. A panel will open along the right of the browser, showing search results that you can browse through. You can click any result to open its web link in the browser.
Using Google Lens image search.
Howard Wen / IDG
In the main browser window that shows the image Google Lens searched on, you can fine-tune the image search in various ways:
Adjust the frame around the image by clicking-and-dragging its corners or sides. This may prompt Google Lens to provide more precise search results.
Draw a frame around a specific area of the image. Position the crosshair over the image, then click-and-drag it in any direction to frame the area of the image that you want Google Lens to analyze and search.
Translate text that’s in a language other than the one set as your browser’s default. Draw a frame around the text or double-click it to highlight it, then select Translate on the menu that opens. Google Lens will open a translation tool in the panel along the right.
Google Lens can translate text in an image.
Howard Wen / IDG
8. Share a web page: Send a link to another device
You’re viewing a web page on your PC but want to see it on your phone, tablet, or another PC. Here are two handy ways to forward a web page link to another device:
Method 1: Send the link to a signed-in device
First, you must be signed into Chrome with a Google account. The device you want to forward the link to also must be signed into Chrome with the same Google account.
With the web page open in Chrome on your PC, click the three-dot icon toward the upper right. On the menu that opens, select Cast, save, and share and then Send to your devices.
A menu pops open that lists any mobile devices and other PCs that are signed in with your Google account. If you click the name of your smartphone on this menu, that device will receive a notification in Chrome. Tap this notification to open the web page.
Sending a web link to a signed-in device.
Howard Wen / IDG
Method 2: Create a QR code for the link
If the smartphone or other device that you want to forward the link to isn’t signed in to your Google account, you can create a QR code for the web page’s link.
With the web page open in Chrome on your PC, click the three-dot icon toward the upper right. On the menu that opens, select Cast, save, and share > Create QR code.
A QR code image will pop open below the web address bar.
Creating a QR code to send a link.
Howard Wen / IDG
Use the smartphone’s camera to capture it — most recent smartphone models will recognize a QR code. When you tap the link that appears, the web page will open in the smartphone’s default browser, whether it’s Chrome or another such as Firefox, Microsoft Edge, or Safari.
9. Translation: Manage the languages that Chrome translates
By default, Chrome offers to translate a web page if it’s not in your preferred native language. (If it doesn’t, click the Translate this page icon at the right end of the address bar or click the three-dot icon at the upper right and choose Translate.)
It’s worth taking the time to manage this feature so that it’s set best for your browsing, particularly if you frequently visit sites that are in languages other than your native one. Click the three-dot icon at the upper right of Chrome. On the menu that opens, scroll to the bottom and select Settings. The Settings page opens in a new tab. Along the left column, click Languages.
On the page that appears, scroll down to the Google Translate section. Here you can tell Chrome to automatically translate pages that are in certain languages without asking you first. You can also tell it not to offer to translate pages in some languages — useful for people who are fluent in more than one language. For languages that you don’t specify as “automatically translate” or “never offer to translate,” Chrome will continue to offer to translate the page.
The DeepSeek-R1 model has managed to attract a lot of attention in a short time, especially because it can be used commercially without restrictions.
Now, developers at Hugging Face are trying to reconstruct the generative AI (genAI) model from scratch and develop an alternative to DeepSeek-R1 called Open-R1 based on open source code. Although DeepSeek is often referred to as an open model, parts of it are not completely open.
“Ensuring that the entire architecture behind R1 is open source is not just about transparency, but about unlocking its full potential,” developer Elie Bakouch, of Hugging Face, told Techcrunch.
In the long run, Open-R1 could make it easier to create genAI models without sharing data with other actors.
Earlier this week, we learned about Apple’s decision to appoint Kim Vorrath, the vice president of the company’s Technology Development Group (TDG), to help build Apple Intelligence under the supervision of John Giannandrea, Apple’s senior vice president for machine learning and AI.
Vorrath, who also serves as a board member at the National Center for Women in IT and sits on the Industrial Advisory Board at Cal Poly, has been with Apple since 1987. She’s taken leadership roles in iOS and OS X — she was even in charge of macOS at one time. Part of the original iPhone development team, she also supervised OS development for iPad, Mac and Vision Pro.
When it comes to bug testing and software quality control, she can say which features are ready to go and which are not. Vorrath also coordinates releases, not just for the specific platform (such as iPhone), but between devices, which means a great deal when you consider how integrated the Apple ecosystem has become.
Getting the band together
That established talent will be critical, given that Apple Intelligence features are also designed to work across the Apple ecosystem.
Of course, making these complex high tech products work well together takes effective organization. Vorrath brings that. She seems to be a person who can organize engineering groups and design effective workflows to optimize what those teams can do. With all these achievements, it is no surprise Vorrath is seen as one of the women who contributed the most to making Apple great.
In her new role, she joins Giannandrea, who allegedly “needs additional help managing an AI group with growing prominence,” Bloomberg reported.
Put it all together and it’s clear that Vorrath is one of Apple’s top fixers and joins the AI team at a critical point. First, she’s probably going to help get a new contextually-aware Siri out the door, and second, she’ll be making decisions around what happens in the next major iterations of Apple Intelligence.
It’s the next steps for Apple’s AI that I think have been missed in much of the coverage of this internal Apple shuffle.
Apple Intelligence 2.0
While people like to focus on Siri’s improvements and shortcomings, it’s also clear that Apple hopes to maintain its traditional development cadence when it comes to Apple Intelligence.
That means delivering additional features and feature improvements every year, usually at WWDC. With the next WWDC looming fast, it might fall to Vorrath to select what additions are made, and to ensure they get developed on time.
Think logically and you can see why that matters. Apple announced Apple Intelligence at WWDC 2024, but it wasn’t ready to ship alongside the original release of operating system updates, and features were slowly introduced in the following months.
Arguably, the schedule didn’t matter. What does matter is that Apple, then seen as falling behind in AI, used Apple Intelligence to argue for its own continued corporate relevance. It bought itself some time.
Now it must follow up on that time. That means making improvements and additions to show continued momentum. It comes down to delivering solutions consumers will want to use, with a little Apple magic alongside new developer tools to extend that ecosystem.
It has to succeed in doing this to maintain credibility in AI.
Will Apple stay relevant?
Getting that right — particularly across all Apple’s platforms and in good time — is challenging, and is most likely why Vorrath has been brought in. There’s so much riding on getting the mix right. Apple needs to be able to say “Hey, we’re not done yet with Apple Intelligence,” and back that claim up with tools to keep users’ interest. Those new AI services need to work well, ship on time, and work so people won’t even know how much they needed them until they use them.
Getting that mix right is going to take skill, dedication, and discipline. In the coming months, all eyes will be on Apple as critics and competitors wait to find out whether Apple Intelligence was a one-shot attempt at maintaining relevance, or the first steps of a great company about to find its AI feet.
Making sure it is the second, and not the first, should be the fundamental mission Vorrath has taken on in her new role.
Chinese start-up DeepSeek’s cost-saving techniques for training and delivering generative AI (genAI) models could democratize the entire industry by lowering entry barriers for new AI companies.
DeepSeek made waves this week as its chatbot overtook ChatGPT downloads on the Apple and Google App Stores. The open-source AI model’s impact lies in matching leading US models’ performance at a fraction of the cost by using compute and memory resources more efficiently.
DeepSeek is more than China’s “ChatGPT”; it’s a major step forward for global AI by making model building cheaper, faster, and more accessible, according to Forrester Research. While large language models (LLMs) aren’t the only route to advanced AI, DeepSeek’s innovations should be “celebrated as a milestone for AI progress,” the research firm said.
The efficiencies of DeepSeek’s AI methodology mean it requires vastly less compute capacity to run; that means it could also affect the chip industry, which has been riding a wave of GPU and AI accelerator hardware purchases by companies building out massive data centers.
For example, Meta is planning to spend $65 billion to build a data center with a footprint that’s almost as large as Manhattan. Expected to come online at the end of this year, the data center would house 1.3 million GPUs to power AI tech used by Facebook and other Meta ventures.
Brendan Englot, a professor and AI expert at Stevens Institute of Technology in New Jersey, said the fact that DeepSeek’s models are also open source will also help make it easier for other AI start-ups to compete against large tech companies. “DeepSeek’s technology provides an excellent example of how disruptive and innovative new tools can be built faster with the aid of open source software,” said Englot, who is also director of the Stevens Institute for Artificial Intelligence (SIAI).
DeepSeek’s arrival on the scene tanked GPU-leading provider Nvidia’s stock, as investors realized the impact the more efficient processes would have on AI processor and accelerator sales.
“DeepThink,” a feature within the DeepSeek AI chatbot that leverages the R1 model to provide enhanced reasoning capabilities, uses advanced techniques to break down complex queries into smaller, manageable tasks.
Thanks to those kinds of optimizations, DeepThink (R1) only cost about $5.5 million to train — tens of millions of dollars less than similar models. While this could reduce short-term demand for Nvidia, the lower cost will likely drive more startups and enterprises to create models, boosting demand long-term, Forrester Research said.
And, while the costs to train AI models have just declined significantly with DeepThink, the cost to support inferencing will still require significant compute and storage, Forrester said. “This shift shows that core AI model providers won’t be enough, further opening the AI market,” the firm said in a research note. “Don’t cry for Nvidia and the hyperscalers just yet. Also, there might be an opportunity for Intel to claw its way back to relevance.”
Englot agreed, saying there is a lot of competition and investment right now to produce useful AI software and hardware, “and that is likely to yield many more breakthroughs in the very near future.”
DeepSeek’s base technology isn’t pioneering. On the contrary, the company’s recently published research paper shows that Meta’s Llama and Alibaba’s Qwen models were key to developing DeepSeek-R1 and DeepSeek-R1-Zero — its first two models, Englot noted.
In fact, Englot doesn’t believe DeepSeek’s advance poses as much of a threat to the semiconductor industry as this week’s stock slide suggests. GenAI tools will still rely on GPUs, and DeepSeek’s breakthrough just shows some computing can be done more efficiently.
“If anything, this advancement is good news that all developers of AI technology can take advantage of,” Englot said. “What we saw earlier this week was just an indication that less computing hardware is needed to train and deploy a powerful language model than we originally assumed. This can permit AI innovators to forge ahead and devote more attention to the resources needed for multi-modal AI and advanced applications beyond chatbots.”
Others agreed.
Mel Morris, CEO of startup Corpora.ai, said DeepSeek’s affordability and open-source model allows developers to customize and innovate cheaply and freely. It will also challenge the competitive landscape and push major players like OpenAI — the developer of ChatGPT — to adapt quickly, he said.
“The idea that competition drives innovation is particularly relevant here, as DeepSeek’s presence is likely to spur faster advancements in AI technology, leading to more efficient and accessible solutions to meet the growing demand,” Morris said. “Additionally, the open-source model empowers developers to fine-tune and experiment with the system, fostering greater flexibility and innovation.”
Forrester cautioned that, according to its privacy policy, DeepSeek explicitly says it can collect “your text or audio input, prompt, uploaded files, feedback, chat history, or other content” and use it for training purposes. It also states it can share this information with law enforcement agencies [and] public authorities at its discretion.
Those caveats could worry enterprises that have rushed to embrace genAI tools but remain cautious about data privacy, especially when sensitive corporate information is involved.
“Educate and inform your employees on the ramifications of using this technology and inputting personal and company information into it,” Forrester said. “Align with product leaders on whether developers should be experimenting with it and whether the product should support its implementation without stricter privacy requirements.”
China’s Alibaba Group has launched an upgraded version of its Qwen 2.5 AI model, claiming it outperforms models from DeepSeek, OpenAI, and Meta, as competition in the AI market intensifies.
“Qwen 2.5-Max outperforms … almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” Alibaba’s cloud unit said on its WeChat account, according to Reuters.
On its GitHub page, the company showed benchmarking results indicating that its instruct models – designed for tasks like chat and coding – mostly outperformed GPT-4o, DeepSeek-V3, and Llama-3.1-405B, while performing comparably to Claude 3.5-Sonnet.
The launch follows DeepSeek’s disruptive entry into the market, marked by the Jan 10 debut of its AI assistant powered by the DeepSeek-V3 model and the Jan 20 release of its open-source R1 model.
The Chinese startup’s low-cost strategy has shaken Silicon Valley, sending tech stocks lower and prompting investors to question the sustainability of major US AI firms’ high-spending approach.
China’s AI race heats up
Alibaba’s launch coincided with the Lunar New Year holiday, a time when much of China is on break, underscoring the growing competitive pressure from DeepSeek.
DeepSeek’s rapid ascent over the past three weeks has intensified rivalry not only with global players but also among Chinese tech firms.
“The AI model war is no longer just China versus the US – competition within China is also intensifying as companies like DeepSeek, Alibaba, and others innovate and optimize their models to serve a high-scale domestic market,” said Neil Shah, partner and co-founder at Counterpoint Research. “Chinese companies are being pushed to innovate further due to resource constraints, including limited access to the most advanced semiconductors, global-scale data, tools, infrastructure, and audiences.”
The race for frugal AI
The race to develop high-performance, cost-efficient AI models is intensifying, challenging the business strategies and pricing structures of major US hyperscalers and AI firms as they seek to recover billions in investment.
“This gives enterprise buyers and decision-makers more leverage, increasing pricing pressure on AI applications built with more expensive underlying models,” Shah said. “Such breakthroughs will force enterprises to reconsider, or at least rethink, the economics of AI investments and their choice of models and vendors.”
DeepSeek is driving immediate pricing considerations in two key areas of AI – raw token costs and model development expenses. These factors may force AI companies worldwide to consider optimizing their models to remain competitive.
“DeepSeek’s success also highlights the power of open source, strengthening the argument that open-source AI could become a dominant market later,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “If that happens, companies with strong open-source business models for enterprises – such as IBM Red Hat and Canonical – could step in and rapidly scale AI-related managed services.”
The geopolitics advantage
Geopolitics remains a wild card for Western AI firms, potentially tilting the market in their favor by restricting the adoption of Chinese models in certain regions.
At the same time, China is likely to tighten controls on the use of Western AI models, mirroring restrictions seen with other tech applications.
Enterprises will first assess whether these models comply with global privacy and regulatory standards before adopting them at scale, said Sharath Srinivasamurthy, associate vice president of Research at IDC.
“DeepSeek’s advancements could lead to more accessible and affordable AI solutions, but they also require careful consideration of strategic, competitive, quality, and security factors,” Srinivasamurthy said.
However, China’s substantial investment in AI research and development is only beginning to yield results, according to Srinivasamurthy. Other Chinese firms, like Alibaba, which have also been investing in AI in recent years, may soon start launching their own models.
At the same time, China is likely to tighten controls on the use of Western AI models, mirroring restrictions seen with other tech applications.
Enterprises will first assess whether these models comply with global privacy and regulatory standards before adopting them at scale, said Sharath Srinivasamurthy, associate vice president of Research at IDC.
“DeepSeek’s advancements could lead to more accessible and affordable AI solutions, but they also require careful consideration of strategic, competitive, quality, and security factors,” Srinivasamurthy said.
However, China’s substantial investment in AI research and development is only beginning to yield results, according to Srinivasamurthy. Other Chinese firms, like Alibaba, which have also been investing in AI in recent years, may soon start launching their own models.
Despite initiating a probe into Chinese AI startup DeepSeek, Microsoft has added the startup’s latest reasoning model, R1, to its model catalog on Azure AI Foundry and GitHub.
In the blog post announcing the addition, Microsoft said the model had “undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks.”