Month: January 2025

OpenAI unleashes o3-mini reasoning model

OpenAI on Friday released the latest model in its reasoning series, o3-mini, both in ChatGPT and its application programming interface (API). It had been in preview since December 2024.

The company said in its announcement that it “advances the boundaries of what small models can achieve, delivering exceptional STEM capabilities — with particular strength in science, math, and coding — all while maintaining the low cost and reduced latency of OpenAI o1-mini.”

OpenAI said that o3-mini delivered responses on math and factuality evaluations 24% faster than o1-mini with medium reasoning effort, and testers preferred its answers to those generated by o1-mini more than half the time.

In addition, the announcement said, “while OpenAI o1 remains our broader general knowledge reasoning model, OpenAI o3-mini provides a specialized alternative for technical domains requiring precision and speed. In ChatGPT, o3-mini uses medium reasoning effort to provide a balanced trade-off between speed and accuracy. All paid users will also have the option of selecting o3-mini-high in the model picker for a higher-intelligence version that takes a little longer to generate responses. Pro users will have unlimited access to both o3-mini and o3-mini-high.”

The model is now available to users of ChatGPT Plus, Team, and Pro; Enterprise and Education users must wait another week. It will replace o1-mini in the model picker, providing higher rate limits and lower latency. OpenAI is tripling the rate limit for Team and Plus users from 50 messages per day (with o1-mini) to 150 messages per day with o3-mini. The company did not state usage limits for free plan users.

In addition, an early prototype of integration with search will find answers online, with links to their sources.

The model also offers new features for developers who incorporate OpenAI models into their software, including function calling, developer messages, and structured outputs. Developers can also choose one of three reasoning effort options — low, medium, and high — to trade off power against latency for the use case. However, unlike OpenAI o1, it does not support vision capabilities. The company said that o3-mini is now available in the Chat Completions API, Assistants API, and Batch API to select developers in API usage tiers 3-5.
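To make the developer-facing features concrete, here is a hypothetical sketch of what a Chat Completions request body combining them might look like — the reasoning effort setting, a developer message (the o-series replacement for system messages), and a structured-output JSON schema. The field names follow OpenAI's published API conventions, but treat this as an illustration rather than a verified integration:

```python
# Hypothetical Chat Completions request body for o3-mini, combining the
# developer features described above. Built as a plain dict for clarity;
# in practice this would be sent via the OpenAI SDK or an HTTP client.
request_body = {
    "model": "o3-mini",
    "reasoning_effort": "medium",  # one of "low", "medium", "high"
    "messages": [
        # "developer" messages take the place of "system" messages
        {"role": "developer", "content": "You are a terse math tutor."},
        {"role": "user", "content": "Factor x^2 - 5x + 6."},
    ],
    # Structured outputs: constrain the reply to a JSON schema
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "factorization",
            "schema": {
                "type": "object",
                "properties": {
                    "factors": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["factors"],
            },
        },
    },
}

print(request_body["reasoning_effort"])  # prints: medium
```

Raising `reasoning_effort` to `"high"` corresponds to the o3-mini-high option in ChatGPT's model picker: more thinking time for harder problems, at the cost of latency.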

In addition to its performance, OpenAI touted the model’s safety. “Similar to OpenAI o1, we find that o3-mini significantly surpasses GPT-4o on challenging safety and jailbreak evaluations. Before deployment, we carefully assessed the safety risks of o3-mini using the same approach to preparedness, external red-teaming, and safety evaluations as o1.”

GDPR authorities accused of ‘inactivity’

Data protection authorities in Europe imposed fines amounting to €1.2 billion last year, according to the seventh edition of commercial law firm DLA Piper’s GDPR Fines and Data Breach Survey.

For the period since January 28, 2024, this represents a decrease of 33 percent compared to the previous year’s fines. It is the first year-on-year decline, the firm said — although 2023 was unusual: Ireland fined Meta a record €1.2 billion that year, and no comparable fines were imposed in 2024.

In total, the fines imposed since GDPR came into force in May 2018 amount to €5.88 billion. Large technology companies and social media giants in particular have had to pay. Almost all of the ten highest fines imposed since 2018 relate to the tech industry, including the €310 million fine imposed on LinkedIn by the Irish data protection authority in 2024 and a €251 million fine for Meta.

Ireland continues to impose the most fines by a wide margin: since May 2018, it has now imposed fines of €3.5 billion. In comparison, Germany has imposed fines totaling €89.1 million since the GDPR came into force. According to DLA Piper, the German data protection authorities are focusing on breaches of the integrity, confidentiality and security of data processing.

GDPR remains a powerful instrument

“This year’s results show that the data protection authorities in Europe continue to follow a clear line,” commented Jan Geert Meents, partner in the German Intellectual Property & Technology (IPT) practice group at DLA Piper, on the latest study results. The decline in the total volume of fines is ultimately due to extraordinary events in the previous year and does not mean a slowdown in regulatory activities. “The GDPR remains a powerful tool to ensure data protection and promote compliance. This is particularly true for Germany.”

Data protection activists, on the other hand, take a much more sober view of the current situation regarding proceedings and fines. The noyb association and its CEO Max Schrems even speak of “inactivity of national data protection authorities.” On average, only 1.3 percent of all cases before the data protection authorities result in a fine, the activists report, citing statistics from the European Data Protection Board (EDPB).

Proceedings take too long

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.”

The activists describe a phenomenon specific to data protection. In 2022, for example, the Spanish data protection authority received 15,128 complaints, yet only 378 fines were imposed — even for obvious violations such as unanswered requests for information or illegal cookie banners, which could in theory be handled quickly and in a standardized manner. For comparison, noyb notes that 3.7 million speeding tickets were issued in Spain in 2022. Similar ratios apply in practically all other EU member states.

Data protection authorities lack the motivation to enforce the law entrusted to them, complains Max Schrems, CEO of noyb.

David Bohmann / PID

“Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects.

Fines motivate compliance

It is precisely fines that motivate companies to comply with the law, the association reports, citing its own survey. Two-thirds of respondents said that data protection authority decisions involving a fine against their own company lead to greater compliance. Six out of ten also admitted that even fines imposed on other organizations have an impact on their own company.

In fact, the focus of the data protection authorities could be shifting in a way that leads to more fines. DLA Piper points to an announcement by the Dutch Data Protection Authority: after imposing a €30.5 million fine on Clearview AI, it wants to investigate whether the company’s directors could be held personally liable for numerous GDPR violations. “This investigation could signal a potential shift in the focus of regulators towards personal liability and more individual accountability,” the legal experts say of the move.

Personal liability — a new phase in GDPR enforcement

“The increasing focus on the personal liability of managers marks a new phase in GDPR enforcement,” comments Verena Grentzenberg, partner in DLA Piper’s IPT practice group in Germany with a focus on data protection. “This sends a clear signal to companies that breaches of data protection will not remain without consequences — not even at the level of the individuals involved.”

Apple Q2: Services buys time, what next?

Continued tensions between the US and China and the slow transition to Apple Intelligence may be limiting Apple’s business growth, but there’s no legitimate way to deny the strategic success of CEO Tim Cook’s decision to build Apple’s services business (a decision he likely had in mind during the Beats purchase in 2014). The money it is making with services gives the company strength with which to weather these storms.

Think about it like this. Yes, Apple’s iPhone sales in China fell, and yes, regions in which Apple Intelligence is available saw iPhone sales outpace those in which it is not, but services increased 14% year-on-year, generating $26.3 billion in revenue — around 21% of Apple’s total revenue during the most recently reported quarter. That’s why it means so much that Cook said, “In services, we achieved an all-time revenue record, and in the past year we’ve seen nearly $100 billion in revenue from our services business.”

That’s double Cook’s original ambition for services.

The cost of doing business

What makes those dollars even more valuable to Apple is how many of them it gets to keep: While the company generates a 39.31% margin on hardware revenue after costs, it books an astonishing 75% margin on services. In other words, for every $10 of services income Apple generates, it keeps around $7.50.

Other details from Apple’s most recent financial results:

  • Revenue: $124.3 billion (+4% YoY)
  • EPS: $2.40 (+10% YoY)
  • Gross margin: 46.9% (but much higher for services)
  • Net income: $36.3 billion
  • Product revenue: $98 billion (+2% YoY)
  • Services revenue: $26.3 billion (+14% YoY)
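These segment figures hang together: weighting the hardware and services margins by their respective revenues reproduces the blended gross margin Apple reported. A quick back-of-the-envelope check using the numbers above:

```python
# Sanity-check the blended gross margin from the segment figures above.
product_rev, services_rev = 98.0, 26.3  # revenue in $B, from the quarter
product_margin, services_margin = 0.3931, 0.75  # reported segment margins

gross_profit = product_rev * product_margin + services_rev * services_margin
blended_margin = gross_profit / (product_rev + services_rev)

print(round(blended_margin * 100, 1))  # prints: 46.9
```

The result matches the reported 46.9% company-wide gross margin, which is exactly why the fast-growing, 75%-margin services segment pulls the blended figure up even though products still generate most of the revenue.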

Morgan Stanley analyst Erik Woodring today shared his estimate that the average revenue per user Apple is generating with services has now reached around $72, up $5 from the previous quarter.

Apple’s management also confirmed that the iPhone 16 is outperforming the iPhone 15 range. The company said there’s been a record increase in iPhone upgrades during the quarter, presumably as its customers ensure they have the correct devices to run Apple Intelligence.

Services, services, services

Apple has managed its services pivot over the last few years, and that transition is a huge lesson for any business because it shows the value of diversification. While Apple’s push into services benefited hugely from the company’s remarkably high customer satisfaction levels, any business should seek out related opportunities if it hopes to maintain growth in challenging circumstances.

“Services continues to see strong momentum, and the growth of our installed base of active devices gives us great opportunities for the future,” said Apple CFO Kevan Parekh. “We also see increased customer engagement with our services offerings. Both transacting and paid accounts reached new all-time highs, with paid accounts growing double digits year over year. Paid subscriptions also grew double digits.”

Services income also requires hardware sales, and not every Apple service will be generating anything like these numbers. The accretive nature of this part of the business is a little like the small fish that lives on a larger whale — you can’t have one without the other.

But Parekh’s revelation that the company has “over 1,000,000,000 paid subscriptions across services on our platforms” shows there are plenty of fish in Apple’s ocean. Even as competition authorities force more competition into those waters, it’s a solid bet that Apple will continue to generate good business from the services segment.

Not playing games

Kick around the raw data Apple provided on its consolidated balance sheet and you’ll see that services revenue after direct sales-related costs delivered almost half of Apple’s overall net income during the quarter. And if hardware revenue tracks overall hardware margins at 39.31%, then services at 75% is generating more actual net income than any Apple product other than the iPhone. Apple Fitness, indeed. Apple Arcade is not just playing games.

Ultimately, however, Apple’s services income is doing the job it should be doing and generating a lucrative slice of high-margin income that protects the company against product sales-driven challenges. It is also acting as a bulwark as the company engages in the transition to Apple Intelligence.

But while the company has done an excellent job crafting business resilience and bought itself time with the initial introduction of its own system-wide AI, it still needs a follow-up punch to consolidate its gains. Is Apple really going to rely on international language rollouts of Apple Intelligence, or does it plan new models for WWDC? How does it intend to augment services with additional offers its customers can’t resist? 

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Microsoft makes OpenAI’s o1 model free for Copilot users

Microsoft AI CEO Mustafa Suleyman writes on LinkedIn that the company is now making OpenAI’s reasoning model o1 free to use for all users of Microsoft’s AI assistant Copilot.

Microsoft calls the functionality itself “Think Deeper.” The o1 model spends more time (about 30 seconds) considering the instructions it receives from several different angles and perspectives, then delivers a more comprehensive response than most genAI tools.

Previously, interested users had to pay at least $20 per month for one of OpenAI’s ChatGPT subscriptions to gain access to the o1 model. Think Deeper had also been available as a preview in Copilot Labs, again only for paying Copilot Pro users.

Italy blocks DeepSeek due to unclear data protection

Italy’s data protection authority Garante has chosen to block the app for the much-hyped Chinese AI model DeepSeek in the country.

The decision comes after the Chinese companies providing the chatbot service failed to provide the authority with sufficient information about how users’ personal data is used.

Reuters writes that Garante wants to know, among other things, what personal data DeepSeek collects, from what sources, for what purposes, on what legal basis, and whether the data is stored in China.

As a result, DeepSeek is no longer available through Apple’s or Google’s app stores in Italy. Garante has also launched an investigation. DeepSeek has not commented on the matter.

9 Google Chrome features you really should be using

If you’re like about 70% of computer users worldwide, you use Google’s Chrome browser as your gateway to the web, from conducting research and catching up on news to emailing and interacting with cloud apps. There are several tools built into Chrome that you might not know about, but should. They can improve your browsing experience significantly, enhancing productivity, organization, security, search, and more.

Even if you have already heard about some of these tools, consider this guide a refresher and encouragement to use them.

1. Chrome profiles: Keep work and personal browsing separate

You can add more than one user profile to Chrome. Each profile will have its own set of bookmarks, browsing history, website logins, and other data. For example, you can create one profile specifically for your work-related browsing, so that bookmarks and websites associated with your job are kept separate from your personal activity online.

To create another profile: Click your headshot or current profile icon toward the upper right in Chrome. On the profile panel that opens, click Add new profile.

google chrome profiles menu

Click your profile icon, then select Add new profile.

Howard Wen / IDG

A large panel will open over the screen. You can create a new profile by signing in with another Google account. If this account already has Chrome profile data (bookmarks, browsing history, logins) associated with it, these will be synced to your PC.

set up a new chrome profile screen

You can sign into an existing Google account or create a profile that’s not connected to a Google account.

Howard Wen / IDG

Or you can select to create a new profile without signing in with another Google account. Browsing information that’s created in Chrome while using this new profile will be saved only on your PC.

naming and choosing color scheme for new profile in chrome

Naming a new Chrome profile and choosing a color scheme.

Howard Wen / IDG

After you create the new profile, it’ll appear on the panel of your first profile. Click the name of this new profile; this will launch another instance of Chrome that will let you browse under that profile. You can run two (or more) instances of Chrome on your PC, each with a different user profile.

2. Password checkup: Review (and fix) your website logins

By default, Chrome automatically saves your usernames and passwords for websites that require a login in a service called Google Password Manager. If you don’t use a dedicated password manager app, GPM is a convenient tool for storing and managing login info. (See our separate guide to Google Password Manager.) It’s easy to “set and forget” passwords, so it’s a good idea to periodically check the health of your logins, updating usernames or passwords as needed.

Click the three-dot icon at Chrome’s upper right. On the menu that opens, select Passwords and autofill and then Google Password Manager. GPM will open in a new browser tab, where you’ll see the login information for the websites you’ve saved to GPM. You can click a website name to change or delete your username or password for it.

An important feature to use is the Checkup tool. Along the left, click Checkup. Chrome will analyze all of your website passwords, rating which have weak security and notifying you if any have been compromised or if you’ve reused any across websites. You can click to see a list of the offending passwords, and the password manager’s interface will guide you through changing them.

password checkup screen in chrome

Check for compromised, reused, or weak passwords, then change them as needed.

Howard Wen / IDG

If you’d like, you can use the password manager as a self-standing app on your PC. When Google Password Manager is open in a tab, click the Install Google Password Manager icon at the right end of the address bar. After it’s installed on your PC, you can click the desktop shortcut to launch Google Password Manager on its own, apart from Chrome.

3. Print to PDF: Turn a web page into a PDF

“Printing” a web page to a PDF can be useful for archiving the page as its contents appeared when you viewed it, or sharing a page when a web link to it won’t be convenient or possible for the person you want to share it with.

The fastest way to do this: With the web page open, hold the Ctrl key and type p on a Windows PC (or the Cmd key and p on a Mac). Alternatively, click the three-dot icon at the upper right of Chrome, and on the menu that opens, select Print.

A large panel opens. To the right of “Destination,” see if “Save as PDF” is listed inside the selection box. If it’s not, click this box to open a dropdown menu and select Save as PDF.

using the print to pdf feature in chrome

Set the Destination field to Save as PDF.

Howard Wen / IDG

The rest of this panel lists settings for formatting the PDF that you can change. (If you don’t see them, click More settings.) When you’ve set everything the way you want, click Save. You’ll be prompted to select a location on your PC’s storage where you want to save the PDF. Make your choice, and then Chrome will output the entire web page as a PDF and save it to your PC.

4. Reading list: Curate a list of web pages to read later

Chrome offers a nifty feature that lets you gather web pages you want to remember to read later. The difference between saving a web page to Chrome’s reading list and saving it as a bookmark is that the reading list is meant to track your progress: you can mark a page as read when you’re finished with it, which is handy for working through material you’ve collected for research.

With the web page open, click the three-dot icon at the upper right of Chrome. On the menu that opens along the right, click Bookmarks and lists and then select Reading list. Then click Add tab to reading list at the bottom of the panel. Repeat this process to add more web pages to the reading list.

To open your reading list, click the three-dot icon at the upper right, then select Bookmarks and lists > Reading list > Show reading list. The list will open in a panel on the right.

reading list panel in chrome

Gather web pages you want to read in Chrome’s reading list.

Howard Wen / IDG

On the reading list, clicking the title of a web page opens it in the browser tab to the left. When you’re finished reading it, move the pointer over the page’s title in the list and select the checkmark to mark the page as read or the x to remove it from the reading list.

5. Reading mode: Make lengthy content easier to read

You may come across an article that you want to concentrate on without other elements on the page’s layout (such as ads, images, videos, or sidebars) distracting you. Or maybe your eyesight is struggling with how the text appears on the page. Reading mode can help, and it works very well for reading long articles.

With the web page open, click the three-dot icon at the upper right, then select More tools > Reading mode. Chrome will extract the main article from the page and format it for easier reading in the reading mode panel that appears on the right.

reading mode panel in chrome

Try reading mode for a distraction-free environment to read long articles.

Howard Wen / IDG

You can widen the reading mode panel by clicking and holding the double-bar icon on its left edge. Drag it toward the left, and the text margins in the reading mode panel will adjust automatically.

Along the top of the reading mode panel is a toolbar that lets you adjust the text font and size, and the spacing between text characters and lines of text. You can also change the background color.

6. Tab groups: Organize and name tab collections

Chrome’s tab groups feature lets you organize tabs of related web pages into a collection that has a title. When you click the group title, all the web pages that you organized under it will open in the browser. This can be useful if you want to open multiple web pages that you frequently visit with a single click. You can create several different tab groups — say, one group for the core web apps you use every day for work, another for research related to a specific project, and so on.

To create a new tab group: At the left end of the Bookmarks toolbar, click the grid icon and select Create new tab group. Alternatively, click the three-dot icon at the upper right of Chrome, and on the menu that opens, select Tab groups > Create new tab group.

Or you can create a new tab group starting from an existing tab: Simply right-click the tab and select Add tab to group > New group from the menu that appears.

A special tab will open that prompts you to type in a name for your new tab group. You can optionally select a highlight color for the new tab group.

creating a new tab group in chrome

Creating a new tab group.

Howard Wen / IDG

Press the Enter key, and your new tab group will appear among the tabs in Chrome. If your Bookmarks toolbar is open, the group will also appear to the left of the grid icon.

To add a web page to a tab group: Simply drag a tab that’s already open in Chrome to the right of the tab group name and let it go.

dragging a chrome tab into a tab group

Adding a tab to a group via drag-and-drop.

Howard Wen / IDG

To close the tabs in a tab group: Click the tab group name. The tabs that are opened to the right of it will close.

To open the tabs in a tab group: Click the tab group name, and the tabs that you organized under it will open to its right. Or, if you have the Bookmarks toolbar open, you can click the tab group name there or click the grid icon and select the group you want to open.

tab groups list in bookmarks bar

Navigating to a tab group via the Bookmarks toolbar.

Howard Wen / IDG

Finally, you can click the three-dot icon at the upper right of Chrome, then select Tab groups, the name of the tab group that you want, and Open group.

To manage a tab group: Right-click on the tab group name. On the menu that opens, you can click the following:

  • New tab in group: Opens a new, blank tab to the right of the tab group name. The web page you navigate to in this tab will be added to the tab group.
  • Move group to new window: Opens all the web pages organized in this group tab in a new browser window.
  • Ungroup: The web pages in this tab group will be opened, but the tab group (and its name) will be removed. This action essentially “frees” the web pages that you put into this tab group.
  • Close group: Closes a tab group, which removes it from the browser’s tabs toolbar. You can reopen a closed group via the Bookmarks toolbar or by clicking the three-dot icon at Chrome’s upper right, selecting Tab groups, and choosing the group you want.
  • Delete group: Deletes both the tab group name and all the web pages that you organized in it.

[ Related: 8 brilliant browser tab tricks for Windows power users ]

7. Google Lens: Search by image

Google Lens is a visual search feature built into Chrome. It lets you search for the source of an image on a web page, find variants of the image, or find similar-looking images. You can also use it to translate foreign words that appear in a photo or other image.

It can also be used to find an item for sale online. For example, if you have Google Lens search a photo of a laptop, it might find an online store where you can buy that model.

To use Google Lens in Chrome, right-click on a photo or image on a web page. On the menu that opens, select Search with Google Lens. A panel will open along the right of the browser, showing search results that you can browse through. You can click any result to open its web link in the browser.

google lens search results in chrome

Using Google Lens image search.

Howard Wen / IDG

In the main browser window that shows the image Google Lens searched on, you can fine-tune the image search in various ways:

  • Adjust the frame around the image by clicking-and-dragging its corners or sides. This may prompt Google Lens to provide more precise search results.
  • Draw a frame around a specific area of the image. Position the crosshair over the image, then click-and-drag it in any direction to frame the area of the image that you want Google Lens to analyze and search.
  • Translate text that’s in a language other than the one set as your browser’s default. Draw a frame around the text or double-click it to highlight it, then select Translate on the menu that opens. Google Lens will open a translation tool in the panel along the right.
translating text in an image with google lens

Google Lens can translate text in an image.

Howard Wen / IDG

8. Send to your devices: Forward a web page to another device

You’re viewing a web page on your PC but want to see it on your phone, tablet, or another PC. Here are two ways to forward a web page link to another device:

First, you must be signed into Chrome with a Google account. The device you want to forward the link to also must be signed into Chrome with the same Google account.

With the web page open in Chrome on your PC, click the three-dot icon toward the upper right. On the menu that opens, select Cast, save, and share and then Send to your devices.

A menu pops open that lists any mobile device and other PCs that are signed in with your Google account. If you click the name of your smartphone on this menu, that device will receive a notification in Chrome. Tap this notification to open the web page.

sending a link to another device signed in to same account

Sending a web link to a signed-in device.

Howard Wen / IDG

If the smartphone or other device that you want to forward the link to isn’t signed in to your Google account, you can create a QR code for the web page’s link.

With the web page open in Chrome on your PC, click the three-dot icon toward the upper right. On the menu that opens, select Cast, save, and share > Create QR code.

A QR code image will pop open below the web address bar.

generating a qr code to send a link in chrome

Creating a QR code to send a link.

Howard Wen / IDG

Use the smartphone’s camera to capture it — most recent smartphone models will recognize a QR code. When you tap the link that appears, the web page will open in the smartphone’s default browser, whether it’s Chrome or another such as Firefox, Microsoft Edge, or Safari.

9. Translation: Manage the languages that Chrome translates

By default, Chrome offers to translate a web page if it’s not in your preferred native language. (If it doesn’t, click the Translate this page icon at the right end of the address bar or click the three-dot icon at the upper right and choose Translate.)

It’s worth taking the time to manage this feature so that it’s set best for your browsing, particularly if you frequently visit sites that are in languages other than your native one. Click the three-dot icon at the upper right of Chrome. On the menu that opens, scroll to the bottom and select Settings. The Settings page opens in a new tab. Along the left column, click Languages.

On the page that appears, scroll down to the Google Translate section. Here you can tell Chrome to automatically translate pages that are in certain languages without asking you first. You can also tell it not to offer to translate pages in some languages — useful for people who are fluent in more than one language. For languages that you don’t specify as “automatically translate” or “never offer to translate,” Chrome will continue to offer to translate the page.

setting translation preferences in chrome

Setting translation preferences in Chrome.

Howard Wen / IDG

Want more Chrome tips? See 8 great productivity tips for Chrome.

Coming soon — a fully open reconstruction of DeepSeek-R1

The DeepSeek-R1 model has attracted a lot of attention in a short time, especially because it can be used commercially without restrictions.

Now, developers at Hugging Face are trying to reconstruct the generative AI (genAI) model from scratch and develop an alternative to DeepSeek-R1, called Open-R1, based on open-source code. Although DeepSeek is often referred to as an open model, parts of it are not completely open.

“Ensuring that the entire architecture behind R1 is open source is not just about transparency, but about unlocking its full potential,” developer Elie Bakouch, of Hugging Face, told TechCrunch.

In the long run, Open-R1 could make it easier to create genAI models without sharing data with other actors.

Is Apple Intelligence 2.0 on track?

Earlier this week, we learned about Apple’s decision to appoint Kim Vorrath, the vice president of the company’s Technology Development Group (TDG), to help build Apple Intelligence under the supervision of John Giannandrea, Apple’s senior vice president for machine learning and AI.

Vorrath, who also serves as a board member at the National Center for Women in IT and sits on the Industrial Advisory Board at Cal Poly, has been with Apple since 1987. She has taken leadership roles in iOS and OS X — she was even in charge of macOS at one time. Part of the original iPhone development team, she also supervised OS development for iPad, Mac, and Vision Pro.

When it comes to bug testing and software quality control, she can say which features are ready to go and which are not. Vorrath also coordinates releases, not just for the specific platform (such as iPhone), but between devices, which means a great deal when you consider how integrated the Apple ecosystem has become.

Getting the band together

That established talent will be critical, given that Apple Intelligence features are also designed to work across the Apple ecosystem.

Of course, making these complex, high-tech products work well together takes effective organization, and Vorrath brings that. She has a reputation for organizing engineering groups and designing workflows that get the most out of her teams. Given that record, it’s no surprise she is regarded as one of the women who did the most to make Apple great.

In her new role, she joins Giannandrea, who “needs additional help managing an AI group with growing prominence,” Bloomberg reported.

Put it all together and it’s clear that Vorrath is one of Apple’s top fixers, joining the AI team at a critical point. First, she’ll probably help get a new, contextually aware Siri out the door; second, she’ll be making decisions about what happens in the next major iterations of Apple Intelligence.

It’s the next steps for Apple’s AI that I think have been missed in much of the coverage of this internal Apple shuffle. 

Apple Intelligence 2.0

While people like to focus on Siri’s improvements and shortcomings, Apple surely also hopes to maintain its traditional development cadence when it comes to Apple Intelligence.

That means delivering additional features and feature improvements every year, usually at WWDC. With the next WWDC looming fast, it might fall to Vorrath to select what additions are made, and to ensure they get developed on time.

Think logically and you can see why that matters. Apple announced Apple Intelligence at WWDC 2024, but it wasn’t ready to ship alongside the original release of operating system updates, and features were slowly introduced in the following months. 

Arguably, the schedule didn’t matter. What does matter is that Apple, then seen as falling behind in AI, used Apple Intelligence to argue for its own continued corporate relevance. It bought itself some time.

Now it must follow up on that time. That means making improvements and additions to show continued momentum. It comes down to delivering solutions consumers will want to use, with a little Apple magic alongside new developer tools to extend that ecosystem.

It has to succeed in doing this to maintain credibility in AI.

Is Apple going to stay relevant?

Getting that right — particularly across all Apple’s platforms and in good time — is challenging, and it’s most likely why Vorrath has been brought in. There’s a lot riding on getting the mix right. Apple needs to be able to say, “Hey, we’re not done yet with Apple Intelligence,” and back that claim up with tools that keep users’ interest. Those new AI services need to ship on time, work well, and prove so useful that people won’t know how much they needed them until they use them.

Getting that mix right is going to take skill, dedication, and discipline. In the coming months, all eyes will be on Apple as critics and competitors wait to find out whether Apple Intelligence was a one-shot attempt at maintaining relevance or the first steps of a great company about to find its AI feet.

Making sure it is the second, and not the first, should be the fundamental mission Vorrath has taken on in her new role. 

You can follow me on social media! Join me on Bluesky, LinkedIn, Mastodon, and MeWe.

How DeepSeek will upend the AI industry — and open it to competition

Chinese start-up DeepSeek’s cost-saving techniques for training and delivering generative AI (genAI) models could democratize the entire industry by lowering entry barriers for new AI companies.

DeepSeek made waves this week as its chatbot overtook ChatGPT in downloads on Apple’s and Google’s app stores. The open-source AI model’s impact lies in matching leading US models’ performance at a fraction of the cost by using compute and memory resources more efficiently.

DeepSeek is more than China’s “ChatGPT”; it’s a major step forward for global AI by making model building cheaper, faster, and more accessible, according to Forrester Research. While large language models (LLMs) aren’t the only route to advanced AI, DeepSeek’s innovations should be “celebrated as a milestone for AI progress,” the research firm said.

The efficiency of DeepSeek’s AI methodology means its models require vastly less compute capacity to run; that means it could also affect the chip industry, which has been riding a wave of GPU and AI accelerator hardware purchases by companies building out massive data centers.

For example, Meta is planning to spend $65 billion to build a data center with a footprint that’s almost as large as Manhattan. Expected to come online at the end of this year, the data center would house 1.3 million GPUs to power AI tech used by Facebook and other Meta ventures.

Brendan Englot, a professor and AI expert at Stevens Institute of Technology in New Jersey, said the fact that DeepSeek’s models are also open source will also help make it easier for other AI start-ups to compete against large tech companies. “DeepSeek’s technology provides an excellent example of how disruptive and innovative new tools can be built faster with the aid of open source software,” said Englot, who is also director of the Stevens Institute for Artificial Intelligence (SIAI).

DeepSeek’s arrival on the scene tanked the stock of leading GPU provider Nvidia, as investors realized the impact the more efficient processes would have on AI processor and accelerator sales.

“DeepThink,” a feature within the DeepSeek AI chatbot that leverages the R1 model to provide enhanced reasoning capabilities, uses advanced techniques to break down complex queries into smaller, manageable tasks.
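DeepSeek has not published DeepThink’s internal logic, but the general pattern described here (split a compound query into subtasks, answer each one, then combine the results) can be sketched in a few lines of Python. The `decompose` heuristic and `solve` placeholder below are purely illustrative, not DeepSeek’s actual method:

```python
# Toy sketch of the "decompose, solve, synthesize" pattern used by
# reasoning models. The decomposition heuristic is deliberately naive;
# a real reasoning model learns this behavior during training.

def decompose(query: str) -> list[str]:
    """Split a compound query into smaller, independently answerable subtasks."""
    return [part.strip() for part in query.split(" and ") if part.strip()]

def solve(subtask: str) -> str:
    """Stand-in for answering one subtask (a model call in practice)."""
    return f"[answer to: {subtask}]"

def answer(query: str) -> str:
    """Answer a complex query by solving its subtasks, then combining them."""
    return " ".join(solve(s) for s in decompose(query))

print(answer("summarize the paper and list its key findings"))
```

The appeal of the pattern is that each subtask is simpler than the original query, so errors are less likely to compound across a long chain of reasoning.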

Thanks to those kinds of optimizations, DeepThink (R1) cost only about $5.5 million to train — tens of millions of dollars less than similar models. While this could reduce short-term demand for Nvidia, the lower cost will likely drive more startups and enterprises to create models, boosting demand long-term, Forrester Research said.

And, while the costs to train AI models have just declined significantly with DeepThink, the cost to support inferencing will still require significant compute and storage, Forrester said. “This shift shows that core AI model providers won’t be enough, further opening the AI market,” the firm said in a research note. “Don’t cry for Nvidia and the hyperscalers just yet. Also, there might be an opportunity for Intel to claw its way back to relevance.”

Englot agreed, saying there is a lot of competition and investment right now to produce useful AI software and hardware, “and that is likely to yield many more breakthroughs in the very near future.”

DeepSeek’s base technology isn’t pioneering. Indeed, the company’s recently published research paper shows that Meta’s Llama and Alibaba’s Qwen models were key to developing DeepSeek-R1 and DeepSeek-R1-Zero — its first two models, Englot noted.

In fact, Englot doesn’t believe DeepSeek’s advance poses as much of a threat to the semiconductor industry as this week’s stock slide suggests. GenAI tools will still rely on GPUs, and DeepSeek’s breakthrough just shows some computing can be done more efficiently.

“If anything, this advancement is good news that all developers of AI technology can take advantage of,” Englot said. “What we saw earlier this week was just an indication that less computing hardware is needed to train and deploy a powerful language model than we originally assumed. This can permit AI innovators to forge ahead and devote more attention to the resources needed for multi-modal AI and advanced applications beyond chat-bots.”

Others agreed.

Mel Morris, CEO of startup Corpora.ai, said DeepSeek’s affordability and open-source model allow developers to customize and innovate cheaply and freely. DeepSeek will also challenge the competitive landscape and push major players like OpenAI — the developer of ChatGPT — to adapt quickly, he said.

“The idea that competition drives innovation is particularly relevant here, as DeepSeek’s presence is likely to spur faster advancements in AI technology, leading to more efficient and accessible solutions to meet the growing demand,” Morris said. “Additionally, the open-source model empowers developers to fine-tune and experiment with the system, fostering greater flexibility and innovation.”

Forrester cautioned that, according to its privacy policy, DeepSeek explicitly says it can collect “your text or audio input, prompt, uploaded files, feedback, chat history, or other content” and use it for training purposes. It also states it can share this information with law enforcement agencies [and] public authorities at its discretion.

Those caveats could be of concern to enterprises that have rushed to embrace genAI tools but remain worried about data privacy, especially where sensitive corporate information is involved.

“Educate and inform your employees on the ramifications of using this technology and inputting personal and company information into it,” Forrester said. “Align with product leaders on whether developers should be experimenting with it and whether the product should support its implementation without stricter privacy requirements.”

Alibaba introduces Qwen 2.5-Max AI model, claims edge over DeepSeek

China’s Alibaba Group has launched an upgraded version of its Qwen 2.5 AI model, claiming it outperforms models from DeepSeek, OpenAI, and Meta, as competition in the AI market intensifies.

“Qwen 2.5-Max outperforms … almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” Alibaba’s cloud unit said on its WeChat account, according to Reuters.

On its GitHub page, the company showed benchmarking results indicating that its instruct models – designed for tasks like chat and coding – mostly outperformed GPT-4o, DeepSeek-V3, and Llama-3.1-405B, while performing comparably to Claude 3.5-Sonnet.

The launch follows DeepSeek’s disruptive entry into the market, marked by the Jan 10 debut of its AI assistant powered by the DeepSeek-V3 model and the Jan 20 release of its open-source R1 model.

The Chinese startup’s low-cost strategy has shaken Silicon Valley, sending tech stocks lower and prompting investors to question the sustainability of major US AI firms’ high-spending approach.

China’s AI race heats up

Alibaba’s launch coincided with the Lunar New Year holiday, a time when much of China is on break, underscoring the growing competitive pressure from DeepSeek.

DeepSeek’s rapid ascent over the past three weeks has intensified rivalry not only with global players but also among Chinese tech firms.

“The AI model war is no longer just China versus the US – competition within China is also intensifying as companies like DeepSeek, Alibaba, and others innovate and optimize their models to serve a high-scale domestic market,” said Neil Shah, partner and co-founder at Counterpoint Research. “Chinese companies are being pushed to innovate further due to resource constraints, including limited access to the most advanced semiconductors, global-scale data, tools, infrastructure, and audiences.”

The race for frugal AI

The race to develop high-performance, cost-efficient AI models is intensifying, challenging the business strategies and pricing structures of major US hyperscalers and AI firms as they seek to recover billions in investment.

“This gives enterprise buyers and decision-makers more leverage, increasing pricing pressure on AI applications built with more expensive underlying models,” Shah said. “Such breakthroughs will force enterprises to reconsider, or at least rethink, the economics of AI investments and their choice of models and vendors.”

DeepSeek is driving immediate pricing considerations in two key areas of AI – raw token costs and model development expenses. These factors may force AI companies worldwide to consider optimizing their models to remain competitive. 

“DeepSeek’s success also highlights the power of open source, strengthening the argument that open-source AI could become a dominant market later,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “If that happens, companies with strong open-source business models for enterprises – such as IBM Red Hat and Canonical – could step in and rapidly scale AI-related managed services.”

The geopolitics advantage

Geopolitics remains a wild card for Western AI firms, potentially tilting the market in their favor by restricting the adoption of Chinese models in certain regions.

At the same time, China is likely to tighten controls on the use of Western AI models, mirroring restrictions seen with other tech applications.

Enterprises will first assess whether these models comply with global privacy and regulatory standards before adopting them at scale, said Sharath Srinivasamurthy, associate vice president of Research at IDC.

“DeepSeek’s advancements could lead to more accessible and affordable AI solutions, but they also require careful consideration of strategic, competitive, quality, and security factors,” Srinivasamurthy said.

However, China’s substantial investment in AI research and development is only beginning to yield results, according to Srinivasamurthy. Other Chinese firms that, like Alibaba, have been investing in AI in recent years may soon start launching models of their own.