Author: Security – Computerworld

Linux gets support for the Copilot key

There is a special Copilot key on some Windows laptops that can be used to launch Microsoft’s AI assistant — and now support for the key is coming to Linux.

That support has been added in version 6.14 of the Linux kernel, though exactly what the key does depends on which Linux-based operating system you are running. Most users are likely to map it to open a generative AI assistant of their choice.

Version 6.14 offers several other new features, including expanded support for hand controllers, according to Phoronix.

Apple and the art of IT management

IT admins may be stressed about shadow IT, security, endpoint management, and the recent proliferation of artificial intelligence (AI) tools. They're challenged not just by static budgets and rising costs, but by the sheer number of tools they now must use to do their jobs. While IT admins are not exactly struggling for work, there is certainly plenty for them to do.

Those are some of the findings in JumpCloud’s latest survey of IT admin decision-makers across the US, UK, and Australia. The report confirms that the workplace continues to become increasingly multi-platform, with 27% of enterprise employees now preferring to use Macs, up from low-single figures at the turn of the century.

JumpCloud, which this week acquired Stack Identity, is one of the larger unified device, identity, and access management platforms to provide support for the burgeoning Apple enterprise.

Windows down, macOS and Linux up

“Windows use has shown the most significant decrease over the last six months, compared to macOS and Linux, which both increased,” said JumpCloud.

When asked about the breakdown of their organization’s device type, admins reported Windows devices comprise 56% (down from 63% in Q3 2024), macOS 27% (up from 24% in Q3 2024), and Linux 20% (up from 18% in Q3 2024).

Mac adoption in the enterprise is certain to continue to climb. The tech support cost overhead of managing Windows systems means a migration might represent low-hanging fruit for many enterprise leaders working to squeeze more from their budgets. Initial cost aside (and the difference between Mac and equivalent Windows systems is smaller now than ever), the total cost of ownership has a significant impact on budgets.

Make no mistake, budget-wrangling really is a thing: 39% of admins spend up to half of their entire budget on licensing fees. While this reflects the grim reality that tech providers of all kinds are pushing subscriptions onto their customers, it also represents a massive increase in such costs. In Q3 2024, just 28% of admins endured similar budget erosion from licensing fees.

What do you get for your money?

Windows and other Microsoft devices were seen as the most difficult things to manage by 23% of respondents. To be fair, Apple devices were seen as difficult to manage by 19% of IT admins, with Linux winning unspoken praise — just 14% of admins saw it as the most difficult.

All the same, the gap in Windows-vs-Mac management difficulty suggests the Windows experience is more abrasive, which, in conjunction with the upcoming Windows licensing replacement cycle, means IT will be tempted to look at alternatives.

Perhaps that’s why 43% of admins expect macOS device use to increase in the coming year, though 54% also anticipate increased use of Windows. The default rate — enterprises dropping support for either platform — is fairly equal, though Apple has the edge. 

On-prem, off-prem, and multicloud

Admins are also frustrated at the complexity of managing cloud and multi-cloud setups. That’s turning into a big opportunity for managed service providers (MSPs) who increasingly offer to ease the pain of managing multi-cloud setups. MSPs aren’t just about cloud services management, of course. But it does appear to be their time to shine, with 93% of organizations already using or considering an MSP. Their role also seems to be changing, as they’re increasingly seen as trusted advisors.

Can AI supplement IT roles? Admins see the technology as both a risk and an opportunity. But organizations appear to be accelerating AI deployments, with 15% of admins warning the tech is being put in place too fast and 67% believing these deployments are outpacing organizational ability to protect against threats.

“Keeping pace with all the improvements and changes keeps me up at night,” one anonymous survey respondent told JumpCloud. “AI has brought a new way of doing business and requires major adjustments.” 

Back to the (AI) future

It’s not just deployment that has admins spooked. Thirty-seven percent of them fear AI will take their jobs — and 56% of corporate vice presidents now worry AI will replace them, up from 29% a year ago. All the same, fear of redundancy is endemic across every role.

How is AI being managed? Most enterprises are taking steps to accommodate its use, with just 21% having taken no steps or put AI restrictions in place. Almost half (49%) of companies have developed policies to guide employee use of AI, with 47% encouraging use of tools such as ChatGPT. 

With data being the new gold, it’s no surprise that 28% of IT admins said their companies now have controls in place to prevent employee use of AI. “To harness [AI’s] power responsibly, organizations must lead with clear governance and innovation frameworks that balance opportunity with risk,” JumpCloud said.

All the same, unauthorized use of AI continues, and, just like any other form of shadow IT, this proliferation is a big problem for IT.

The usual suspects

The number of admins concerned about the use of apps and devices that aren’t managed has increased again, with 88% of IT admins now worried about this. They estimate that most employees use between one and five unauthorized applications.

There are lots of reasons for this. One is the speed at which businesses are moving: current needs aren't being met, which drives employees to seek solutions that fit. And while you'd expect IT to spend time handling this, a lack of time and a lack of visibility into all the apps employees use mean that, just as fiscal budgets demand careful juggling, so does precious IT time.

What else is eating that time? Security. It currently consumes the lion’s share of IT budgets. A plurality of organizations (47%) spend between 10% and 25% of their yearly IT budget on cybersecurity; another 24% spend 26% to 50%; 5% spend more than half their budget on security; and 24% spend less than 9%. In other words, security remains a tidy little earner for vendors, and a significant expenditure line item for IT. 

You’d think with all that money spent, security would already be tightly constrained, but that’s not the case. 

Almost half (46%) of organizations report that they have fallen victim to a cyberattack. AI-augmented attacks are also proliferating — this is now the third-biggest security concern after phishing and shadow IT. Man-in-the-middle attacks, MFA hacks, and security breaches in partner organizations are also on the rise.

In other words, security is an endless feast of fear for some, and of revenue for others. Of course, things might be better if there were platform choices that could mitigate this attack surface.


You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

DeepSeek triggers shock waves for AI giants, but the disruption won’t last

Chinese start-up DeepSeek rocked the tech industry Monday after the company’s new generative AI (genAI) bot hit Apple’s App Store and Google Play Store and downloads almost immediately exceeded those of OpenAI’s ChatGPT. US AI model and chipmaker stock prices were hit hard by the newcomer’s arrival; Google, Meta and OpenAI all initially suffered and chipmaker Nvidia’s stock closed the day down 17%. (The tech-heavy Nasdaq exchange lost more than 600 points.)

DeepSeek’s open-source AI model’s impact lies in matching US models’ performance at a fraction of the cost by using compute and memory resources more efficiently. But industry analysts believe investor reaction to DeepSeek’s impact on US tech firms and others is being dramatically exaggerated.

“The market is incorrectly presuming this as a zero-sum game,” said Chirag Dekate, a vice president analyst at Gartner Research. “They’re basically saying, ‘Maybe we don’t need to build data centers anymore, maybe we’re not as energy starved because DeepSeek showed us we can do more with less.’”

Giuseppe Sette, president of AI tech firm Reflexivity agreed, stressing that DeepSeek took the market by storm by doing more with less.

“In layman terms, they activate only the most relevant portions of their model for each query, and that saves money and computation power,” Sette said. “This shows that with AI, the surprises will keep on coming in the next few years. And even though that might be a bit of a shocker today, it’s extremely bullish in the long-term — because it opens the way for deeper and broader adoption of AI at all scales.”

In essence, the markets have overlooked that companies such as Google, Meta, and OpenAI can replicate DeepSeek’s efficiencies with more mature, scalable AI models that offer better security and privacy.

“This is not a ‘the sky is falling moment’ for markets. I think they should take a close look at what this actually is: there are techniques you can implement to more effectively scale your AI models,” Dekate said.

Another looming problem for the newcomer is that DeepSeek is purported to filter out content that could be viewed as critical of the Chinese Communist government. DeepSeek’s release of its R1 and R1-Zero reasoning models on Jan. 20 quickly drew attention for two key aspects:

  1. DeepSeek eliminates human feedback in training, speeding up model development, according to tech analyst Ben Thompson.
  2. DeepSeek requires less memory and compute power, needing fewer GPUs to perform the same tasks as other models.

DeepSeek claims its breakthroughs in AI efficiency cost less than $6 million and took less than two months to develop.

John Belton, a portfolio manager at Gabelli Funds, an asset management firm whose funds include shares of Nvidia, Microsoft, Amazon, and others, said DeepSeek’s achievements are real, but some of the company’s claims are misleading.

“No, you cannot recreate DeepSeek with $6 million and the extent to which they distilled existing models (took shortcuts potentially without license) is an unknown,” Belton said via email to Computerworld. “However, they have made key breakthroughs that show how to reduce training and inference costs.”

Belton also pointed out that DeepSeek isn’t new. Its creator, Liang Wenfeng, a hedge fund manager and AI enthusiast, published a paper on the performance breakthroughs more than a month ago and released a model with similar methods a year ago.

Dekate said DeepSeek’s rollout was particularly timely because just last month news outlets were publishing stories about AI scaling limitations from leading providers.

As organizations continue to embrace genAI tools and platforms and explore how they can create efficiencies and boost worker productivity, they’re also grappling with the high costs and complexity of the technology.

DeepSeek improved memory bandwidth efficiency with two key innovations: using a lower-position memory algorithm and switching model training precision from FP32 (32-bit) to FP8 (8-bit). “They’re using the same amount of memory to store and move more data,” Dekate said.

One analogy would be to consider the onramp to a major city highway — the highway being the data path. If the onramp only has one lane, there are only two ways to address traffic congestion:

  1. Increase the width of the roadway to fit more traffic
  2. Reduce the size of the vehicles so more fit on the roadway

DeepSeek effectively did both. It created smaller vehicles: by using smaller 8-bit values, it was able to pack more data into the same memory footprint.
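The packing gain from dropping to 8-bit values is easy to see numerically. This toy NumPy sketch only illustrates the footprint arithmetic, not DeepSeek's actual training code; uint8 stands in for FP8, since NumPy has no native 8-bit float type:

```python
import numpy as np

# One million model values stored at full 32-bit precision.
full = np.ones(1_000_000, dtype=np.float32)

# The same number of values at 8-bit precision
# (uint8 as a stand-in; NumPy has no native FP8 dtype).
packed = np.ones(1_000_000, dtype=np.uint8)

# Four times as many 8-bit values fit in the same memory.
print(full.nbytes // packed.nbytes)  # -> 4
```

The same halving logic applies to bandwidth: every transfer moves four times as many 8-bit values as 32-bit ones.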

The second key innovation was optimizing and compressing the key-value cache. DeepSeek used compression algorithms to reduce memory by processing prompts in two phases, decomposing the prompt and generating the response, with both phases relying on efficient key-value cache use.

“They utilized underlying compute and memory resources incredibly efficiently,” Dekate said. “That is an amazing accomplishment, because they’re utilizing the underlying GPU resources more productively. Their models are able to perform at leadership-class levels while using a relatively lower scale of resources.”

Enterprises can benefit as well by adopting the techniques introduced by DeepSeek, because they reduce the cost of adoption by using fewer compute resources for inference and training. Lower model costs should benefit innovators such as OpenAI and reduce the cost of applying AI across industries.

By using resources more efficiently, DeepSeek enables faster, broader AI adoption by other companies, driving growth in AI development, demand, and infrastructure.

And in the end, DeepSeek’s algorithm still needs AI accelerator technology to work — meaning GPUs and ASICs.

“It’s not the case that DeepSeek just woke up one day and had an amazing breakthrough. No, they’re using sound engineering techniques and they’re using some of the leading AI accelerators — and GPUs happen to be table stakes,” Dekate said. “And they use thousands of them. It’s not like they discovered a new technique that blew this whole space wide open. No. You still need AI accelerators to perform model training.”

Even in the most pessimistic view, if AI costs drop to 5% of those from other leading AI models, that efficiency eventually benefits those other models by reducing their costs, allowing for faster model adoption.

For enterprises, Dekate said, it’s worth exploring DeepSeek and similar models internally and in private settings. “Your legal team evaluates the terms and conditions of your ecosystem quite extensively. They’ll ask if privacy is protected. Are the data sources filtered? Are AI model responses filtered in any sense?” he said.

Before jumping in, enterprises should carefully consider these details. “Models like Gemini and GPT offer reliable, secure responses with enterprise-level protections, unlike many open models that lack these controls,” Dekate argued.

“Once everything settles, the net-net is that DeepSeek has developed very specific capabilities that are quantitative and that’s something to learn from, just as they did from Llama 3,” Dekate said.

Update Exchange Server or move to the cloud, say experts

Microsoft Exchange administrators running versions older than March 2023 need to update or they won’t get the latest security mitigations, an expert says.

But, David Shipley added, even better advice is to shift quickly to the cloud-based Microsoft 365, which always has the latest patches.

“Running your own Exchange Server is really a bad idea in 2025,” said Shipley, head of Canadian-based security awareness training provider Beauceron Security. “Anyone not patched to the nines, to the latest standard [today], is asking for trouble.”

Shipley was commenting on last week’s caution from Microsoft that an older Office Configuration Service (OCS) certificate, which verified automatically downloaded Exchange Server mitigations, is being deprecated. The new certificate, which is deployed by the Exchange Emergency Mitigation Service (EEMS), can only be read by servers running Exchange Server Cumulative Updates or Security Updates newer than March 2023.

The Microsoft alert said, “The EEMS running Exchange versions older than March 2023 is not able to contact OCS to check for and download new mitigation definitions. You might see an event like the following logged in the Application log of the server:

Error, MSExchange Mitigation Service
Event ID: 1008
An unexpected exception occurred.
Diagnostic information: Exception encountered while fetching mitigations.

In the alert, the company urged admins to take action, saying, “If your servers are so much out of date [pre-March 2023], please update your servers ASAP to secure your email workload and re-enable your Exchange server to check for EEMS rules.” 

The Microsoft blog is “alarming,” said Andrew Grotto, a research scholar at Stanford University’s Center for International Security and Cooperation and former senior director for cybersecurity policy at the White House in both the Obama and Trump administrations. “It shows how sticky [on-premises] Exchange is.”

Exchange mitigations are essentially hot fixes that plug holes, Shipley explained. Shifting to the software-as-a-service M365 doesn’t solve all security problems for the email service, he acknowledged, but, he added, it does solve the problem of threat actors being able to exploit unpatched or aged versions of the server, because Microsoft installs fixes for Microsoft 365 as soon as it creates them.

It isn’t known how many organizations still run Exchange on-premises but Shipley said he knows at least one unnamed public service organization currently running Exchange 2013.

Why do IT admins still have old versions of Exchange – or any other software? One reason: To save money on expensive software and hardware updates, Shipley said.

“Legacy infrastructure is the most difficult addiction to kick,” added Roger Cressey, a partner with US-based Liberty Group Ventures and formerly a senior vice-president at the Booz Allen Hamilton consultancy, where he supported the firm’s cybersecurity practice in the Middle East.

Both men stressed that better security is one of the biggest reasons to move to the cloud. This is particularly true for Exchange, which has been hit by a number of vulnerabilities, including zero-day holes. Arguably the most notorious were the vulnerabilities dubbed ProxyLogon, exploited in 2021 by a China-based group called Hafnium. There was also a chain of vulnerabilities called ProxyShell.

These issues led to the release in September 2021 of Exchange Server updates that included the EEMS, which applies mitigations to the servers until patches are developed.

On-premises Exchange — and not just older versions — should be considered a legacy product, Johannes Ullrich, dean of research at the SANS Institute, said in an email to CSO. “Support from Microsoft is decreasing, and the overall tendency at Microsoft is to push Exchange users to cloud offerings. There is probably no good reason to avoid this push and to migrate to cloud e-mail services as soon as possible. Exchange support is only going to decrease and patching will remain painful.”

Thus, said Cressey, Exchange admins should “move to address” the Microsoft warning.

What enterprises need to know about DeepSeek’s game-changing R1 AI model

Two years ago, OpenAI’s ChatGPT launched a new wave of AI disruption that left the tech industry reassessing its future. Now, within the space of a week, a small Chinese startup called DeepSeek appears to have pulled off a similar coup, this time at OpenAI’s expense.

Nevertheless, DeepSeek’s sudden success — the company’s free mobile app quickly surpassed even ChatGPT for downloads on Apple’s App Store — has prompted questions. Is the DeepSeek story too good to be true? And should businesses in the US and allied countries allow employees to use an app when the company’s Chinese background and operation are so opaque?

What happened

The DeepSeek storm hit on January 20 when DeepSeek launched its R1 LLM model to the public, complete with big claims around performance.

Using smaller “distilled” LLM models, which require significantly less processing power while replicating the capability of larger models, DeepSeek’s R1 matched or exceeded OpenAI’s equivalent, o1-mini, in important math and reasoning tests.

That performance generated a surge of interest. By Monday the DeepSeek app had overtaken ChatGPT and Temu to become the iPhone App Store’s top free download — and DeepSeek was reporting delays in new registrations to use the app due to what it described as “large-scale malicious attacks” on its services.

Nobody saw this coming, and somehow R1 was doing it with less hardware. Moreover, DeepSeek-R1 is available under an open-source MIT license, which allows unrestricted commercial use, including modification and distribution.

With AI sector share prices unsettled by all of this, the implication is that perhaps usable models don’t need the huge chip clusters deployed by the established players and organizations shouldn’t be paying high prices to access them.

Furthermore, if a tiny startup can get by on more limited hardware while training LLMs for a fraction of the cost, perhaps strenuous US attempts to limit the export of the most powerful AI chips to most of the world, including China, are already obsolete before they’ve been fully implemented.

Zero day AI

The speed of DeepSeek’s rise is a case of ‘zero-day disruption.’ Organizations have no time to react, and not just because developers across the world have piled in to test DeepSeek-R1 via its API by the thousand. Releasing a free app gives this capability to everyone, including employees who might enter sensitive data into it. By now, DeepSeek is everywhere, which makes it difficult to control.

“The app has raced to the top of the app charts, but I would advise anyone considering installing it and using it to exercise some caution,” warned tech commentator Graham Cluley, who also hosts the AI Fix podcast.

That said, organizations should already be used to coping with this issue. “Human nature being what it is, there will surely be just as much sensitive data entered into DeepSeek as we’ve seen entered into every other AI out there,” said Cluley. Organizations should probably hold back until it has been more thoroughly audited in the same way they would with any new app.

Or perhaps focusing on the risks is too negative. DeepSeek will ignite more competition in the sector, potentially turning powerful LLMs from an expensive service for the deep pocketed into a cheap utility anyone can access. Rather than dumping existing AI services, organizations should demand a better deal while avoiding becoming too locked into one LLM as new innovations appear.

Censored language model

A lurking possibility is that DeepSeek isn’t as good as it seems, with some skepticism already appearing around its price-performance claims. Stacy Rasgon, a senior analyst at Bernstein Research, questioned DeepSeek’s underlying costs.

“Did DeepSeek really build OpenAI for $5M? Of course not,” he wrote in a client note. “The oft quoted $5M number is calculated by assuming a $2/GPU-hour rental price for this infrastructure, which is fine, but not really what they did, and does not include all the other costs associated with prior research and experiments on architectures, algorithms, or data.”

In use, DeepSeek makes elementary errors, not dissimilar to the ones that afflicted ChatGPT in its early days. Some of its responses also underline that the app imposes guard rails when run from a Chinese host. A good example is its reported refusal to acknowledge the Tiananmen Square massacre, something the Chinese government goes to extreme lengths to hide.

In the short term, DeepSeek’s appearance underlines the unstable nature of AI itself. Tech is used to periodic disruptions. AI suggests that these might become more routine, including of its own capabilities. It is unlikely to be the last such breakthrough in a sector that will prove harder to dominate than has been assumed.

Investors and government regulators trying to control AI development won’t like this, but if it offers cheaper and earlier AI access across the economy, it could still work out as a net positive. According to Cluley, DeepSeek should be something for Silicon Valley to worry about.

“If it’s accurate that the Chinese have been able to develop a competitive AI that massively undercuts the US-based giants in terms of development cost and with a fraction of the hardware commitment then that is clearly going to upset the applecart and have a tech billionaire or two crying into their Cheerios this morning,” he said.

Businesses get their own version of the Chrome Web Store

Though there are a variety of cool extensions for the Chrome browser, there are also malicious extensions that pose a security threat. To increase security, Google has now launched the Chrome Web Store for Enterprises, a new store specifically designed for business users.

For example, businesses can create a list of approved extensions to ensure employees do not install malicious ones on their own. Companies can also add their own logos and images to the store, making it clear to users which extensions are approved for use.

And, according to Bleeping Computer, it will soon also be possible for IT administrators to remotely remove add-ons, if necessary.

iPhone users turn on to DeepSeek AI

As if from nowhere, OpenAI competitor DeepSeek has somersaulted to the top of the iPhone App Store chart, overtaking OpenAI’s ChatGPT. It’s the latest in a growing line of generative AI (genAI) services and seems to offer some significant advantages, not least its relatively lower development and production costs. You can also ask it how many R’s the word “strawberry” contains and expect an accurate response.

Now on iPhones

Released last week, the DeepSeek app raced to the top of Apple’s App Store charts in multiple countries, including the US. People using the app have noted that the genAI tool can match or beat other similar models in performance.

It does so at a fraction of the development and deployment costs, and it’s free to use on the web and on the iPhone. In other words, for the price of nothing, you get all the genAI utility you can expect from ChatGPT.

What the industry says

Nvidia’s senior research scientist, Jim Fan, calls DeepSeek “the biggest dark horse” in the open-source LLM field, praising the extent to which the developers have managed to deliver such power with such scant resources.

“We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive — truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely,” he wrote on social media.

What’s the market model?

DeepSeek’s models were introduced as open source, and the Chinese developer believes they can compete with OpenAI’s and Meta’s best systems. The models are available under an MIT license from the popular Hugging Face platform, which means they can be used commercially and without restriction. Theoretically, even Apple could use them, and many developers are already trying them on relatively modest hardware.

The full package of DeepSeek’s R1 models is available and costs almost 95% less than OpenAI charges for its o1 models. There’s more information available on GitHub, including an in-depth 30-page technical report.

How good is it?

DeepSeek says its R1 model surpasses OpenAI o1 on the AIME, MATH-500, and SWE-bench Verified benchmarks. It contains 671 billion parameters, a massive number, though as a mixture-of-experts design it activates only a fraction of them for any given query.

Of course, most on-device AI can’t possibly handle that many parameters, so DeepSeek has made smaller versions of the same model available, the smallest of which should run on an old Mac.
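A rough, hypothetical back-of-the-envelope calculation shows why the full model is out of reach for consumer hardware. Assuming one byte per parameter (an FP8-style assumption; 16-bit weights would double it):

```python
# Rough weight-memory estimate for a 671-billion-parameter model.
# One byte per parameter is an FP8-style assumption; FP16 doubles it.
params = 671e9
bytes_per_param = 1

print(params * bytes_per_param / 1e9)  # -> 671.0 (GB for the weights alone)
```

Several hundred gigabytes for the weights alone, before any key-value cache or activations, is why only heavily shrunken variants can run on a single machine.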

DeepSeek R1 is also built as a self-checking reasoning model, which helps it avoid some of the stupid mistakes other models make. While that reasoning means responses can be a little slower to arrive, they tend to be more reliable. 

Toward an open-source AI

“It shows that open-source AI is catching up,” says The Atlantic CEO Nicholas Thompson, who expects that in the future we’ll have a multiplicity of such models, rather than just the big commercial ones.

One estimate suggests the models might have been trained on a budget as small as $6 million. In comparison, while Meta’s most recent Llama used an estimated 30.8 million GPU-hours to train, DeepSeek required just 2.8 million GPU-hours, according to Andrej Karpathy at Eureka Labs.
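Taking those quoted GPU-hour figures at face value, the implied efficiency gap is worth spelling out:

```python
# The article's quoted training estimates, taken at face value.
llama_gpu_hours = 30.8e6     # Meta's most recent Llama (estimate)
deepseek_gpu_hours = 2.8e6   # DeepSeek (per Andrej Karpathy)

ratio = llama_gpu_hours / deepseek_gpu_hours
print(round(ratio))  # -> 11, i.e. roughly an 11x reduction in GPU-hours
```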

In other words, rather than throwing money at a problem, the Chinese researchers are figuring out how to get more from less.

It is impressive that DeepSeek seems to have succeeded in matching OpenAI and Meta’s AI at approximately 10% of the resources, cost, and parameters.

DeepSeek’s researchers said DeepSeek-V3 used Nvidia’s H800 chips for training. (Not everyone accepts the explanation. Scale AI CEO Alexandr Wang expressed doubts about this claim, but still calls the introduction of DeepSeek “earth-shattering.”)

To achieve this, the developers made significant technological breakthroughs, such as predicting several consecutive words in a sequence rather than just the next word. They also figured out how to make the system answer questions more efficiently. This is explained well by Thompson.

Good for everyone?

China has figured out how to deliver powerful AI while using fewer resources — and (perhaps most significantly on a planet equipped with finite resources) far less energy.

Is this a bad thing for US interests? Almost certainly not. The fact that China achieved this on such limited resources should be a wake-up call to the US government and investor communities that it’s possible to deliver this technology at much lower costs.

“If there truly has been a breakthrough in the cost to train models from $100 million+ to this alleged $6 million number, this is actually very positive for productivity and AI end users, as cost is obviously much lower meaning lower cost of access,” Jon Withaar, a senior portfolio manager at Pictet Asset Management, told Reuters.

That’s a good thing, assuming AI is a good thing in the first place. But it’s a less good option for the big developers in the space. AI stocks are taking a battering today as investors evaluate the achievement. They want value for money, and if DeepSeek can get for $1 what other companies spend a sawbuck on, they’ll want to invest in that.

Ideological AI

It is worth mentioning one other limitation of the system. As it is a Chinese model, it is benchmarked by China’s internet regulator, which ensures the genAI responses “embody core socialist values.”

What’s interesting about that is the extent to which this shows how AI models — from China, or from anywhere else — can be built to bake in sets of values that may do more than just reflect their society. No wonder OpenAI wants the US government to invest in US AI.

Getting more for less

If it is indeed correct that DeepSeek has been able to achieve this degree of performance at such low costs using lower-specified tech, it suggests:

  1. That while cash is required to enable the tech, the biggest currency is creative innovation, which flourishes most in open environments. 
  2. That the social and environmental costs in terms of energy, water, and technology we expect AI to require can be dramatically reduced. 
  3. That it’s good business to do so.
  4. That these reduced costs make AI more accessible to a wider number of developers.

If you’re searching for an iPhone app that manages to capture the technology story while reflecting the evolving global geopolitical tension and conversation around environment and industry, you can download it at the App Store today.


Indian media houses rally against OpenAI over copyright dispute

The legal heat on OpenAI in India intensified as digital news outlets owned by billionaires Gautam Adani and Mukesh Ambani joined an ongoing lawsuit against the ChatGPT creator. They were joined by some of the largest news publishers in India, including the Indian Express and Hindustan Times, and by members of the Digital News Publishers Association (DNPA), which includes major players like Zee News, India Today, and The Hindu.

These publishers claim OpenAI scraped and adapted their copyrighted content without permission, hurting the media industry’s revenue and intellectual property, reported Reuters.

The filings in the Delhi High Court argue that OpenAI’s actions pose a “clear and present danger to the valuable copyrights” of these publishers. This follows similar lawsuits globally, including one by the New York Times in the United States, highlighting a growing backlash from publishers against generative AI models.

Mounting allegations against OpenAI

OpenAI, which sparked a generative AI revolution with ChatGPT’s launch in 2022, has repeatedly denied allegations of copyright violations. The company maintains that its AI systems make fair use of publicly available data. However, Indian publishers argue that OpenAI’s operations in India defy legal norms, especially given the company’s licensing agreements with international publishers such as Time magazine and the Financial Times.

The new filing asserts that OpenAI’s omission of similar agreements with Indian publishers “betrays an inexplicable defiance of the law” and undermines democracy by weakening the press, according to the report.

In November 2023, a group of nonfiction authors filed a class-action lawsuit against OpenAI and Microsoft, accusing them of unlawfully using their copyrighted works and academic journals to train the ChatGPT AI model without obtaining permission.

OpenAI did not respond to requests for comment.

Broader implications for the AI landscape

The intervention by heavyweight media houses adds momentum to ANI’s lawsuit, which accused OpenAI last year of using its content without authorization to train ChatGPT and spreading misinformation by attributing fabricated stories to ANI.

The Reuters-backed news agency demanded that the ChatGPT maker delete the copyrighted content used to train the LLM. OpenAI, however, has opposed the demand, saying that complying would violate its legal obligations in the US.

With a hearing scheduled for Tuesday, the case is expected to set a legal precedent in India on AI-related copyright disputes.

Legal observers believe the case could significantly impact how generative AI companies operate in India. Analysts have pointed out that India’s legal framework was not designed with modern AI systems in mind, creating a pressing need for updated copyright laws to govern emerging technologies.

“It may lead to stricter copyright rules, requiring AI developers to obtain explicit licenses for training datasets,” said Anish Nath, practice director at Everest Group. “Laws could also evolve to differentiate between using content for training versus reproducing it verbatim, with distinctions for non-profit versus for-profit AI companies.”

The legal battle in India is reflective of global trends. In the US, OpenAI has faced lawsuits from authors, visual artists, musicians, and news organizations for allegedly training AI models with copyrighted content. In response, the company has initiated partnerships with major international outlets to mitigate future disputes.

Industry leaders and legal experts are closely watching the case for its implications on AI regulation and copyright laws. If Indian courts uphold the publishers’ demands, OpenAI and similar firms could be compelled to either enter licensing agreements in India or overhaul their training data practices to avoid legal entanglements.

The outcome of this case could redefine the balance between innovation in AI and the rights of content creators, making India a critical battlefield in the global AI copyright debate. With Indian publishers and global precedents shaping the arguments, the case marks a pivotal moment in defining AI developers’ obligations when using proprietary content. As the AI industry continues its rapid growth, aligning technological advances with robust intellectual property protections will be key to fostering sustainable innovation.