
Update Exchange Server or move to the cloud, say experts

Microsoft Exchange administrators running versions older than March 2023 need to update, or they won’t get the latest security mitigations, an expert says.

But, David Shipley added, even better advice is to shift quickly to the cloud-based Microsoft 365, which always has the latest patches.

“Running your own Exchange Server is really a bad idea in 2025,” said Shipley, head of Canada-based security awareness training provider Beauceron Security. “Anyone not patched to the nines, to the latest standard [today], is asking for trouble.”

Shipley was commenting on last week’s caution from Microsoft that an older Office Configuration Service (OCS) certificate, which verified automatically downloaded Exchange Server mitigations, is being deprecated. The new certificate, deployed through the Exchange Emergency Mitigation Service (EEMS), can only be read by servers running Exchange Server Cumulative Updates or Security Updates newer than March 2023.

The Microsoft alert said, “The EEMS running Exchange versions older than March 2023 is not able to contact OCS to check for and download new mitigation definitions. You might see an event like the following event logged in the Application log of the server:

Error, MSExchange Mitigation Service
Event ID: 1008
An unexpected exception occurred.
Diagnostic information: Exception encountered while fetching mitigations.”
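
For admins who want to check whether a server is already hitting this failure, a short script can scan the Application log for that event. The following is a minimal sketch, assuming the third-party pywin32 package rather than any Microsoft-provided tooling, and it must run on the Exchange server itself:

import win32evtlog  # third-party: pip install pywin32

SOURCE = "MSExchange Mitigation Service"
EVENT_ID = 1008

log = win32evtlog.OpenEventLog(None, "Application")  # None = local machine
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

matches = []
while True:
    events = win32evtlog.ReadEventLog(log, flags, 0)
    if not events:
        break
    for ev in events:
        # The EventID field carries extra severity bits; mask to the low 16 bits.
        if ev.SourceName == SOURCE and (ev.EventID & 0xFFFF) == EVENT_ID:
            matches.append((ev.TimeGenerated, ev.StringInserts))
win32evtlog.CloseEventLog(log)

for when, details in matches:
    print(when, details)

Repeated hits here are the symptom Microsoft describes: the server can no longer fetch new mitigation definitions.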

In the alert, the company urged admins to take action, saying, “If your servers are so much out of date [pre-March 2023], please update your servers ASAP to secure your email workload and re-enable your Exchange server to check for EEMS rules.” 

The Microsoft blog is “alarming,” said Andrew Grotto, a research scholar at Stanford University’s Center for International Security and Cooperation and a former senior director for cybersecurity policy at the White House in both the Obama and Trump administrations. “It shows how sticky [on-premises] Exchange is.”

Exchange mitigations are essentially hot fixes that plug holes, Shipley explained. Shifting to the software-as-a-service M365 doesn’t solve all security problems for the email service, he acknowledged, but, he added, it does solve the problem of threat actors being able to exploit unpatched or aged versions of the server, because Microsoft installs fixes for Microsoft 365 as soon as it creates them.

It isn’t known how many organizations still run Exchange on-premises, but Shipley said he knows of at least one public service organization, which he wouldn’t name, still running Exchange 2013.

Why do IT admins still have old versions of Exchange – or any other software? One reason: To save money on expensive software and hardware updates, Shipley said.

“Legacy infrastructure is the most difficult addiction to kick,” added Roger Cressey, a partner with US-based Liberty Group Ventures and formerly a senior vice-president at the Booz Allen Hamilton consultancy, where he supported the firm’s cybersecurity practice in the Middle East.

Both men stressed that better security is one of the biggest reasons to move to the cloud. This is particularly true for Exchange, which has been hit by a number of vulnerabilities, including zero-day holes. Arguably the most notorious were the vulnerabilities dubbed ProxyLogon, exploited in 2021 by a China-based group called Hafnium. There was also a chain of vulnerabilities called ProxyShell.

These issues led to the release in September 2021 of Exchange Server updates that included the EEMS, which applies mitigations to the servers until patches are developed.

On-premises Exchange — and not just older versions — should be considered a legacy product, Johannes Ullrich, dean of research at the SANS Institute, said in an email to CSO. “Support from Microsoft is decreasing, and the overall tendency at Microsoft is to push Exchange users to cloud offerings. There is probably no good reason to avoid this push and to migrate to cloud e-mail services as soon as possible. Exchange support is only going to decrease and patching will remain painful.”

Thus, said Cressey, Exchange admins should “move to address” the Microsoft warning.

What enterprises need to know about DeepSeek’s game-changing R1 AI model

Two years ago, OpenAI’s ChatGPT launched a new wave of AI disruption that left the tech industry reassessing its future. Now, within the space of a week, a small Chinese startup called DeepSeek appears to have pulled off a similar coup, this time at OpenAI’s expense.

Nevertheless, DeepSeek’s sudden success — the company’s free mobile app quickly surpassed even ChatGPT for downloads on Apple’s App Store — has prompted questions. Is the DeepSeek story too good to be true? And should businesses in the US and allied countries allow employees to use an app when the company’s Chinese background and operation are so opaque?

What happened

The DeepSeek storm hit on January 20, when DeepSeek released its R1 large language model (LLM) to the public, complete with big claims about its performance.

Using smaller “distilled” models, which require significantly less processing power while replicating much of the capability of larger models, DeepSeek’s R1 matched or exceeded OpenAI’s equivalent, o1-mini, on important math and reasoning tests.
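
Distillation itself is a standard technique: a small “student” model is trained to match the output distribution of a large “teacher,” inheriting much of its behavior at a fraction of the size. The snippet below is a generic PyTorch-style sketch of that idea, not DeepSeek’s actual pipeline; the temperature value and the model interfaces are illustrative assumptions:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then push the student toward the teacher.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 as in standard distillation practice.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2

def train_step(student, teacher, input_ids, optimizer):
    # Teacher is frozen; the student learns only from the teacher's outputs.
    with torch.no_grad():
        teacher_logits = teacher(input_ids).logits
    student_logits = student(input_ids).logits
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The student never needs the teacher’s weights at inference time, which is why distilled models can run on far more modest hardware.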

That performance generated a surge of interest. By Monday the DeepSeek app had overtaken ChatGPT and Temu to become the iPhone App Store’s top free download — and DeepSeek was reporting delays in new registrations to use the app due to what it described as “large-scale malicious attacks” on its services.

Few saw this coming, and R1 appears to manage it with less hardware. Moreover, DeepSeek-R1 is available under the open-source MIT license, which allows unrestricted commercial use, including modification and distribution.

With AI sector share prices unsettled by all of this, the implication is that perhaps usable models don’t need the huge chip clusters deployed by the established players and organizations shouldn’t be paying high prices to access them.

Furthermore, if a tiny startup can get by on more limited hardware while training LLMs for a fraction of the cost, perhaps strenuous US attempts to limit the export of the most powerful AI chips to most of the world, including China, are already obsolete before they’ve been fully implemented.

Zero day AI

The speed of DeepSeek’s rise makes it a case of ‘zero-day disruption.’ Organizations have no time to react, and not just because developers across the world have piled in by the thousands to test DeepSeek-R1 via its API. Releasing a free app puts this capability in everyone’s hands, including employees who might enter sensitive data into it. By now, DeepSeek is everywhere, which makes it difficult to control.
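
Part of why adoption has been so fast is that reaching R1 programmatically is trivial. DeepSeek exposes an OpenAI-compatible chat API, so the familiar openai Python client works with a swapped base URL; the endpoint and model name below follow DeepSeek’s published documentation but should be treated as assumptions to verify:

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # issued by DeepSeek, not OpenAI
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for the R1 model
    messages=[{"role": "user", "content": "How many R's are in 'strawberry'?"}],
)
print(response.choices[0].message.content)

Anything passed in messages leaves the organization for DeepSeek’s servers, which is exactly the data-handling concern raised below.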

“The app has raced to the top of the app charts, but I would advise anyone considering installing it and using it to exercise some caution,” warned tech commentator Graham Cluley, who also hosts the AI Fix podcast.

That said, organizations should already be used to coping with this issue. “Human nature being what it is, there will surely be just as much sensitive data entered into DeepSeek as we’ve seen entered into every other AI out there,” said Cluley. Organizations should probably hold back until it has been more thoroughly audited in the same way they would with any new app.

Or perhaps focusing on the risks is too negative. DeepSeek will ignite more competition in the sector, potentially turning powerful LLMs from an expensive service for the deep pocketed into a cheap utility anyone can access. Rather than dumping existing AI services, organizations should demand a better deal while avoiding becoming too locked into one LLM as new innovations appear.

Censored language model

A lurking possibility is that DeepSeek isn’t as good as it seems, with some skepticism already appearing around its price-performance claims. Stacy Rasgon, a senior analyst at Bernstein Research, questioned DeepSeek’s underlying costs.

“Did DeepSeek really build OpenAI for $5M? Of course not,” he wrote in a client note. “The oft quoted $5M number is calculated by assuming a $2/GPU-hour rental price for this infrastructure, which is fine, but not really what they did, and does not include all the other costs associated with prior research and experiments on architectures, algorithms, or data.”

In use, DeepSeek makes elementary errors, not dissimilar to those that afflicted ChatGPT in its early days. Some of its responses also show that the app imposes guardrails when run from a Chinese host. A good example is this report of its refusal to acknowledge the Tiananmen Square massacre, something the Chinese government goes to extreme lengths to hide.

In the short term, DeepSeek’s appearance underlines the unstable nature of AI itself. Tech is used to periodic disruptions. AI suggests that these might become more routine, including of its own capabilities. It is unlikely to be the last such breakthrough in a sector that will prove harder to dominate than has been assumed.

Investors and government regulators trying to control AI development won’t like this, but if it delivers cheaper and earlier AI access across the economy, it could still work out as a net positive. According to Cluley, DeepSeek should be something for Silicon Valley to worry about.

“If it’s accurate that the Chinese have been able to develop a competitive AI that massively undercuts the US-based giants in terms of development cost and with a fraction of the hardware commitment then that is clearly going to upset the applecart and have a tech billionaire or two crying into their Cheerios this morning,” he said.

Businesses get their own version of the Chrome Web Store

Though there are a variety of cool extensions for the Chrome browser, there are also malicious extensions that pose a security threat. To increase security, Google has now launched the Chrome Web Store for Enterprises, a new store specifically designed for business users.

For example, businesses can create a list of approved extensions to ensure employees do not install malicious ones on their own. Companies can also add their own logos and images to the store if they wish, making clear to users which extensions are sanctioned.

And, according to Bleeping Computer, it will soon also be possible for IT administrators to remotely remove add-ons, if necessary.

iPhone users turn on to DeepSeek AI

As if from nowhere, OpenAI competitor DeepSeek has somersaulted to the top of the iPhone App Store chart, overtaking OpenAI’s ChatGPT. It’s the latest in a growing line of generative AI (genAI) services, and it seems to offer some significant advantages, not least its relatively lower development and production costs. You can also ask it how many R’s the word “strawberry” contains and expect an accurate response.

Now on iPhones

Released last week, the DeepSeek app raced to the top of Apple’s App Store charts in multiple countries, including the US. People using the app have noted that the genAI tool can match or beat other similar models in performance.

It does so at a fraction of the development and deployment costs, and it’s free to use on the web and on the iPhone. In other words, for the price of nothing, you get all the genAI utility you’d expect from ChatGPT.

What the industry says

Nvidia’s senior research scientist, Jim Fan, calls DeepSeek “the biggest dark horse” in the open-source LLM field, praising the extent to which the developers have managed to deliver such power with such scant resources.

“We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive — truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely,” he wrote on social media.

What’s the market model?

DeepSeek’s offerings were introduced as open-source models the Chinese developer believes can compete with OpenAI’s and Meta’s best systems. The models are available under an MIT license on the popular Hugging Face platform, which means they can be used commercially and without restriction. Theoretically, even Apple could use them — and many developers are already trying them on relatively modest hardware.

The full package of DeepSeek’s R1 models is available at prices almost 95% lower than OpenAI charges for its o1 models. There’s more information available on GitHub, including an in-depth 30-page technical report.

How good is it?

DeepSeek says its R1 model surpasses OpenAI o1 on the AIME, MATH-500, and SWE-bench Verified benchmarks. It contains 671 billion parameters, a massive count that helps explain its strong performance.

Of course, most on-device AI can’t possibly handle that many parameters, so DeepSeek has made smaller versions of the same model available, the smallest of which should run on an old Mac.
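
For those who would rather keep everything local, the distilled variants load like any other Hugging Face model. A rough sketch, assuming the transformers library and DeepSeek’s published model naming (worth verifying on the hub before use):

from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed ID of the smallest variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "How many R's are in the word 'strawberry'?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

At 1.5 billion parameters, the smallest variant fits comfortably in the memory of an aging laptop, though its responses will be slower and less capable than the full 671-billion-parameter R1.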

DeepSeek R1 is also built as a self-checking reasoning model, which helps it avoid some of the stupid mistakes other models make. While that reasoning means responses can be a little slower to arrive, they tend to be more reliable. 

Toward an open-source AI

“It shows that open-source AI is catching up,” The Atlantic CEO Nicholas Thompson points out; in the future, he argues, we’ll have a multiplicity of such models, rather than just the big commercial ones.

One estimate suggests the models might have been trained on a budget as small as $6 million. In comparison, while Meta’s most recent Llama took an estimated 30.8 million GPU-hours to train, DeepSeek required just 2.8 million GPU-hours, according to Andrej Karpathy of Eureka Labs.
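
Those two figures also square with the headline cost estimate, as a bit of back-of-the-envelope arithmetic shows (using the roughly $2-per-GPU-hour rental rate analysts commonly assume):

2.8 million GPU-hours × $2 per GPU-hour ≈ $5.6 million

That is where the roughly $6 million figure comes from, and it covers only the final training run, not prior research and experiments.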

In other words, rather than throwing money at a problem, the Chinese researchers are figuring out how to get more from less.

It is impressive that DeepSeek seems to have succeeded in matching OpenAI and Meta’s AI at approximately 10% of the resources, cost, and parameters.

DeepSeek’s researchers said DeepSeek-V3 used Nvidia’s H800 chips for training. (Not everyone accepts the explanation. Scale AI CEO Alexandr Wang expressed doubts about this claim, but still calls the introduction of DeepSeek “earth-shattering”.)

To achieve this, the developers made significant technological breakthroughs, such as the capacity to predict several consecutive words in a sequence rather than just the next word. They also figured out how to make the system answer questions more efficiently. This is explained well by Thompson.
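
The first of those ideas, often called multi-token prediction, can be pictured with a toy example: alongside the usual head that predicts token t+1, the model trains an extra head that predicts token t+2 from the same hidden state, extracting more training signal from each position. The PyTorch sketch below illustrates the concept only; it is not DeepSeek’s actual architecture:

import torch
import torch.nn as nn

class TwoTokenHeads(nn.Module):
    # Toy illustration: two linear heads over a shared hidden state,
    # predicting the next token and the one after it.
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.head_next = nn.Linear(hidden_size, vocab_size)
        self.head_after = nn.Linear(hidden_size, vocab_size)

    def forward(self, hidden, targets):
        # hidden: (batch, seq, hidden_size); targets: (batch, seq) token ids
        loss_fn = nn.CrossEntropyLoss()
        logits1 = self.head_next(hidden[:, :-2])   # predicts position t+1
        logits2 = self.head_after(hidden[:, :-2])  # predicts position t+2
        loss1 = loss_fn(logits1.reshape(-1, logits1.size(-1)), targets[:, 1:-1].reshape(-1))
        loss2 = loss_fn(logits2.reshape(-1, logits2.size(-1)), targets[:, 2:].reshape(-1))
        return loss1 + loss2  # the second head adds training signal per step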

Good for everyone?

China has figured out how to deliver powerful AI while using fewer resources — and (perhaps most significantly on a planet equipped with finite resources) far less energy.

Is this a bad thing for US interests? Almost certainly not. The fact that China achieved this on such limited resources should be a wake-up call to the US government and investor communities that it’s possible to deliver this technology at much lower costs.

“If there truly has been a breakthrough in the cost to train models from $100 million+ to this alleged $6 million number, this is actually very positive for productivity and AI end users, as cost is obviously much lower meaning lower cost of access,” Jon Withaar, a senior portfolio manager at Pictet Asset Management, told Reuters.

That’s a good thing, assuming AI is a good thing in the first place. But it’s a less good option for the big developers in the space. AI stocks are taking a battering today as investors evaluate the achievement. They want value for money, and if DeepSeek can get for $1 what other companies spend a sawbuck on, they’ll want to invest in that.

Ideological AI

It is worth mentioning one other limitation of the system. Because it is a Chinese model, it is benchmarked by China’s internet regulator, which ensures its genAI responses “embody core socialist values.”

What’s interesting about that is the extent to which this shows how AI models — from China, or from anywhere else — can be built to bake in sets of values that may do more than just reflect their society. No wonder OpenAI wants the US government to invest in US AI.

Getting more for less

If it is indeed correct that DeepSeek has been able to achieve this degree of performance at such low costs using lower-specified tech, it suggests:

  1. That while cash is required to enable the tech, the biggest currency is creative innovation, which flourishes most in open environments. 
  2. That the social and environmental costs, in terms of the energy, water, and technology we expect AI to require, can be dramatically reduced. 
  3. That it’s good business to do so.
  4. That these reduced costs make AI accessible to a far wider pool of developers.

Some of the implications of this are explained in more depth here. But if you’re searching for an iPhone app that manages to capture the technology story while reflecting the evolving global geo-political tension and conversation around environment and industry, you can download it at the App Store today.


Indian media houses rally against OpenAI over copyright dispute

The legal heat on OpenAI in India intensified as digital news outlets owned by billionaires Gautam Adani and Mukesh Ambani joined an ongoing lawsuit against the ChatGPT creator. So did some of the largest news publishers in India, including the Indian Express and Hindustan Times, along with members of the Digital News Publishers Association (DNPA), which counts major players like Zee News, India Today, and The Hindu among its ranks.

These publishers claim OpenAI scraped and adapted their copyrighted content without permission, hurting the media industry’s revenue and intellectual property, reported Reuters.

The filings in the Delhi High Court argue that OpenAI’s actions pose a “clear and present danger to the valuable copyrights” of these publishers. This follows similar lawsuits globally, including one by the New York Times in the United States, highlighting a growing backlash from publishers against generative AI models.

Mounting allegations against OpenAI

OpenAI, which sparked a generative AI revolution with ChatGPT’s launch in 2022, has repeatedly denied allegations of copyright violations. The company claims its AI systems leverage public data under fair use doctrines. However, Indian publishers argue that OpenAI’s operations in India defy legal norms, especially given the company’s licensing agreements with international publishers such as Time magazine and the Financial Times.

The new filing asserts that OpenAI’s omission of similar agreements with Indian publishers “betrays an inexplicable defiance of the law” and undermines democracy by weakening the press, according to the report.

In November 2023, a group of nonfiction authors filed a class-action lawsuit against OpenAI and Microsoft, accusing them of unlawfully using their copyrighted works and academic journals to train the ChatGPT AI model without obtaining permission.

OpenAI did not respond to requests for comment.

Broader implications for the AI landscape

The intervention by heavyweight media houses adds momentum to the lawsuit brought by the news agency Asian News International (ANI), which accused OpenAI last year of using its content without authorization to train ChatGPT and of spreading misinformation by attributing fabricated stories to ANI.

The Reuters-backed news agency demanded that the ChatGPT maker delete the copyrighted content used to train the LLM. OpenAI, however, has opposed the demand, saying doing so would violate US laws.

With a hearing scheduled for Tuesday, the case is expected to set a legal precedent in India on AI-related copyright disputes.

Legal observers believe the case could significantly impact how generative AI companies operate in India. Analysts have pointed out that India’s legal framework was not designed with modern AI systems in mind, creating a pressing need for updated copyright laws to govern emerging technologies.

“It may lead to stricter copyright rules, requiring AI developers to obtain explicit licenses for training datasets,” said Anish Nath, practice director at Everest Group. “Laws could also evolve to differentiate between using content for training versus reproducing it verbatim, with distinctions for non-profit versus for-profit AI companies.”

The legal battle in India is reflective of global trends. In the US, OpenAI has faced lawsuits from authors, visual artists, musicians, and news organizations for allegedly training AI models with copyrighted content. In response, the company has initiated partnerships with major international outlets to mitigate future disputes.

Industry leaders and legal experts are closely watching the case for its implications on AI regulation and copyright laws. If Indian courts uphold the publishers’ demands, OpenAI and similar firms could be compelled to either enter licensing agreements in India or overhaul their training data practices to avoid legal entanglements.

The outcome could redefine the balance between AI innovation and the rights of content creators, making India a critical battleground in the global AI copyright debate. As the industry continues its rapid growth, aligning technological advances with robust intellectual property protections will be key to fostering sustainable innovation.

Trump’s RTO edict raises concerns over morale, efficiency — and burnout

President Donald J. Trump’s executive order to federal employees to return to the office “as soon as practicable” will have a variety of repercussions — most of them negative, according to industry analysts and others.

The return-to-office (RTO) policy issued last week signals Trump’s intent to fulfill campaign promises to reform the 2.3-million-strong federal workforce, which he has criticized as inefficient and bloated. The language in Trump’s order doesn’t clarify whether it applies only to the estimated 10% of federal civilian workers — about 228,000 as of May 2024 — who work remotely full-time, according to the Office of Management and Budget.

Trump may be upping the stakes, but then-President Joseph R. Biden Jr. signed legislation Jan. 5 designed to bring more federal employees back to the office and increase the efficiency of office space utilization. Both men were likely taking cues from various businesses that have instituted RTO mandates in the wake of the Covid-19 pandemic.

In 2022, Tesla and SpaceX CEO Elon Musk — now a close advisor to Trump — delivered an RTO ultimatum to his two companies’ white-collar workers: get back in the corporate office or face firing. Musk’s letter to executive staff at the time specified: “The office must be where your actual colleagues are located, not some remote pseudo-office. If you don’t show up, we will assume you have resigned.”

Other corporations have followed suit more recently. In December, Amazon and AT&T ended their work-from-home policies. AT&T went so far as to tell 9,000 of its 149,000 workers to relocate to an office area or be fired.

Peter Miscovich, managing director at global real estate firm JLL, said the US is entering a “hybrid winter,” as many CEOs impose RTO mandates that could pose talent attraction and retention challenges for leading IT organizations. That’s especially true for advanced tech leadership teams and IT departments that have built sophisticated hybrid work practices over time and invested significantly in hybrid operational technologies and related infrastructure across global organizations, he said.

“Perhaps the most significant risk associated with RTO mandates is the potential loss of valuable and critical IT digital talent,” Miscovich said. “The IT tech sector has embraced hybrid work more thoroughly than most industries, and IT professionals now view hybrid workplace flexibility as a standard expectation for the workforce rather than a perk.”

Over the past two or so years, remote work — once praised as the new paradigm for productivity and employee satisfaction — began losing some of its luster as more organizations required workers to get back to their cubicles, at least part time.

In fact, many organizations are already struggling to fill a significant IT talent gap. In some cases, generative artificial intelligence (genAI) has been able to replace needed workers; in most other instances, the dearth of tech talent remains.

Mandates can exacerbate employee churn

According to new research from the University of Pittsburgh, S&P 500 companies that rolled out RTO mandates experienced “abnormally high” employee turnover and longer time-to-hire when filling job vacancies. “This significant IT digital talent brain-drain risk is particularly acute given the current competitive market for technology talent,” Miscovich said.

In 2025, CIOs and senior IT leaders face growing challenges when trying to attract top talent while maintaining operational excellence and managing workplace transformation amid RTO mandates, according to Miscovich. Resistance to the mandates is especially strong in global IT departments where hybrid and remote work are deeply integrated and have proven highly effective.

John Veitch, dean of the School of Business and Management at Notre Dame de Namur in Belmont, California, said RTO mandates are “generally” a sign of insecure leadership. In other words, executives don’t trust what they can’t see. RTO mandates say, “I have to see people working and earning their living,” he said.

Veitch didn’t have strong feelings either way about RTO mandates for federal government workers, though from a workflow point of view, he said, he’s not convinced there are benefits.

He agreed that, with the tech unemployment rate near historic lows, the mandate could push some workers out the door. “I don’t think the federal government pays particularly well relative to what you can get if you’re a top-flight technologist at a Silicon Valley firm,” Veitch said. “Obviously, people who have options will choose those options, particularly if return-to-office is a deal breaker for them. So, I don’t think it’s going to help the government in any way, shape or form to retain talented people.”

Further research points to other problems with RTO mandates. Being in the office five days a week leads to higher rates of burnout, lower morale, and inefficiencies associated with commuting time, according to J. P. Gownder, a principal analyst at Forrester Research.

On average, US workers spend 2.3 days in the office each week, according to a Stanford University study. A separate Stanford study found that hybrid work had zero effect on workers’ productivity or career advancement and dramatically boosted retention rates.

In general, hybrid working arrangements hold numerous advantages over full-time in-office work, Gownder said, and for non-collaborative work, home offices are far better suited because they create “a focused environment.”

“Despite some managers’ concerns, employees who work in hybrid fashion are more productive than those who spend all their time in the office. Most employees engage in a mix of personal and collaborative work,” he said.

In fact, hybrid work boosts employee productivity, performance, and retention, according to Nicholas Bloom, a professor of economics at Stanford. Because of RTO mandates, employees are often forced to commute in only to do tasks they could handle at home. Even in the office, many still rely on videoconferencing to collaborate with colleagues in other locations.

As a result of that and other inconveniences, organizations that move from hybrid work to full-time in-office work can expect higher attrition rates, Gownder argued. “Sometimes, managers impose these policies specifically to drive higher attrition, in lieu of layoffs,” he said. “IT talent often can work effectively remotely, and attrition rates in general are higher among IT professionals.”

Too many workers, too few offices?

Additionally, the federal government and private companies have a dramatically smaller number of offices to which they could return, as many companies have consolidated their footprints to a few key hubs. AT&T, for example, ordered 60,000 managers to work from one of only nine offices, forcing 9,000 employees to relocate or resign.

As the pandemic eased in 2022 and 2023, US core business centers in large and small cities continued to suffer the after-effects of remote- and hybrid-work policies, which led to a 20% to 40% reduction in office space use and a devaluation of properties. The big switch to remote work left many downtowns largely empty for months.

Data indicates that approximately 80% of offices had downsized by the end of 2023. Other sources indicate the downsizing slowed last year, and by Q4 2024, office leasing rates were at about 92% of pre-pandemic levels, according to David Brodeur-Johnson, employee experience research lead at Forrester Research.

“And yes, I believe that firms would be willing to expand their office spaces as needed to keep up with capacity, but most aren’t there yet,” he said.

While most organizations adopted hybrid-work policies, requiring employees to be in the office a few days each week while allowing work from home on other days, the Trump Administration’s edict is a strict, five-day RTO. That’s likely to incur an employee backlash, Brodeur-Johnson said.

The dangers of disengagement

The federal government, he said, risks employee disengagement more than attrition. Monitoring federal employee surveys, such as the annual Federal Employee Viewpoint Survey, will be crucial, as disengagement is costly for both employees and employers.

“It’s important to bear in mind also that autonomy is a primary source of intrinsic motivation — the kind that comes from within — so I would argue strongly that the biggest negative impact will likely be to employee engagement instead of attrition,” Brodeur-Johnson said.

Academic studies also show that strong social relationships are key to remote work success, with emotional closeness outweighing physical distance. But simply being in the office doesn’t promote social relationships, Brodeur-Johnson pointed out. “Which is why companies like Nvidia have left it up to employees to decide, up to and including fully remote work,” he said. “How close people feel to each other emotionally is far more important than physical distance.”

While Trump’s executive order could spark a wider look at RTO edicts elsewhere, most private companies have settled on what their employees will tolerate. Meanwhile, federal agencies have steadily increased in-office requirements, so the latest change shouldn’t be a surprise, Brodeur-Johnson said.

But for some workers — especially those with care-giving duties or better flexible job options — it could be the tipping point. Top talent might leave first, which is why RTO mandates have slowed recently, he said.

Meta wants everyone to know that it, too, is investing a lot in AI

Not to be outdone by its close rival OpenAI, Meta has announced plans to spend $60 billion to $65 billion on AI infrastructure this year and is building a data center almost as big as Manhattan.

In a Facebook post, Meta CEO Mark Zuckerberg announced his company’s intent to build a 2GW data center, bring roughly 1GW of compute online in 2025, and end the year with more than 1.3 million GPUs.

Included in his post was a blueprint of the planned Richland Parish data center superimposed on a map of Manhattan (the facility will actually be in northeast Louisiana).

DOJ indicts North Korean conspirators for remote IT work scheme

The US Department of Justice this week announced that it had indicted two North Korean nationals and three other men, accusing them of participating in a conspiracy designed to trick US companies into funding the North Korean regime.

According to the indictment, which was filed in federal court in Miami, the scheme leveraged stolen identity documents and paid henchmen in the US to direct well-paid IT work and company computers to two North Korean men, Jin Sung-Il and Pak Jin-Song. The idea, the Justice Department said, was to funnel money back to the North Korean regime, which has limited opportunities to generate cash through legal means thanks to heavy international sanctions.

The conspiracy, according to the indictment, centers on North Korean nationals posing as foreign workers in other nations, or as US nationals, and gaining employment via online platforms that allow companies to advertise for contract IT workers. Using fake or altered identity documents, the North Koreans took on contracts for several US companies, which were not identified by name in the indictment. Those businesses then shipped company laptops to three US-based co-conspirators, Pedro Ernesto Alonso De Los Reyes, Erick Ntekereze Prince, and Emanuel Ashtor, who, the Justice Department said, installed remote access software on them so that they could be operated by Jin and Pak.

The US-based members of the group also used their own companies as fronts for the conspiracy, invoicing several of the victim firms and funneling payments to the North Koreans. The indictment stated that at least 64 US companies were victimized, and payments from ten of them generated at least $866,255 in revenue over the duration of the scheme, which ran for more than six years.

All five defendants are charged with conspiracy to damage a protected computer, mail and wire fraud, money laundering, and transferring false identification documents. The two North Koreans are additionally charged with violating the International Emergency Economic Powers Act. Each could face up to 20 years in prison.

Highlights risk from North Korea

“The indictments announced today should highlight to all American companies the risk posed by the North Korean government,” said Assistant Director of the FBI’s Cyber Division, Bryan Vorndran, in a statement.

While the indictments announced Thursday characterized this conspiracy as largely focused on diverting money to the heavily embargoed North Korean government, similar efforts by that country have been aimed at compromising corporate secrets and sensitive information. The “laptop farm” — where a US-based associate such as Prince or Ashtor hosted the provided company laptops in their own homes to conceal the North Korean involvement — has been a known technique for North Korean cyberwarfare since at least 2022, and has been used not just to collect a salary, but to steal data, explore sensitive parts of strategically significant infrastructure, and attempt to extort victimized firms.

The operations are growing in both numbers and sophistication, according to security firms who spoke to CSO in November. One recent case saw a bad actor use deepfake video technology and automated voice translation in a video interview, though this didn’t work particularly well and the interviewers were easily able to tell that something was wrong.

“Her eyes weren’t moving, the lips weren’t in sync, and the voice was mechanical,” Kirkwood told CSO. “It was like something from a 1970s Japanese Godzilla movie.”

Google-owned threat intelligence provider Mandiant told CSO that North Korean IT workers looking to gain valuable freelance positions number in the thousands, and although not all are engaged in purely nefarious activity, the number of intrusion incidents linked to North Korean workers is high.

Trump’s move to lift Biden-era AI rules sparks debate over fast-tracked advances — and potential risks

President Donald Trump’s executive order removing Biden-administration rules governing AI development is being cast as an opening of the AI development floodgates, one that could fast-track advances in the still-new technology but could also pose risks.

Signed on Thursday, the executive order (EO) overturns former President Joe Biden’s 2023 policy, which mandated that AI developers conduct safety testing and share results with the government before releasing systems that could pose risks to national security, public health, or the economy.

The revocation of the 2023 EO shifts federal oversight from mandates to voluntary commitments, reducing requirements such as safety-test submissions and notifications of large-scale computing cluster acquisitions, and enabling less regulated innovation.

“This means some states may continue to follow the regulatory guidance in the 2023 EO, while others may not,” said Lydia Clougherty Jones, a senior director analyst at Gartner Research.

Trump’s policy states its purpose is to “sustain and enhance America’s dominance in AI” and promote national security. The EO directs the creation of an “AI Action Plan,” led by the Assistant to the President for Science and Technology, the White House AI and Crypto Czar, and the National Security Advisor. Michael Kratsios (US CTO during the first Trump administration), David Sacks (venture capitalist and former PayPal executive), and US Rep. Mike Waltz (R-Fla.) have been nominated or appointed, respectively, to these positions.

A public-private partnership on AI

Along with the order, Trump also unveiled the Stargate initiative, a public-private venture that would create a new company to build out the nation’s AI infrastructure, including new data centers and new power plants to feed them. Stargate will initially team the US government with OpenAI, Oracle, and SoftBank, which will invest $100 billion in the project to start, with plans to reach $500 billion. Trump said the move would create 100,000 US jobs.

Oracle Chairman and CTO Larry Ellison, for example, said 10 new AI data centers are already under construction. He linked the project to the use of AI for digital health records, noting the technology could help develop customized cancer vaccines and improve disease treatment.

Not everyone is, however, upbeat about the loosening of government oversight of AI development and partnerships with the private sector.

The Stargate announcement, along with the Trump Administration’s reversal of the earlier AI safety order, could displace many federal workers in key public service roles, according to Cliff Jurkiewicz, vice president of global strategy at Phenom, a company specializing in AI-enabled human resources.

“While it’s impressive to see such a significant investment by the federal government and private businesses into the nation’s AI infrastructure, the downside is that it has the potential to disenfranchise federal workers who are not properly trained and ready to use AI,” Jurkiewicz said. “Federal employees need training to use AI effectively; it can’t just be imposed on them.”

Stargate will speed up what Jurkiewicz called “the Great Recalibration” — a shift in how work is performed through a human-AI partnership. Over the next 12 to 18 months, businesses will realize they can’t fully replace human knowledge and experience with technology, “since machines don’t perceive the world as we do,” he said.

The move could put smaller AI companies at a competitive disadvantage by stifling innovation, Jurkiewicz said. “Stargate could also deepen inequities, as those who know how to use AI will have a significant advantage over those who don’t.”

Removing AI regulations, however, won’t inherently lead to a completely unbridled technology that can mimic human intelligence in areas such as learning, reasoning, and problem-solving.

Commercial risk will drive responsible AI, with investment and success shaped by the private market and state regulations, according to Gartner. Industry commitments and consortia will advance AI safety and development to meet societal needs, independent of federal or state policies.

AI unleashed to become Skynet?

Some predict AI will become as ubiquitous as electricity or the internet, in that it will eventually be operating behind the scenes and woven into everyday life, silently powering countless systems and services without drawing much attention.

“I’m sure the whole Terminator thing could happen. I don’t consider it likely,” said John Veitch, dean of the School of Business and Management at Notre Dame de Namur in Belmont, CA. “I see lots of positive things with AI and taking the guardrails off of it.”

Regulating something as transformative as AI is challenging, much like the early internet. “If we had foreseen social media’s impact in 1999, would we have done things differently? I don’t know,” Veitch said.

Given AI’s complexity, less regulation might be better than more, at least for now, he said.

AI is valuable as the US faces an aging population and a shrinking labor force, Veitch said. With skilled workers harder to find and expensive to hire, AI can replace call centers or assist admissions teams, offering cost-effective solutions. For example, Notre Dame de Namur’s admissions team uses generative AI to follow up on enrollment requests.

Trump’s executive order prioritizes “sovereign AI” affecting the private market, while shifting most regulatory oversight to state and local governments. For example, New York plans to restrict government use of AI for automated decisions without human monitoring, while Colorado’s new AI law, effective in 2026, will require businesses to inform consumers when they’re interacting with AI, Gartner’s Jones said.

The revocation of Biden’s 2023 order reduces federal oversight of model development, removing requirements such as submitting safety-test results or sending notifications about large-scale computing cluster acquisitions, which could encourage faster innovation, according to Jones. “Thus, it was not a surprise to see the Stargate announcement and the related public-private commitments,” she said.

Strengthening sovereign AI, Jones said, will boost public-private partnerships like Stargate to maintain US competitiveness and tech leadership.

What enterprises should focus on

Now that the regulatory buck has been passed to states, so to speak, organizations should monitor US state AI executive orders, laws, and pending legislation, focusing on mandates that differentiate genAI from other AI techniques and apply to government use, according to a Gartner report.

“We have already seen diverse concerns unique to individual state goals across the nearly 700 pieces of state-level AI-proposed legislation in 2024 alone,” Gartner said.

According to Gartner:

  • By 2029, 10% of corporate boards globally are expected to use AI to challenge key executive decisions.
  • By 2027, Fortune 500 companies will redirect $500 billion from energy operating expenses to microgrids to address energy risks and AI demands.
  • By 2027, 15% of new applications will be fully generated by AI, up from 0% today.

Executives should identify patterns in new laws, especially those addressing AI biases or errors, and align responsible AI with company goals. Companies are also being urged to document AI decision-making and manage high-risk use cases to ensure compliance and reduce harm.

Organizations should also assess opportunities and risks from federal investments in AI and IT modernization. For global operations, companies will need to monitor AI initiatives in regions like the EU, UK, China, and India, Gartner said.

“Striking a balance between AI innovation and safety will be challenging, as it will be essential to apply the appropriate level of regulation,” the researcher said. “Until the new administration determines this balance, state governments will continue to lead the way in issuing regulations focusing on AI innovation and safety-centric measures that impact US enterprises.”