Month: July 2024

Enterprises urged to think carefully about Windows 10 extended support options

Independent experts have urged businesses to think carefully before relying on third-party support for security patches once Windows 10 reaches its end of life in October 2025.

Upgrading from Windows 10 may be challenging for some businesses because many older PCs may not meet the minimum system requirements for Windows 11. Some software or applications may not be compatible with Windows 11, forcing users to stick with Windows 10 or find alternatives.

In addition, point-of-sale (POS) terminals running Windows 10 may be difficult to upgrade, presenting a particular challenge for IT professionals in the retail and hospitality sectors.

As with the retirement of previous versions of Windows, Microsoft is offering enterprises extended support for Windows 10. For commercial customers and small businesses this comes in at $61 per device in the first year, doubling to $122 per Windows 10 device in year two and $244 per device for the third and final year.

Organizations using cloud-based update management enjoy lower pricing: $45 per user, covering up to five devices, in the first year.

Educational institutions get a steep discount: extended support costs a total of just $7 over the maximum of three years.

Microsoft’s Extended Security Updates program delivers monthly critical and important security updates for Windows 10, but with no access to new features and only for up to three years.

Micro-patching alternative

Acros, a Slovenian company specializing in security updates, announced Wednesday that it will offer Windows 10 enterprise users extended support under its 0patch brand for up to five years, at a lower cost than Microsoft.

For medium and large organizations, 0patch Enterprise includes central management and multiple users and roles, and comes in at €34.95 (around $38) per device per year, excluding tax. A cut-down version pitched at small businesses and individuals, 0patch Pro, costs €24.95 plus tax per device per year.
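For budgeting purposes, the gap is easy to quantify. Here is a minimal Python sketch using the per-device figures quoted above; the $38 dollar equivalent for 0patch is an approximate, pre-tax conversion, so treat the output as a rough comparison rather than a quote:

```python
# Per-device cost comparison using the figures quoted in this article.
# Assumes the approximate conversion EUR 34.95 ~= USD 38 and ignores tax;
# actual totals will vary with exchange rates.

MS_ESU_YEARLY = [61, 122, 244]     # USD; doubles each year, three-year maximum
ZEROPATCH_YEARLY_USD = 38          # ~EUR 34.95 per device per year, flat rate

ms_total_3yr = sum(MS_ESU_YEARLY)          # full three-year ESU term
zp_total_3yr = ZEROPATCH_YEARLY_USD * 3    # same period on 0patch
zp_total_5yr = ZEROPATCH_YEARLY_USD * 5    # 0patch's maximum five-year term

print(f"Microsoft ESU, 3 years:  ${ms_total_3yr} per device")
print(f"0patch, 3 years:        ~${zp_total_3yr} per device")
print(f"0patch, 5 years:        ~${zp_total_5yr} per device")
```

Even stretched over five years, the third-party option comes in well under Microsoft's three-year total in this rough comparison.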

0patch uses a system of “micro-patches” to address critical vulnerabilities, an approach touted as faster and offering a lower potential for system instability. The vendor has previously offered extended support for Windows 7 and Windows 8.

The company said it may offer fixes for vulnerabilities that Microsoft leaves unpatched while also providing patches for non-Microsoft products (such as Java runtime, Adobe Reader etc.), as explained in a blog post.

Gauging risk to reward

Rich Gibbons, head of market development at IT asset management specialist Synyega, noted that third-party support is an established part of the enterprise software market.

“Businesses regularly bring in third parties to help patch and maintain their legacy Oracle, SAP and IBM estates, and while it’s not as common with Microsoft, it’s still a legitimate option, and one worth assessing,” Gibbons said.

“Purchasing extended support packages from Microsoft is expensive and will only go up in price each year. It’s therefore little wonder that more cost-effective options like those offered by 0patch are beginning to gain traction,” Gibbons added.

He advised companies to conduct a full risk-reward analysis to determine whether the cost savings justify choosing alternatives like 0patch over purchasing extended support from Microsoft or biting the bullet and upgrading their systems.

Leaving Microsoft’s ecosystem

Javvad Malik, lead security awareness advocate at KnowBe4, also urged companies to be careful about opting for third-party support rather than facing the financial and operational burdens of a significant overhaul.

“The viability of turning to a third party for extended support, as opposed to embarking on the arguably Herculean task of retooling apps and refreshing hardware to embrace Windows 11, is, on the surface, an attractive proposition,” Malik told Computerworld. “However, engaging with a third party for security patches introduces a layer of dependency beyond the control of Microsoft’s established ecosystem.”

Malik warned that relying on third-party support for a prolonged period might make it more difficult to upgrade in the future.

“Upgrading from one version to the next is relatively simple compared with upgrading two or more versions from the current version of any software. So the cost of delaying an upgrade needs to be evaluated in totality, and not just as a comparison to an upgrade today,” Malik advised.

In response to this criticism, 0patch co-founder Mitja Kolsek told Computerworld that deferring a costly Windows upgrade can be beneficial, while acknowledging that enterprises have to move on eventually.

“While an upgrade may eventually be inevitable for functional and compatibility reasons, we’re making sure that you’re not forced to upgrade because of security flaws that the vendor won’t fix anymore,” Kolsek explained. “At the same time, five years is a long time and a lot can happen — maybe you’ll be able to skip a version, or start using some other tool altogether.”

Will your business apps run on the latest Copilot+ PCs?

Microsoft’s first wave of Copilot+ PCs is here. They’re powered by Qualcomm Snapdragon X Elite hardware, which is a big deal for Windows. This is Microsoft’s version of Apple’s transition to the Arm architecture with its M-series Macs. And existing Windows applications aren’t guaranteed to run on an Arm-powered Windows PC.

The good news is that most applications will run — and Microsoft’s Prism translation layer does a good job of running them with decent speed, even. But not everything will work.

Here’s what you need to know.

Want more insights on the future of Windows? Sign up for my free Windows Intelligence newsletter — I’ll send you three things to try every Friday. Plus, get free Windows Field Guides (a $10 value) as a special welcome bonus!

Qualcomm Snapdragon Arm Copilot+ rule #1: There are no guarantees

The move to an Arm architecture is a big shift. If Microsoft hadn’t created the Prism translation layer, no existing Windows apps would “just work” on a Qualcomm Snapdragon PC. It’s just like Apple’s transition on the Mac, where the Rosetta software enabled existing Mac apps to run on an Arm-based M-series chip.

But the Mac transition was different: Apple put developers on notice that all future Macs would be Arm-based. For Windows, only some new PCs use Arm processors. Intel and AMD aren’t being left behind — most Windows PCs will likely use the traditional x86 architecture for years to come.

To ease the transition, many existing Windows applications will just work on an Arm-based PC. And by “just work,” I mean it — you can double-click their installers and run them like normal. Unless you dig into the process details in the Task Manager, you might not even know you’re using an x86 application.

But that support only goes so far. Certain types of apps simply won’t function under the Prism translation layer. Some hardware devices might not work with these PCs either. Plus, some heavy-duty professional applications could be slowed so much by the translation layer as to be unusable.

Google Drive on Arm
Google Drive flat-out refuses to install on a Qualcomm Snapdragon Arm Copilot+ computer.

Chris Hoffman, IDG

Qualcomm Snapdragon Arm Copilot+ rule #2: Some apps will have problems

There are a few types of applications that are guaranteed not to function properly through Prism. They will work if developers port them to Arm — but there’s no guarantee developers will bother, especially for existing business apps.

Specifically, keep an eye out for:

  • File sync tools that integrate with File Explorer: These must be ported to Arm to function properly. For example, as shown above, you can’t install Google Drive on a Windows on Arm PC at launch. If this tool is important to you, you will have to access Google Drive in a web browser or use a third-party syncing app.
  • Hardware devices that need manufacturer-provided drivers: The Prism translation layer won’t help Windows on Arm use hardware drivers for x86 PCs. In practice, this means many existing hardware devices — especially printers — won’t work. This is one reason why Microsoft is moving away from manufacturer-provided printer drivers.
  • Any application that needs a driver: Some applications use drivers to integrate at a low level with the Windows kernel. For example, many PC games use this for anti-cheat features. This is why Fortnite won’t run on Windows on Arm. But the problem extends beyond games and could affect business-specific productivity tools, too, as any type of application that uses such low-level Windows system integration won’t work. Many third-party antivirus tools don’t support Windows on Arm, either.
  • High-end, demanding applications: At launch, the Adobe Premiere Pro video editor does not yet natively run on Arm. While it’s possible to run the x86 version through Prism, many users are reporting severe performance problems. Microsoft says a native Arm version is coming later in 2024. This is just one example, and you might encounter a demanding business application that won’t be ported. (And a demanding application that requires a lot of hardware resources might not deliver the performance you’d expect on an Arm PC.)

The slowdowns aren’t exclusive to high-end applications. All applications will run best on these PCs if the developer ports them to run natively on Arm hardware. But many lightweight applications that don’t need low-level integration with Windows will run just fine, with no perceptible slowdown.

Task Manager details
The Details pane in the Task Manager shows which applications are translated 64-bit x86 (x64) software and which apps are native 64-bit Arm code (Arm64). 

Chris Hoffman, IDG

3 ways to see whether your Windows apps run on an Arm Copilot+ PC

I wish there were a big database that would list apps and how they run on these PCs. At launch, there doesn’t appear to be such a website — perhaps someone will launch a resource in the future.

For now, Microsoft has endorsed the Windows on ARM Ready Software website. However, despite the promising name, that site is just about PC games — which doesn’t do much for users focused on serious workplace productivity.

So here are three practical ways to determine if an application is compatible:

  1. Contact the vendor or developer: The best way to find out whether an application will work is to contact the vendor or developer and simply ask whether they support the application on Arm-based Windows PCs, such as those using Qualcomm Snapdragon X Elite hardware.
  2. Do research yourself: You might just have to search the web for the name of the application and “Arm” or “Snapdragon” to see if other people are reporting their experiences. You might find some good discussions on Reddit. Your mileage may vary depending on how many people use the application in question.
  3. Test it yourself: Many businesses will want to test the applications they depend on before buying Arm-based PCs for their employees. There’s really no way to determine whether a workflow works other than to try it yourself. If you’re an individual, I recommend thinking about return policies: For example, the Microsoft Store has a 60-day return policy. If you buy a Copilot+ PC with an Arm processor from Microsoft and find it doesn’t work with your apps or hardware, you can return it.
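Alongside those three approaches, you can ask the operating system what architecture your current process sees. The sketch below is a rough helper; the mapping of `platform.machine()` strings is an assumption covering common values, and note that an x64 Python build running under Prism will still report itself as x64, so this reflects the running process, not necessarily the chip underneath.

```python
import platform

def classify_arch(machine: str) -> str:
    """Map a raw platform.machine() string to a friendly label.

    The handled values ('ARM64', 'AMD64', 'x86_64', ...) are common but
    not exhaustive; other systems may report different names.
    """
    m = machine.lower()
    if m in ("arm64", "aarch64"):
        return "Arm64 (native on a Snapdragon X Elite PC)"
    if m in ("amd64", "x86_64"):
        return "x64 (would run via Prism translation on an Arm PC)"
    if m in ("i386", "i686", "x86"):
        return "32-bit x86"
    return f"unknown ({machine})"

if __name__ == "__main__":
    # Reports what the current Python process sees.
    print(classify_arch(platform.machine()))
```

For a definitive answer on a given PC, the Task Manager’s Details pane (mentioned above) remains the simplest check.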

With the release of these new laptops in July 2024, it’s still early days. While Windows on Arm has existed for many years now, it’s finally starting to look competitive. The demand for compatible software will likely motivate application developers to start taking it more seriously.

But we all know how Windows works: Some business applications were written many years ago and will never get a major update that ports them to a new architecture. The good news is that many should run fine on these new PCs with no extra development effort. The bad news is that applications that don’t will be left behind.

Still, maybe that’s not so bad. Intel is promising that its upcoming Lunar Lake hardware will be competitive with these Arm-based PCs when it comes to snappy performance with long battery life. Intel’s big pitch is that you’ll get these advantages without the headaches of an architectural shift and with compatibility for all your existing x86 software — no Prism translation layer necessary.

We’ll see whether Intel can deliver on its promises when its next-generation Core Ultra hardware starts arriving later in 2024.

I’ll have lots more to say as I spend more time with these new PCs!

AI washing: Silicon Valley’s big new lie

“Can you go through all the old pitch decks and replace the word ‘crypto’ with ‘A.I.’?”

This caption, part of a New Yorker cartoon by Benjamin Schwartz, perfectly captures Silicon Valley’s new spirit of AI washing.

AI washing sounds like just another spin cycle, but it’s actually a complex and multifaceted phenomenon. And it’s important for everyone reading this column — technology leaders, marketers, product builders, users, and IT professionals of every stripe — to understand the exaggeration, warped emphases, and outright lying that we all encounter in not only marketing and sales, but also the stories we read based on industry claims.

Understanding AI washing

AI washing is a deceptive marketing practice that overemphasizes the role of artificial intelligence in the product or service being promoted. The term plays on “greenwashing,” coined by environmentalist Jay Westerveld in 1986 to describe consumer products marketed as environmentally friendly regardless of their actual environmental impact.

Products using old-school algorithms are labeled “AI-powered,” taking advantage of the absence of a universally agreed-upon definition of what AI is and is not. Startups build apps that plug into a publicly available generative AI API and market them as AI apps. Big, bold AI projects that are supposed to showcase the technology often rely on people working behind the scenes, because humans are the only way to make the ambitious AI solution work.

Let’s talk more about that last one.

AI: It’s made out of people

Retail giant Amazon rolled out 44 high-tech stores called Amazon Go and Amazon Fresh, which (starting in 2016) used the company’s “Just Walk Out” set of technologies. (I first told you about this initiative in 2017.)

Amazon’s vision: Stores where consumers could walk in, choose their items from shelves, then walk out without encountering a human behind a cash register. Sensors, including cameras, would feed into AI, which could figure out who bought what and charge accordingly — all without any checkout process. It felt like shoplifting, but legal.

The system was powered by advanced computer vision, which watched customers and what they picked up. Sensors in the shelves conveyed the weight of items removed, confirming the kind and number of items detected by the cameras. RFID tagged items also added information to the mix. Advanced machine learning algorithms processed the data from cameras and sensors to identify products and associate them with specific shoppers. Electronic entry and exit gates determined who was entering and leaving and when.

The algorithms were trained on millions of AI-generated images and videos to recognize products, human behavior, and human actions.

For seven years, Amazon has been eager to talk about these components of its Just Walk Out technologies. But the tech giant has been hesitant to discuss the 1,000 or so human beings hired to make it all actually more or less function — and admitted the existence of these employees only after press reports exposed them. Even then, Amazon has obscured the specific role these employees played, saying only that they didn’t review video.

Even with 1,000 employees monitoring and enabling 44 stores (checking three-quarters of orders, according to reports), the technology has been beset by problems, including delayed receipts, mismanaged orders, and high operational costs. 

This year, Amazon has been phasing out Just Walk Out technology from its main stores but still offers it as a service to other companies.

Another big example of humans behind the AI curtain is the world of self-driving cars.

Alphabet’s Waymo (the operation formerly known as Google’s self-driving car initiative) has a NASA-style command center where employees monitor cars through cameras and step in remotely when there’s a problem. (Here’s a fast-motion video I took recently of a ride through San Francisco in a Waymo car.)

General Motors’ Cruise subsidiary admits its self-driving taxis need human assistance on average every 4 to 5 miles, with each remote control session lasting an average of 3 seconds.

Other self-driving companies rely on remote human operators even more. In fact, a German company called Vay straight up uses human operators to drive the cars, but remotely. The company recently rolled out a valet parking service in Las Vegas. The car is remotely driven to you, and you drive it wherever you like. Upon reaching your destination, you just get out and a remote operator will park it for you.

Amazon’s stores and self-driving cars are just two available examples of a phenomenon that’s widespread.

Why AI washing happens

The high-level, high-paid technologists building AI systems believe in AI, and believe it can solve extremely complex problems. Which it can — theoretically. They tell their superiors it can be done. Those leaders tell their board it can be done. Company C-suites tell investors it can be done. And as a company, they tell the public it can be done.

There’s just one small problem: It can’t be done.

Most companies feel some sense of accountability for lofty claims, and so they hide the degree to which the product or service depends on humans behind the curtain making decisions, working through problems, and enabling the “magic” to take place.

The more shameless companies remain undeterred by proof that their AI isn’t quite as capable as they claimed or believed, so they just re-up their claims again and again. Tesla CEO Elon Musk comes to mind.

In October 2016, Musk said Tesla would demonstrate a fully autonomous drive from Los Angeles to New York by the end of 2017.

By April 2017, he predicted that in about two years, drivers would be able to sleep in their vehicle while it drove itself.

In 2018, Musk moved his promise of full Tesla self-driving to be by the end of 2019.

In February 2019, Musk promised full self-driving “this year.”

In 2020, Musk claimed that Tesla would have over 1 million self-driving robotaxis on the road by the end of the year.

Even this year, Musk claimed full self-driving Teslas might happen “later this year.”

It’s not going to happen. Musk is deluding himself and his customers. Musk is the Mr. Clean of AI washing.

The real problem with AI washing

The cumulative effect of AI washing is that it leads both the public and the technology industry astray. It fuels the delusion that AI can do things it cannot do. It makes people think AI is some kind of all-purpose solution to every problem — or a slippery slope into dystopia, depending on one’s worldview.

AI washing incentivizes inferior solutions, focusing on “magic” rather than quality. Claiming your dog-washing hose is “powered by AI” doesn’t mean you end up with a cleaner dog. It just means you have an overpriced hose.

AI washing warps funding. Silicon Valley investment nowadays is captured by both actual AI and AI-washed solutions. Even savvy investors may overlook AI-washing exaggerations and lies, knowing that the AI story will sell in the marketplace thanks to buyer naiveté.

The biggest problem, however, is not delusional selling by the industry, but self-delusion. Purveyors of AI solutions believe that human help is a badge of shame, when in fact I think human involvement would be received with relief.

People actually want humans involved in their shopping and driving experience.

What we need is more human and less machine. As we speak, AI-generated garbage is flooding the zone with cringy prose and falsehoods, along with weird, sometimes horrifying, images. Google is so eager to replace its search engine with an answer engine that we end up with glue on our pizza.

What the public really wants is a search engine that will point us to human-created content or, at least, a PageRank system that favors the human and labels the AI-generated.

The AI-washing phenomenon is built on delusion. It’s built on the delusion that people want machines creating and controlling everything, which they don’t. It’s based on the delusion that adding AI to something automatically improves it, which it doesn’t. And it’s based on the delusion that employing people represents a failure of technology, which it doesn’t.

Enough delusional AI washing already! Sellers need to tell the truth about AI. And buyers need to demand proof that any AI in the products and services we pay for actually does something useful.

I think I speak for all of us in the technology industry, the technology customer community, and the tech press when I say to Silicon Valley: Stop gaslighting everybody about AI.

Proton launches ‘privacy-first’ alternative to Word and Google Docs

Proton has unveiled an end-to-end encrypted document editor that it said will provide an alternative to Microsoft Word and Google Docs for privacy-conscious users.

Docs in Proton Drive, announced on Wednesday by the Swiss software vendor that’s best known for its encrypted email app, contains many of the document creation features that office workers might expect.

Users can create and edit documents, share with colleagues for real-time collaborative work, leave comments and replies, and import and export common file types such as .docx and .txt. The app is available in Proton Drive, an encrypted cloud storage service launched by the vendor in 2022.

But it’s the end-to-end encryption rather than the document editing features that makes Proton’s editor stand out from well-established alternatives on the market.

Only customers are given access to the end-to-end encryption keys, which means any data entered into a Proton Docs document is inaccessible to Proton, the company said. That includes keystrokes and cursor movements.
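The underlying idea can be illustrated with a toy sketch. This is not Proton’s actual protocol, and the hash-derived keystream below is not a real cipher (production code should use a vetted AEAD such as AES-GCM); it simply shows that when the key is generated and held client-side, the server only ever stores ciphertext it cannot read:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key+nonce via SHA-256.
    Toy construction for illustration only; not a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# The key is generated and kept client-side; the server stores only
# (nonce, ciphertext) and cannot recover the document text.
client_key = secrets.token_bytes(32)
nonce, stored_on_server = encrypt(client_key, b"Quarterly plan: confidential")
assert decrypt(client_key, nonce, stored_on_server) == b"Quarterly plan: confidential"
```

The point is architectural rather than cryptographic: because decryption requires `client_key`, a compromise of the storage server exposes only ciphertext.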

Privacy measures in Proton’s Docs app contrast with the likes of Google Docs, which can “see everything you write and keep a record of all changes that you have ever made,” said Anant Vijay, senior product manager for Proton Mail and Proton Drive, in a blog post.

“Once you provide your data to these companies, you no longer have control over how it is used,” he said, citing growing concerns around the ability of software vendors to train their AI algorithms on customer data.

There’s also the risk that data contained in documents could be accessed should a vendor’s server be compromised.

Another advantage of Proton Docs, the company claims, is that user data is stored on Proton’s cloud servers in Switzerland. Strict Swiss data privacy laws ensure that information stored on Proton’s servers is not subject to access by government authorities in the EU or US, for instance.

The rollout of Docs to Proton Drive customers starts today, with the feature available to all users in the “next couple of days,” Proton said. Proton Drive is available to consumers under a freemium model, with individual subscriptions costing up to €10 a month (currently about US$10.80) billed annually. Proton for Business subscriptions start at €7 per user per month.

Apple’s Phil Schiller may join OpenAI’s board

Apple Fellow and App Store head Phil Schiller may have something else to fill his time, taking an observer role on the OpenAI board, a Financial Times report claims. It’s yet another signal of the importance Big Tech now attaches to generative AI.

Schiller hasn’t attended a meeting yet but is expected to take the role as ChatGPT support is rolled into Apple devices. There is precedent for this: OpenAI’s other Big Tech partner, Microsoft, also holds an observer’s seat on the board.

Some might say

Some might say the decision to bring Apple more fully inside the tent means OpenAI hopes to persuade Apple to integrate its tech more deeply into Apple products. It seems unlikely that Apple will easily be convinced to move beyond a certain point, in part because it is expected to work with other AI suppliers (principally Google Gemini), but also on the strength of its own investments in Apple Intelligence and future fee-based AI services. It seems far more likely to reflect the need to ensure good governance.

Think back and you’ll remember that Microsoft, which has invested $13 billion in OpenAI, gained its own observer’s seat after the November 2023 boardroom battle at OpenAI during which co-founder Sam Altman was fired and then rehired as CEO.

The truth is neither Apple nor Microsoft will want to countenance poor governance or flawed results as they make the tech available to the world’s population of Windows, Mac, Surface, iPad, and iPhone users. 

Wonderwall

Holding positions, even nonvoting observer positions, on the OpenAI board may help them protect against that, and those roles may expand should Altman’s board have a second meltdown, or in the event the company becomes an acquisition target for either, both, or another big firm.

Microsoft and Apple may also recognize the need to both partner and support AI firms while also developing their own tech, particularly in light of increased regulatory interest in the sector. The US Federal Trade Commission earlier this year launched an inquiry into the partnerships between Big Tech firms and genAI companies. 

“Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition,” said FTC Chair Lina M. Khan in a statement at that time.

Definitely maybe

Competitive concerns aside, the swift evolution of these technologies has thrown a very large brick into the middle of the tech industry pond. Not only does server-based AI generate problems around energy and water supply, but hardware manufacturers are hustling to make or deploy devices with enough computational horsepower to handle this form of AI. Even Apple appears to have been forced to accelerate progress along its processor road map — the M4 iPad Pro was a huge surprise, and with additional M4 models set to ship this year and the expectation now that all iPhone models will gain their own higher-end chip, it’s crystal clear the hardware is being tooled up to handle genAI.

There is, however, a limit to what is possible, so it makes sense for Apple — and Microsoft — to gain insight into OpenAI’s future plans, which will both inform their own product development and help guide OpenAI’s. 

Standing on the shoulder of giants

In Apple’s case, the company is also developing its own Apple Intelligence strategy with the introduction of on-device and self-hosted AI models to handle some common tasks, and an anticipated intention to monetize that work somewhere down the line.

Along the way, the company will also be exposing ChatGPT tech to hundreds of millions of people; even though most of the planet now has a smartphone, many may never have experienced artificial intelligence at this level before.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Omnissa downplays its VMware past in official launch

News that VMware’s former End User Computing (EUC) division is now officially called Omnissa — and that reference to the former was mentioned only in a footnote in the firm’s press release — is not surprising at all, said Shannon Kalvar, research director of virtual client computing at IDC.

Yesterday marked the official launch of the new organization, now owned by Menlo Park, Calif.-based KKR. The global investment firm paid $4 billion for VMware’s EUC division in a deal announced in late February, only a few months after Broadcom’s $69 billion acquisition of VMware was finalized. The EUC division purchase included Horizon, a desktop and application virtualization platform, and Workspace One, a unified endpoint management platform for the enterprise.

Instead of dwelling on the past, the Omnissa executive team, which includes Shankar Iyer as the firm’s CEO and who formerly headed up the VMware EUC division, has an opportunity to “come out and really lay out a vision for end user computing in an era where companies are increasingly very much digital and becoming AI driven,” Kalvar said.

“By that, I don’t mean all the excitement about LLMs,” he added. “But there have been tremendous advancements in hundreds of different kinds of models for predictive and interpreted analytics, for all kinds of things,” he said.

There is, he said, also an opportunity to say, “OK, we are stable now, but we can go further, we can do more.”

John Annand, practice lead at Info-Tech Research Group, said that as “Broadcom has continued its attempts to mend fences following the acquisition of VMware, we now finally know the outcome of the division they did not want to take into the new partnership.”

Annand described Omnissa as a company that is “aggressively looking to retain the former VMware client base by appealing to the goodwill VMware used to have in both the enterprise and reseller partner space. Senior staff in operations, engineering, marketing, product, and, of course, the new CEO, Shankar Iyer, are all familiar faces for those who took the EUC track at past VMWorld conferences.”

Combine these staff choices, he said, with the “vision and value statements, and the messaging seems clear: ‘We will be the company you used to like doing business with.’”

Omnissa is “wasting no time reaching out to industry analysts to schedule briefings and invite us to attend their Omnissa Live conference” on July 23, Annand said.

“I imagine over the next 20 days, in the lead-up to their conference, we’ll begin to get a sense of their partner program and pricing models. Certainly, these are topics that are foremost on the minds of former VMware customers. And whatever goodwill Omnissa hopes to retain will depend in large part on how they respond to these questions.”

Position-wise, said Annand, “this is a great time for them, and it makes a lot of sense for them to move quickly. Citrix recently had to go back to the well in order to raise some more cash and is aggressively ‘evaluating’ its customer portfolio, which is to say focusing on strategic ones at the expense of nonstrategic ones. And while Microsoft continues to reimagine what an entirely cloud-native desktop experience might look like, enterprises need solutions that work with existing software and devices today and not just into the future.”

Annand added that the need for desktop and app virtualization, as well as end-user device management, “has not gone away by any means. Zero-trust and security requirements across all the different form factors, manufacturers, and operating systems we put in front of workers these days have exponentially increased the operational complexity of enterprise IT.”

The challenge for Omnissa will be, he said, “do they bring the same bag of well-rehearsed tricks to the party, or can they, without legacy VMware hanging around their necks, do something truly innovative? If not, then at least we’ll have some competition as Microsoft continues to win the EUC space by default.”

Forrester principal analyst Naveen Chhabra noted in an email, “Companies that use VMware EUC products and plan to continue to do so will have to deal with Omnissa for continued support unless they need no more vendor support. Support is critical for most large organizations for functionality, performance, and security reasons.”

Chhabra noted that VMware customers have had to navigate a lot of change, first adjusting to the Broadcom acquisition and then to the EUC division’s sale to KKR. And they’re not done yet.

“Omnissa is a new company, new leadership. Clients will have to learn how to work with a new company, new policies, new roadmap, new licensing,” he said. “So it is not going to be as easy or straightforward as one may want or like. There are credible alternatives from vendors like HCL, Microsoft, IBM, and Ivanti, but, as always, transition/migration is not going to be pain-free.”

China sets its sights on human brain-computer interface standards

China aims to be among the first countries to begin developing standards for the future of brain-computer interfaces with the establishment of a new technical committee by its Ministry of Industry and Information Technology specifically for this purpose.

The ministry’s Brain-Computer Interface Standardization Technical Committee is currently fielding opinions and ideas on various issues associated with the technology and standards that the country already has set for its development, according to a press release published online by the Ministry.

These include developing and revising basic standards that cover not only the technology’s technical aspects but also issues around ethics and safety, which become increasingly critical as technologies that push the boundaries of human-machine interaction advance.

The newly formed standards committee is currently soliciting comments regarding topics such as the “typical paradigms” of brain-computer interfaces; input and output interfaces such as brain information collection and preprocessing; and brain information encoding and decoding, data communication, and data visualization.

It’s also formulating and revising technical standards and test specifications for brain-computer interfaces in various fields, including medical, health, education, industry, and consumer electronics. It also will consider ethics and safety aspects such as the safety of emerging interface systems, as well as clinical applications of them.

Organizing standards leadership

Overall, the standards effort will attempt to create some kind of organization around stakeholders involved in China’s domestic brain-computer interface industry, including those in academia, research, and the tech industry itself.

The ultimate goals are “to focus on the hot spots of the industry and the needs of industry development, accelerate the research on the roadmap for the standardization of brain-computer interfaces, clarify the key directions and research and development priorities of brain-computer interface standardization, and coordinate and promote the formulation of brain-computer interface standards,” according to the release.

People have until July 30 to share their comments with the Science and Technology Department of the Ministry during the public announcement period.

The move supports China’s previously revealed three-year plan to establish itself as a global leader in computing standards, particularly for emerging technologies such as artificial intelligence. China is vying to strengthen its position in its ongoing technology race with the US and other nations taking the lead in tech that’s pushing the boundaries of how humans interact with machines.

Ethics to play a key role

While many technology standards efforts focus on interoperability, stewards for technologies such as AI and brain-computer interfaces — which push the boundaries of human-machine interaction — have a more pressing set of concerns, noted Brad Shimmin, chief analyst, AI & Data Analytics at Omdia. China’s new committee and groups such as the Institute of Electrical and Electronics Engineers (IEEE) in the US that seek to clarify these emerging standards will need to put ethical and safety considerations at the forefront of their agendas, he said.

“These organizations will be tasked with the difficult task of providing ethical guidance, providing a sustainable foundation upon which innovators can build solutions, as well as placing constraints on research and experimentation,” Shimmin said. “Such efforts can help to accelerate innovation while also ensuring that funded research conforms to the current socio-political expectations of the host country.”

Even with standards bodies such as the IEEE, the United States has historically encouraged aggressive research and experimentation with new technologies — up to a point, Shimmin noted. In the US, for example, Elon Musk’s brain-computer interface company Neuralink is currently in human trials with its surgically implanted brain chip, though it hit a snag this week when the second patient who was to receive the chip bowed out for medical reasons. As these trials evolve, however, organizations like the National Institutes of Health will continue to collaborate with lawmakers so they can step in to limit potentially dangerous research, he said.

Still, countries that can take a lead on the standardization of methods, interface mechanics, or materials used in creating human brain-computer interfaces, as well as the consideration of ethical issues, can “fuel national pride” that in turn drives investment in innovation and an influence on the global stage, Shimmin noted.

“Any country able to set the tone for highly impactful areas of innovation … can to a great degree shape the future of influence in that market, drawing in talented researchers and investors,” he said.

Still, no matter what standards bodies decide about human brain-computer interfaces, the pace of the technology will likely move very slowly — at least in the US, given that any meaningful use or market application will have to be approved by medical and healthcare regulators, experts said. This may give China’s standards efforts an edge if they are not limited by such a rigorous approval structure. 

CocoaPods flaws left iOS, macOS apps open to supply-chain attack

Recently patched vulnerabilities in a software dependency management tool used by developers of applications for Apple’s iOS and macOS platforms could have opened the door for attackers to insert malicious code into many of the most popular apps on those platforms.

One particular security weakness in the CocoaPods dependency manager created a mechanism for hackers to launch supply chain attacks, security researchers at EVA Information Security warned Monday.

Developers who relied on CocoaPods over recent years should verify the integrity of open source dependencies in their code in response to these security weaknesses, EVA advised.

CocoaPods is an open-source dependency manager for Swift and Objective-C projects. Developers use it to add third-party components to their apps, relying on it to verify the integrity and authenticity of those components by ensuring that package checksums and digital signatures are present and correct.
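One practical way to follow EVA’s advice is to audit the checksums CocoaPods records in a project’s Podfile.lock, whose SPEC CHECKSUMS section maps each pod to a SHA-1 of its podspec. The sketch below is a minimal, illustrative example, not EVA’s tooling or an official CocoaPods API: it parses that section and recomputes a SHA-1 for comparison. Note that the exact bytes CocoaPods hashes (a serialized form of the podspec) are an implementation detail, so treat the comparison as a starting point rather than a definitive verifier.

```python
import hashlib
import re

def parse_spec_checksums(lockfile_text: str) -> dict:
    """Extract the SPEC CHECKSUMS section of a Podfile.lock.

    Returns a mapping of pod name -> recorded SHA-1 hex digest.
    """
    checksums = {}
    in_section = False
    for line in lockfile_text.splitlines():
        if line.strip() == "SPEC CHECKSUMS:":
            in_section = True
            continue
        if in_section:
            # Entries look like:  "  Alamofire: <40 hex chars>"
            match = re.match(r"\s+([\w./+-]+):\s+([0-9a-f]{40})$", line)
            if match:
                checksums[match.group(1)] = match.group(2)
            else:
                break  # blank line or next section ends the block
    return checksums

def podspec_sha1(podspec_bytes: bytes) -> str:
    """SHA-1 of a podspec's serialized contents, for comparison
    against the value recorded in Podfile.lock."""
    return hashlib.sha1(podspec_bytes).hexdigest()
```

A team could run this against the Podfile.lock checked into version control and flag any pod whose recorded checksum changes unexpectedly between releases.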

With iOS 18, Apple deepens its connection to India

Beyond Intelligence, India is another ‘I’ Apple is making big investments in, and the scale of its journey there becomes easier to see every single day. It’s a commitment that goes OS deep.

I say that because Apple has woven eight India-focused enhancements within iOS 18, which shows how the company is focused on building its reach into the nation’s smartphone market.

The market isn’t the only thing it wants to build in India. Manufacturing there is also on the rise — and Apple and its manufacturing partners are actually growing their business there even faster than they agreed with India’s government in the first place.

Designed in California, Made in India

Apple has three manufacturing partners in India: Foxconn, Pegatron, and Tata Electronics. All three are in receipt of various forms of support under India’s PLI scheme, which aims to bring more technology manufacturing to India. Under the scheme, manufacturers must agree to meet certain production targets to qualify for that help. 

Apple’s iPhone partners have massively exceeded those agreed targets, with production reaching levels 45% higher than was agreed. 

Apple’s iPhone sales are also increasing, reaching 10 million in 2023, up from six million the previous year. That gives the company 23% of India’s smartphone revenue share. 

In tandem with Apple’s other consumer-facing initiatives in India, including high street Apple retail stores and various developer education offerings, the company does seem to be successfully stimulating business there.

What else can it do?

India inside your iPhones

Localization isn’t just a good thing to do; it’s also the right thing to do. People recognize when a company has gone the extra mile to make products or services that are relevant to them. Believe it or not, the world is not one vast monoculture but a medley of many cultures that, at their best, rub along together.

Recognizing this, Apple will introduce numerous enhancements in iOS 18 designed to reach India’s consumers. It’s a big message that tells India’s consumers the company remains seriously committed to doing business there, and it will no doubt help Apple further improve those all-important customer satisfaction levels upon which the company builds so much, from services to app and accessory sales.

That constant reaching out to the target market is typical of Apple. (Though not always consistent — for example, I do wish the company would introduce European Portuguese language support and do not understand why it has not.)

Ultimately, Apple knows that if you reach out effectively, you build business for tomorrow. That’s implicit across the company’s entire approach to its business, even to the extent of, for example, the high-quality design of the headbands on Vision Pro. That doesn’t necessarily mean its products are the most affordable but does mean it has a great reputation for being the best.

Bottom line? Additional iOS localization in India will help Apple spread its gospel in this strategically important market, creating stronger foundations for development there. It’s focus and investment of this kind that gave Apple its highest-ever iPhone sales in India last year.

iOS 18 gets ready for India

So, what has Apple added to its iPhone OS? A wave of improvements that represent the company’s growing understanding of the needs of that market:

  • You will be able to customize the Lock Screen’s time display using numerals from 12 numbering systems: Arabic, Arabic Indic, Bangla, Devanagari, Gujarati, Gurmukhi, Kannada, Malayalam, Meitei, Odia, Ol Chiki, and Telugu.
  • If your carrier supports it, Live Voicemail transcription will be available in Indian English.
  • The multilingual keyboard will support English and up to two additional Indian languages, including Bangla, Gujarati, Hindi, Marathi, Punjabi, Tamil, and Telugu. 
  • Different keyboard alphabetical layouts will be available in 11 Indian languages (Bangla, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu, and Urdu).
  • Language search will be improved with the addition of select Indian languages.
  • Siri will support nine Indian languages in addition to Indian English. That means you’ll be able to interact in Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu.
  • The Translate app will support Hindi, and that support extends to translation in Safari, Notes, and elsewhere across the OS.

A thoughtful strategy

The journey from Apple’s entry to India to now has been a very long road. Along the way, the company has demonstrated a brilliant strategy that should be part of the playbook for any firm seeking to access new markets. It’s so simple to articulate, and so complex to do. It works like this:

  • Every market is different. Engage with new markets on their own terms.
  • Invest selflessly. That new factory you spend millions on will build its own rewards in terms of local employment and consumer loyalty.
  • Meet people where they are.
  • Iterate and improve over time.

Apple’s successful execution of this approach is precisely why India is set to become Apple’s third biggest market.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.