Author: Security – Computerworld

How to install App Store apps onto SSD drives using macOS Sequoia

Did you know that Apple’s macOS 15.1 Sequoia now lets you install applications acquired from the Mac App Store directly onto an external drive and run them from there? This enhancement is particularly useful if your workflow requires a space-devouring application.

Here’s what you need to know about it and how it works.

What’s changed?

While anyone who is paying attention should already be impressed by the sheer speed and performance of Apple’s new Macs, that performance also means pro users will push the platform to its limits, banging into any inherent challenges to how Macs work.

One of these challenges is the need to optimize the space you have on your Mac when running larger applications — and given the cost of adding storage to most Apple hardware, there was demand for a lower-cost way to do just that. The solution comes with macOS Sequoia 15.1.

Wait, is this really new?

So you’ve spotted that many Mac apps (downloaded from outside the App Store) allow users to install and use them on external drives. This is not automatically the case for applications downloaded and installed from the Mac App Store, however — these insist on being hosted on the Mac’s own drive. You have always been able to run most apps and macOS from an external drive, but now you can do the same with App Store apps, including Apple’s own pro apps.

What are the limitations?

There are some limits to the new feature.

  • The biggest is that only applications larger than 1GB can be installed to external storage, which is great for games and pro apps but less great for users of smaller apps, who may just want to manage storage their own way. We can hope Apple lifts the 1GB restriction eventually.
  • The second limitation is the speed of the external SSD; obviously, the speedier it is, the better the offloaded application will perform.
  • The final — and most inconvenient — limitation is that once the setting is enabled, it isn’t selective. From then on, any application of 1GB or more will be installed to external storage unless you turn the setting off.

What do you need?

You need to be running macOS 15.1 and have a suitable connected drive. The drive must also be formatted to APFS. To check that this is so, with the drive connected to your Mac, right-click the drive icon in Finder and select “Get Info.”
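If you prefer the command line to Finder’s Get Info panel, you can query the volume with macOS’s built-in diskutil tool instead. Here is a minimal Python sketch of that check; the volume path is a placeholder, not a value from Apple’s documentation, so substitute your own drive’s name.

    import subprocess

    def is_apfs(volume_path):
        # Ask macOS's diskutil for the volume's details and look for APFS in the output.
        info = subprocess.run(["diskutil", "info", volume_path],
                              capture_output=True, text=True, check=True).stdout
        return "APFS" in info

    # "/Volumes/MyExternalSSD" is a placeholder; replace it with your drive's name.
    print(is_apfs("/Volumes/MyExternalSSD"))

If the drive isn’t APFS, Disk Utility can reformat it — keeping in mind that erasing the drive destroys whatever is already on it.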

How to begin installing Mac apps on external drives

Before you use the feature, you need to open the Mac App Store on your Mac.

  • Go to App Store > Settings in the menu bar.
  • Check the box beside the “Download and install large apps to a separate disk” item in Settings.
  • When you have enabled that setting, you can select the external drive you want to save your applications to.

After that, when you want to install a large application from the Mac App Store, you will need to ensure the external SSD you want to use is connected to your computer.

How to use a Mac app on an external drive

At the risk of sounding obvious, you do need to connect the drive your application is stored on to your Mac to use the application you have hosted there. It is relatively seamless after that — the app will be visible in your Applications folder, opens with a double click and can be used just like any other app. (One thing it does not do is appear in Launchpad.)

Why does it matter?

Cost is the biggest reason this is important. Additional storage in Macs isn’t cheap; it will cost you an additional $600 to slot 2TB of storage inside the base model MacBook Pro, while a good and speedy external SSD should cost you around two-thirds of that, or less if you’re a little more flexible. That cost increases if you are provisioning multiple seats, so in some cases this feature could help you stretch purchasing budgets a little further. Consumer users can also use this to enable them to better explore and learn about professional applications without needing to worry about having enough space on their Mac.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Microsoft adds Copilot AI features to some non-US M365 consumer plans

Microsoft is bundling its Copilot generative AI (genAI) assistant with consumer Microsoft 365 subscriptions in several countries, the company announced last week.

Copilot Pro will be included in Microsoft 365 Personal and Family subscriptions in Australia, Malaysia, New Zealand, Singapore, Taiwan, and Thailand, the company said in a statement first spotted by ZDNet. It means users will gain access to Copilot features in apps such as Word, Excel, and PowerPoint. Designer, Microsoft’s text-to-image app, is also included.

Microsoft will also increase the cost of the subscription — prices will vary in each country — though this will be less than the cost of a separate Copilot Pro subscription. Australian customers, for example, will pay an additional $4 AUD a month for M365 Family subscriptions, and an extra $5 AUD for M365 Personal subscriptions, according to The Verge. In comparison, Copilot Pro costs $33 AUD per user each month. 

Customers will be limited in how much they use Copilot in apps, however, with a credit system in place. Those who want unrestricted access will need to pay for a Copilot Pro subscription. 

Microsoft didn’t say whether it plans to extend the changes to consumer M365 subscriptions in other regions, but it’s possible the move is a trial run for US and European markets.  

In the US, Copilot Pro costs an extra $20 per user per month for M365 Family and Personal customers. 

“I suspect this is just the first step in [Microsoft] bundling Copilot to a larger audience,” said Jack Gold, founder and principal analyst at J. Gold Associates. “The initial countries are probably a trial deployment to see how it goes, what the most common uses are, and how much they can charge. I’ll bet that in the next [one to two] quarters, you’ll see a much wider rollout to many other countries.”

It’s also possible the Copilot bundling in consumer M365 subscriptions could presage a similar move for business customers, though there’s no mention of such a move on the horizon just yet. 

Microsoft charges an extra $30-per-user-a-month fee to businesses for access to Copilot in Microsoft 365. Despite considerable interest in the M365 Copilot, businesses have been slow to roll out the genAI assistant widely across their organizations, in part due to high costs and a perceived lack of value.

It’s likely this will be the case sooner or later: Analysts at Gartner have said they expect genAI features to be included at no extra cost in office software subscriptions by 2028, according to a recent report (subscription required), as vendors seek broader adoption of their AI tools. 

For Microsoft, this could even mean the addition of a new M365 pricing tier — the long-rumored “E7” — that would include premium features currently available as paid-for add-ons, such as Copilot.  

US consumer protection agency bans employee mobile calls amid Chinese hack fears

The US Consumer Financial Protection Bureau (CFPB) has issued an urgent directive barring employees and contractors from using mobile phones for work-related calls, following a major breach in US telecommunications infrastructure attributed to Chinese-linked hackers.

According to an internal memo, CFPB’s chief information officer advised staff to move sensitive discussions to secure platforms like Microsoft Teams and Cisco WebEx, reported the Wall Street Journal (WSJ).

What if robots learned the same way genAI chatbots do?

There’s no question that robotics is transforming our world. Thanks to computerized machines, manufacturing, healthcare, agriculture, supply chains, retail, automotive, construction, and other industries are seeing rapidly increasing efficiencies and new capabilities.

One challenge with bringing new robots online is that it’s hard, expensive, and time-consuming to train them for the task at hand. Once you’ve trained them, you have to retrain them with every minor tweak to the system. Robots are capable, but highly inflexible. 

Some of the training is handled by software coding. Other methods use imitation learning, where a person teleoperates a robot (which, during training, essentially functions as a puppet) to kickstart data for robot movement. 

Both approaches are time-consuming and expensive. 

Compounding the difficulty is a lack of standards. Each robot manufacturer uses its own specialized programming language. The interfaces used for teaching robots, especially “teach pendants,” tend to lack the modern attributes of the major, non-proprietary software development environments. (A teach pendant is a handheld control device that enables operators to program and control robots, enabling precise manipulation of the robot’s movements and functions.)

The lack of standards adds both complexity and costs for obvious reasons. Robot programming courses can cost thousands of dollars, and companies often need to train many employees on several robotics programming platforms. 

Because of the lack of standards, because robots are inflexible once trained, and because robot skill development is manual and task-by-task, robot training is complex, time-intensive, and costly.

MIT to the rescue?

To solve the enormous problems of robot training, MIT researchers are developing a radical, brilliant new method called Heterogeneous Pretrained Transformers, or HPTs.

The approach is based roughly on the same concept behind the large language models (LLMs) now driving the generative AI boom.

LLMs use vast neural networks with billions of parameters to process and generate text based on patterns learned from massive training datasets. 

HPTs work by using a transformer model to process diverse robotic data from multiple sources and modalities. To that data, the model adds and aligns vision and robot-movement inputs in the form of tokens. And all this is processed by an actual LLM. The larger the transformer, the better the robot’s performance. 
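To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of the general pattern described above — not MIT’s actual HPT code. Small per-modality “stem” encoders project vision features and robot joint states into a shared token space, and a single transformer trunk processes the combined sequence; the dimensions and layer sizes here are invented for illustration.

    import torch
    import torch.nn as nn

    class TinyHPT(nn.Module):
        def __init__(self, vision_dim=512, proprio_dim=7, d_model=256, n_layers=4):
            super().__init__()
            # Per-modality stems: project each input type into shared d_model tokens.
            self.vision_stem = nn.Linear(vision_dim, d_model)
            self.proprio_stem = nn.Linear(proprio_dim, d_model)
            trunk_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=8, batch_first=True)
            self.trunk = nn.TransformerEncoder(trunk_layer, num_layers=n_layers)
            # Per-robot head: map trunk output to that robot's action space.
            self.action_head = nn.Linear(d_model, proprio_dim)

        def forward(self, vision_feats, proprio):
            # vision_feats: (batch, n_patches, vision_dim); proprio: (batch, proprio_dim)
            v_tokens = self.vision_stem(vision_feats)           # (B, P, d_model)
            p_tokens = self.proprio_stem(proprio).unsqueeze(1)  # (B, 1, d_model)
            tokens = torch.cat([v_tokens, p_tokens], dim=1)     # shared token sequence
            out = self.trunk(tokens)
            return self.action_head(out[:, -1])                 # predicted action

    model = TinyHPT()
    action = model(torch.randn(2, 16, 512), torch.randn(2, 7))
    print(action.shape)  # torch.Size([2, 7])

As the researchers describe it, the shared trunk is pretrained once on the heterogeneous data, while lightweight stems and output heads are adapted for each new robot and task.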

While LLMs and HPTs are very different — for starters, every physical robot is mechanically unique and very different from other robots — they both involve vast training datasets from many sources. 

In the case of HPTs, researchers combined data from real physical robots, from simulation environments, and from multiple modalities (vision sensors, robotic arm position encoders, and others). The researchers created a massive dataset for pretraining, including 52 datasets with more than 200,000 robot trajectories.

As a result, HPTs need far less task-specific data. And these are early days for the method. As with LLMs, it’s reasonable to expect massive advances in capability with additional data and optimization.

Researchers found that the HPT method outperformed training from scratch by more than 20% in both simulations and real-world experiments.

Limitations to HPT robot training

While HPTs show promise, they’re still limited and need development. 

Just as even more advanced LLM-based chatbots can “hallucinate” and tend to be polluted with bad data, HPTs need a mechanism for filtering out bad data from the datasets. Nobody wants a powerful industrial robot “hallucinating” and freaking out on the factory floor.

While LLMs and HPTs are similar in concept, LLMs are far more advanced because the available training datasets are massively larger. To industrialize the method, the models would need massive quantities of additional data, much of it probably simulated, to add to the real-world data.

As in the early days of LLMs, HPT research at MIT currently averages success rates below 90%.

According to the researchers, future research should explore several key directions to overcome the limitations of HPT.

To unlock further potential in robotic learning, training objectives beyond supervised learning, such as self-supervised or unsupervised learning, should be investigated. 

It is important to grow the datasets with diverse, high-quality data. This could include teleoperation data, simulations, human videos, and deployed robot data. Researchers need to learn the optimal blend of data types for higher HPT success rates. 

Researchers and later industry will need to create standardized virtual testing grounds to facilitate the comparison of different robot models. (These would likely come from Nvidia.)

Researchers also need to test robots on more complex, real-world tasks. This could involve robots using both hands (bimanual) or moving around (mobile) to complete longer, more intricate jobs. Think of it as giving robots more demanding, more realistic challenges to solve.

Scientists are also looking into how the amount of data, the size of the robot’s “brain” (model), and its performance are connected. Understanding this relationship could help us build better robots more efficiently.

Another exciting area is teaching robots to understand different types of information. This could include 3D maps of their surroundings, touch sensors, and even data from human actions. By combining all these different inputs, robots could learn to understand their environment more like humans do.

All these research ideas aim to create smarter, more versatile robots that can handle a wider range of tasks in the real world. It’s about overcoming the current limitations of robot learning systems and pushing the boundaries of what robots can do.

According to an MIT article on the research, “In the future, the researchers want to study how data diversity could boost the performance of HPT. They also want to enhance HPT so it can process unlabeled data like GPT-4 and other large language models.”

The ultimate goal is a “universal robot brain” that could be downloaded and used without additional training. In essence, HPTs would enable robots to perform far closer to how people act. Specifically, a new, untrained employee hired to work on an assembly line already knows how to pick things up, walk around, manipulate objects, and identify widgets by sight. They then start out haltingly, gaining confidence with additional skills acquired through practice. MIT researchers see HPT-trained robots as operating the same way.

This raises obvious concerns about replacing human workers with robots, but that’s a subject for another column. 

In the meantime, I think MIT researchers are onto something here: a new technology that could — and probably will — radically accelerate the industrial robotics revolution. 

Microsoft Ignite 2024 – get the latest news and insights

Microsoft Ignite 2024 kicks off in Chicago and runs Nov. 19-22.  If you can’t make it to Chicago, no worries. First, the physical event is sold out, according to the Ignite event page. Second, it’s a hybrid event, so you can attend Ignite virtually. 

Whether you’re there physically or online, expect to learn more about the latest technologies from Microsoft — everything from artificial intelligence (AI) to cloud computing, security, productivity tools, and more. In the keynote address, Microsoft CEO Satya Nadella and Microsoft leaders — including Charlie Bell, executive vice president of Microsoft Security, and Scott Guthrie, executive vice president of the Microsoft Cloud + AI Group — will share how the company is creating new opportunities across its platforms in this rapidly evolving era of AI.

You can also network with industry experts and Microsoft’s team, IT leaders, and other tech enthusiasts; gain hands-on experience and learn from experts at technical sessions; and learn about new products and services. (Microsoft often announces new products and features at Ignite.)

As you get ready for the event to start, here’s a look back at some of our previous Ignite coverage, as well as recent articles that touch on some of the topics you can expect to see at the event. And remember to check this page often for more on Ignite 2024.

Previous Microsoft Ignite coverage

Microsoft to launch autonomous AI at Ignite

Oct. 21, 2024: Microsoft will let customers build autonomous AI agents that can be configured to perform complex tasks with little or no input from humans. Microsoft announced that tools to build AI agents in Copilot Studio will be available in a public beta that begins at Ignite on Nov. 19, with pre-built agents rolling out to Dynamics 365 apps in the coming months.

Microsoft Ignite 2023: 11 takeaways for CIOs

Nov. 15, 2023: Microsoft’s 2023 Ignite conference might as well be called AIgnite, with over half of the almost 600 sessions featuring AI in some shape or form. Generative AI (genAI), in particular, is at the heart of many of the product announcements Microsoft is making at the event, including new AI capabilities for wrangling large language models (LLMs) in Azure, new additions to the Copilot range of genAI assistants, new hardware, and a new tool to help developers deploy small language models (SLMs) too.

Microsoft partners with Nvidia, Synopsys for genAI services

Nov. 16, 2023: Microsoft has announced that it is partnering with chipmaker Nvidia and chip-designing software provider Synopsys to provide enterprises with foundry services and a new chip-design assistant. The foundry services from Nvidia will be deployed on Microsoft Azure and will combine three of Nvidia’s elements — its foundation models, its NeMo framework, and Nvidia’s DGX Cloud service.

As Microsoft embraces AI, it says sayonara to the metaverse

Feb. 23, 2023: It wasn’t just Mark Zuckerberg who led the metaverse charge by changing Facebook’s name to Meta. Microsoft hyped it as well, notably when CEO Satya Nadella said, “I can’t overstate how much of a breakthrough this is,” in his keynote speech at Microsoft Ignite in 2021. Now, tech companies are much wiser, they tell us. It’s AI at the heart of the coming transformation. The metaverse may be yesterday’s news, but it’s not yet dead.

Microsoft Ignite in the rear-view mirror: What we learned

Oct. 17, 2022: Microsoft treated its big Ignite event as more of a marketing presentation than a full-fledged conference, offering up a variety of announcements that affect Windows users, as well as large enterprises and their networks. (The show was a hybrid affair, with a small in-person option and online access for those unable to travel.)

Related Microsoft coverage

Microsoft’s AI research VP joins OpenAI amid fight for top AI talent

Oct. 15, 2024: Microsoft’s former vice president of genAI research, Sebastien Bubeck, left the company to join OpenAI, the maker of ChatGPT. Bubeck, a 10-year veteran at Microsoft, played a significant role in driving the company’s genAI strategy with a focus on designing more efficient small language models (SLMs) to rival OpenAI’s GPT systems.

Microsoft brings Copilot AI tools to OneDrive

Oct. 9, 2024: Microsoft’s Copilot is now available in OneDrive, part of a wider revamp of the company’s cloud storage platform.  Copilot can now summarize one or more files in OneDrive without needing to open them first; compare the content of selected files across different formats (including Word, PowerPoint, and PDFs); and respond to questions about the contents of files via the chat interface. 

Microsoft wants Copilot to be your new AI best friend

Oct. 9, 2024: Microsoft’s Copilot AI chatbot underwent a transformation last week, morphing into a simplified pastel-toned experience that encourages you…to just chat. “Hey Chris, how’s the human world today?” That’s what I heard after I fired up the Copilot app on Windows 11 and clicked the microphone button, complete with a calming wavy background. Yes, this is the type of banter you get with the new Copilot.

EU launches probe of Corning’s Gorilla Glass for competition violations

The European Commission has opened a formal investigation into whether US glass producer Corning, known for its Gorilla Glass, might have abused its dominant position in the market for protective glass for electronic devices. Corning’s products are used, among other things, in several of Apple’s and Samsung’s devices.

The Commission suspects the company might have entered into anticompetitive agreements with cell phone makers and glass refiners, including requirements for exclusive purchases and discounts tied to those pacts. Gorilla Glass has been used in mobile devices for more than a decade.

The agreements might have prevented competitors from entering the market, reducing consumer choice, raising prices and inhibiting innovation. If Corning is found guilty, the company could be fined. Before that happens, Corning will have the chance to respond to the European Commission’s objections and the investigation can be closed if the company fulfills certain commitments.

Apple is back in the server business

Does anyone else out there remember Xserve? 

Discontinued in 2010, this was an Apple server that saw adoption as a supercomputer cluster, and found another use within movie industry workflows as a RAID system. Fans might be interested to know that an Xserve cluster at Virginia Tech ranked No. 7 on the Top 500 list of supercomputers in 2004, topping out at 12.25 teraflops of performance. (That, incidentally, is about the performance of an iPhone 12, or an M1-based Mac.)

Holding it wrong

Apple discontinued the Xserve with a famously terse Steve Jobs email apparently claiming “hardly anyone was buying it.”

Today, with what are arguably the world’s most performant low-power computer chips rolling off production lines, the Apple Silicon opportunity means the company is returning to the server market; it’s tasking Foxconn with making M4-powered servers to run Apple Intelligence as that service gets rolled out globally over the coming year.

Apple Intelligence servers are currently powered by the M2 Ultra chip, but Apple intends to upgrade these to M4 chips next year. It is alleged that the choice of Taiwan is deliberate, as the company hopes to gain some input from engineers who have worked on Nvidia servers, though as Apple Intelligence is an internal Apple project there’s no conflict of interest in that proposal — at least, not yet.

After all, Apple is not competing in the server market simply by making servers for its own AI, though its M4 Ultra chip might even outperform Nvidia’s mighty RTX 4090 processor, reports claim. So perhaps there’s a pathway there.

Apple now makes servers

Apple uses these servers for Apple Intelligence functions that require more power than the Apple device used to request the task. When those tasks are uploaded to the cloud, they are given to Apple’s own super-private servers or (optionally) outsourced to OpenAI.

To protect the flow of data, the company’s Private Cloud Compute is a server-based Apple Intelligence rig that lets Mac, iPhone, and iPad users exploit Apple’s own AI in the cloud. What’s important about the service is that it maintains the high privacy and security we already expect from Apple. That means Apple won’t get to see or keep your data and will not know what you’ve requested. “Private Cloud Compute allows Apple Intelligence to process complex user requests with groundbreaking privacy,” said Craig Federighi, Apple’s senior vice president of software engineering.

The idea is that you can use these LLM tools with peace of mind — the kind any rational person will require when handling their own information. I’ve argued before that this is what every cloud-based AI service should strive to deliver, though I don’t think they will; too many business models are based around capturing, exploiting, and even selling information about their users. That’s why some companies ban staff from using AI.

Perhaps it could sell or rent these servers?

The one thing Apple Intelligence has that perhaps isn’t being fully explained is that Apple also offers developers APIs so they can weave the generative AI technology into their products. Right now, that means introducing Apple Intelligence features within them, but given the importance of AI to developers, and the desire among some of them to make smart tools that can be used privately for specific use cases, at what point might Apple offer Private Cloud Compute as a service to provide trusted computing? Perhaps that is why it is putting the system through such rigorous security review?

There has to be an opportunity. There will be some companies who want to make their own AI solutions, but demand the kind of hardcore security Private Cloud Compute provides. Given that Apple has tasked Foxconn with making servers to support that service, at what point will provision of the servers, along with the bare-bones, highly secure software they run, become a business opportunity? There’s a business case, and given Apple is already leading the industry in just how willing it is to open these boxes up for security review, it feels like a potential direction — if there’s any money in it.

And there clearly is — quite a lot, in fact.

As everything becomes AI, where’s the money?

Recognition of the value and need for AI servers is, in part, what has driven Nvidia’s market cap to intermittently overtake that of Apple this year. The need for servers to provide support for AI is a growth opportunity for all in the space — except perhaps for Intel and AMD, who are watching as ARM’s reference designs define expectations for processor performance.

Whether it wants to be or not, Apple is in the server business, and now that it is, it makes sense for the company to generate more revenue from it. After all, who else promises the kind of rock-solid platform-focused security? Who else can provide such fast chips at such low energy requirements? The only snag in this particular ointment is that Apple Intelligence is not inherently cross-platform, though this hasn’t really got in the way of the company’s success for the last couple of decades. 

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Amazon CEO: In-office requirement isn’t designed to make workers quit

In an internal meeting, Amazon CEO Andy Jassy responded to recent criticism from many employees about the company’s new plan for a full return to the office in January. The mandate means that as of the beginning of the new year, almost all employees will have to be in the office five days a week.

Jassy said the aim is not to force any resignations among staffers or to satisfy decision-makers in cities, which were among the allegations made by angry employees, Reuters reports.

Employees have also objected that the return-to-work plan is stricter than arrangements at other large tech companies and that it will make work less efficient due to commuting times. Jassy previously said his goal is to increase efficiency at work and promote collaboration and innovation.

AMD rolls out open-source OLMo LLM, to compete with AI giants

AMD has launched its first open-source large language models (LLMs) under the OLMo brand, aiming to strengthen its position in the competitive AI landscape led by giants like Nvidia, Intel, and Qualcomm.

AMD OLMo is a series of 1-billion parameter large language models trained from scratch using trillions of tokens on a cluster of AMD Instinct MI250 GPUs. They are designed to excel in reasoning, instruction-following, and chat while embracing an open-source ethos that allows developers access to data, weights, training recipes, and code.

“Continuing AMD tradition of open-sourcing models and code to help the community advance together, we are excited to release our first series of fully open 1 billion parameter language models, AMD OLMo,” AMD said in a statement.

AMD’s open-source approach positions OLMo as an accessible and scalable option for companies seeking alternatives in AI technology. The model can be deployed in data centers or on AMD Ryzen AI PCs equipped with neural processing units (NPUs), allowing developers to leverage advanced AI directly on personal devices, the statement added.

“AMD is following Nvidia’s lead by expanding into the large language model (LLM) space alongside its well-established strength in computing hardware — a direction that Intel and Qualcomm have not yet fully embraced,” said Abhigyan Malik, practice director at Everest Group. “By fostering an open ecosystem, AMD enables developers to innovate and build diverse applications through a network effect.”

According to Malik, this strategy amplifies AMD’s core value proposition, particularly in driving demand for its underlying hardware, including AMD Instinct MI250 GPUs and Ryzen CPUs, where “AMD seeks to create lasting market impact.”

Extensive training and fine-tuning

The OLMo series follows a detailed three-phase training and fine-tuning process, according to AMD.

Initially, OLMo 1B was pre-trained on a subset of the Dolma v1.7 dataset using a transformer model focused on next-token prediction. This helped the model grasp general language patterns. In the second phase, OLMo 1B underwent supervised fine-tuning (SFT) on multiple datasets to refine its capabilities in science, coding, and mathematics.

The final model, OLMo 1B SFT DPO, was optimized with Direct Preference Optimization (DPO) based on human feedback, resulting in a model that effectively aligns its responses with typical user expectations.
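For readers unfamiliar with DPO, here is a small illustrative sketch of the standard DPO loss in PyTorch. It is a generic example rather than AMD’s training code, and the toy log-probability values at the end are invented for demonstration.

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        """Each argument is a tensor of summed log-probabilities for a batch of
        (chosen, rejected) response pairs under the policy being trained or the
        frozen reference model. A larger preference margin lowers the loss."""
        policy_margin = policy_chosen_logps - policy_rejected_logps
        ref_margin = ref_chosen_logps - ref_rejected_logps
        logits = beta * (policy_margin - ref_margin)
        return -F.logsigmoid(logits).mean()

    # Toy example: the policy prefers the "chosen" response slightly more than
    # the reference model does, so the loss dips below log(2) ≈ 0.693.
    loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-7.0]),
                    torch.tensor([-5.5]), torch.tensor([-6.5]))
    print(loss.item())

Minimizing this loss nudges the model to favor the human-preferred response more strongly than the frozen reference model does — the mechanism behind aligning responses with typical user expectations.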

Competitive performance and benchmark success

In internal benchmarks, AMD’s OLMo models performed well against similarly sized open-source models, such as TinyLlama-1.1B and OpenELM-1_1B, in multi-task and general reasoning tests, the company claimed. Specifically, its performance increased by over 15% on GSM8k tasks, a substantial gain attributed to AMD’s multi-phase supervised fine-tuning and Direct Preference Optimization (DPO).

In multi-turn chat tests, AMD claimed, OLMo showed a 3.41% edge in AlpacaEval 2 Win Rate and a 0.97% gain in MT-Bench over its closest open-source competitors.

However, when looking at the broader LLM landscape, Nvidia’s GH200 Grace Hopper Superchip and H100 GPU remain leaders in LLM processing, particularly for large, multi-faceted AI workloads. Nvidia’s focus on innovations like C2C link, which accelerates data transfer between its CPU and GPU, gives it an edge, providing a speed advantage for high-demand inference tasks such as recommendation systems.

Intel, while slightly behind in peak speed, leverages its Habana Gaudi2 accelerator for cost-effective yet robust performance, with future upgrades planned for increased precision.

Meanwhile, Qualcomm’s Cloud AI100 emphasizes power efficiency, meeting the needs of organizations seeking high AI performance without the extensive energy demands associated with Nvidia’s high-end systems.

AMD’s OLMo models also showed strong performance on responsible AI benchmarks, such as ToxiGen (for toxic language detection), crows_pairs (bias assessment), and TruthfulQA-mc2 (accuracy). These scores reflect AMD’s commitment to ethical AI, an essential focus as AI integration scales across industries.

AMD’s position in the AI market

With its first open-source LLM series, AMD is positioned to make significant inroads in the AI industry, offering a compelling balance of capability, openness, and versatility to compete in a market currently led by Nvidia, Intel, and Qualcomm.

However, AMD’s ability to close the gap will depend on how well its open-source initiative and hardware enhancements keep pace with rivals’ advances in performance, efficiency, and specialized AI capabilities.

“AMD’s entry into the open-source LLM space strengthens the ecosystem, potentially lowering the operational costs associated with adopting generative AI,” said Suseel Menon, practice director at Everest Group.

AMD’s move into LLMs places it against established players like Nvidia, Intel, and Qualcomm, who have gained market prominence with their proprietary models.

“This move also puts pressure on proprietary LLMs to continually innovate and justify their pricing structures,” Menon added.

Analysts believe AMD’s unique open-source strategy and accessibility aim to attract enterprises and developers looking for flexible, affordable AI solutions without proprietary constraints.

“For large enterprises with long-term data privacy concerns, AMD’s open-source model offers a compelling alternative as they navigate AI integration,” Menon added. “By building a cohesive, full-stack AI offering that spans hardware, LLMs, and ecosystem tools, AMD is positioning itself with a distinct competitive edge among leading silicon vendors.”

IT certifications for cloud architects, data security engineers, and ethical hackers yield the biggest pay boosts

Certifications for cloud architects, data security engineers, and ethical hackers are among the highest-paying credentials IT professionals can attain — and AI technology didn’t even make the list.

Online learning platform Skillsoft analyzed the top reported salaries of IT professionals around the world to find the highest-paying certifications and developed a list of more than 20.

This year’s list shows that cloud computing skills remain in high demand and can be quite lucrative. The AWS Certified Security Specialty certification jumped from sixth-highest to the top-paying spot this year and now commands a $204,000 annual salary on average — up 22%, or $40,000, over last year.

The presence of certifications for Google Cloud Platform (GCP), AWS, Azure, and Nutanix also highlights the value of a diverse cloud skillset, as organizations adopt multi-cloud or hybrid cloud strategies, according to Skillsoft.

Its list is similar to one published earlier this year by job search platform Indeed, which also placed an AWS certification in the No. 1 slot. (Indeed found AWS Certified Solutions Architects could earn from $133,200 to $246,900 a year at some firms.)

“So, are they worth it? For those looking for any of the above, it’s a resounding yes,” Skillsoft said in a blog post. “But, earning a certification takes time, effort, and often money.”

Are certifications worth the price?

Earning a certification led to pay raises, promotions and new jobs, according to Skillsoft. In addition to AWS training, rounding out the top five certifications were:

  1. Google Cloud – Professional Cloud Architect, averages $190,204.
  2. Nutanix Certified Professional – Multicloud Infrastructure (NCP-MCI) v6.5, averages $175,409.
  3. CCSP – Certified Cloud Security Professional, averages $171,524.
  4. CCNP Security, averages $168,159.

Indeed’s list of 17 top certifications had these top five:

  1. AWS Certified Solutions Architect – Associate
  2. Certified Data Privacy Solutions Engineer (CDPSE)
  3. Certified Cloud Security Professional (CCSP)
  4. Certified Data Professional (CDP)
  5. Certified Ethical Hacker (CEH)

Gartner Research, in an August report, also found that AWS Certified Cloud Practitioners and Microsoft Certified Azure Fundamentals certifications were top upskilling opportunities for tech workers. Other IT certifications with fast-growing demand this year are in cybersecurity, including the CISSP certification, CISA, and CompTIA Security+, according to Gartner. (The latter — IT certifications from the Computing Technology Industry Association (CompTIA), a non-profit trade association — were also among the general class of top certifications on multiple lists.)

“While learning new technology skills is vital, the ability for employees to demonstrate practical expertise through industry-recognized certifications is increasingly valued,” Gartner said. “Though they may not be a mandatory prerequisite for every position, certifications can empower individuals and organizations alike.”

“Our data suggests that tech professionals skilled in cloud computing, security, data privacy, and risk management, as well as able to handle complex, multi-faceted IT environments, will be well positioned for success,” said Greg Fuller, vice president of online learning platform Codecademy Enterprise. “Overall, the IT job market is characterized by a significant imbalance between supply and demand, which continues to drive salaries higher.”

What’s happening with AI training?

While AI certifications have not yet risen to the top of IT certification lists, the increasing emphasis on data privacy and compliance is closely tied to the rollout of AI technologies. And while AI skills are gaining popularity, it often takes time for certifications to gain traction, Fuller said.

“Right now, what we see with areas like AWS Security at the top is that organizations are still preparing for large scale AI rollouts,” he said. “So more adjacent skills are on this year’s list. Ultimately, it’s a mix of certifications being a bit slower to evolve and adjacent skills rising in criticality.

“In the meantime, the backbone of AI is cloud, so getting cloud certified is a good first step. Then, look at some of the more specialized Cloud AI certifications,” Fuller added.

Recruitment and talent consulting firm WilsonHCG released a report this week indicating that while AI certifications might not be on the top 20 lists, there is rising demand for AI skills across sectors. The market for AI-skilled workers is expanding, too, with 5,898 average monthly job postings in October, according to WilsonHCG.

That figure is a significant increase from the 12-month average of 5,147 postings, driven by heightened interest in roles like data scientist, AI research engineer, and machine learning engineer.

Companies such as TikTok, Apple, Google, Amazon, and Deloitte are among the most active in AI recruitment, underscoring the technology’s growing adoption in sectors from tech to finance and professional services, according to WilsonHCG.

“The need for AI skills extends beyond traditional tech positions. Companies are seeking professionals across a range of roles, including Founding AI Engineer and Senior Software Engineer for AI products,” WilsonHCG said in its report. “This trend is reshaping hiring practices and job titles as more organizations prioritize data-driven and AI-enabled functions across departments.”

Skills continue to matter more than formal education

Skills-based hiring approaches that emphasize strong work backgrounds, certifications, assessments, and endorsements continue to dominate the tech industry. And soft skills are becoming a key focus of hiring managers, even over hard skills.

Elise Smith, co-founder and CEO of Praxis Labs, an AI-based learning platform, said she has worked with enterprises like Google, Uber, and ServiceNow to help senior leaders develop the skillsets needed for “new-age talent retention” and collaboration in the workplace.

“As workplaces continue to transform — whether it’s emerging technologies like genAI transforming how we work or sociopolitical conflicts that cause disruption to our workflows — human skills will become more and more important,” Smith said.

What’s often missing from higher education is a focus on skills building around interpersonal communication, conflict resolution, critical reasoning, and the ability to determine fact from opinion or misinformation. “What once may have been called soft skills will be seen as power skills, and workforces who focus and develop these skills will differentiate in market outcomes,” Smith said.

While building relations and moving beyond “transactional trust” in the workplace can be challenging — especially for a hybrid global workforce — it’s important to build skills around workplace connection.

“When managers are skilled in asking open-ended questions, coaching disengaged team members, learning more about individuals’ backstories and contexts, and encouraging them in their work, teams thrive,” she said. “These are the skillsets we help our clients and their people leaders develop.”