Month: September 2024

How macOS Sequoia can help you at work

Along with iOS 18, Apple today is releasing macOS Sequoia, iPadOS 18, and the latest update to watchOS. (Apple Intelligence, which isn’t expected to begin to appear until next month, has gotten a lot of attention, but the pre-AI versions of these operating systems offer plenty of useful features and updates.)

Focusing today on macOS Sequoia, should you upgrade immediately? That depends. 

There are good things to tempt you, but you might need to wait — particularly if third-party services or applications you use (especially higher-end apps) don’t yet support the new OS.

If Apple Intelligence is the thing you’re most interested in, there’s no need to rush: those tools won’t be available until October in some countries and next year in others, with a global launch (including in Europe) to follow. Apple will also let Mac admins manage access to the service.

So, what’s Sequoia got to make you swoon if you ignore Apple Intelligence?

iOS, Mac, and iPhone: S.W.A.L.K.

What may turn out to be one of the most useful productivity-enhancing features in Sequoia is the increased integration between the Mac and iPhone. While EU customers won’t get this feature yet, iPhone Mirroring lets you use your iPhone on your Mac in a compact mini window. You can interact with iPhone apps via your Mac and drag and drop files between the devices (though that part won’t debut until later this year). I think this could get really interesting if you also use an iPad and a second Mac, as the implication is that you’ll be able to move files and folders between machines to your heart’s content.

A second integration means notifications received on your iPhone can also appear on your Mac. 

Manage busy desktops

Dragging a window to the edge of the screen now automatically snaps that window into a tile. This works across multiple windows, making it much easier to parse information from numerous websites and applications in a single clear view. You can shift the tiled windows around, if you like.

Solving the eternal presentation headache

If you use Webex, Zoom, or even FaceTime, Sequoia will show you a preview of what will be visible to other meeting attendees before you actually share your screen. If you’ve ever unexpectedly needed to share a document during a meeting while other confidential items were open on your Mac, you’ll recognize what a small but handy improvement this is.

Even experienced Zoom hosts can’t help but exhale a little when they share their screen; the result is never quite as certain as you’d like. Now, it is.

Safari redesigned to get web clutter out of your way

Safari is smarter than before. You’ll be able to read page summaries or gather together links at the touch of a button. Reader view has been improved with a variety of features, including an auto-generated table of contents that makes navigating complex pages much easier. If a page features video, Safari will either open the clip in a big window or move it into a smaller pop-up window if you navigate to another site while leaving the original open.

Finally, Safari will let you hide distracting items such as subscription pop-ups from view when you visit a site. 

Notes, Reminders, and Calendar

Just as on the iPhone, Mac users can expect audio transcription and summarization features in Notes when Apple Intelligence appears. The app has also become more capable, with collapsible section headers and different text and highlight colors. Finally, if you record a call taken on your iPhone, a transcript will be created that can also sync with Notes on your Mac.

Calendar and Reminders work more smoothly now, with Reminder tasks showing up in Calendar and a Month view that lets you see both Calendar and Reminder entries at a glance.

macOS Sequoia on a MacBook Pro

Apple

The Passwords application

Apple’s all-new Passwords application is a new skin on Keychain, making the information it stores (passwords, passkeys, Wi-Fi passwords) much more accessible and usable than before. The app uses iCloud to sync across all your logged-in Apple devices (and Windows hardware using the iCloud for Windows application).

Better collaboration tools

Freeform remains a really useful collaborative space where teams can work on ideas together from whatever Apple device they happen to use. On the Mac, the latest iteration includes a new diagramming mode to connect different objects and usability improvements when moving around a large board using a mouse.

Some Siri intelligence

Kick the system around and you’ll find a new accessibility tool that lets you make custom voice commands to invoke Shortcuts. You might use this to set up a tool that lets you ask Siri to create a PDF from what you are reading on a Mac, for example. You can find these options in System Settings>Accessibility>Vocal Shortcuts.

Smaller, useful tweaks

A handful of additional system tweaks wipe old annoyances away. You can schedule when messages are sent, for example. Another change allows you to install larger applications (more than 1GB) on external drives, subject to some restrictions. You also won’t need to have double the amount of space for an app on your drive to install it. 

In addition:

  • An updated Calculator application lets you see mathematical expressions and previous calculations, and integrates with Notes to create what Apple calls Math Notes. The latter is essentially a way to do algebraic equations on your Mac. 
  • There’s a new Keep Downloaded option that will ensure a local copy of a file is kept on your Mac rather than being stored in iCloud.

One more thing? 

Apple has shipped a Chess application with Macs for decades. Yet the last time this got updated was with Mac OS X 10.3 Panther — 20 years ago. The historically important game, probably included in the Mac because Steve Jobs liked Chess so much, clearly isn’t on Apple’s speedy upgrade cycle. In Sequoia, it finally gets a makeover with new graphics, though sadly without a 3D or Kriegspiel mode.

Which Macs does macOS Sequoia work with?

If you ignore Apple Intelligence, the new Mac operating system is compatible with the following devices:

  • MacBook Pro (2018 and later).
  • MacBook Air (2020 and later). 
  • Mac mini (2018 and later).
  • iMac (2019 and later). 
  • iMac Pro (2017 and later). 
  • Mac Pro (2019 and later). 
  • Mac Studio (2022 and later).

The problem is that not all of the above devices support Apple Intelligence. To use Apple Intelligence, you need to be working with a Mac running an M1 or later Apple Silicon chip. No Intel Macs will run Apple Intelligence.

Finally, on security — once macOS Sequoia is available, it will be the only version to receive full security updates in the next 12 months. The two most recent versions (Sonoma and Ventura) will get some updates, but Monterey and earlier versions will get none. This means that if you rely on Macs, it’s worth ensuring you know which machines you run, what version of the OS they use, and what data they have access to.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Microsoft revamps M365 Copilot chatbot with Pages shared ‘canvas’

Microsoft has added a new collaborative document tool to the Microsoft 365 Copilot chatbot that lets users store and share information created by the generative AI (genAI) assistant. It’s one of several M365 Copilot features announced Monday, including new Copilot features in apps such as Teams, Outlook, and Excel. 

Microsoft describes Copilot Pages as a “dynamic, persistent canvas” accessible within Copilot’s Business Chat conversational interface.

With Pages, users can paste Copilot responses into a collaborative document that can be accessed and edited by coworkers. The document can be shared as a link or embedded in another M365 document as a Loop component.

“Pages takes ephemeral AI-generated content and makes it durable, so you can edit it, add to it, and share it with others,” said Jared Spataro, Microsoft corporate vice president. “You and your team can work collaboratively in a page with Copilot, seeing everyone’s work in real time and iterating with Copilot like a partner, adding more content from your data, files, and the web to your Page.”

Copilot Pages will be available later this month for M365 Copilot customers and via the free-to-use Copilot, provided users are signed in with a Microsoft Entra account — Microsoft’s identity and access management system. 

Microsoft also announced updates to Copilot in various M365 apps, including the general availability of the M365 Copilot in Excel; it had been in beta since the M365 Copilot launch last November. Updates in Excel include the ability for the assistant to access data that hasn’t been formatted in a table; support for additional formulas, such as XLOOKUP and SUMIF; and the ability to work with text as well as numerical data. 

It’s also possible to perform data analysis using Python in Copilot (a feature now in public preview).   

“Now, anyone can work with Copilot to conduct advanced analysis like forecasting, risk analysis, machine learning, and visualizing complex data — all using natural language, no coding required. It’s like adding a skilled data analyst to the team,” said Spataro. 

Copilot in Outlook can now help users prioritize emails.

Microsoft

In Outlook, the “Prioritize my inbox” feature highlights emails the Copilot considers to be of interest to a user, along with a summary of the email’s content. Users will be able to tell the Copilot which topics, people, and keywords are most important to them when the feature is available in public preview later this year. 

In PowerPoint, a Copilot update lets users create presentations with an organization’s branded template and pull approved images stored in the SharePoint Organization Asset Library.

A new feature coming to the Teams Copilot later this month allows the genAI assistant to provide information on meetings based on both video and text conversations, while Copilot in Word can now add in data kept in emails and meetings (in addition to searching web data and files such as Word and PDFs). For Copilot in OneDrive, users will be able to ask the AI assistant to compare up to five documents when the feature launches later this month.

Finally, Microsoft has announced general availability of Copilot agents, which lets users customize the tool to automatically carry out business processes. 

Despite significant business interest in the possibilities of Copilot, many Microsoft 365 customers have yet to deploy the assistant widely across their organizations. A combination of data security concerns related to its use internally, as well as questions over the value it provides and the significant change management efforts required to implement the technology successfully, are all factors in the rollout pace.

The latest announcements improve the Copilot experience within apps such as Excel and PowerPoint and enhance the usefulness of the AI assistant, said Jason Wong, distinguished vice president analyst at Gartner.

He also pointed to the addition of Copilot Pages and the Team Copilot announced in May, both of which open the AI assistant to collaborative uses in addition to individual productivity. Copilot agents can provide “role-based and domain specific knowledge to be accessed through Copilot,” he said. 

“Some Gartner clients are inquiring about Copilot Studio and how to extend generative AI to curated knowledge bases, but most are looking for something even simpler like the Copilot in SharePoint experience, which is currently in preview,” said Wong.  

“However, it remains to be seen if all these new capabilities can drive the sticky adoption that Microsoft wants, since there’s already a lot of change fatigue in the workforce brought on by new generative AI features from many vendors and products.”

Everything we know about Apple Intelligence

Apple’s latest iPhones support a new breed of Apple AI called Apple Intelligence, a collection of artificial intelligence (AI) tools that will be made available across the company’s platforms starting in October with the release of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.

Apple Intelligence supplements Apple’s existing machine-learning tools and relies on generative AI (genAI) technology similar to that used by OpenAI’s ChatGPT. Apple’s version to a great extent runs on its own self-trained genAI models, which are built to be integrated across platforms, capable of using a user’s personal information, and private by design.

Announced at this year’s Worldwide Developers Conference in June, Apple Intelligence is designed “to make your most personal products even more useful and delightful.” (That’s how Apple CEO Tim Cook described it.)

Essentially, the company has moved to build an AI ecosystem that is personal, private, and powerful, what Apple calls “AI for the rest of us.”

Here’s a look at what’s coming and how Apple got to this point.

Why Apple Intelligence matters

Apple has worked with AI since its earliest days (more about this below), but in the last couple of years, since the arrival of ChatGPT and others, the company has been perceived as falling behind its competitors. There are many reasons for that, not least that Apple’s innate secrecy was a turn-off to researchers at the cutting edge of AI. Internal squabbles over precious R&D resources may also have slowed development.

But one moment that might have changed the scene took place over the winter holidays in late 2023, when Apple Senior Vice President for Software Craig Federighi tested GitHub Copilot code completion. He was reportedly blown away — and redirected Apple’s software development team to begin to apply Large Language Models (LLMs, a basic part of genAI tools) across Apple products. The company now sees this work as foundational to future product innovation and has diverted vast quantities of resources to bringing its own genAI technologies to its devices.

Analysts note that with Apple Intelligence soon to be available across the newer Macs, iPhones, and iPads, the company could quickly become one of the most widely used AI ecosystems in the world. (Wedbush Securities analyst Daniel Ives predicts Apple’s devices will be running 25% of global AI soon.) This matters, since AI smartphones and PCs will drive sales in both markets across the coming months, and Apple now has a viable product family to tout.

How Apple approaches Apple Intelligence

To deliver AI on its devices, Apple has refused to dilute its longstanding commitment to user privacy. With that in mind, it has developed a three-point approach to handling queries using Apple Intelligence:

On device

Some Apple Intelligence features will work natively on the device. This has the advantage of working faster while preserving privacy. Edge-based processing also reduces energy requirements, because no cloud communication or server-side processing is required. (More complex tasks must still be handled in the cloud.)

In the cloud

Apple is deploying what it calls Private Cloud Compute. This is a cloud intelligence system designed specifically for private AI processing and capable of handling complex tasks using massive LLMs.

The idea behind this system is that it provides the ability to flex and scale computational capacity between on-device processing and larger, server-based models. The servers used for these tasks are made by Apple, use Apple Silicon processors, and run a hardened operating system that aims to protect user data when tasks are transacted in the cloud. The advantage here is you can handle more complex tasks while maintaining privacy.

Externally

Apple has an agreement with OpenAI to use ChatGPT to process AI tasks its own systems can’t handle. Under the deal, ChatGPT is not permitted to gather some user data. But there are risks to using third-party services, and Apple ensures that users are aware if their requests need to be handled by a third-party service. 

The company says it has designed its system so when you use Private Cloud Compute, no user data is stored or shared, IP addresses are obscured, and OpenAI won’t store requests that go to ChatGPT. The focus throughout is to provide customers with the convenience of AI, while building strong walls around personal privacy.

Apple Intelligence

Apple

What Apple Intelligence features exist?

Apple has announced a range of initial features it intends to make available within its Apple Intelligence fleet. The first new tools will appear with iOS 18.1, which is expected to arrive when new Apple Silicon Macs and iPads are introduced later this fall.

Additional services will be introduced in a staggered rollout in subsequent releases. While not every announced feature is expected to be available this year, all should be in place by early 2025. In the background, Apple is not resting on its laurels; its teams are thought to be exploring additional ways Apple Intelligence can provide useful services to customers, with a particular focus on health.

At present, these are the Apple Intelligence tools Apple has announced:

Writing Tools

Writing Tools is a catch-all term for several useful features, most of which should appear in October with iOS 18.1 (and the iPad and Mac equivalents). These tools work anywhere on your device, including in Mail, Notes, Pages, and third-party apps. To use them, select a section of text and tap Writing Tools in the contextual menu.

  • Rewrite will take your selected text and improve it.
  • Proofread is like a much smarter spellchecker that checks for grammar and context.
  • Summarize will take any text and, well, summarize it. This also works in meeting transcripts. 
  • Priority notifications: Apple Intelligence understands context, which means it should be able to figure out which notifications are most important to you.
  • Priority messages in Mail: The system will also prioritize the emails it thinks are most important.
  • Smart Reply: Apple’s AI can also generate email responses. You can edit these, reject them, or write your own.
  • Reduce Interruptions: A new Focus mode that is smart enough to let important notifications through.
  • Call transcripts: It is possible to record, transcribe, and summarize audio captured in Notes or during a Phone call. When a recording is initiated during a call in the Phone app, participants are automatically notified. After the call, Apple Intelligence generates a summary to help recall key points.

Search and Memory Movies in Photos

Search is much better in Photos. It will find images and videos that fit complex descriptions and can even locate a particular moment in a video clip that fits your search description.

Search terms can be highly complex; enter a description and Apple Intelligence will identify all the most appropriate images and videos, put together a storyline with chapters based on themes it figures out from within the collection, and create a Memory Movie. The idea is that your images are gathered, collected, and presented in an appropriate narrative arc; this feature is expected to debut with iOS 18.1.

Clean Up tool in Photos

In my corner of social media, at least, the Photos AI tool that most seemed to impress early beta testers was Clean Up. This super-smart implementation means Apple Intelligence can identify background objects in an image and let you remove them with a tap. I can still recall when removing items from images required high-end software running on top-of-the-range computers equipped with vast amounts of memory.

Now you can do it in a trice on an iPhone.

Image Playground for speedy creatives

Expected to appear in iOS 18.2, Image Playground uses genAI to let you create animations, illustrations, and sketches from within any app, including Messages. Images are generated for you by Apple Intelligence in response to written commands. You can choose from a range of themes, places, or costumes, and even create an image based on a person from your Photos library.

The feature is also available within its own app and should appear in December.

Genmoji get smarter

Genmoji uses genAI to create custom emoji. The idea is that you can type in a description of the emoji you want to use and select one of the automatically generated ones to use in a message. You will also be able to keep editing the image to get to the one you want. (The only problem is that the person on the receiving end may not necessarily understand your creative zeal.)

This feature should show in December with iOS 18.2.

Image Wand

This AI-assisted sketching tool can transform rough sketches into nicer images in Notes. Sketch an image, then select it; Image Wand will analyze the content to create a pleasing and relevant image based on what you drew. You can also select an empty space and Image Wand will look at the rest of your Note to identify a context for which it will create an image for you.

Image Wand is now expected in late 2024 or early 2025.

Camera Control in iPhone 16 Pro

A new feature in iPhone 16 Pro relies on visual intelligence and AI to handle some tasks. You can point your camera, for example, at a restaurant to get reviews or menus. It will also be possible to use this feature to access third-party tools for more specific information, such as accessing ChatGPT.

Additional visual tools are coming. For example, Siri will be able to complete in-app requests and take action across apps, such as finding images in your collection and then editing them inside another app.

Coming soon: Siri gains context and ChatGPT

ChatGPT integration in Siri is expected to debut at the end of the year, with additional enhancements to follow. The idea is that when you ask Siri a question, it will try to answer using its own resources; if it can’t, it will ask whether you want to use ChatGPT to get the answer. You don’t have to, but you get free access to ChatGPT if you choose to use it. Privacy protections are built in for users who access ChatGPT: IP addresses are obscured, and OpenAI won’t store requests.

Siri will also get significant improvements to deliver better contextual understanding and powerful predictive intelligence based on what your devices learn about you. You might use it to find a friend’s flight number and arrival time from a search through Mail or to put together travel plans — or any other query that requires contextual understanding of your situation. 

The contextual features should appear next year.

On-screen awareness, but not until 2025

A new evolution in contextual awareness is scheduled to arrive at some point in 2025. This will give Siri the ability to take and use information on your display. The idea here is that whatever is on your screen becomes usable in some way — you might use this to add addresses to your contacts book, or to track threads in an email, for example. It’s a profound connection between what you do on your device and wherever you happen to be.

Another, and perhaps even more powerful, improvement will allow Siri to control apps, and because it uses genAI, you’ll be able to pull together a variety of instructions and apps — such as editing an image and adding it to a Note without having to open or use any apps yourself. This kind of deep control builds on the accessibility tools Apple already has and leans into some of the visionOS user interface improvements.

It’s another sign of the extent to which user interfaces are becoming highly personal.

Where can I get Apple Intelligence?

Apple has always been quite clear that Apple Intelligence will first be made available in beta in US English. During beta testing, Apple adjusted this slightly, so the tools work on any compatible iPhone with both the device language and Siri set to US English.

The company will introduce Apple Intelligence with localized English in Australia, Canada, New Zealand, South Africa, and the UK in December. Additional language support — such as Chinese, French, Japanese, and Spanish — is coming next year.

What devices work with Apple Intelligence?

Apple Intelligence requires an iPhone 15 Pro, iPhone 15 Pro Max, or iPhone 16 series device. It also runs on Macs and iPads equipped with an M1 or later chip.

What AI is already inside Apple’s systems?

All these features are supplemented by numerous forms of AI tools Apple already has in place across its platforms, principally around image vision intelligence and machine learning. You use this built-in intelligence each time you use Face ID, run facial recognition in Photos, or make use of the powerful Portrait Mode or Deep Fusion features when taking a photograph.

There are many more AI tools, from recognition of addresses and dates in emails for import into Calendar, to VoiceOver, Door Detection, and even the Measure app on iPhones. What’s changed is that while Apple’s deliberate focus had been on machine-learning applications, the emergence of genAI unleashed a new era in which the contextual understanding available to LLMs uncovered a variety of new possibilities.

The omnipresence of various kinds of AI across the company’s systems shows the extent to which the dreams of Stanford researchers in the 1960s are becoming real today.

An alternative history of Apple Intelligence

Apple Intelligence might appear to have been on a slow train coming, but the company has, in fact, been working with AI for decades.

What exactly is AI?

AI is a set of technologies that enable computers and machines to simulate human intelligence and problem-solving capabilities. The idea is that the hardware becomes smart enough to pick up new tricks from the data it encounters, and carries the tools needed to engage in such learning.

To trace the trail of modern AI, think back to 1963, when computer scientist and LISP inventor John McCarthy launched the Stanford Artificial Intelligence Laboratory (SAIL). His teams engaged in important research in robotics, machine-vision intelligence, and more.

SAIL was one of three important entities that helped define modern computing. Apple enthusiasts will likely have heard of the other two: Xerox’s Palo Alto Research Center (PARC), which developed the Alto that inspired Steve Jobs and the Macintosh, and Douglas Engelbart’s Augmentation Research Center. The latter is where the mouse concept was defined and subsequently licensed to Apple. 

Important early Apple luminaries who came from SAIL included Alan Kay and Macintosh user interface developer Larry Tesler — and some SAIL alumni still work at the company.

“Apple has been a leader in AI research and development for decades,” pioneering computer scientist and author Jerry Kaplan told me. “Siri and face recognition are just two of many examples of how they have put this investment to work.”

Back to the Newton…

Existing Apple Intelligence solutions include things we probably take for granted, going back to the handwriting recognition and natural language support in the Newton in the early 1990s. That device leaned into research emanating from SAIL (Tesler led the team, after all). Apple’s early digital personal assistant first appeared in a 1987 concept video and was called Knowledge Navigator. (You can view that video here, but be warned, it’s a little blurry.)

Sadly, the technology couldn’t support the kind of human-like interaction we expect from ChatGPT and (eventually) Apple Intelligence. The world needed better and faster hardware, reliable internet infrastructure, and a vast mountain of research exploring AI algorithms, none of which existed at that time.

But by 2010, the company’s iPhone was ascendant, Macs had abandoned the PowerPC architecture to embrace Intel, and the iPad (which cannibalized the netbook market) had been released. Apple had become a mobile devices company. The time was right to deliver that Knowledge Navigator. 

When Apple bought Siri

In April 2010, Apple acquired Siri for $200 million. Siri itself is a spinoff from SAIL, and, just like the internet, the research behind it emanated from a US Defense Advanced Research Projects Agency (DARPA) project. The speech technology came from Nuance; Apple acquired Siri just before the assistant was to be made available on Android and BlackBerry devices. Apple shelved those plans and put the intelligent assistant inside the iPhone 4S (dubbed by many the “iPhone for Steve,” given Steve Jobs’ death around the time it was released).

Highly regarded at first, Siri didn’t stand the test of time. AI research diverged, with neural networks, machine intelligence, and other forms of AI all following increasingly different paths. (Apple’s reluctance to embrace cloud-based services — due to concerns about user privacy and security — arguably held innovation back.)

Apple shifted Siri to a neural network-based AI system in 2014; it used on-device machine learning models such as deep neural networks (DNN), n-grams and other techniques, giving Apple’s automated assistant a bit more contextual intelligence. Apple Vice President Eddy Cue called the resulting improvement in accuracy “so significant that you do the test again to make sure that somebody didn’t drop a decimal place.”

But times changed fast.

Did Apple miss a trick?

In 2017, Google researchers published a landmark research paper, “Attention Is All You Need.” It proposed a new deep-learning architecture, the transformer, that became the foundation for the development of genAI. (One of the paper’s eight authors, Łukasz Kaiser, now works at OpenAI.)

One oversimplified way to understand the architecture is this: it helps make machines good at identifying and using complex connections between data, which makes their output far better and more contextually relevant. This is what makes genAI responses accurate and “human-like” and it’s what makes the new breed of smart machines smart.
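
To make the idea concrete, here is a minimal sketch of the paper’s core operation, scaled dot-product attention, in plain Python with NumPy. It is an illustration of the mechanism, not production transformer code, and the toy shapes and random inputs are invented for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from "Attention Is All You Need".

    Q, K, V: arrays of shape (sequence_length, d_k) holding the query,
    key, and value vectors for each token position.
    """
    d_k = Q.shape[-1]
    # Score every query against every key; scale so softmax stays stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of all values, so any position can
    # draw on any other -- the "complex connections between data."
    return weights @ V

# Toy example: four tokens with eight-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```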

The concept has accelerated AI research. “I’ve never seen AI move so fast as it has in the last couple of years,” Tom Gruber, one of Siri’s co-founders, said at the Project Voice conference in 2023.

Yet when ChatGPT arrived — kicking off the current genAI gold rush — Apple seemingly had no response. 

The (put it to) work ethic

Apple’s Cook likes to stress that AI is already in wide use across the company’s products. “It’s literally everywhere on our products and of course we’re also researching generative AI as well, so we have a lot going on,” he said. 

He’s not wrong. You don’t need to scratch deeply to identify multiple interactions in which Apple products simulate human intelligence. Think about crash detection, predictive text, caller ID based on a number not in your contact book but in an email, or even shortcuts to frequently opened apps on your iPhone. All of these machine learning tools are also a form of AI. 

Apple’s Core ML framework gives developers powerful machine-learning tools they can use to power up their own products. Those frameworks build on the insights Adobe co-founder John Warnock had when he figured out how to automate the animation of scenes, and we will see those technologies widely used in the future of visionOS.

All of this is AI, albeit focused (“narrow”) uses of it. It’s more machine intelligence than sentient machines. But in each AI application it delivers, Apple creates useful tools that don’t undermine user privacy or security.

The secrecy thing

Part of the problem for Apple is that so little is known about its work. That’s deliberate. “In contrast to many other companies, most notably Google, Apple tends not to encourage their researchers to publish potentially valuable proprietary work publicly,” Kaplan said.

But AI researchers like to work with others, and Apple’s need for secrecy acts as a disincentive for those in AI research. “I think the main impact is that it reduces their attractiveness as an employer for AI researchers,” Kaplan said. “What top performer wants to work at a job where they can’t publicize their work and enhance their professional reputation?” 

It also means the AI experts Apple does recruit may subsequently leave for more collaborative freedom. For example, Apple acquired search technology firm Laserlike in 2018, and within four years, all three of that company’s founders had quit. And Apple’s director of machine learning, Ian Goodfellow (another SAIL alumnus), left the company in 2022. I imagine the staff churn makes life tough for former Google chief of search and AI John Giannandrea, who is now Apple’s senior vice president of machine learning and AI strategy.

That cultural difference between Apple’s traditional approach and the preference for open collaboration and research in the AI dev community might have caused other problems. The Wall Street Journal reported that at some point both Giannandrea and Federighi were competing for resources to the detriment of the AI team. 

Despite setbacks, the company has now assembled a large group of highly regarded AI pros, including Samy Bengio, who leads company research in deep learning. Apple has also loosened up a great deal, publishing research papers and open source AI software and machine learning models to foster collaboration across the industry.

What next?

History is always in the rear view mirror, but if you squint just a little bit, it can also show you tomorrow. Speaking at the Project Voice conference in 2023, Siri co-founder Adam Cheyer said: “ChatGPT style AI…conversational systems…will become part of the fabric of our lives and over the next 10 years we will optimize it and become accustomed to it. Then a new invention will emerge and that will become AI.”

At least one report indicates Apple sees this evolution of intelligent machinery as foundational to innovation. While that means more tools and more advances in user interfaces, each of those steps leads inevitably toward AI-savvy products such as AR glasses, robotics, health tech, even brain implants.

For Apple users, the next step — Apple Intelligence — arrives this fall.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

September’s Patch Tuesday update fixes 4 zero-days

Addressing four zero-day flaws (CVE-2024-38014, CVE-2024-38217, CVE-2024-38226, and CVE-2024-43491), this month’s Patch Tuesday release from Microsoft includes 79 updates to the Windows platform. There are no patches for Microsoft Exchange Server or the company’s development tools (Visual Studio or .NET). Microsoft also addressed a recently exploited vulnerability in Microsoft Publisher, alongside two critical updates and nine patches rated important for Microsoft Office.

Significant testing will be required for this month’s Microsoft SQL Server patches, which affect both server and desktop components — with a focus on application installations due to a change in how Microsoft Installer handles changes and installation rollbacks.

The team at Readiness has crafted a useful infographic outlining the risks associated with each update. 

Known issues 

Microsoft always publishes a list of known issues that relate to the operating system and platforms included in each update, including the following two minor issues for September:

  • After installing the Windows update released on or after July 9, 2024, some Windows Servers may experience intermittent interruptions to remote desktop connections. Those using RDP over HTTP while employing a Remote Gateway server are most likely to experience this issue. Microsoft is working on a resolution and published a knowledge article (KB5041160) to assist with mitigations.
  • As a result of the recent updates to Microsoft SharePoint Server, some users are reporting an issue in which SharePoint workflows can’t be published because the unauthorized type is blocked. The issue also generates the event tag “c42q0” in SharePoint Unified Logging System (ULS) logs. In addition, recent changes could cause the deserialization of custom types that inherit from IDictionary to fail. For more information, see KB5043462 on these issues. (Sounds like something from the Succession TV series.)

Due to recent changes to Windows Installer, User Account Control (UAC) does not prompt for credentials on application installation repairs. Once this update (September 2024) has been installed, UAC will again prompt properly. Your scripts will need to be updated if you have not already accounted for this change. 

Though Microsoft has provided documentation on avoiding the issue by disabling this feature in UAC, we think this is a much-needed change and recommend following this latest best practice.

Major revisions 

This month, Microsoft published the following major revisions to past security and feature updates:

  • CVE-2020-17042: Windows Print Spooler Remote Code Execution Vulnerability. This print spooler update was first released in November 2020. This is an information update to reflect that Windows Server 2022 (Core) is now affected.
  • CVE-2024-30077: Windows OLE Remote Code Execution Vulnerability. This two-month-old patch from Microsoft has been updated to include support for the ARM platform. 
  • CVE-2024-35272: SQL Server Native Client OLE DB Provider Remote Code Execution. First released in July, the affected software table has been updated to include entries for Visual Studio 2019 and 2022. No further action required.
  • CVE-2024-38138: Windows Deployment Services Remote Code Execution Vulnerability. This is a documentation update to a patch released last month to include support for all supported versions of Windows Server. No further action required.

Unusually, we have a patch revision that is not strictly documentation related. This month, it’s CVE-2024-38063 (Windows TCP/IP Remote Code Execution Vulnerability). Unlike other revisions, this latest version of a critical network patch will require testing as if it were a new update. System administrators need to take this latest patch revision seriously and test before (re)deployment.

Testing guidelines

Each month, the Readiness team analyzes the latest Patch Tuesday updates and provides detailed, actionable testing guidance based on a large application portfolio and a detailed analysis of the patches and their potential impact.

For September, we have grouped the critical updates and required testing efforts into separate product and functional areas including:

Microsoft SQL Server

Microsoft released several updates to the Microsoft SQL Server platform that affect both Windows desktops and SQL Server installations, including:

  • A significant update to all supported versions (2016-2022) of Microsoft SQL Server that will require a full installation test. 
  • An updated core Windows library (SQLOLEDB) that helps Windows applications communicate with SQL Server databases and tools. Though Microsoft rated this change low-risk, Readiness recommends a portfolio analysis that highlights all apps that depend on this data-bound communication approach and a full test cycle for each one identified.

Due to the nature of this September SQL Server update, we highly recommend testing the patch itself and the patching process — with a view to the patch REMOVAL process. We understand that this will require time, skill, and effort — but it will be better than a full restore from backup. 

Windows

Microsoft made networking and memory handling security issues a focus this month with the following changes to Windows:

  • Due to an update to 64-bit to 32-bit memory handling in Windows (called thunking), 32-bit Camera applications will require testing on 64-bit machines this month. Using Microsoft Teams or playing a video from a USB drive would provide good testing coverage for this change.
  • Virtual Machines (VMs) that require a VPN will require connectivity testing. In addition, the following protocols — PPP, PPTP, SSTP — will require a basic connectivity test. 
  • A minor update to Windows Defender will require basic testing for endpoint security.
  • A minor update to core networking functions will require a test of high network traffic this month. The focus should be on the transfer of large files using applications such as Teams, Outlook, and Microsoft Edge.

Microsoft delivered a significant update to the MSI Installer (application installer) sub-system that will require application install level testing for a portion of your portfolio. Part of this update relates to how shell links are handled in the storage subsystem, which might cause redirected folders or shortcuts to behave unexpectedly during an installation — particularly on secure or locked-down configurations.

We suggest that installations, rollbacks, un-installations and UAC checks be validated this month. Checking for “zero” exit codes on the MSI Installer log is always a good start.
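
As a starting point for that validation, here is a minimal sketch, ours rather than Readiness tooling, that drives msiexec through silent install, repair, and uninstall passes on a Windows test machine and checks exit codes. The package and log paths are hypothetical placeholders; 0 and 3010 are the standard Windows Installer success codes (success, and success with reboot required).

```python
import subprocess
from pathlib import Path

# Hypothetical paths; substitute an MSI from your own application portfolio.
MSI = Path(r"C:\packages\example.msi")
LOG_DIR = Path(r"C:\logs")

# Windows Installer exit codes treated as acceptable:
# 0 = ERROR_SUCCESS, 3010 = ERROR_SUCCESS_REBOOT_REQUIRED.
OK = {0, 3010}

def run_msiexec(op: str, *flags: str) -> int:
    """Run msiexec silently with verbose logging and return its exit code."""
    log = LOG_DIR / f"{op}-{MSI.stem}.log"
    cmd = ["msiexec", *flags, "/qn", "/l*v", str(log)]
    return subprocess.run(cmd).returncode

# Validate the operations suggested above: install, repair, uninstall.
for op, flags in [("install", ("/i", str(MSI))),
                  ("repair", ("/fa", str(MSI))),
                  ("uninstall", ("/x", str(MSI)))]:
    code = run_msiexec(op, *flags)
    print(f"{op}: exit code {code} -> {'PASS' if code in OK else 'FAIL'}")
```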

Windows lifecycle and enforcement updates

This section contains important changes to servicing, significant feature deprecations, and security-related enforcements across the Windows desktop and server platforms.

  • Enforcements: Microsoft Entra now requires TLS 1.2 (using the latest Microsoft cryptographic libraries) as defined by RFC 5246. Microsoft has published several scripts to assist with assessing whether your clients are using the latest libraries and protocols (they’re found here); for a quick local spot check, see the sketch after this list.
  • Lifecycle: General support for Microsoft SQL Server 2019 ends in January 2025. Given the large number of updates to this aging server, it might be time to upgrade.
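
As a complement to Microsoft’s assessment scripts, here is a minimal sketch, ours rather than one of Microsoft’s published tools, that reports the TLS version a client machine actually negotiates with an endpoint. It assumes outbound HTTPS access; login.microsoftonline.com is used here as the well-known Entra ID sign-in host.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to host:port and report the TLS version actually negotiated."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

# Verify this client reaches Entra with a modern protocol.
version = negotiated_tls_version("login.microsoftonline.com")
print(version)
assert version in ("TLSv1.2", "TLSv1.3"), "legacy protocol negotiated"
```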

Mitigations and workarounds

Microsoft did not publish any mitigations or workarounds this month.

Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings: 

  • Browsers (Microsoft IE and Edge).
  • Microsoft Windows (both desktop and server).
  • Microsoft Office.
  • Microsoft Exchange Server.
  • Microsoft Development platforms (ASP.NET Core, .NET Core and Chakra Core).
  • Adobe (if you get this far).

Browsers

Microsoft’s Edge browser no longer synchronizes exactly with Patch Tuesday; several updates to Microsoft’s version of the Chromium browser address recently reported vulnerabilities. After checking for compatibility or suitability challenges presented by these changes, we have not seen anything in the Edge or Chromium updates that could affect most enterprise deployments. Add these browser updates to your standard release schedule.

Windows

Microsoft released two critical-rated updates to the Windows platform (CVE-2024-38119 and CVE-2024-43491) and 43 patches rated important. The following Windows features have been updated:

  • Windows Update and Installer.
  • Windows Hyper-V.
  • Windows Kernel and Graphics (GDI).
  • Microsoft MSHTML and Mark of the Web.
  • Remote Desktop (RDP) and TCP/IP subsystems.

The real concern is that three of these vulnerabilities (CVE-2024-38014, CVE-2024-38217, and CVE-2024-43491) have been reported as exploited. In addition, one of them, a vulnerability in the Windows MSHTML subsystem (CVE-2024-38217), has also been publicly disclosed. Given this month’s crop of zero-days, we recommend that you add these Windows updates to your Patch Now release schedule.

Microsoft Office 

Microsoft addressed two critical vulnerabilities in the SharePoint platform (CVE-2024-38018 and CVE-2024-43464) that will require immediate attention. There are nine other updates rated important that affect Microsoft Office, Publisher, and Visio. Unfortunately, CVE-2024-38226 (which affects Publisher) has been reported by Microsoft as exploited in the wild. If your application portfolio does not include Publisher (many don’t), add these Microsoft updates to your standard patch release cycle.

Microsoft SQL (née Exchange) Server 

This month brings a significantly larger update to the Microsoft SQL Server platform, with 15 updates, all rated important. There are no reports of public disclosures or active exploits; these patches cover the following broad vulnerabilities:

  • Microsoft SQL Server Native Scoring Remote Code Execution Vulnerability.
  • Microsoft SQL Server Native Scoring Information Disclosure Vulnerability.
  • Microsoft SQL Server Information Disclosure Vulnerability.
  • Microsoft SQL Server Elevation of Privilege Vulnerability.

Though there will be a significant testing profile this month, affecting both server and desktop systems, we suggest you add these SQL Server patches to your standard release schedule. 

Microsoft development platforms 

No development tools or features (Microsoft Visual Studio or .NET) have been updated this month.

Adobe Reader (and other third-party updates) 

Things are a little different this month for Adobe Reader. Normally, Microsoft releases an Adobe Reader update to the Windows platforms. Not so this month.

Adobe Reader has been updated (APSB24-70) but has not been included in the Microsoft release. This month’s Adobe Reader update addresses two critical memory-related security vulnerabilities and should be added to your standard app release cycle.

Apple gets ready for app sideloading on EU iPads

Apple didn’t make a song and dance about it during this week’s iPhone 16 launch, but one other thing that’s about to change (at least in Europe) is that it will support third-party app stores with the release of iPadOS 18. (It already supports this on iPhones in the EU.)

We knew this was coming. 

European regulators decided Apple needed to open up its platform earlier this year when they imposed requirements in the Digital Markets Act (DMA). What we don’t yet know is the extent to which the move to open up iPads and iPhones to this kind of competition will leave European customers vulnerable to security and privacy attacks.

Changing the story

We also don’t yet know whether every store that appears will be legitimate, or whether their security procedures will be as rock solid as those Apple provides. 

In part, that’s because we can’t predict how stable those regimes will become, or the extent to which increasingly well-resourced hackers will identify and exploit vulnerabilities in third-party app shops. That’s the big experiment that’s really taking place here, and we won’t see the results of this regulatory dedication to market ‘liberalization’ for some time to come.

It’s hard to believe Apple is having a good time in Europe. The bloc just demanded $14 billion in tax from the company, and regulators seem resistant to giving Apple the transparency it needs before offering Apple Intelligence there. 

Your private answer

Privacy is a core commitment for Apple, one it works hard to protect. And yet, the regulators say the company’s demand for transparency around how the DMA will be applied to these features in the EU shows how anti-competitive the company is.

That’s a stretch. Apple’s argument is predicated on the nature of the personal data its system can access on devices. That information is personal, and the company is committed to keeping it that way. That’s why Apple Intelligence is being developed as a super-private AI service you can use when you want to hold your data close.

If Apple finds itself forced to make that information available to third parties, what will the consequences be for personal privacy? When a regulator seems to think it’s a victory to be able to play Fortnite on her iPhone, Apple would probably prefer to negotiate with someone possessed of more nuance. Sometimes things get worse before they get better.

Opening up…

Context aside, the addition of iPads to the open market does expand the number of potential consumers third-party stores can approach. 

However, it’s fair to say that developers have been pretty slow to take Apple up on the terms under which it has so far offered to open up app store access. I suspect further compromise will be reached, but I also think Apple has the right to ensure its business is sustainable; I doubt critics will get a free ride, no matter how entitled to one they believe they are.

In the end, the big question around the matter never seems to be asked. No one has yet stuck their neck above the parapet to ask how much profit a business should legitimately make. It is amusing the extent to which business-backed political entities everywhere want to avoid defining an ethical approach to profit margins.

Perhaps they fear losing election contributions if they do.

Let the games begin

Nevertheless, the Great European App Store experiment is under way, and while the number of third-party stores that have appeared so far is limited, this may change. As well as Apple’s App Store, European iPhone and iPad users can now pick between Setapp Mobile, AltStore PAL, Aptoide, Mobivention, and the Epic Games Store. (Two of these are games stores, Mobivention is a B2B white-label app distribution service, Setapp Mobile is an app subscription service, and Aptoide is an open-source-friendly indie app store.)

From baby acorns, new trees grow. But the way I expect this to play out is that as the number of such stores grows, the sector will become more competitive, and then grow a bit until M&A action starts. Once the inevitable market consolidation does take place, it seems reasonable to expect we’ll end up with a couple of stores that have unique USPs, and two or three larger concerns, one of which may (or may not) be Apple’s App Store. 

That’s assuming Apple’s concerns around platform security and third-party apps are never realized; if they are, consumers will flock to the only secure store they know. As of Monday, EU consumers on iPads as well as iPhones will be able to try their luck. Good luck with that.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

New brain-on-a-chip platform to deliver 460x efficiency boost for AI tasks

The Indian Institute of Science (IISc) has announced a breakthrough in artificial intelligence hardware by developing a brain-inspired neuromorphic computing platform. Capable of storing and processing data across 16,500 conductance states in a molecular film, this new platform represents a dramatic leap over traditional digital systems, which are limited to just two states (on and off).

Sreetosh Goswami, assistant professor at the Centre for Nano Science and Engineering (CeNSE), IISc, who led the research team that developed this platform, said that with this discovery, the team has been able to nail down several unsolved challenges that have been lingering in the field of neuromorphic computing for over a decade.

Decoding OpenAI’s o1 family of large language models

OpenAI said its project Strawberry has graduated to a new family of large language models (LLMs) that the company has christened OpenAI o1.

The new family of models, which also includes an o1-mini version for cost efficiency, can be differentiated from the latest GPT-4o models by their reasoning abilities, according to the company.

“We’ve developed a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math,” the company wrote in a blog post, adding that the models were currently in preview.

According to OpenAI, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology, and even excels in math and coding.

“In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions,” it added.

The reasoning capabilities inside the OpenAI o1 models are expected to help tackle complex problems in the fields of science, coding, and mathematics among others, according to OpenAI.

“For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows,” it explained.

How do the models get reasoning capabilities?

The new family of o1 models gets its reasoning capabilities from the company’s large-scale reinforcement learning algorithm that teaches the models how to think productively using its “Chain of Thought” mechanism in a “highly data-efficient training process.”

“We have found that the performance of o1 consistently improves with more reinforcement learning (train-time compute) and with more time spent thinking (test-time compute),” the company said in another blog post and highlighted that this approach has substantially different constraints when compared to LLM pretraining.

In the field of AI and generative AI, experts explain that during training, a model adjusts its parameters based on the training data it has been fed, reducing errors in an effort to increase accuracy.

In contrast, during testing time, developers and researchers expose the model to new data in order to measure its performance and how it adapts to new instances of data.

Therefore, in the case of the new models, the more time spent analyzing and solving a problem, the more the model learns, sharpening its reasoning abilities.

This learning is activated by the model’s Chain of Thought algorithm, which works much as a human might think for a long time before responding to a difficult question, often breaking the problem into smaller chunks.
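
OpenAI hasn’t published how o1 implements this. One generic, published recipe for spending more test-time compute is self-consistency (Wang et al., 2022): sample several independent reasoning chains and majority-vote their final answers. The sketch below illustrates only that generic pattern; generate_chain is a hypothetical stand-in for a model call, not OpenAI’s mechanism.

```python
import random
from collections import Counter

def generate_chain(question: str) -> str:
    """Hypothetical stand-in for sampling one chain of thought from an LLM
    at non-zero temperature; the last line holds the final answer."""
    answer = random.choice(["42", "42", "42", "41"])  # mostly right, sometimes not
    return f"step 1: reason about '{question}'\nstep 2: verify\nFinal answer: {answer}"

def self_consistency(question: str, n_samples: int = 16) -> str:
    """Generic test-time-compute recipe: sample several reasoning chains,
    then majority-vote the final answers. More samples means more compute
    and, usually, better accuracy."""
    finals = [generate_chain(question).splitlines()[-1] for _ in range(n_samples)]
    return Counter(finals).most_common(1)[0][0]

print(self_consistency("What is 6 x 7?"))  # usually "Final answer: 42"
```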

Speaking about the models’ reasoning capabilities, Nvidia senior research manager Jim Fan, via a LinkedIn post, said that the world is finally seeing the paradigm of inference-time scaling popularized and deployed in production.

“You don’t need a huge model to perform reasoning. Lots of parameters are dedicated to memorizing facts, in order to perform well in benchmarks like trivia QA. It is possible to factor out reasoning from knowledge, i.e. a small ‘reasoning core’ that knows how to call tools like browsers and code verifiers. Pre-training compute may be decreased,” Fan explained.

Further, Fan said that OpenAI must have figured out the inference scaling law a long time ago, which academia is just recently discovering. However, he did point out that productionizing o1 is much harder than nailing the academic benchmarks and raised several questions.

“For reasoning problems in the wild, how (does the model) decide when to stop searching? What’s the reward function? Success criterion? When to call tools like code interpreter in the loop? How to factor in the compute cost of those CPU processes? Their research post didn’t share much.”

OpenAI, too, has said in one of its blog posts that the new model, which is still in the early stages of development and is expected to undergo significant iteration, doesn’t yet have many of the features that make ChatGPT useful, such as browsing the web for information and uploading files and images.

“For many common cases GPT-4o will be more capable in the near term,” the company said.

OpenAI is hiding the reasoning tokens

Although the new family of models has better reasoning, OpenAI is hiding the reasoning tokens, the models’ raw Chain of Thought, from users.

The company acknowledges that exposing the Chain of Thought could help enterprises understand how the models are functioning, and whether they show signs of manipulating a user. Even so, it has decided against making a model’s unaligned Chain of Thought, or reasoning tokens, directly visible to users.

Interfering with an unaligned Chain of Thought would be counterproductive, the company explained: to understand exactly how the model is reasoning, it must be free to express its thoughts in unaltered form.

This is why OpenAI cannot train any policy compliance or user preferences onto the Chain of Thought.

“We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the Chain of Thought in the answer,” it added.

British programmer Simon Willison, co-founder of the social conference directory Lanyrd and co-creator of the Django web framework, said in a blog post that he wasn’t happy with OpenAI’s policy decision. “The idea that I can run a complex prompt and have key details of how that prompt was evaluated hidden from me feels like a big step backward,” he wrote.

Other limitations of the o1 model

Another issue Willison pointed out: although the reasoning tokens are not visible in the API response, they are still billed and counted as output tokens.

In practical terms, this means enterprises will have to increase their token budgets to cover reasoning tokens they never see.

“Thanks to the importance of reasoning tokens — OpenAI suggests allocating a budget of around 25,000 of these for prompts that benefit from the new models — the output token allowance has been increased dramatically — to 32,768 for o1-preview and 65,536 for the supposedly smaller o1-mini,” Willison wrote.

These output token allowances are an increase from the gpt-4o and gpt-4o-mini models, both of which currently have a 16,384 output token limit, the programmer added.
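
For developers, the billing quirk is easy to see in code. The following is a minimal sketch using OpenAI’s Python SDK, reflecting the API shape described at o1’s launch: the models take a max_completion_tokens parameter rather than max_tokens, accept no system message, and report the hidden reasoning tokens in the response’s usage object. Treat those field names as assumptions if your SDK version differs.

```python
# Minimal sketch of calling o1-preview and inspecting the hidden reasoning
# tokens, assuming the official openai Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; field names reflect the o1 launch API.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    # o1 uses max_completion_tokens rather than max_tokens, and the budget
    # must cover the invisible reasoning tokens as well as the visible answer.
    max_completion_tokens=25_000,
    # Only user/assistant roles: system messages weren't supported at launch.
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

print("Visible answer:", response.choices[0].message.content)

usage = response.usage
# You're billed for reasoning tokens even though you never see them.
print("Reasoning tokens:", usage.completion_tokens_details.reasoning_tokens)
print("Total output tokens billed:", usage.completion_tokens)
```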

OpenAI is also advising enterprises to use retrieval-augmented generation (RAG) differently for the new models.

Whereas the usual RAG advice today is to cram in as many potentially relevant documents as possible, OpenAI suggests that with the new models users should include only the most relevant information, to prevent the model from overcomplicating its response, Willison explained.
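
As a purely hypothetical illustration of that advice, a retrieval step aimed at o1 might rank and aggressively prune its documents rather than concatenating everything it finds. The helper, scores, and threshold below are placeholders of mine, not OpenAI guidance.

```python
# Hypothetical "less is more" RAG sketch for o1-style models: keep only the
# few most relevant retrieved chunks instead of cramming them all in. The
# relevance scores stand in for whatever your retriever or reranker produces
# (cosine similarity, BM25, a cross-encoder score, etc.).

def build_lean_context(query: str,
                       retrieved: list[tuple[str, float]],
                       top_k: int = 3,
                       min_score: float = 0.75) -> str:
    """Keep at most top_k chunks, and only those clearing a relevance bar."""
    ranked = sorted(retrieved, key=lambda pair: pair[1], reverse=True)
    kept = [text for text, score in ranked[:top_k] if score >= min_score]
    return "\n\n".join(kept) + f"\n\nQuestion: {query}"

# With GPT-4o-style prompting you might have passed along all five chunks;
# here only the two strongest survive the cut.
chunks = [("Chunk on quarterly revenue", 0.91),
          ("Chunk on office relocation", 0.41),
          ("Chunk on revenue forecasts", 0.83),
          ("Chunk on cafeteria menus", 0.12),
          ("Chunk on hiring plans", 0.58)]
print(build_lean_context("How did revenue compare to forecast?", chunks))
```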

How to get the new o1 family of models? 

ChatGPT Plus and Team users will be able to access o1 models in ChatGPT starting Thursday.

Both o1-preview and o1-mini can be selected manually in the model picker, and at launch, weekly rate limits will be 30 messages for o1-preview and 50 for o1-mini, the company said, adding that it was working to increase those rates and enable ChatGPT to automatically choose the right model for a given prompt.

Meanwhile, ChatGPT Enterprise and Edu users will get access to both models beginning next week. OpenAI said that developers who qualify for API usage tier 5 can start prototyping with both models in the API starting Thursday, with a rate limit of 20 requests per minute.

“We’re working to increase these limits after additional testing. The API for these models currently doesn’t include function calling, streaming, support for system messages, and other features,” the company said, adding that it was planning to bring o1-mini access to all ChatGPT Free users.

What North Korea’s infiltration into American IT says about hiring

American companies have unwittingly hired hundreds — maybe thousands — of North Korean workers for remote IT positions, according to the US Department of Justice, the FBI, the US State Department, and cybersecurity companies.

The sophisticated scheme, perpetrated by the North Korean government for years, partly funds that country’s weapons program in violation of US sanctions. 

Agents working for the North Korean government use stolen identities of US citizens, create convincing resumes with generative AI (genAI) tools, and make AI-generated photos for their online profiles.

Using VPNs and proxy servers to mask their actual locations — and maintaining laptop farms run by US-based intermediaries to create the illusion of domestic IP addresses — the perpetrators either enlist Western-based stand-ins for online video interviews or, less successfully, use real-time deepfake videoconferencing tools. They even offer up mailing addresses for receiving paychecks.

These North Korean government agents have landed positions at more than 300 US companies, including Fortune 500 corporations, major tech firms, cybersecurity consultant companies, and aerospace manufacturers. 

US officials estimate that the scheme generates hundreds of millions of dollars annually for North Korea, directly funding its nuclear and ballistic missile programs, as well as espionage. 

In addition to collecting the salaries, the North Korean government tasks these fake employees with stealing intellectual property (IP) and sensitive information and deploying malware in corporate networks that provides backdoors for future cyberattacks. 

In June 2022, Mandiant (Google Cloud’s cybersecurity division) discovered a list of email addresses created as part of a large North Korean operation targeting US companies. Roughly 80 of these addresses were used to apply for jobs at critical infrastructure organizations in the US. At the time, Mandiant said the operation was a way to raise money for espionage and IP theft; Mandiant analyst Michael Barnhart said North Korean IT workers were “everywhere.”

The number of North Korean agents seeking IT work in the US has increased in the past two years. 

In May, an Arizona woman named Christina Chapman was arrested and accused of conspiring with North Korean “IT workers” Jiho Han, Chunji Jin, Haoran Xu, and others (all allegedly working for the North Korean Munitions Industry Department) to illegally land remote jobs with US companies. This one band of criminals allegedly used an online background-check system to steal the identities of more than 60 people, generating nearly $7 million for the North Korean government through jobs at more than 300 US companies, including a car maker, a TV network, and a defense contractor.

Among her assigned tasks, Chapman maintained a farm of PCs used to simulate a US location for all the “workers.” She also helped launder the money paid as salaries (companies sent the paychecks to her home address).

The group even tried to get contractor positions at US Immigration and Customs Enforcement and the Federal Protective Service. (They failed because of those agencies’ fingerprinting requirements.) They did manage to land a job at the General Services Administration, but the “employee” was fired after the first meeting.

A Clearwater, FL IT security company called KnowBe4 hired a man named “Kyle” in July. But it turns out that the picture he posted on his LinkedIn account was a stock photo altered with AI. The company sent a work laptop to the address “Kyle” supplied, which was, in fact, a US-based collaborator. The “employee” tried to deploy malware on the company’s networks on his first day but was caught and fired. 

“He was being open about strengths and weaknesses, and things he still needed to learn, career path ideas,” Stu Sjouwerman, founder and CEO of KnowBe4, told The Wall Street Journal. “This guy was a professional interviewee who had probably done this a hundred times.”

What the hiring of North Korean agents says about US hiring

Statistically, it’s unlikely you or your company will hire North Korean agents. But knowing it can happen should raise questions about your corporate hiring practices and systems. Are they so inadequate that you could hire and employ someone who isn’t who they say they are, doesn’t have the experience they claim, doesn’t live where they say they live, or whom it’s illegal to hire?

The truth is that the world has changed, and hiring practices aren’t keeping up. Here’s what has changed, specifically, and what you should do to keep up: 

  • Remote work. Since the pandemic, remote work has been normalized. Along with this change, companies have also embraced remote interviews, hiring, and onboarding. A straightforward solution is to allow remote work, but build at least one in-person meeting into the hiring or onboarding process. Fly the would-be hire to your location and put them up in a hotel to sign the employment contract (this provides the added assurance of having their legal signature on file), or have them meet with a local representative where they are. Also: Protect access to work laptops or applications with biometrics and have them register those biometrics in person. That way, you’ll see that the applicant is who they say they are and that the ongoing work is really performed by the person you hired. You might also deploy a mobile device management solution to identify the location of provided laptops, tablets, or phones. 
  • Generative AI chatbots. One traditional metric for gauging a prospective employee’s communication skills is the resume and cover letter. But anyone can now create such documents in flawless English using ChatGPT or another chatbot, so the clarity of a written application tells you exactly nothing about the applicant’s ability to communicate. Make a writing test part of the evaluation process, one the applicant can’t complete with AI help. 
  • Generative AI image tools. Thanks to widely available tools, anyone can create a profile picture that looks real. Never assume a photo shows what a person actually looks like. Physical characteristics shouldn’t play a part in hiring anyway; a headshot’s only role is to bias the hiring manager. 

Some things haven’t changed. It’s always been a good idea to check references to ensure prospective employees have worked where they say they’ve worked and have gotten the education and certifications they say they’ve gotten. 

Yes, malicious North Korean agents are out there trying to get a job at your company so they can funnel money to a despotic regime and hack your organization. 

But the broader crisis is that, thanks to recent developments in technology, you can only truly know who you’re hiring if you modify your hiring approach.

Take the necessary steps now to be absolutely sure you know who you’re hiring and who you’re employing. 

How to bring Google’s Pixel 9 Pro Fold multitasking magic to any Android device

After spending the past couple weeks living with Google’s new Pixel 9 Pro Fold — a.k.a. the second-gen Pixel Fold — I’ve got two big thoughts swimming around my murky man-noggin:

  1. Multitasking really is a whole new game on a device like this, and that opens the door to some incredibly interesting ways to get stuff done on the go.
  2. Part of that is undoubtedly tied to the phone’s folding form — but part of it is also a result of the Android-based software enhancements Google’s built into the gadget.

More than anything, that very last part keeps coming back to the forefront and making my brain say, “Hmmmmmmmm.”

We can talk all day about advantages related to one specific device, after all (and, erm, we did, earlier this week) — but especially with a phone like the Pixel 9 Pro Fold and its hefty $1,800 price tag, most people aren’t gonna end up with it inside their paws, purses, or pantaloons.

So what if there were a way to take at least some of the folding Pixel’s multitasking magic and make it available on other Android devices — more traditional phones without the Fold’s unusual (and unusually expensive) folding screen parts?

My friend, lemme tell ya: Such a slice of sorcery absotively exists — two such slices, in fact. They’re off-the-beaten-path advanced adjustments that’d only be possible here on Android. And they can be on your own personal phone this minute, if you know where to look.

[Psst: Love shortcuts? My Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Sign up now for free!]

Prepare to be blown away.

Google Pixel 9 Pro Fold multitasking trick #1: The split-screen shortcut

We’ll start with the simpler of our two Pixel-9-Pro-Fold-inspired multitasking advantages, and that’s the newly Google-given ability to open two apps together in Android’s split-screen mode with a single tap.

Part of what makes the Fold so useful, y’see, is that splendid inner screen it sports and the way that added space serves as a canvas for viewing and even interacting with two apps side by side at the same time.

Google Pixel 9 Pro Fold Multitasking: Split screen
Android’s split-screen interface, as seen on the inner display of a Pixel 9 Pro Fold phone.

JR Raphael, IDG

With this new second-gen Pixel Fold model, Google’s upped the ante by adding in a new native feature that lets you save specific app pairings and then have a simple on-screen shortcut for launching ’em side by side anytime with one fast tap — without all the usual hunting, opening, and arranging effort.

In the Pixel 9 Pro Fold’s software, setting up such a feat is as simple as booping a newly added button inside Android’s Overview mode, right beneath any active app pairing you’ve opened:

Google Pixel 9 Pro Fold Multitasking: "Save app pair" button
A subtle but powerful button added into the Pixel 9 Pro Fold’s Overview interface.

JR Raphael, IDG

All you’ve gotta do is tap that son of a gibbon, and bam: You get an easy-as-can-be icon right on your home screen for zipping back to that ready-to-roll pairing in the blink of an eye.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?quality=50&strip=all 600w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?resize=289%2C300&quality=50&strip=all 289w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?resize=162%2C168&quality=50&strip=all 162w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?resize=81%2C84&quality=50&strip=all 81w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?resize=463%2C480&quality=50&strip=all 463w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?resize=347%2C360&quality=50&strip=all 347w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-app-pairs.webp?resize=241%2C250&quality=50&strip=all 241w" width="600" height="622" sizes="(max-width: 600px) 100vw, 600px">
One tap, and any app pair is present and ready — exactly as you like it.

JR Raphael, IDG

It’s incredibly handy — and while you may not have the same amount of screen space as what the Pixel 9 Pro Fold provides, you’d better believe the same instant screen-splitting setup is also available for you on any reasonably recent Android phone.

The secret resides in a simple little app called, rather amusingly, Be Nice: A Tiny App Launcher. It’s free, open source, and ad-free, too, and it doesn’t require any permissions or collect any type of personal data. (Seriously — what more could you ask for?!)

And once you install the thing and set up whatever on-demand app pairs you want, you’ll probably never actively open it or think about it again.

Here’s all there is to getting your own custom Pixel-9-Pro-Fold-caliber app pair shortcut:

  • Install Be Nice from the Play Store.
  • Open it once, and tap the plus icon in the lower-right corner of its configuration interface.
  • Tap “Select first app” and pick the first app that you want to show up in your pairing.
  • Tap “Select second app” and pick the other app that you want to be included.
  • If you want, you can increase the delay between the time when the first app opens and the second app appears. There’s really no need to mess with that, though.
  • And if you want, you can adjust the text that’ll appear alongside the shortcut on your home screen as well as the style of the icon associated with it. But again, the defaults are perfectly fine.
  • Tap “Create” once you’re finished and then confirm that you want to add your newly created shortcut onto your home screen.
Google Pixel 9 Pro Fold Multitasking: Be Nice Create App Pair
Be Nice makes creating an on-demand app pair almost shockingly simple.

JR Raphael, IDG

And that’s it: Once you head back to your home screen, you’ll see that snazzy new shortcut right then and there for easy ongoing access.

Google Pixel 9 Pro Fold Multitasking: Be Nice split screen shortcut home screen
An instant app pair shortcut, as created by the independent Be Nice Android power tool.

JR Raphael, IDG

And now, whenever you’re ready to work with those two specific apps together for desktop-like mobile multitasking, a fast tap of that fresh ‘n’ friendly new icon is all that’s required. How ’bout them apples?!

Google Pixel 9 Pro Fold Multitasking: Be Nice split screen shortcut
Just like on the Pixel 9 Pro Fold, you can launch any app pair in an instant — on any device.

JR Raphael, IDG

It’s a powerful start for a smarter smartphone setup. Now, if you really want to take your Android multitasking to the next level, keep reading.

Google Pixel 9 Pro Fold multitasking trick #2: The on-demand taskbar

This second Pixel-9-Pro-Fold-inspired bit o’ multitasking magic is a little less simple — and a little more limited, too.

But if you’re using one of Google’s other Pixel phones — any ol’ Pixel, so long as it’s running 2022’s Android 13 operating system or higher — it’s already present on your phone and available for the taking. All you’ve gotta do is figure out how to find it.

And goodness gracious, it ain’t easy. This Android-exclusive productivity advantage is buried deep within Google’s Pixel software and something no mere mortal would ever encounter under ordinary circumstances.

But oh, is it ever worth the effort. It’s a way to add my absolute favorite folding Pixel feature onto whatever Pixel phone you’ve got in front of you. I’m talkin’ about the on-demand taskbar that pops up on the Pixel 9 Pro Fold whenever you swipe up gently from the bottom edge of the screen with the device in its unfolded state:

Google Pixel 9 Pro Fold Multitasking: Taskbar
The Pixel 9 Pro Fold taskbar — a true productivity-boosting treasure.

JR Raphael, IDG

That taskbar gives you a desktop-caliber dock for switching to any other app anytime, either via its customizable primary shortcut positions or via the instant access to your entire app drawer also built right into that interface. And better yet, in addition to opening any app without having to head back to your home screen, the taskbar makes it impossibly easy to switch yourself over to that Android split-screen setup we were just ogling — simply by pressing and holding any icon within the taskbar area and then dragging it up into the main area of your screen.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?quality=50&strip=all 600w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?resize=289%2C300&quality=50&strip=all 289w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?resize=162%2C168&quality=50&strip=all 162w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?resize=81%2C84&quality=50&strip=all 81w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?resize=463%2C480&quality=50&strip=all 463w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?resize=347%2C360&quality=50&strip=all 347w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-taskbar.webp?resize=241%2C250&quality=50&strip=all 241w" width="600" height="622" sizes="(max-width: 600px) 100vw, 600px">
That Pixel 9 Pro Fold taskbar takes Android’s split-screen system to soaring new heights.

JR Raphael, IDG

And here’s the buried Android treasure to beat all buried Android treasures: While the taskbar is officially limited to appearing only on large-sized devices like the Fold, with a quick tweak to a tucked-away area of your system settings, you can actually enable it on any Google Pixel phone this minute — without dropping a single dime on any fancy new hardware.

Now, fair warning: This does require some fairly advanced and ambitious Android spelunkin’ (to use the technical term). And, again, it’ll work only on Pixel phones, as other Android device-makers like Samsung haven’t opted to implement the same feature into their software setup.

What we’ve gotta do is employ a teensy bit of virtual voodoo to trick your Pixel into thinking it’s bigger than it actually is — ’cause, again, the software is set to show that taskbar element only when it’s running on a device of a certain size.

To do that, we need to dive deep into Android’s developer settings, which house all sorts of intimidating options that aren’t intended for average phone-usin’ folk to futz around with. As long as you follow these instructions exactly, there’s no risk to you or your phone, and it’s actually quite easy. (It’s also incredibly easy to undo, if you ever decide you aren’t into it and want to go back.) But we will be pokin’ around in an area of Android that’s meant mostly for developers, and if you veer off-course and mess with the wrong setting, you could absolutely make a mess.

So proceed only if you’re comfortable — and stick closely to the directions on this page. Capisce? Capisce.

Here we go:

1. First, we need to tell your Pixel that you want to even see Android’s advanced developer options in the first place:

  • Head into your phone’s system settings (by swiping down twice from the top of the screen and then tapping the gear-shaped icon in the corner of the panel that comes up).
  • Scroll down to the very bottom of the settings menu and select “About phone.”
  • Scroll down to the very bottom of that screen and find the line labeled “Build number.”
  • Tap your finger onto that line a bunch of times in a row until you see a prompt to enable developer mode on the device. (I swear it’ll work — this isn’t a wild goose chase!) You’ll probably have to put in your PIN, pattern, or passcode to proceed and confirm that you want to continue.

2. Now, with developer mode enabled, we’re ready to make the multitasking magic happen:

  • Mosey your way back out to the main system settings menu and tap the search box at the top of the screen.
  • Type the word smallest into the search prompt. That should reveal a developer option called “Smallest width.” Tap it!
  • Tap “Smallest width” one more time, and in the prompt that comes up, first jot down the number that’s there to start — just in case you want to change it back later. Then change the value to 600 and tap “OK.”
Google Pixel 9 Pro Fold Multitasking: Taskbar smallest width
This curious-seeming setting holds the key to unlocking advanced Android multitasking magic.

JR Raphael, IDG
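
Prefer the command line? Here’s a rough adb-based equivalent of that same tweak. Fair warning: this is my own sketch, not anything Google officially documents. As best I can tell, the “Smallest width” option works by forcing the display density under the hood, and adb can do the same thing; the sketch assumes you’ve got adb installed and USB debugging enabled on your phone.

```python
# Rough adb equivalent of setting "Smallest width" to 600dp (my sketch, not a
# Google-documented procedure): since dp = px * 160 / dpi, forcing the
# smallest screen dimension to read as 600dp means density = px * 160 / 600.
import re
import subprocess

def adb_shell(*args: str) -> str:
    """Run an adb shell command and return its output."""
    result = subprocess.run(["adb", "shell", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# "wm size" prints something like "Physical size: 1080x2400".
size_output = adb_shell("wm", "size")
width_px, height_px = map(int, re.search(r"(\d+)x(\d+)", size_output).groups())

target_dpi = round(min(width_px, height_px) * 160 / 600)
print(f"Forcing display density to {target_dpi} dpi")
adb_shell("wm", "density", str(target_dpi))

# To undo the change later: adb_shell("wm", "density", "reset")
```

Either route lands you in the same place; the Settings path above is the easier one to reverse if you’re not comfortable at a command line.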

At this point, you should see all the text on your screen get smaller. This is an unavoidable side effect of this setup, since we’re tricking your Pixel into thinking its screen is larger than it actually is, but we’ll do some things to make it more palatable and easy on the eyes in a second.

First, let’s find that splendid multitasking taskbar, shall we? Provided you’re using the current Android gesture system and not the legacy three-button navigation approach, you should be able to swipe your finger up gently from the bottom of the screen to reveal that newly unleashed productivity beast:

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?quality=50&strip=all 750w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=300%2C294&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=710%2C697&quality=50&strip=all 710w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=171%2C168&quality=50&strip=all 171w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=86%2C84&quality=50&strip=all 86w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=489%2C480&quality=50&strip=all 489w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=367%2C360&quality=50&strip=all 367w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-developer-settings.webp?resize=255%2C250&quality=50&strip=all 255w" width="750" height="736" sizes="(max-width: 750px) 100vw, 750px">
An on-demand Android taskbar — just like on the Pixel 9 Pro Fold.

JR Raphael, IDG

Whee! And, just like on the Pixel 9 Pro Fold, you can now tap any app icon within that taskbar to switch to it, tap the app drawer icon at the left of the bar to access your complete list of installed apps from anywhere, and press and hold any icon and then drag it upward to bring the associated app into an instant split-screen setup.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?quality=50&strip=all 750w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=289%2C300&quality=50&strip=all 289w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=672%2C697&quality=50&strip=all 672w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=162%2C168&quality=50&strip=all 162w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=81%2C84&quality=50&strip=all 81w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=463%2C480&quality=50&strip=all 463w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=347%2C360&quality=50&strip=all 347w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-split-screen.webp?resize=241%2C250&quality=50&strip=all 241w" width="750" height="778" sizes="(max-width: 750px) 100vw, 750px">
Simple Pixel-Fold-style screen-splitting, on any Android phone? Yes, please!

JR Raphael, IDG

Not bad, right?!

So, back to that tiny text that’s come along with this adjustment — here’s the fix:

  • Head back into your phone’s main settings menu.
  • Tap “Display,” then select “Display size and text.”
  • Place your finger on the slider beneath “Font size” and crank the sucker all the way over to the right.

That’ll make the text bigger and easier to read everywhere while still keeping that taskbar available whenever you want it.

srcset="https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?quality=50&strip=all 750w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=300%2C294&quality=50&strip=all 300w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=711%2C697&quality=50&strip=all 711w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=171%2C168&quality=50&strip=all 171w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=86%2C84&quality=50&strip=all 86w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=490%2C480&quality=50&strip=all 490w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=367%2C360&quality=50&strip=all 367w, https://b2b-contenthub.com/wp-content/uploads/2024/09/google-pixel-9-pro-fold-multitasking-taskbar-display-settings.webp?resize=255%2C250&quality=50&strip=all 255w" width="750" height="735" sizes="(max-width: 750px) 100vw, 750px">
You can have your Pixel-Fold-inspired taskbar without having to squint.

JR Raphael, IDG

All that’s left is to explore your newly enhanced Android environment and see whatcha think. You’ll probably notice other interesting changes sparked by this shift — like the ability to see six Android Quick Settings shortcuts instead of four when you swipe down once from the top of your screen and the presence of a more desktop-like tab interface within your Android Chrome browser.

Google Pixel 9 Pro Fold Multitasking: Chrome browser tabs
Desktop-like browser tabs on an Android phone? Eeeenteresting. Very, very eeeeenteresting.

JR Raphael, IDG

You might also notice the presence of multipaned interfaces in certain apps that allow you to see different bits of info on screen at the same time.

It’s up to you to decide if you appreciate or are annoyed by these adjustments. But now you know how to make it happen. And if you ever decide you aren’t thrilled with the overall package, all you’ve gotta do is (a) tap the “Reset settings” option within that same “Display size and text” menu and then (b) either change the “Smallest width” developer setting back to its original value or just turn off Android’s developer options entirely (via the toggle at the top of the “Developer options” menu, within the System section of your phone’s settings) to return to your standard Android setup.

The power’s in your hands — and that folding-Pixel-level multitasking magic is officially there and available for you, anytime you want to summon it.

Don’t let yourself miss an ounce of Pixel magic. Start my free Pixel Academy e-course to uncover all sorts of hidden wizardry built into your favorite Pixel phone!

Parallels 20 turns Macs into cross-platform DevOps powerhouses

Here’s an exciting development that almost got missed during Apple’s heady week of iPhone news: Parallels has hit version 20 and now provides a series of powerful features designed to streamline artificial intelligence (AI) development. 

If you run Windows on your Mac, you’re likely already familiar with Parallels Desktop. It is, after all, the only solution authorized by Microsoft to run Windows in a virtualized environment on Apple Silicon. 

If you think back to when Apple introduced the M1 Macs, you might recall the entire industry was impressed by the performance Apple Silicon unleashed. One tester went on the record to say running Windows for ARM on an M1 Mac using Parallels Desktop 16 was “the fastest version of Windows” they’d ever used. “Apple’s M1 chip is a significant breakthrough for Mac users,” Nick Dobrovolskiy, Parallels senior vice president of engineering and support, told me at the time.

Parallels now says its software can run in excess of 200,000 Windows applications quite happily on Macs. With M4 Macs on the horizon, you can anticipate further performance gains — and with Parallels, Apple Intelligence has now come to Windows. 

Apple Intelligence meets Windows?

If you are running a virtualized Windows environment on your Mac using Parallels, you will be able to use Apple’s AI-powered Writing Tools once macOS Sequoia ships. 

Parallels hasn’t told us whether we’ll also be able to access other AI features from within the Windows environment, but it has said we’ll be able to sign into Apple ID across multiple macOS virtual machines on the same Mac. What this means is that developers can fully leverage virtual Macs for building and testing software in an isolated environment.

But the big hook for Parallels in this release is the AI development tools packed inside. The new Parallels AI Package is designed to make building AI models more accessible. To do so, it offers a virtual machine pre-loaded with 14 AI development tools, sample code, and instructions. The idea is that people who want to build AI solutions can install the package and run third-party small language models inside the virtual environment, even while they are offline.

This is included free in Parallels Desktop for Mac Business and Enterprise editions and is free to install in the Desktop for Mac Pro Edition for the rest of the year.

Why did Parallels do this?

“As PCs become more AI-capable, we believe AI will soon be standard on every desktop,” said Prashant Ketkar, CTO at Parallels. “This shift challenges developers to update their applications to fully leverage AI-enabled PCs.

“That’s why we created the Parallels AI Package: to equip development teams, whether experts or beginners, with accessible AI models and code suggestions. This enables ISVs to build AI-enabled applications in minutes, significantly boosting productivity for every software development team using a Mac.”

What else has improved?

Parallels, now owned by Corel Corporation, might have put a lot of effort into support for the AI wave, but the company has also delivered additional features that should improve the experience of running Windows on a Mac.

One big change: you might experience up to 80% better performance while running legacy Windows apps using the Prism emulator on Arm.

Another enhancement is a new shared-folders technology, which makes it much easier to work with files across Mac and Windows apps. The feature also supports Linux virtual machines, which, combined with the power of Macs and Parallels’ new AI toolkits, makes for a powerful DevOps machine. The Visual Studio Code extension lets you manage multiple virtual machines, and even lets you access Microsoft Copilot when you do. 

The enterprise connection

Lots of people working with Windows on a Mac are at companies where both platforms are used. For IT, this can raise challenges around deploying and licensing operating systems.

For them, Parallels now offers a new enterprise portal that IT can use to manage virtual machines, licensing, and more. To achieve this, Parallels built new technology that makes it possible to deploy Parallels Desktop without resorting to complex scripts.

“These advancements mark a significant milestone in our ongoing commitment to improving the IT admin experience. With these new features, deploying Parallels Desktop across a network of Macs is simpler and more flexible than ever before,” the company said in a blog post.

You’ll also find GitHub Actions to transform CI/CD workflows. In a related move, the software has attained a SOC 2 Type 2 report, which means it undergoes regular, rigorous testing to ensure it remains secure. 

Smart for business

I’ve been watching Parallels since it first appeared on the Mac, and I like the direction the company is going. While it remains a solid option for consumers who just want to run a few Windows apps (including games) on their Mac, it is becoming a powerful adjunct for developers and enterprise pros and, with version 20, a useful passport to AI development as well. This edition builds on the many enhancements introduced in 2023.

That’s not bad for something that costs from $99 to $149 per year (Windows licenses extra).

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.