Month: July 2024

Microsoft 365 suite suffers outage due to Azure networking issues

Microsoft 365 customers are having trouble connecting to the service and seeing degraded performance due to networking infrastructure issues across Microsoft’s Azure cloud regions globally.

“We’re currently investigating access issues and degraded performance with multiple Microsoft 365 services and features. More information can be found under MO842351 in the admin center,” the company wrote on X, formerly Twitter, via its Microsoft 365 Status account.

Microsoft’s Azure Service status page also showed a service degradation warning and said that users who are able to access impacted services may experience latency while performing actions or operations.

That warning also lists the services affected, including the Microsoft 365 admin center itself, Intune, Entra and Power Platform.

Services not affected, according to the cloud status page, include SharePoint Online, OneDrive for Business, Microsoft Teams, and Exchange Online.

The M365 Office service status portal also showed no signs of any services down. The site showed that all components of the suite, including M365 consumer, Outlook.com, OneDrive, Microsoft Copilot, Microsoft To Do, Skype, Office for the web (consumer), Whiteboard, Phone Link, Teams (consumer), and Microsoft Lists, were working normally.

A separate page showing Microsoft 365 network health status, which enables users to check network connectivity, also showed no sign of any issues.

But third-party outage reporting service Downdetector.com had received reports from users suggesting that emails, calendars and other Microsoft 365 services were not working for them.

Microsoft’s Azure Service status page, which itself had stopped working at the time of writing, also showed another entry suggesting that Azure’s networking infrastructure was experiencing issues, starting at approximately 11:45 UTC on July 30.

The page showed that networking infrastructure across all Azure regions was experiencing connectivity issues.

“We have implemented networking configuration changes and have performed failovers to alternate networking paths to provide relief. Monitoring telemetry shows improvement in service availability from approximately 14:10 UTC onwards, and we are continuing to monitor to ensure full recovery,” a separate page that reports Azure’s status in detail showed.

This is Microsoft’s eighth service status-related incident, according to the company’s service status page. The list includes the incident caused by a flaw in CrowdStrike’s security sensor software, which caused some Azure Virtual Machines to fail to restart and cost users millions of dollars in repairs and lost business opportunities.

Last year was also riddled with outages for Microsoft 365 users. Azure’s service page shows that the last incident reported in 2023 was in September, when the US East region faced issues.

Apple Intelligence: Coming to an app near you

While the actual introduction of Apple Intelligence isn’t expected until after the release of the iPhone 16 with iOS 18, developers can now begin testing the service on iPhones, Macs, and iPads.

The first developer betas of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 — all of which contain Apple Intelligence — are available now. These provide access to some — though not all — of the features the company plans to introduce, including proofreading, writing assistance, and summarization tools.

What features are available in the beta?

Apple Intelligence is currently only available to developers in the US (though there may be a workaround as described below).

The beta includes the following features:

  • Call recording and call transcription. This useful feature records calls made using an iPhone and generates a transcription of the conversation directly into the Notes app. All parties are informed when this is in use.
  • Tools to rewrite and proofread texts.
  • Text summary tools.
  • A new interaction sequence when working with Siri, and the option to move between voice and typed commands when using it.
  • Contextual Answers in Siri.
  • Smart Reply in Mail and Messages.
  • Improved photo and video search in Photos, and the capacity to create Memories collections using spoken prompts.

The features that aren’t yet available in the beta include ChatGPT integration, Image Playground, Genmoji, on-screen awareness and intelligent contextually aware features such as Priority Notifications. These are expected to be introduced later in the beta process.

Accompanying the release of the iOS 18.1 beta on Monday, Apple also published an extensive technical report on the Foundation Language Models used to run these features. That report tells us the company used chips designed by Google rather than Nvidia in building its advanced AI models.

What can developers do with Apple Intelligence?

Developers can weave Apple Intelligence features, including Siri improvements, inside their apps. Apple has now made those Siri features (domains) available to developers for use. Some examples include:

  • In browser apps, developers can use the AI to create and close tabs and windows, bookmark URLs, clear history, search the web, find items on a page, switch tabs, and open bookmarks.
  • Journaling tools can be used to create, update and delete text and audio entries and support rich content, including media and text. 
  • Document, presentation and spreadsheet tools let users create, open, and update presentations and slides, add media and comments, and control playback verbally.
  • Users can also open, create, delete and duplicate images and albums. Siri and Apple Intelligence will let you edit images and videos, add or remove metadata, and more.

The idea behind this is that developers will be able to use these App Intents to make their software smarter and easier to work with, ushering in new generations of powerful and innovative apps. Developers should read Apple’s documentation pertaining to these tools.

How to get the beta?

First, a word of caution: at this stage of the beta process, it is highly inadvisable to install the software on mission-critical devices. The Apple Intelligence beta software is currently available exclusively to US developers and can only be installed on an iPhone 15 Pro, Pro Max, or any iPad or Mac with an M-series processor. There is a queue to access Apple Intelligence. Once you have downloaded the beta software, you can join the waiting list by navigating to Settings > Apple Intelligence & Siri.

If you are not in the US, you might still be able to access the beta by going to Language & Region Settings and changing your region to the US and Siri’s language to English (United States). You should be accepted to the Apple Intelligence trial after a few hours.

Will there be a public beta?

Apple has already explained that Apple Intelligence will be a beta once it is introduced this fall. The company’s decision to test versions of its new operating systems both with and without Apple Intelligence suggests that public beta testing will not take place until after the introduction of iOS 18, possibly when new Macs and iPads are introduced this fall.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Apple reportedly used Google chips to train two AI models

Apple has released the first version of Apple Intelligence, a collection of new AI functions for the company’s various operating systems that, among other things, will improve the voice assistant Siri and be able to generate text and images. (The new AI features, released Monday, are available in the developer betas for iOS 18.1 and iPadOS 18.1.)

Apple Intelligence is expected to be available to the public later this year, but not at the same time as Apple’s new iPhone, which is expected to launch in September and initially run iOS 18.

At the same time, Reuters reports, a new research report from Apple indicates the company used Google’s, rather than Nvidia’s, hardware to train two of its central AI models. The decision is noteworthy because Nvidia currently makes the most sought-after AI processors.

Apple reportedly used 2,048 of Google’s TPUv5 chips to build the AI model to be used on the iPhone and 8,192 TPUv4 processors for its server-based AI model. Apple itself has not yet commented on the matter.

Nearly one in three genAI projects will be scrapped

By 2026, 80% of enterprises will have used generative AI (genAI) APIs or large language models (LLMs) or deployed genAI-enabled applications in production environments, according to Gartner Research. That’s up from less than 5% in 2023, as companies embrace the technology to discover patterns and actionable insights and free up workers by automating tedious tasks.

Although 9% of companies are now leveraging genAI to transform business models and create new business opportunities, nearly a third of those projects will be abandoned by the end of next year — largely due to poor data quality, inadequate risk controls, escalating costs, or unclear business value, according to a new Gartner survey of 822 corporate leaders and board directors.

The survey results were released Monday.

“After last year’s hype, executives are impatient to see returns on genAI investments, yet organizations are struggling to prove and realize value,” said Rita Sallam, a Gartner distinguished vice president analyst. “As the scope of initiatives widen, the financial burden of developing and deploying genAI models is increasingly felt.”

AI deployments can be expensive, with costs ranging from $5 million to $20 million. By 2028, more than half the enterprises that have built LLMs from scratch will abandon them due to costs, complexity, and technical debt in their deployments, according to Gartner.

[Chart: Gartner study on genAI costs. Source: Gartner]

Even so, genAI tools are proving in some cases to be advantageous for early adopters. Across industries and business processes, companies are reporting a range of improvements that vary by use case, job type and skill level of the worker. Business leaders surveyed by Gartner reported, on average, a 15.8% revenue increase, a 15.2% cost savings and a 22.6% productivity improvement.

“Unfortunately, there is no one-size-fits-all with genAI, and costs aren’t as predictable as other technologies. What you spend, the use cases you invest in, and the deployment approaches you take — all determine the costs,” Sallam said.

Last year was seen as the year of enterprise AI adoption, with 55% of organizations experimenting with genAI in workflows, according to an August 2023 report from consulting firm McKinsey & Co. At the time, however, fewer than a third of enterprises surveyed said they were using AI for more than one function, “suggesting that AI use remains limited in scope.” 

Lucidworks, which sells AI-powered search and discovery software, released the results of its second annual GenAI Global Study; it showed just 63% of global companies plan to increase AI spending in the next 12 months, down from 93% in 2023. Lucidworks also found that financial services organizations deployed only a quarter of the AI initiatives they had planned for 2024, even though nearly 50% of financial services leaders had a positive view of AI in 2023.

The biggest concerns around using genAI in financial services involve data security (45%), followed by accuracy (43%) and cost (40%), according to Lucidworks.

The global study, based on a survey of more than 2,500 business leaders involved in AI technology decision-making, made it clear genAI’s once explosive growth is cooling as businesses face cost and security hurdles. “Businesses are recognizing the potential, but also the risks and costs,” Mike Sinoway, CEO of Lucidworks, said in a statement.

[Chart: Concerns over AI. Source: Lucidworks]

US-based organizations remain among the most bullish, with 69% planning to boost AI spending this year. But even as investment remains high, more companies are looking to balance the potential of genAI with managing risks and costs.

Ironically, most companies deploy genAI tools out of competitive concerns; one-third of business leaders feel like they’re falling behind competitors even though almost everyone is struggling to implement the technology, Lucidworks found.

Investment in AI continues, however, and by 2030, companies will spend $42 billion a year on genAI projects such as chatbots, research, writing, and summarization tools.

Though commercial LLMs dominate the marketplace at the moment, more companies are eyeing customized small models that use only internal data. Nearly eight in 10 companies use commercial LLMs, and 21% have opted for open source only, according to Lucidworks.

ROI remains hard to pin down

While the technology has been heralded by many as a boon to productivity, nailing down a return on investment (ROI) has proven elusive, according to Lucidworks and other studies.

Forty-two percent of companies reported they’d not yet seen a significant benefit from their genAI initiatives. Tech and retail sectors stand out with higher deployment and realized gains, but overall, most industries are slow to move beyond pilot programs, Lucidworks found.

Security remains a top concern for business leaders, but cost worries have surged 14-fold in the past year, according to Lucidworks.

Additionally, concerns around response accuracy have risen five-fold, likely due to issues with hallucinations, highlighting the need for careful LLM selection to balance cost and ensure accurate, secure results. “For [genAI], we are not saying that finding ROI may be difficult, but expressing ROI has been difficult because many benefits like productivity…have indirect or non-financial impacts that create financial outcomes in the future,” Sallam said in an earlier interview with Computerworld.

For example, using genAI to automate code generation could make a software developer more productive, giving them additional time to improve productivity and increase innovation. Down the line, that could mean faster time to market for new features — and happier customers.

“Measuring ROI is hard,” said Bret Greenstein, Data & AI leader at professional services firm PricewaterhouseCoopers (PwC). But by adapting an LLM to perform a function or process, it becomes easier to compare its performance — cost, accuracy and speed — against earlier processes.

In the simplest of terms, ROI is a financial ratio of an investment’s gain or loss relative to its cost; so when a company invests in genAI, the benefits of that spending should outweigh costs.
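That ratio can be sketched as a two-line calculation; the dollar figures below are purely illustrative and not drawn from any of the surveys cited here:

```python
def roi(gain: float, cost: float) -> float:
    """Return on investment: net gain (or loss) relative to cost."""
    return (gain - cost) / cost

# Illustrative only: a $5M genAI deployment producing $6.5M in measured benefits
print(f"{roi(6_500_000, 5_000_000):.0%}")  # prints 30%
```

By this measure, a genAI investment pays off only when the ratio is positive, i.e., when benefits outweigh costs.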

[Chart: 12-month AI spending plans. Source: Lucidworks]

“Once you get [genAI] to consistently achieve this new level of performance, you deploy it in production with the proper governance and operational processes and track its usage,” Greenstein said. “When you have a use case that saves two hours in a six-hour process, and track its usage, you can aggregate the savings.”
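Greenstein’s two-hours-saved example can be sketched as a simple aggregation over tracked usage; the run counts and hourly rate here are hypothetical:

```python
def aggregate_savings(hours_saved_per_run: float, runs: int, hourly_rate: float) -> float:
    """Aggregate savings from a tracked genAI use case: hours saved each time
    the process runs, multiplied by how often it runs and what an hour costs."""
    return hours_saved_per_run * runs * hourly_rate

# Hypothetical: 2 hours saved on a 6-hour process, run 500 times, at $80/hour
print(aggregate_savings(2, 500, 80))  # prints 80000
```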

What to look for when considering genAI

According to Gartner, executive leaders pursuing genAI projects should:

  • Determine potential gains in business value derived from genAI business model innovation by exploring strategic alignment of business adjustments with the genAI deployment.
  • Calculate the total costs of genAI business model innovation by considering both the expenses incurred in genAI deployment and the costs associated with necessary business adjustments.
  • Make informed investment decisions by calculating and assessing the ROI of genAI business model innovation. This involves estimating the financial returns and comparing them to the total costs associated with the innovation, including those associated with needed business adjustments.

If the ROI meets or exceeds expectations, it presents an opportunity to expand investments by scaling genAI innovation and use across a broader user base or implementing it in additional business divisions, according to Gartner. If the ROI falls short, it might be necessary to reconsider investments and explore alternate scenarios for genAI.

D-Wave launches new quantum roadmap geared to AI/ML

Analyst reaction to D-Wave Quantum’s announcement today of an extended product development roadmap aimed at helping organizations address a variety of artificial intelligence/machine learning (AI/ML) workloads was decidedly positive. However, one common sentiment was that much needs to happen before quantum computing sees widespread adoption.

The Palo Alto, California-based company said it is “strengthening the connection between quantum optimization, AI, and machine learning” with enhancements to its Leap quantum cloud service, a move that, it said, “comes at a time when the broader AI industry is confronting a computing crunch.”

The cost of the compute, and the associated energy, needed to satisfy a growing set of use cases is rapidly escalating, it said. Its new offering, D-Wave added in a release, is designed to leverage annealing quantum computing’s “unique capability in solving optimization problems to help customers discover better, faster, and more energy efficient AI and ML workloads.”

Roadmap focus

According to the release, the new roadmap will focus on three areas:

  • Quantum distributions for generative AI: Development in this area, the company said, is focused on “designing novel, modern generative AI architectures that use quantum processing unit (QPU) samples from quantum distributions that cannot be generated classically.” Initial work is focused on use cases involving molecular discovery.
  • Restricted Boltzmann Machine (RBM) architectures that leverage D-Wave’s QPU for applications that it said range from “cybersecurity and drug discovery to high-energy physics data analysis, which could potentially lead to reduced energy consumption in training and running AI models.”
  • GPU (graphics processing unit) integration with Leap: D-Wave said it will incorporate additional GPU resources for the training and support of AI models alongside optimization workloads. In addition, it said, “efforts are underway to further reduce latency between QPUs and classical computing resources, a critical step in enabling hybrid-quantum technology for AI/ML.”

Potential impact

Bill Wong, research fellow at Info-Tech Research Group, said, “D-Wave’s advancements in quantum computing for AI are intriguing, but it’s still very early to assess the impact and value of quantum computing for real-world AI use cases.”

Today, he said, “most companies are not preparing for or anticipating breakthroughs in AI from the use of quantum computing. This will likely continue to be one of the key challenges for quantum computing, which is to find those AI use cases that can significantly benefit from this accelerated compute platform when traditional (i.e., GPU-based) can address the compute requirements at a much lower cost.”

Possible use cases, said Wong, “may focus on developing quantum-resistant cryptography, where traditional computing platforms cannot address the resources required. While D-Wave is at the cutting edge of research, I, too, am seeking those use cases that can drive the adoption of this unique platform.”

Heather West, quantum computing analyst at IDC, said, “there has always been a thought about how AI and quantum will work synergistically, but at this point in time, it is more about AI influencing quantum.”

A key piece of the announcement, she said, revolves around annealing quantum computing, due to the fact it has been designed specifically to solve optimization problems, and, in order for quantum to “really have an impact, you have to have a larger customer base.”

Asked if today’s announcement by D-Wave puts the quantum discussion out into the mainstream, she replied, “I think that is fair. D-Wave has taken a customer-centric approach to developing their quantum system.”

Many quantum hardware vendors, she said, “talk about their qubits, they are going to talk about the different components of the quantum system, they are going to talk about potential use cases. But D-Wave talks about use cases that are being explored and gaining value now. They are really driving this customer-centric focus of quantum, which differentiates them.”

Gartner VP Analyst Sid Nag, who specializes in scalable computing, said the announcement represents an “alternative to GPUs, although D-Wave is not saying that explicitly.”

He warned, however, that there is “a whole bunch of specialty cloud providers springing up and competing with hyperscalers, and that is an artifact of the trend we are seeing in terms of AI getting bigger and bigger and bigger. I do not know how big it is going to get in terms of the actual growth. At some point the trough of disillusionment [a Gartner term referring to a phase ‘where, after initial hype and inflated expectations, interest begins to wane.’] is going to set in.”

Nag added that, in the case of D-Wave, “they are going after a very special market.”

Anthropic accused of collecting data for AI models without permission

Several AI companies have recently been accused of collecting data used to train large language models (LLMs) without the consent of the affected parties.

The latest of the accusations come from iFixit and Freelancer, which say Anthropic has collected data from their sites even though both used a protocol intended to prevent that from happening. According to Freelancer CEO Matt Barrie, the site received 3.5 million hits from Anthropic’s ClaudeBot in four hours, making the bot “the most aggressive” to date.

iFixit CEO Kyle Wiens, in a comment quoted by Engadget, said Anthropic is taking content without paying for it and forcing iFixit to use its own developer resources to fend off the data collection.

Apple Intelligence delayed?

Apple will reportedly delay introduction of Apple Intelligence on iPhones until it ships the iOS 18.1 update after the iPhone 16 ships this fall. I don’t see this as solely because the tech won’t be ready — developers are already testing it — but suspect regulatory challenges and Apple’s own wider deployment plans led to the delay.

The artificial intelligence (AI) hype is all-encompassing. The industry is spending billions on it, electricity grids are straining to support it, and regulators are preparing to constrain it. It is a speeding train packed with potential, but momentum is so rapid that a mistake could send it off the rails.

That’s a concern across the industry, one regulators are also attempting to understand. US regulation seems voluntary right now, while in Europe the much tougher EU Artificial Intelligence Act should come into effect this year. 

Apple joins the White House 

In the US, Apple has joined the White House Voluntary AI Safeguards program with 15 other major firms, including Amazon, Google, Meta, and OpenAI. The aim of the group is to move toward safe, secure, and transparent development of AI technology.

The goal: to “mitigate AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more,” the White House said.

While there is always a big element of poacher-turned-gamekeeper in any voluntary industry group, this slightly more laissez-faire approach will probably benefit the industry.

Europe is tougher

It’s different in Europe, where the Act takes the form of a sprawling piece of legislation that will take time to fully comprehend and implement. I expect the complexity of this law means most providers in the AI space will eventually mimic Apple and delay the introduction of services while they figure out how to be compliant. The laws are also being introduced in a staggered way across three years, which could make it harder to reach compliance.

Similar laws are being put in place globally, creating a complex regulatory environment in which most AI services will be forced to slow new product integration. For Apple as a platform provider, the regulatory complexity is amplified.

In this context, it makes a lot more sense for the company to switch on any of Apple’s new Intelligence services only once it has achieved enough clarity to guarantee compliance, particularly in the EU where the company has said it’s delaying the services pending such clarity. (Outgoing EU Commissioner Margrethe Vestager’s seemingly antagonistic response to that request underlines why Apple was concerned.)

Smoke and fire

That’s not to say there’s no smoke at all around this potential fire. Apple Intelligence will not be an iPhone-only animal; it will be available across the company’s entire ecosystem: iPhones, iPads, and Macs. That means the system needs to be widely tested across all these products, including analysis around compliance (above).

We know Apple is likely to have news to share about Macs and iPads this fall, as that’s when the company usually updates its hardware. That news is likely to include the introduction of new Apple Silicon processors, and it’s almost a certainty that Apple will lean into its core messages around privacy, edge-device AI, secure AI, and energy and hardware integration when it introduces new hardware.

Happy Thanksgiving

Teasing out those launches with the introduction of new AI features across its operating systems will only boost attention around the launch of new hardware. That attention should turn into sales, particularly as we hit the US shopping season and computer users consider the personal and economic consequences of the recent Microsoft/CrowdStrike failure. Against this backdrop, there’s never been a better time to introduce the world’s most advanced and best-designed hardware equipped with the world’s safest and most privacy-conscious form of AI service, Apple Intelligence.

Summing up

With all of this in mind, I find it hard to be too concerned about Apple’s “delay” in launching its AI service. Regulation and its own internal product launch plans mean a later launch will still excite consumers, while helping it realize the much anticipated bounce in hardware sales everyone now expects as AI goes mainstream. I’m just not entirely certain any of us are truly ready for what the consequences of mass market AI might be. 

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.


Microsoft shifts focus to kernel-level security after CrowdStrike incident

The CrowdStrike incident, which affected more than 8.5 million Windows PCs worldwide and confronted users with the “Blue Screen of Death,” prompted Microsoft to revisit the resilience of its operating system.

As part of its post-CrowdStrike effort to make its security architecture more robust, the company is now prioritizing the reduction of kernel-level access for software applications, a move designed to enhance the overall security and resilience of the Windows operating system.

Doctors weaponize AI in insurance battles over patient care authorizations

Doctors facing an onslaught of AI-generated patient care denials from insurance companies are fighting back — and they’re using the same technology to automate their appeals.

Prior authorization, where doctors must get permission from insurance companies before providing a medical service, has become “a nightmare,” according to experts. Now, it’s becoming an AI arms race.

“And, who loses? Yup, patients,” said Dr. Ashish Kumar Jha, dean of the School of Public Health at Brown University. Historically, clinicians have more often than not simply given up once their appeals were denied.

Jha, who is also a professor of Health Services, Policy and Practice at Brown and served as the White House COVID-19 response coordinator in 2022 and 2023, said that while prior authorization has been a major issue for decades, only recently has AI been used to “turbocharge it” and create batch denials. The denials force physicians to spend hours each week challenging them on behalf of their patients.

Generative AI (genAI) is based on large language models, which are trained on massive amounts of data. Users then steer how a model answers queries by carefully crafting their inputs, a technique known as prompt engineering.
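As a purely hypothetical illustration (no real insurer, vendor, or model API is referenced here), prompt engineering amounts to wrapping the details of a case in carefully worded instructions before sending the resulting text to a large language model:

```python
# Hypothetical sketch of prompt engineering for an appeal letter.
# build_appeal_prompt only assembles the text a clinician's tool might
# send to a large language model; no real EHR or model API is called.

def build_appeal_prompt(denial_reason: str, treatment: str, evidence: list[str]) -> str:
    """Wrap case details in instructions that steer the model's answer."""
    evidence_lines = "\n".join(f"- {item}" for item in evidence)
    return (
        "You are a physician writing a formal appeal of a prior-authorization denial.\n"
        f"Denied treatment: {treatment}\n"
        f"Stated denial reason: {denial_reason}\n"
        "Supporting evidence from the patient record:\n"
        f"{evidence_lines}\n"
        "Write a concise, professionally worded appeal citing the evidence above."
    )

prompt = build_appeal_prompt(
    denial_reason="not medically necessary",
    treatment="MRI of the lumbar spine",
    evidence=["6 weeks of conservative therapy failed", "progressive leg weakness"],
)
print(prompt)
```

The instructions, not the model’s weights, do the steering here, which is why both insurers and physicians can point the same underlying technology at opposite ends of the dispute.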

“So, all of the [insurance company] practices over the last 10 to 15 years of denying more and more buckets of services — they’ve now put that into databases, trained up their AI systems and that has made their processes a lot faster and more efficient for insurance companies,” Jha said. “That has gotten a lot of attention over the last couple of years.”

While the use of AI tools by insurance companies is not new, the launch of OpenAI’s ChatGPT and other chatbots in the last few years allowed genAI to fuel a huge increase in automated denials, something industry analysts say they saw coming.

Four years ago, research firm Gartner predicted that a “war will break out” among 25% of payers and providers as a result of competing automated claim and pre-authorization transactions. “We now have the appeals bot war,” Mandi Bishop, a Gartner CIO analyst and healthcare strategist, said in a recent interview.

A painful process for all

The prior authorization process is painful for all sides in the healthcare community, as it’s manually intensive, with letters moving back and forth between fax machines. So, when health insurance companies saw an opportunity to automate that process, it made sense from a productivity perspective.

When physicians saw the same need, suppliers of electronic health record technology jumped at the chance to equip their clients with the same genAI tools. Instead of taking 30 minutes to write up a pre-authorization treatment request, a genAI bot can spit it out in seconds.

Because the original pre-authorization requests — and subsequent appeals — contain substantive evidence to support treatment based on a patient’s health record, the chatbots must be connected to the health record system to be able to generate requests.

Epic, one of the largest electronic health record companies in the United States, has rolled out genAI tools to handle prior-authorization requests to a small group of physicians who are now piloting them. Several major health systems are also currently trying out an AI platform from Doximity.

Dr. Amit Phull, chief physician experience officer for Doximity, which sells a platform with a HIPAA-compliant version of ChatGPT, said the company’s tech can drastically reduce the time clinicians spend on administrative work. Doximity claims to have two million users, 80% of whom are physicians. Last year, the company surveyed about 500 clinicians who were piloting the platform and found it could save them 12 to 13 hours a week in administrative work.

“In an eight-hour shift in my ER, I can see 25 to 35 patients, so if I was ruthlessly efficient and saved those 12 to 13 hours, we’re talking about a significant increase in the number of patients I can see,” Phull said.

Clinicians who regularly submit prior authorization requests complain the process is “purposefully opaque” and cumbersome, and it can sometimes force doctors to choose a different course of treatment for patients, according to Phull. At the very least, clinicians often get caught in a vicious cycle of pre-authorization submission, denial, and appeal — all of which require continuous paperwork tracking while keeping a patient up to date on what’s going on.

“What we tried to do is take this technology, train it specifically on medical documentation, and bring that network layer to it so that physicians can learn from the successes of other clinicians,” Phull said. “Then we have the ability to hard wire that into our other platform’s technologies like digital fax.”

Avoiding ‘mountains…of busywork’

For physicians, the need to reduce the work involved in appealing prior authorization denials “has never been greater,” according to Dr. Jesse M. Ehrenfeld, former president of the American Medical Association.

“Mountains of administrative busywork, hours of phone calls, and other clerical tasks tied to the onerous review process not only rob physicians of face time with patients, but studies show also contribute to physician dissatisfaction and burnout,” Ehrenfeld wrote in a January article for the AMA.

More than 80% of physicians surveyed by the AMA said patients abandon treatment due to authorization struggles with insurers. And more than one-third of physicians surveyed by the AMA said prior authorization fights have led to serious adverse outcomes for patients in their care, including avoidable hospitalizations, life-threatening events, permanent disabilities, and even death.

Ehrenfeld was writing in response to a new rule by the Centers for Medicare & Medicaid Services (CMS) due to take effect in 2026 and 2027 that will streamline the electronic approval process for prior authorization requests.

In 2023, nine states and the District of Columbia passed legislation that reformed the process in their jurisdictions. At the start of 2024, there were already more than 70 prior authorization reform bills of varying types among 28 states.

Earlier this month, Jha appeared before the National Conference of State Legislators to discuss the use of genAI in prior authorization. Some legislators feel the solution is to ban the use of AI for prior authorization assessments. Jha, however, said he doesn’t see AI as the fundamental problem.

“I see AI as an enabler of making things worse, but it was bad even before AI,” Jha said. “I think [banning AI] — it’s very much treating the symptom and not the cause.”

Another solution legislators have floated would force insurance companies to disclose when they use AI to automate denials, but Jha doesn’t see the purpose behind that kind of move. “Everyone is going to be using it, so every denial will say it used AI,” he said. “So, I don’t know that disclosure will help.”

Another solution offered by lawmakers would get physicians involved in overseeing the AI algorithm insurance companies use. But Jha and others said they don’t know what that means — whether physicians would have to oversee the training of LLMs and monitor their outputs or whether it would be left to a technology expert.

“So, I think states are getting into the action and they recognize there’s a problem, but I don’t think [they] have figured out how to address it,” Jha said.

AI tools a mixed blessing

Jha said policymakers need to think more broadly than “AI good versus AI bad,” and instead see it as a technology that, like any other, has pluses and minuses. In other words, its use shouldn’t be overregulated before physicians, who are already wary of the technology, can fully grasp its potential benefits.

Most healthcare organizations are acting as slow followers in deploying AI because of potential risks, such as security and data privacy risks, hallucinations, and erroneous data. Physicians are only beginning to use it now, but those who do have become a very vocal minority in praising its benefits, such as creating clinical notes, handling intelligent document processing, and generating treatment options.

“I’d say it’s got to be less than 1% of physicians,” Jha said. “It’s just that if there are a million doctors out there and it’s 1% of them, then that’s 10,000 doctors using AI. And they’re out there publicly talking about how awesome it is. It feels like all the doctors are using AI, and they’re really not.”

Last year, UnitedHealthcare and Cigna Healthcare faced class-action lawsuits from members or their families alleging the organizations had used AI tools to “wrongfully deny members’ medical claims.”

In Cigna’s case, reports claimed it denied more than 300,000 claims over two months in 2022, which equated to 1.2 seconds of review per claim on average. UnitedHealthcare used an AI-based platform called nH Predict from NaviHealth. The lawsuit against it claimed the technology had a 90% error rate, overriding physicians who said the expenses were medically necessary. Humana was later also sued over its use of nH Predict.
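Taking the reported Cigna figures at face value, a quick back-of-the-envelope check shows how little total review time that per-claim average implies:

```python
# Back-of-the-envelope check of the reported Cigna figures:
# 300,000 denied claims at an average of 1.2 seconds of review each.
claims = 300_000
seconds_per_claim = 1.2

total_seconds = claims * seconds_per_claim   # 360,000 seconds
total_hours = total_seconds / 3600           # convert to hours

print(round(total_hours, 1))  # → 100.0 hours of review for all 300,000 denials
```

In other words, the reported pace works out to roughly 100 hours of review spread across 300,000 denials over two months.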

The revelations that emerged from those lawsuits led to a lot of “soul searching” by the federal CMS and healthcare technology vendors, according to Gartner’s Bishop. Health insurance firms have taken a step back.

According to Bishop, since batch denials of claims drew the attention of Congress, there has been a significant shift among health insurers toward “auto-approving” treatment requests. Even so, Jha said batch denials are still common and the issue is likely to continue for the foreseeable future.

“These are early days,” Jha said. “I think [healthcare] providers are just now getting on board with AI. In my mind, this is just round one of the AI-vs-AI battle. I don’t think any of us think this is over. There will be escalation here.”

“The one person I didn’t talk about in all this is the patient; they’re the ones who get totally glossed over in this.”