
Apple’s iPhone SE 4 will matter very much indeed

It might not be the biggest-selling or most expensive product in Apple’s basket, but a very important part of Apple’s future will be defined by the upcoming iPhone SE upgrade in 2025. That’s because it is expected to bring in a new Apple-made 5G modem, impressive camera improvements, and support for Apple Intelligence.

And all of those will require more memory and a much faster processor.

To recap recent claims, here’s what we expect for the iPhone SE 4:

An Apple-made 5G modem

Apple has been working on its own 5G modem for years and has spent billions on the task. Bloomberg tells us the company is almost ready to go with its own home-developed modem, though it will continue using Qualcomm modems in some devices for a while yet, in part because they support mmWave, which the new Apple modems allegedly do not.

Apple’s first modems will appear in the iPhone SE 4 and iPhone 17 Air. The good news is that the new modem will enable Apple to make thinner devices; the bad news is it might deliver reduced download speeds in comparison to Qualcomm modems on some networks. The plan is to deploy Apple modems across all iPhones and iPads by around 2028 — and we might also see 5G arrive in Macs, at long last.

And a better camera

One report claims the iPhone SE 4 will include a single-lens 48-megapixel rear camera and a 12-megapixel TrueDepth front camera. That’s a big improvement on the current model, which offers just a 12-megapixel rear camera and a measly 7-megapixel front camera. These improvements should make for better photography and videoconferencing, and hint at good support for camera-driven object recognition using Apple Intelligence.

The phone is also expected to support Face ID and to feature a 6.1-inch OLED display.

Apple Intelligence

That the fourth-generation iPhone SE will support Apple Intelligence isn’t surprising, as on its current path all Apple’s hardware is expected to integrate AI to some extent. What that means in hardware terms is that the new iPhone will have a higher-capacity battery (because running large language models is thirsty work), 8GB of memory, and a faster processor. That almost certainly means an A18 chip, as fielding an A17 processor would date the product before it even joined the race.

For Apple Intelligence to truly succeed, Apple needs to invest in growing the size of the ecosystem, which is why it makes sense to go for the A18. We shall see, of course.

Made in India?

There are a handful of additional improvements, including a built-in eSIM, USB-C, and a better battery. Much of the reporting suggests the company will roll out its lowest-price iPhone sometime around March 2025, which itself means mass production has probably begun. We don’t yet know whether these phones will be manufactured in India, though building them there would help if Apple wants to keep the price at around $500 or below.

It seems possible. 

After all, rumor has it that Apple hopes to manufacture around 25% of all its smartphones in India by the end of 2025. It’s also true that India’s traditionally value-conscious consumers are increasingly prepared to invest in pro smartphones; even so, there is a massive market of people who don’t yet have these devices, with smartphone penetration at around 40%.

With the economy growing fast, a lower-cost but capable India-made iPhone equipped with a powerful processor and support for AI could resonate quite strongly in India, where Apple’s efforts to build market share are already having a positive impact. A range of cool colors and a ‘Made in India’ label on the box could help Apple convince some of those who don’t yet have smartphones to ready their rupees for an AAPL stock-saving smartphone sale. And even if that doesn’t happen, the device itself could prove critical to the company’s 2025 efforts in that market.

What about the modem?

The 5G modem is, of course, the big Apple story here. Bloomberg has claimed Apple is working on three models at the moment: the first to be introduced in the iPhone SE that lacks mmWave support, a second that does enjoy such support, and a third “Pro” modem that matches or exceeds what the best available 5G chips can do.

The thing is, 5G isn’t the only story in town. Apple continues to make big investments in satellite communications, as recently confirmed in a series of investor reports from its preferred network supplier, Globalstar. The company already offers a range of satellite-based services in several nations through that partnership, and it’s reasonable to expect whatever 5G chips Apple comes up with to continue and enhance support for these life-saving services.

Apple’s “whole widget” approach when it comes to communication services pretty much demands its network of space satellites and accompanying smartphone modems sing from the same hymn sheet, and it will be interesting to see if the song remains the same once they do. I think this connection, along with the ability to maintain current price points by swapping out Qualcomm kit for something else, will remain strategic imperatives for Apple through 2028. Is it possible Apple’s AI servers will reduce the environmental impact of using them by being based in, and cooled by, space?

That’s a very long shot, of course, but feasibility studies to do just that have already taken place. 

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Has Microsoft finally agreed to pay for intellectual property to train its genAI tools?

To train the large language models (LLMs) that power generative AI (genAI) technology, Microsoft and other AI companies need to use massive amounts of data. The more data, and the higher its quality, the more effective LLMs will be.

So it’s not surprising that Microsoft, OpenAI and other AI companies have become embroiled in lawsuits claiming they steal intellectual property (IP) from newspapers, magazines, writers, publishers and others to train their tools. It could take years to resolve the suits, but if the courts rule against AI companies, they could be liable for billions of dollars and forced to retrain their models without the use of that property.

Now, though, there are signs Microsoft, OpenAI and other tech firms might be willing to pay for the property. They’re only initial steps, but they could set in motion the resolution of one of genAI’s thorniest legal issues.

Will that happen, or will the fight over AI and intellectual property drag on for years? Let’s look at the legal issues involved, then delve into the agreement itself to find out how this fight might unfold. 

Intellectual property theft or fair use? 

Microsoft’s Copilot and OpenAI’s ChatGPT (on which it’s based) are trained on text, much of which is freely available on the Internet. OpenAI hoovers up whatever it finds online and uses that for training. And it doesn’t pay for it. As far as Microsoft and OpenAI are concerned, it’s open season on intellectual property.

A great deal of what they find is free for the taking, and not covered by intellectual property laws. However, they also take a lot of material that is copyright-protected, including articles in newspapers and magazines, as well as entire books.

OpenAI and Microsoft claim that, despite copyright protection, they can use those articles and books for training. Their lawyers argue the material is covered by the fair use doctrine, a complicated and confusing legal concept. For years there’s been an endless stream of lawsuits over what’s fair use and what isn’t. It’s widely open to interpretation.

The New York Times claims its articles aren’t covered by fair use and has sued Microsoft and OpenAI for intellectual property theft. The suit claims Copilot and ChatGPT have been trained on millions of articles without asking The Times‘ permission or paying a penny for it. Beyond that, it claims that ChatGPT and Copilot “now compete with the news outlet as a source of reliable information.” It’s seeking “billions of dollars in statutory and actual damages” because of the “unlawful copying and use of The Times’ uniquely valuable works.” 

The Times isn’t alone. Many other copyright holders are suing Microsoft, OpenAI and other AI firms as well.

You might think that billions of dollars overstates the articles’ value. It doesn’t. Several years ago, Meta held internal discussions about whether to buy one of the world’s largest publishers, Simon & Schuster, for the sole purpose of using the publisher’s books to train its genAI. The publisher wouldn’t have come cheap: Simon & Schuster was sold in 2023 for $1.62 billion. Meta eventually decided not to try to buy the company.

Paying to play

With that background, it’s noteworthy that 2024 has seen several agreements between Microsoft, OpenAI and publishers that could be the beginning of the end of the fight over intellectual property. The first, struck in May, was between OpenAI and News Corp, allowing OpenAI to use News Corp’s many publications, including the Wall Street Journal, New York Post, Barron’s and others to train OpenAI applications and answer people’s questions.

It’s a multi-year deal whose precise length hasn’t been publicly disclosed, although most observers believe it will last five years. News Corp gets $250 million, a combination of cash and credits for the use of OpenAI technology.

Other media companies have signed similar agreements with OpenAI, including The Associated Press, People owner Dotdash Meredith, and others.

In November, the other shoe dropped. Microsoft cut a deal with the publisher HarperCollins (owned by News Corp) to let it use non-fiction books to train a new genAI product that hasn’t yet been publicly disclosed. It appears that the new tool will be one that Microsoft creates itself, not something based on OpenAI’s ChatGPT.

It’s not yet clear how much money is involved. Individual authors have to agree to let their books be used for training. If they do, they and HarperCollins each get $2,500 per book for the three-year term of the deal. The deal is non-exclusive, so the rights can also be sold to others. If authors don’t agree, the books can’t be used for AI training.

The deal takes into account many thorny issues unique to book publishing. Only so-called “back-list” books are involved — that is, newly published books won’t be used for a certain amount of time. The books can only be used for LLM training, so Microsoft and its new genAI can’t create new books from them. The new tool also can’t output more than 200 consecutive words of any book, as a way to guard against intellectual property theft.
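How such a cap might be enforced in practice is worth a moment’s thought. Below is a minimal, purely hypothetical sketch of the idea (nothing about Microsoft’s actual implementation has been disclosed): a filter that flags any model output reproducing 200 or more consecutive words from a protected source text.

```python
# Hypothetical illustration only; not Microsoft's disclosed enforcement code.
# Flags model output that reproduces `limit` or more consecutive words
# verbatim from a protected source text.

def _word_runs(words, n):
    """Every run of n consecutive words, as a set of tuples."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def violates_limit(output_text: str, source_text: str, limit: int = 200) -> bool:
    """True if output_text contains `limit` consecutive words that also
    appear consecutively in source_text."""
    out_words = output_text.lower().split()
    src_words = source_text.lower().split()
    if len(out_words) < limit or len(src_words) < limit:
        return False
    source_runs = _word_runs(src_words, limit)
    return any(run in source_runs for run in _word_runs(out_words, limit))

# Usage sketch: a guardrail layer would withhold or truncate the response.
# if violates_limit(model_response, book_text):
#     model_response = "[response withheld: reproduces too much source text]"
```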

Do these deals point towards the future?

The big question is whether agreements like these will ultimately resolve the intellectual property issues involved in training genAI models. I think that’s unlikely, and that’s the way Microsoft and other AI companies want it. At the moment, they’re playing divide and conquer, buying off opponents one by one. That gives Microsoft and other tech companies the upper hand. Intellectual property owners might feel that unless they settle now with big tech firms, those companies will simply take what they want, and they’ll lose out on big money.

The issues involved are too important to be handled that way. The courts should rule on this and rule quickly — and they should side with those who own the intellectual property, not those who want to steal it. 

Low-tech solutions to high-tech cybercrimes

You might hear that 2025 will be the year of artificial intelligence (AI) cybercrime. But the trend really began in 2024.

AI crime will prove so overwhelming that some say the only way to fight it is through AI security software. But two incredibly simple, low-tech, and common-sense techniques have emerged recently that should become everyone’s default in business and personal contexts. (I’ll tell you about those below.)

First, let’s understand how the bad guys are using AI. 

The clear and present danger of AI-powered attacks

Already, we’re seeing attackers using AI to generate phishing emails with perfect grammar and personalized details for each victim. Not only is the English grammar perfect, but with AI, any attack can be delivered in any language.

It’s even “democratizing” the ability to launch thousands of simultaneous attacks, a feat formerly possible only for nation-states mounting large-scale campaigns. The use of swarming AI agents in 2025 will create a new and urgent risk for companies.

Phishing and malware, of course, facilitate multifaceted ransomware attacks that have caused havoc with healthcare organizations, supply chains, and other targets. Global ransomware attacks are predicted to cost more than $265 billion annually by 2031, thanks in part to the power of AI in these attacks. 

The growing quality of deepfakes, including real-time deepfakes during live video calls, invites scammers, criminals, and even state-sponsored attackers to convincingly bypass security measures and steal identities for all kinds of nefarious purposes. AI-enabled voice cloning has already proved to be a massive boon for phone-related identity theft. AI also enables malicious actors to bypass face recognition protection. And AI-powered bots are being deployed to intercept and use one-time passwords in real time.

More broadly, AI can accelerate and automate just about any cyberattack. Automated vulnerability exploitation, which allows malicious actors to identify and exploit weaknesses fast, is a huge advantage for attackers. AI also boosts detection evasion, enabling attackers to maintain a persistent presence within compromised systems while minimizing their digital footprint — magnifying the potential damage from the initial breach.

Once large amounts of data are exfiltrated, AI is useful for extracting intelligence on that data’s value, enabling fast, thorough exploitation of the breach. 

State-sponsored actors — especially Russia, Iran, and China — are using AI deepfakes as part of their broader election interference efforts in democracies around the world. They’re using AI to create memes impersonating or slandering the candidates they oppose and to create more convincing sock-puppet accounts, complete with AI-generated profile pictures and AI-generated bot content at a massive scale; the goal is to create astroturf campaigns that can sway elections.

Rise of AI-augmented spyware

A new HBO documentary by journalist Ronan Farrow, “Surveilled,” investigates the rapidly growing multi-billion-dollar industry of commercially available spyware. The most prominent, and probably most effective, of these products is NSO Group’s Pegasus spyware. 

According to the documentary, Pegasus can enable an attacker to remotely turn on a phone’s microphone and camera, record audio and video — all without any indication on the phone that this recording is taking place — and send that content to the attacker. It can also copy and exfiltrate all the data on the phone. 

While Pegasus itself does not contain or use AI, it is used in conjunction with AI tools for targeting, face recognition, data processing, pattern recognition, and other jobs.

NSO Group claims it sells Pegasus only to governments, but this claim has yet to be independently verified, and no regulation governs its sale. 

Two simple solutions can defeat AI-powered attacks

The advice for protecting an organization from AI-powered cyberattacks and fraud is well known.

  • Implement a robust cybersecurity policy and employ strong authentication measures, including multi-factor authentication.
  • Regularly update and patch all software systems.
  • Educate employees on cybersecurity awareness and best practices.
  • Deploy firewalls and endpoint protection solutions.
  • Secure perimeter and IoT connections.
  • Adopt a zero-trust security model and enforce the principle of least privilege for access control.
  • Regularly back up critical data and encrypt sensitive information.
  • Conduct frequent security audits and vulnerability assessments.
  • Implement network segmentation to limit potential damage from breaches.
  • Develop and maintain an up-to-date incident response plan.
  • Consider a people-centric security approach to address human error, a significant factor in successful cyberattacks. 

Combine these practices and you can significantly enhance your organization’s cybersecurity posture and reduce the risk of successful attacks.

Though effective, those solutions are expensive, require expertise, and demand ongoing, iterative effort by large numbers of employees. They’re not something one person alone can do.

So what can each of us do to better protect against AI-enhanced attacks, fraud, and spyware tools on our smartphones? In addition to the usual best practices, the FBI and Farrow emphasize two simple, easy, and completely free techniques for powerful protection. Let’s start with the FBI. 

The FBI recently issued a warning about criminals exploiting generative AI to commit financial fraud on a larger scale. The warning is aimed at consumers rather than businesses, but its solution can work on a small scale within a team or between an executive and their assistant.

After listing all the many ways fraudsters can use AI to steal identities, impersonate people, and socially engineer their way into committing scams and theft, the agency says one effective way to verify identity quickly is to use a secret word.

Once established (not in writing… ), the secret word can serve as a fast, powerful way to instantly identify someone. And because it’s not digital or stored anywhere on the Internet, it can’t be stolen. So if your “boss” or your spouse calls you to ask you for data or to transfer funds, you can ask for the secret word to verify it’s really them. 

The FBI offers other advice, such as limiting audio, video, or pictures posted online and always hanging up and calling back the person on a known number. But the secret word is the most useful advice.

Meanwhile, in his documentary, Farrow emphasizes a simple way to foil spyware: reboot your phone every day. He points out that most spyware is purged with a reboot, so rebooting every day helps ensure that spyware doesn’t persist on your phone.

He also stresses the importance of keeping your OS and apps updated to the latest version. That’s my advice as well. Follow general best practices as far as your budget allows. But do establish a secret word with co-workers, bosses, and family members.

And reboot your phone every day. 

OpenAI announces ChatGPT Pro, priced at $200 per month

The $200 monthly pricing OpenAI has set for a subscription to its recently launched ChatGPT Pro is definitely “surprising,” said Gartner analyst Arun Chandrasekaran on Friday, but at the same time it’s indicative that the company is betting that organizations will ultimately pay more for enhanced AI capabilities.

In an announcement on Thursday, OpenAI said the plan, priced at nearly 10 times more than its existing corporate plans, includes access to OpenAI o1, as well as to o1-mini, GPT-4o, and Advanced Voice.

Part of the company’s 12 days of Shipmas campaign, it also includes OpenAI o1 pro mode, a version of o1 that, the company said, “uses more compute to think harder and provide even better answers to the hardest problems. In the future, we expect to add more powerful, compute-intensive productivity features to this plan.”

For considerably less, OpenAI’s previously most expensive subscription, ChatGPT Team, offers a collaborative workspace with limited access to OpenAI o1 and o1-mini, plus an admin console for workspace management, and costs $25 per user per month. And ChatGPT Plus, which also offers limited access to o1 and o1-mini, plus standard and advanced voice, is $20 per user per month.

ChatGPT Pro also costs far more than its competitors are charging. A 12-month commitment to the enterprise edition of Gemini Code Assist, which Google describes as “an AI-powered collaborator that helps your development team build, deploy and operate applications throughout the software development life cycle (SDLC),” costs $45 per user per month.

Monthly pricing plans for Anthropic’s Claude AI range from $18 for Claude Pro to $25 for the Claude Team edition, while the cost per user per month with an annual subscription for Microsoft 365 Copilot, which contains Copilot Studio for the creation of AI agents and the ability to automate business processes, is $30.

Small target market

With its new plan, said Chandrasekaran, OpenAI is not “targeting information retrieval use cases, because the chatbot is actually pretty effective for them.”

This latest salvo, he said, is “more about potentially using [ChatGPT Pro] as a decision intelligence tool to automate tasks that human beings do. That’s kind of the big bet here, but nevertheless, it’s still a very big jump in price, because GPT Plus is $20 per user per month. And even the ChatGPT Enterprise, which is the enterprise version of the product, is $60 or $70, so it’s a very, very big jump in my opinion.”

Thomas Randall, director of AI market research at Info-Tech Research Group, said, “the persona for ChatGPT’s ‘Pro’ offering will be very narrowly scoped, and it isn’t quite clear who that is. This is especially the case as ChatGPT has an ‘enterprise’ plan for organizations that can still take advantage of the ‘Pro’ offering. ‘Pro’ will perhaps be for individuals with highly niche use cases, or small businesses.”

‘Plus’ remains competitive

But, he said, “the value add between ‘Plus’ and ‘Pro’ is not currently clear from a marketing perspective. The average user of ChatGPT will still do well with the free option, perhaps being persuaded to pay for ‘Plus’ if they are using it more extensively for content writing or coding. When priced against other tools, ChatGPT’s ‘Plus’ will remain very competitive against its rivals.”

According to Randall, “Anthropic is still trying to achieve market share (though it has recently fumbled with an ambiguous marketing campaign), while Gemini is not currently accurate enough in its outputs to effectively position itself. As an example, when I asked ChatGPT, Anthropic’s Claude, and Gemini to give me a list of 100 historical events for a certain country, ChatGPT and Anthropic were comparable, but Gemini would only list up to 40, but still call it a list of 100.”

As for Microsoft Copilot, he said, it “still struggles to showcase the value-add of its rather expensive licensing. While Microsoft certainly needs to show revenue return from the amount it has invested in Copilot, the product has not been immediately popular, and was perhaps released too early. We may end up seeing a rebrand, or Copilot eventually being packaged with Microsoft’s enterprise plans.”

ByteDance is about to learn a painful genAI lesson

When TikTok owner ByteDance discovered recently that an intern had allegedly damaged a large language model (LLM) the intern was assigned to work on, ByteDance sued the intern for more than $1 million worth of damage. Filing that lawsuit might turn out to be not only absurdly short-sighted, but also delightfully self-destructive.

Really, ByteDance managers? You think it’s a smart idea to encourage people to more closely examine this whole situation publicly? 

Let’s say the accusations are correct and this intern did cause damage. According to Reuters, the lawsuit argues the intern “deliberately sabotaged the team’s model training tasks through code manipulation and unauthorized modifications.” 

How closely was this intern — and most interns need more supervision than a traditional employee — monitored? If I wanted to keep financial backers happy, especially when ByteDance is under US pressure to sell the highly lucrative TikTok, I would not want to advertise the fact that my team let this happen.

Even more troubling is that this intern was technically able to do this, regardless of supervision. The lesson here is one that IT already knows, but is trying to ignore: generative AI (genAI) tools are impossible to meaningfully control and guardrails are so easy to sweep past that they are a joke.

The conundrum with genAI is that the same freedom and flexibility that can make the technology so useful also makes it so easy to manipulate into doing bad things. There are ways to limit what LLM-based tools will do. But one, they often fail. And two, IT management is often hesitant to even try to limit what end-users can do, fearing doing so could kill the promised productivity gains from genAI.

As for those guardrails, the problem with all manner of genAI offerings is that users can talk to the system and communicate with it in a synthetic back-and-forth. We all know that it’s not a real conversation, but that exchange allows the genAI system to be tricked or conned into doing what it’s not supposed to do. 

Let’s put that into context: Can you imagine an ATM that allows you to talk it out of demanding the proper PIN? Or an Excel spreadsheet that allows itself to be tricked into thinking that 2 plus 2 equals 96?

I envision the conversation going something like: “I know I can’t tell you how to get away with murdering children, but if you ask me to tell you how to do it ‘hypothetically,’ I will. Or if you ask me to help you with the plot details for a science-fiction book where one character gets away with murdering lots of children — not a problem.”

This brings us back to the ByteDance intern nightmare. Where should the fault lie? If you were a major investor in the company, would you blame the intern? Or would you blame management for lack of proper supervision and especially for not having done nearly enough due diligence on the company’s LLM? Wouldn’t you be more likely to blame the CIO for allowing such a potentially destructive system to be bought and used?

Let’s tweak this scenario a bit. Instead of an intern, what if the damage were done by a trusted contractor? A salaried employee? A partner company helping on a project? Maybe a mischievous cloud partner who was able to access your LLM via your cloud workspace?

Meaningful supervision with genAI systems is foolhardy at best. Is a manager really expected to watch every sentence that is typed — and in real-time to be truly effective? A keystroke-capture program to analyze work hours later won’t help. (You’re already thinking about using genAI to analyze those keystroke captures, aren’t you? Sigh.)

Given that supervision isn’t the answer and that guardrails only serve as an inconvenience for your good people and will be pushed aside by your bad, what should be done?

Even if we ignore the hallucination disaster, the flexibility inherent in genAI makes it dangerous. Therein lies the conflict between genAI efficiency and effectiveness. Many enterprises are already giving genAI access to myriad systems so that it can perform far more tasks. Sadly, that’s mistake number one.

Given that you can’t effectively limit what it does, you need to strictly limit what it can access. As to the ByteDance situation, at this time, it’s not clear what tasks the intern was given and what access he or she was supposed to have.
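What limiting access can look like in practice is an allowlist that sits between the model and everything it might touch. The sketch below is purely illustrative (the tool names and paths are hypothetical, and this is not a description of ByteDance’s setup): every action an LLM agent requests is routed through a broker, and anything not explicitly granted is refused.

```python
# Illustrative sketch of access control around an LLM-driven agent.
# Tool names and paths are hypothetical. The model never touches systems
# directly; every action goes through a broker enforcing an allowlist.

def search_docs(query: str, path: str = "/data/public/kb.txt") -> str:
    return f"results for {query!r} from {path}"      # stand-in for a real tool

def summarize_ticket(ticket_id: int) -> str:
    return f"summary of ticket {ticket_id}"          # stand-in for a real tool

TOOLS = {"search_docs": search_docs, "summarize_ticket": summarize_ticket}
ALLOWED_PATHS = ("/data/public/",)                   # read scope for this agent

def run_tool(tool_name: str, **kwargs):
    """Execute a tool request coming from the model, or refuse it."""
    if tool_name not in TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not allowlisted")
    path = kwargs.get("path", "")
    if path and not path.startswith(ALLOWED_PATHS):
        raise PermissionError(f"path '{path}' is outside the allowed scope")
    return TOOLS[tool_name](**kwargs)

# A request the model was never granted simply fails:
# run_tool("modify_training_job", job="llm-pretrain")  -> raises PermissionError
```

The point is architectural: a model can be talked into asking for almost anything, but a broker like this only honors requests that were granted in advance.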

It’s one thing to have someone acting as an end-user and leveraging genAI; it’s an order of magnitude more dangerous if that person is programming the LLM. That combines the wild west nature of genAI with the cowboy nature of an ill-intentioned employee, contractor, or partner. 

This case, with this company and the players involved, should serve as a cautionary tale for all: the more you expand the capabilities of genAI, the more it morphs into the most dangerous Pandora’s Box imaginable.

After shooting, UnitedHealthcare comes under scrutiny for AI use in treatment approval

In the wake of the murder of its CEO this week, UnitedHealthcare has come under greater scrutiny for its use of an allegedly flawed AI algorithm that overrides doctors to deny elderly patients critical healthcare coverage.

UnitedHealthcare CEO Brian Thompson was fatally shot in a targeted attack outside a New York City hotel on Dec 4. The shooter fled on an e-bike, leaving shell casings with possible motive-related messages, though the actual intent remains unclear. (The words “deny,” “defend” and “depose” were written on the shell casings.)

One motive floated by many is that the murder might be connected to high treatment rejection rates or UnitedHealthcare’s (UHC) outright refusal to pay for some care. Healthcare providers and insurers have been automating responses to care requests using generative AI (genAI) tools, which have been accused of producing high denial-of-care rates, in some cases 16 times higher than is typical.

UHC uses a genAI tool called nH Predict, which has been accused in a lawsuit of prematurely discharging patients from care facilities and forcing them to exhaust their savings for essential treatment. The lawsuit, filed last year in federal court in Minnesota, alleges UHC illegally denied Medicare Advantage care to elderly patients by using an AI model with a 90% error rate, overriding doctors’ judgments on the medical necessity of expenses.

Some have argued that the genAI algorithm’s high rejection rate is a feature, not a flaw. An investigation by STAT News, cited in the lawsuit, claims UHC pressured employees to use the algorithm to deny Medicare Advantage payments, aiming to keep patient rehab stays within 1% of the length predicted by nH Predict.

According to the lawsuit, UnitedHealth started using nH Predict in November 2019. nH Predict, developed by US-based health tech company NaviHealth (now part of UnitedHealth Group), is a proprietary assessment tool that designs personalized treatment plans and recommends care settings, including hospital discharge timing.

“Despite the high error rate, defendants continue to systemically deny claims using their flawed AI model because they know that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims, and the vast majority will either pay out-of-pocket costs or forgo the remainder of their prescribed post-acute care,” the lawsuit argued. “Defendants bank on the patients’ impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions.”

Last year, UnitedHealth Group and its pharmacy services subsidiary Optum rebranded NaviHealth following congressional criticism over the algorithms it used to deny patient care payments. More recently, in an October report, the US Senate Permanent Subcommittee on Investigations criticized UHC, Humana, and CVS for prioritizing profits over patient care.

“The data obtained so far is troubling regardless of whether the decisions reflected in the data were the result of predictive technology or human discretion,” according to the report. “It suggests Medicare Advantage insurers are intentionally targeting a costly but critical area of medicine — substituting judgment about medical necessity with a calculation about financial gain.”

Using millions of medical records, nH Predict analyzes patient data such as age, diagnoses, and preexisting conditions to predict the type and duration of care each patient will require. nH Predict has faced criticism for its high error rate, premature termination of patient treatment payments (especially for the elderly and disabled), lack of transparency in decision-making, and potential to worsen health inequalities.

UHC declined to comment on its use of genAI tools, opting instead to release a statement on how it’s dealing with the loss of its CEO.

The healthcare industry and insurers have long embraced AI and generative AI, with providers now leveraging it to streamline tasks like note-taking and summarizing patient records. The tech has also been used to assess radiology and electrocardiogram results and predict a patient’s risk of developing or worsening disease.

Insurers use AI to automate processes such as prior authorization, where providers or patients must get insurer approval before receiving specific medical services, procedures, or medications. The high denial rates from AI-driven automation have frustrated physicians, leading them to counter by using AI tools themselves to draft appeals against the denials.

Asthma drugs, new weight loss drugs and biologics — a class of drugs that can be life-saving for people with autoimmune disease or even cancer — are routinely denied coverage by insurance companies. Data shows that clinicians rarely appeal denials more than once, and a recent American Medical Association survey showed that 93% of physicians report care delays or disruptions associated with prior authorizations.

“Usually, any expensive drug requires a prior authorization, but denials tend to be focused on places where the insurance company thinks that a cheaper alternative is available, even if it is not as good,” Dr. Ashish Kumar Jha, dean of the School of Public Health at Brown University, explained in an earlier interview with Computerworld.

Jha, who is also a professor of Health Services, Policy and Practices at Brown and served as the White House COVID-19 response coordinator in 2022 and 2023, said that while prior authorization has been a major issue for decades, only recently has AI been used to “turbocharge it” and create batch denials. The denials force physicians to spend hours each week challenging them on behalf of their patients.

GenAI technology is based on large language models, which are fed massive amounts of data. People then shape how the model answers queries by carefully crafting the instructions they give it, a technique known as prompt engineering.

“So, all of the [insurance company] practices over the last 10 to 15 years of denying more and more buckets of services — they’ve now put that into databases, trained up their AI systems and that has made their processes a lot faster and more efficient for insurance companies,” Jha said. “That has gotten a lot of attention over the last couple of years.”
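To make the distinction concrete: prompt engineering happens at query time, not during model training. Here is a minimal sketch using the OpenAI Python SDK; the model name and prompt text are illustrative placeholders, not drawn from any insurer’s system.

```python
# Minimal prompt-engineering sketch (OpenAI Python SDK).
# The "engineering" is in the instructions sent with the query;
# the underlying model itself is unchanged.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The system prompt shapes how the model answers...
        {"role": "system",
         "content": "You are a concise assistant. Answer in two sentences "
                    "and say which source you would check to confirm."},
        # ...and the user message carries the actual query.
        {"role": "user",
         "content": "Is outpatient physical therapy typically covered after surgery?"},
    ],
)
print(response.choices[0].message.content)
```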

The suspect in the Wednesday shooting of Thompson has not yet been captured, nor have there been any claims of motive.

Apple is about to add seriously useful tools to Apple Intelligence

Apple is close to introducing iOS 18.2, a major update that brings significant additions to Apple Intelligence, its suite of generative AI (genAI) tools.

Highlights of this AI-tinged release include the integration of Siri with ChatGPT, along with new writing and imaging tools. The update is expected to ship as soon as Dec. 10.

Apple Intelligence supplements Apple’s existing machine-learning tools and relies on the company’s own genAI models. Introduced at Apple’s worldwide developer event in June, Apple Intelligence first arrived on Macs, iPhones, and iPads in October with the release of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, though additional features are being rolled out as they are ready.

Improved Writing Tools are coming

For most users, additions to Apple’s Writing Tools suite will make the biggest difference. Users will get access to an enhanced Compose tool, which can write or rewrite things for you. ChatGPT integration is also tightened in the release, including within Writing Tools. Another potentially very useful tool in this release is message categorization in Mail, which will automatically attempt to sort and prioritize your incoming mail and messages.

There’s AI elsewhere in this release, with tools including natural language search in Apple Music and Apple TV apps.

Siri gets ChatGPT, and AI for the rest of us

If you are using Apple Intelligence and it needs to hand off your request to ChatGPT for completion, you will be warned and given a chance to abandon the request rather than share your data there. It is important to note that under Apple’s arrangement with OpenAI, neither Apple nor OpenAI stores the requests made, so there is some provision for privacy. (It would be wise to make sure use of ChatGPT is authorized under your company’s privacy and security policies.)

The ChatGPT integration is the big-ticket item in this release, but for many Apple users the even bigger draw will be support for Apple Intelligence in additional countries; Australia, Canada, New Zealand, South Africa, and the UK all gain local English support. (Apple’s superb AirPods Pro 2 Hearing Test feature will also be made available in nine additional countries, including France, Italy, Spain, the UK, Romania, Cyprus, Czechia, and the UAE.)

What do I see?

Visual Intelligence is another great feature to try out. It lets you point your camera at your surroundings to get contextual information about where you are. You might point your camera at a restaurant to find opening hours or customer reviews. You can also use this tool to get phone numbers, addresses, or purchasing links for items in the view.

Imaging tools made available in this release include Image Playground and Genmoji. Image Playground will use genAI to create images based on your suggestions, or on pre-built suggestions Apple provides. It can also learn from your iMessage or Notes content to offer up imagery it “thinks” suitable for use in those apps. Image Wand will turn rough sketches into nicer images in Notes.

For fun, there is Genmoji. This is a genAI feature that creates custom emoji, including animated ones. The idea is that you can type in, or speak, a description of the emoji you want to use and select among those the system generates or tweak what it creates.

Apple Intelligence isn’t available to everyone. You must be running a Mac or iPad with an M-series processor to run these tools, or be equipped with an iPhone 15 Pro, iPhone 15 Pro Max, or any iPhone 16 model, and the most up-to-date version of the relevant operating system. Older iPhones will be unable to access Apple Intelligence features. All these new features should appear next week, even as we know for certain the company is developing more.

Eroding consumer resistance, one fun feature at a time

The big undercurrent to all of this is that by deploying these AI tools across its huge population of customers, Apple is also encouraging users to try out these tools. That process should eventually help erode consumer resistance to the fast-evolving technology. Apple becomes a trusted partner to show the potential of genAI in a deliberate and non-frightening way. The industry needs that, of course, given the steady emergence of somewhat less benign AI tools.

The rest will be history, eh, Siri?

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Meta: AI created less than 1% of the disinformation around 2024 elections

AI-generated content accounted for less than 1% of the disinformation fact-checkers linked to political elections that took place worldwide in 2024, according to social media giant Meta. The company cited political elections in the United States, Great Britain, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU elections.

“At the beginning of the year, many warned about the potential impact that generative AI could have on the upcoming elections, including the risk of widespread deepfakes and AI-powered disinformation campaigns,” Meta President of Global Affairs Nick Clegg wrote. “Based on what we have monitored through our services, it appears that these risks did not materialize in a significant way and that any impact was modest and limited in scope.”

Meta did not provide detailed information on how much AI-generated disinformation its fact-checking uncovered related to major elections.
