
Apple’s emotional lamp and the future of robots

Pixar Animation Studios has an unusual logo. The basic logo is the word “Pixar.” But sometimes, an animated lamp named Luxo Jr. hops into the frame and jumps on the letter “i.” The lamp exudes personality and represents Pixar’s ability to turn any object into a compelling character. 

Inspired by Luxo Jr., Apple’s Machine Learning Research division decided to create a personality-expressive lamp of its own. Apple’s ELEGNT research project explores what’s possible with an expressive physical user interface for non-humanoid robots.

Based on the user’s situation and context, as well as voice interaction, gestures, and touch, the lamp can appear to express itself through a variety of movements: nodding or shaking its “head,” lowering its head to convey sadness, “tail wagging” to signify excitement, “sitting down” to imply relaxation, tilting its head to show curiosity, leaning forward to show interest, gazing to direct attention, adjusting its speed and pausing to communicate attitudes and emotions, and moving toward or away from the user to show interest or disinterest. 

It can do some of the things smartphone apps can do, but with a greater sense of fun. For example, a smartphone app can remind you to drink water; the ELEGNT lamp can do the same by physically pushing a cup of water toward you. 

As you can see in this video, Apple’s project is fascinating. But, as far as I can tell, Apple, like every robot maker in Silicon Valley, loses the plot when dealing with any robot designed to simulate human communication. 

In their paper, they say: “The framework integrates function-driven and expression-driven utilities, where the former focuses on finding an optimal path to achieve a physical goal state, and the latter motivates the robot to take paths that convey its internal states — such as intention, attention, attitude, and emotion — during human-robot interactions.”

Did you catch the lie (or worse, a possibly self-delusional claim)? They’re falsely saying that their expression-driven utilities “motivate” the lamp to convey its “internal states,” and among those internal states is “emotion.” 

They toss out the falsehood with shocking casualness, considering how big a claim it is and how formal the research paper is. If Apple had actually invented a lamp that can feel emotions, it would be the computer science event of the century, a singularity of world-historic import. It would challenge our laws and our definition of sentience, reopening religious and philosophical questions that have been settled for 10,000 years. 

(I’ve reached out to Apple for comment on this point, but haven’t heard back.) 

It’s clear that Apple’s lamp is programmed to move in a way that deludes users into believing it has internal states it doesn’t actually have. 

(I admire Apple’s research; I don’t understand why companies lie about humanoid robotics and play make-believe in their research papers about what’s going on with their robots. In the future, it will be hard enough for people to understand the nature of AI and robotics without the researchers lying in formal, technical research papers.)

But if you ignore the lie, Apple’s lamp research definitely sheds light on where our interaction with robots may be heading—a new category of appliance that might well be called the “emotional robot.” 

A key component of the research was a user study comparing how people perceived a robot using functional and expressive movements versus one that uses only functional movements. 

The study found that movements incorporating expressive qualities boosted user “ratings,” especially during social-oriented tasks. But when users wanted some specific useful action to take place — for example, to shine light on an object so the user could take a picture of it — study participants found the lamp’s “personality” distracting. 

The researchers drew upon the concept of Theory of Mind, the human ability to attribute mental states to others, to help design the lamp’s movements. Those movements were intended to simulate intention, attention, attitude, and emotion. 
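
To make that design concrete: the paper’s function-driven and expression-driven utilities amount to scoring every candidate movement on two axes and trading them off. Below is a minimal Kotlin sketch of that idea; the class names, weights, and movement labels are my own inventions and come from neither Apple’s paper nor its code.

    // Hypothetical sketch of the two-utility idea: score candidate movements on
    // how well they achieve the physical goal and how clearly they express a
    // target internal-state label, then pick the best weighted combination.
    data class Movement(
        val label: String,             // e.g., "straight path" or "pause, tilt head, then move"
        val functionalUtility: Double,  // how efficiently it reaches the goal (0..1)
        val expressiveUtility: Double   // how clearly it conveys the target state (0..1)
    )

    fun chooseMovement(candidates: List<Movement>, expressionWeight: Double): Movement =
        candidates.maxByOrNull { m ->
            (1 - expressionWeight) * m.functionalUtility + expressionWeight * m.expressiveUtility
        } ?: error("no candidate movements")

    fun main() {
        val options = listOf(
            Movement("straight path to the cup", 0.95, 0.10),
            Movement("pause, tilt head, then nudge the cup", 0.70, 0.80)
        )
        // With the weight at 0, the lamp behaves like a plain appliance;
        // turn it up and the expressive option starts to win out.
        println(chooseMovement(options, expressionWeight = 0.4).label)
    }

Set the expression weight to zero and you get a plain appliance; raise it and the “personality” starts to win out, for better or worse, which is exactly the trade-off the user study measured.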

The movements aren’t specifically human; they evoke the body language of a person, a monkey, or a dog — of a sentient mammal generally.

The biggest takeaway from Apple’s ELEGNT research is likely that neither a human-like voice nor a human-like body, head, or face is required for a robot to successfully trick a human into relating to it as a sentient being with internal thoughts, feelings, and emotions. 

ELEGNT is not a prototype product; it is instead a lab and social experiment. But that doesn’t mean a product based on this research will not soon be available on a desktop near you. 

Apple’s emotional robot 

Apple is developing a desktop robot project, codenamed J595, and is targeting a launch within two years. According to reports based on leaks, the robot might look a little like Apple’s iMac G4, which had a lamp-like form factor with a screen at the end of a movable “arm.” The device would function like an Apple HomePod with a screen, but with additional intelligence courtesy of large language model-based generative AI. 

The estimated $1,000 robot would provide a user interface for home smart products and doorbell cams, answer questions, display photos and incoming messages, and function as a camera and screen for FaceTime calls. 

But here’s the most interesting part. Although there’s no direct evidence for this claim, it makes sense for Apple to incorporate ELEGNT research into the desktop robot project. The robot is expected to move, lean, and tilt as part of its interaction with users. 

Apple’s next appliance might be an emotional robot. 

The consumer market for emotional robots

The idea of a consumer electronics product advertising “personality” through physical movements isn’t new. Among others, there are:

  • Jibo: A social robot with expressive movements and a rotating body.
  • Anki’s Cozmo: A small robot toy with a movable arm and LED eyes for emotional expression.
  • Sony Aibo: A robotic dog using its entire body to express emotions.
  • Kuri: A home robot using head tilts, eye expressions, and sounds for communication.
  • Lovot: A companion robot from Japan expressing affection through body movements.
  • Amazon Astro: A home robot with a periscope camera and digital eyes for engagement.

The last product on that list, Amazon Astro, is worth an update since I first mentioned it in 2021.

Amazon discontinued its Astro for Business program on July 3, 2024, less than a year after launch. The business robots were remotely deactivated by Amazon last Sept. 25, and now Amazon is exclusively focusing on Astro for consumers. 

The $1,599 consumer version of Astro, introduced in 2021, is still available (by invitation only).

The business market for emotional robots

No major company has tried emotional robots for business except Amazon, and it killed that program. 

Meanwhile, the European Union’s AI Act prohibits the use of AI systems for emotion recognition in workplaces or educational settings, except in cases of medical or safety necessity. This ban became effective on Feb. 2.

So, from a business, legal, and cultural standpoint, it appears that appliances that can read your emotions and respond with gestures expressing fake emotions are not imminent. 

We’ll see whether users bring their emoting Apple desktop robots or other emotional robots to the office. We could be facing a bring-your-own-emotional-robot movement in the workplace.

BYOER beware!

Your new Android notification superpower

It may seem like a paradox, but notifications are both the best and the worst part of owning an Android device.

On the one hand, notifications let us stay on top of important incoming info — be it a critical Slack message, a personal family text, or an email from a high-priority client or colleague.

On the other hand, man alive, can they be maddening — both distracting and also sometimes ineffective, when something significant comes in and you don’t notice it right away.

To be fair, Android’s got all sorts of smart systems for taming your notifications and making ’em more manageable and effective — both official and by way of crafty workaround. The software’s oft-overlooked notification channels make it easy to control specific sorts of notifications and turn down the noise on less important stuff. And just last week, we talked about a creative way to bring custom vibration patterns to any Android device so you can tell what type of info is alerting you without even having to glance at your screen.

But there’s still the issue of especially important info coming in and falling through the cracks. After all, it’s all too easy to miss a single incoming notification and then fail to notice it until hours later — when it might be too late.

Today, I’ve got a scrumptiously slick ‘n’ simple tool that can help. It’s a new Android notification superpower, and all you’ve gotta do is embrace it.

[Don’t stop here: Get my free Android Notification Power-Pack next and send your Android notification intelligence to soaring new heights.]

Android notifications, amplified

The tool I want to tell you about is an easy-as-can-be way to amplify especially important notifications and make sure you always see ’em right away.

It does that primarily by creating a custom alarm of sorts for your highest-priority notifications — those coming from specific apps and/or with specific keywords in their bodies. When those conditions are met, the system vibrates your phone continuously until you acknowledge it and optionally makes an ongoing sound, too. That way, there’s zero chance you’ll overlook it.

You can even get incredibly nuanced with how and when those actions happen, if you want, and have the alarm active only during certain days and times. If you’re really feeling saucy, you can also have the app read certain notifications aloud when they come in as another way to ensure they catch your attention.

The app that makes all of this happen is a cool little creation called, fittingly enough, NotiAlarm. It’s a free download that’ll work on any Android device.
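
For the technically curious, apps in this category generally sit on top of Android’s notification listener API: they read each incoming notification’s source app and text, and react when a rule matches. Here is a rough Kotlin sketch of that mechanism; it is my own illustration, with made-up package names and keywords, not NotiAlarm’s actual code.

    import android.app.Notification
    import android.os.VibrationEffect
    import android.os.Vibrator
    import android.service.notification.NotificationListenerService
    import android.service.notification.StatusBarNotification

    // Illustrative only: a listener that checks each incoming notification against
    // an app allow-list and a keyword, then vibrates in a repeating pattern.
    // A real app also needs a manifest entry and the user-granted notification-access
    // permission, which is the sort of access NotiAlarm prompts you for.
    class PriorityAlertListener : NotificationListenerService() {

        private val watchedApps = setOf("com.google.android.gm", "com.Slack") // hypothetical
        private val keyword = "urgent"                                        // hypothetical

        override fun onNotificationPosted(sbn: StatusBarNotification) {
            if (sbn.packageName !in watchedApps) return

            val extras = sbn.notification.extras
            val text = listOfNotNull(
                extras.getCharSequence(Notification.EXTRA_TITLE),
                extras.getCharSequence(Notification.EXTRA_TEXT)
            ).joinToString(" ")

            if (text.contains(keyword, ignoreCase = true)) {
                // Repeat the wait/vibrate pattern (repeat index 0) until cancelled.
                getSystemService(Vibrator::class.java)
                    ?.vibrate(VibrationEffect.createWaveform(longArrayOf(0, 600, 400), 0))
            }
        }
    }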

Now, notably, NotiAlarm does overlap with another tool we’ve talked about before — an extremely versatile power-user tool called BuzzKill that lets you create all sorts of crafty custom filters for your phone’s notifications. If you’re already using BuzzKill, you can accomplish these same sorts of feats with it, and you don’t need NotiAlarm in addition.

But fantastic as it is, BuzzKill is a bit complex. It falls more in the power-user camp, and it also costs four bucks to use. So all in all, it isn’t for everyone.

NotiAlarm, in contrast, is super-simple and also free. Even if you aren’t inclined to create an entire array of custom filters for your notifications, it does this one thing and does it well — and it’s remarkably easy to get going.

The app does have some mildly annoying ads throughout its configuration interface, but that’s it. You can opt to disable those and support the developer with a one-time $10 upgrade, if you want, but you don’t have to do that in order to put it to work.

Capisce? Capisce. Lemme show you how to get it up and running now, in a matter of minutes.

Your 2-minute Android notification upgrade

All right — here’s all there is to it:

  • First, download NotiAlarm from the Play Store (obviously, right?).
  • Open ‘er up, then follow the prompts to grant the app the various forms of access it needs.
    • NotiAlarm requires permissions to manage your notifications, display over other apps, and run in the background — for reasons that should all be fairly obvious and are absolutely necessary for what it needs to do. Its privacy policy is clear about the fact that it doesn’t collect or store any personal data or share any manner of info with any third parties.
  • Once you’re on its main screen, tap the circular plus icon in the lower-right corner to configure your first alarm. That’ll take you to a screen that looks a little somethin’ like this:
[Screenshot: NotiAlarm’s configuration screen doesn’t take long at all to get through. Credit: JR Raphael, IDG]

  • Tap the plus sign next to the word “Keyword,” then type in whatever keyword you want to act as a trigger for your notification alarm. Maybe it’s a specific person’s name, a specific email address, or some specific term that you know demands your immediate attention. Whatever it is, type it in there, then tap the word “Add” to confirm and save it.
    • By default, NotiAlarm will trigger your alarm for any notifications that include your keyword. You can also, however, ask it to trigger the alarm for any notifications that don’t include the keyword — so in other words, for all notifications except those containing that keyword. If you’d rather go that route, tap the toggle next to “Keyword Filter Type” to switch its behavior.
[Screenshot: The “Keyword” field is the key to making your most important notifications unmissable. Credit: JR Raphael, IDG]

  • Next, tap the plus sign alongside the word “App” and select which app or apps you want to be included — Messages, Slack, Gmail, Calendar, or whatever the case may be.
[Screenshot: Once you’ve selected an app (or multiple apps), you’ll see the final setup for your new notification rule. Credit: JR Raphael, IDG]

  • Now, in the next box down, tap the toggle next to “Alarm” and configure exactly how you want your alarm to work.
    • You can activate and select a specific sound, via the “Alarm Sound” toggle.
    • Or you can stick solely with an ongoing vibration, via the active-by-default “Vibration” toggle.
    • If you want to limit the alarm to certain times, tap the toggle next to “Do Not Disturb Time Range.” And if you want to limit it to certain days, tap the day names under “Repeat Days.” Otherwise, just ignore those fields.
[Screenshot: You’ve got ample options for exactly how and when you want your notification alarm to activate. Credit: JR Raphael, IDG]

And hey, how ’bout that? For most purposes and scenarios, you should now be set! If you want to explore some other options — such as having a notification automatically read aloud, automatically marking a notification as read, or automatically replying to a message-oriented notification with some prewritten response — look a little lower on that same screen.

Otherwise, just tap the “Save” text in the upper-right corner, and that’s it: Your new alarm is now active. And you’ll see it with an active toggle on NotiAlarm’s main screen.

[Screenshot: A NotiAlarm notification alarm in its final, fully configured state. Credit: JR Raphael, IDG]

Now, anytime a notification comes in that meets the conditions you specified, your phone will do exactly what you asked — and an important alert will never go unnoticed again.

👉 NEXT: Snag my free Android Notification Power-Pack to discover six especially awesome enhancements that’ll take your Android notification intelligence to the next level.

Adobe Firefly expands with ‘commercially safe’ video generator

Adobe has released a video generator in public beta in its generative AI (genAI) tool, Adobe Firefly. The company calls the tool the first “commercially safe” video generator on the market. It has been trained on licensed content and public domain material, meaning it should not be able to generate material that could infringe someone else’s copyright.

Firefly can generate clips either from text instructions or by combining a reference image with text instructions. There are also settings to customize things such as camera angles, movements, and distances.

A paid subscription is required to use the video generator. Firefly Standard, which costs about $11 a month, includes 2,000 credits; that should be enough for 20 five-second videos at 1080p resolution and a frame rate of 24 frames per second.

Firefly Pro, which costs three times as much as the Standard plan, includes 7,000 credits, which should be enough for 70 five-second clips in 1080p at 24 frames per second.
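
Assuming the credit math scales linearly across both tiers, which is my assumption rather than anything Adobe has published, that works out to roughly 100 credits per five-second 1080p clip:

    // Back-of-the-envelope check of the stated numbers, assuming credits scale
    // linearly with clip count (my assumption, not Adobe's published pricing rules).
    fun clipsPerMonth(monthlyCredits: Int, creditsPerClip: Int = 100): Int =
        monthlyCredits / creditsPerClip

    fun main() {
        println(clipsPerMonth(2_000)) // Firefly Standard: 20 five-second 1080p clips
        println(clipsPerMonth(7_000)) // Firefly Pro: 70 five-second 1080p clips
    }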

Adobe plans to eventually release a model for videos with lower resolution but faster image updates, as well as a model with 4K resolution for Pro users.

AI chatbots outperform doctors in diagnosing patients, study finds

Chatbots quickly surpassed human physicians in diagnostic reasoning — the crucial first step in clinical care — according to a new study published in the journal Nature Medicine.

The study suggests physicians who have access to large language models (LLMs), which underpin generative AI (genAI) chatbots, demonstrate improved performance on several patient care tasks compared to colleagues without access to the technology.

The study also found that physicians using chatbots spent more time on patient cases and made safer decisions than those without access to the genAI tools.

The research, undertaken by more than a dozen physicians at Beth Israel Deaconess Medical Center (BIDMC), showed genAI has promise as an “open-ended decision-making” physician partner.

“However, this will require rigorous validation to realize LLMs’ potential for enhancing patient care,” said Dr. Adam Rodman, director of AI Programs at BIDMC. “Unlike diagnostic reasoning, a task often with a single right answer, which LLMs excel at, management reasoning may have no right answer and involves weighing trade-offs between inherently risky courses of action.”

The conclusions were based on evaluations of the decision-making capabilities of 92 physicians as they worked through five hypothetical patient cases. They focused on the physicians’ management reasoning, which includes decisions on testing, treatment, patient preferences, social factors, costs, and risks.

When responses to their hypothetical patient cases were scored, the physicians using a chatbot scored significantly higher than those using conventional resources only. Chatbot users also spent more time per case — by nearly two minutes — and they had a lower risk of mild-to-moderate harm compared to those using conventional resources (3.7% vs. 5.3%). Severe harm ratings, however, were similar between groups.

“My theory,” Rodman said, “[is] the AI improved management reasoning in patient communication and patient factors domains; it did not affect things like recognizing complications or medication decisions. We used a high standard for harm — immediate harm — and poor communication is unlikely to cause immediate harm.”

An earlier 2023 study by Rodman and his colleagues yielded promising, yet cautious, conclusions about the role of genAI technology. They found it was “capable of showing the equivalent or better reasoning than people throughout the evolution of clinical case.”

That data, published in the Journal of the American Medical Association (JAMA), came from a common testing tool used to assess physicians’ clinical reasoning. The researchers recruited 21 attending physicians and 18 residents, who worked through 20 archived (not new) clinical cases in four stages of diagnostic reasoning, writing and justifying their differential diagnoses at each stage.

The researchers then performed the same tests using ChatGPT based on the GPT-4 LLM. The chatbot followed the same instructions and used the same clinical cases. The results were both promising and concerning.

The chatbot scored highest in some measures on the testing tool, with a median score of 10/10, compared to 9/10 for attending physicians and 8/10 for residents. While diagnostic accuracy and reasoning were similar between humans and the bot, the chatbot had more instances of incorrect reasoning. “This highlights that AI is likely best used to augment, not replace, human reasoning,” the study concluded.

Simply put, in some cases “the bots were also just plain wrong,” the report said.

Rodman said he isn’t sure why the genAI study pointed to more errors in the earlier study. “The checkpoint is different [in the new study], so hallucinations might have improved, but they also vary by task,” he said. “Our original study focused on diagnostic reasoning, a classification task with clear right and wrong answers. Management reasoning, on the other hand, is highly context-specific and has a range of acceptable answers.”

A key difference from the original study is the researchers are now comparing two groups of humans — one using AI and one not — while the original work compared AI to humans directly. “We did collect a small AI-only baseline, but the comparison was done with a multi-effects model. So, in this case, everything is mediated through people,” Rodman said.

Researcher and lead study author Dr. Stephanie Cabral, a third-year internal medicine resident at BIDMC, said more research is needed on how LLMs can fit into clinical practice, “but they could already serve as a useful checkpoint to prevent oversight.

“My ultimate hope is that AI will improve the patient-physician interaction by reducing some of the inefficiencies we currently have and allow us to focus more on the conversation we’re having with our patients,” she said.

The latest study involved a newer, upgraded version of GPT-4, which could explain some of the variations in results.

To date, AI in healthcare has mainly focused on tasks such as portal messaging, according to Rodman. But chatbots could enhance human decision-making, especially in complex tasks.

“Our findings show promise, but rigorous validation is needed to fully unlock their potential for improving patient care,” he said. “This suggests a future use for LLMs as a helpful adjunct to clinical judgment. Further exploration into whether the LLM is merely encouraging users to slow down and reflect more deeply, or whether it is actively augmenting the reasoning process would be valuable.”

The chatbot testing will now enter the first of two follow-on phases, which has already produced new raw data to be analyzed by the researchers, Rodman said. The researchers will begin looking at varying user interaction, studying different types of chatbots, different user interfaces, and doctor education about using LLMs (such as more specific prompt design) in controlled environments to see how performance is affected.

The second phase will also involve real-time patient data, not archived patient cases.

“We are also studying [human computer interaction] using secure LLMs — so [it’s] HIPAA compliant — to see how these effects hold in the real world,” he said.

OpenAI revamps AI roadmap, merging models for a leaner future

OpenAI will integrate “o3” into GPT-5 instead of releasing it separately, streamlining adoption while signaling a shift toward fewer, more controlled AI models amid rising competition and cost pressures.

“In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3,” CEO Sam Altman said in a post on X.

The decision marks a departure from OpenAI’s recent strategy of offering multiple model variants, suggesting the company is prioritizing ease of deployment and product clarity for enterprise users.

“We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten,” Altman said. “We hate the model picker as much as you do and want to return to magic unified intelligence.”

With enterprises facing rising costs for AI adoption and competitors like DeepSeek introducing lower-cost alternatives, OpenAI’s move could also be a response to market pressures.

A single, more comprehensive model may help justify AI investments by reducing the complexity of integrating multiple systems while ensuring compatibility with OpenAI’s broader ecosystem.

OpenAI will also launch GPT-4.5, codenamed “Orion,” as its final model without chain-of-thought reasoning, Altman added, without providing a timeline.

A change of approach

The rapid proliferation of AI models has intensified competition among research labs, each striving to develop smarter, more efficient systems with larger context windows and specialized functions.

While this innovation has expanded capabilities, it has also introduced complexity, making it harder for users to choose the right model.

“The burgeoning list of models has added complexity for the average user who just wants chat to work without having to figure out which model to use,” said Abhishek Sengupta, practice director at Everest Group. “For developers, it’s a mixed bag – on one hand it takes away the need to incessantly check which model is best suited for which task (at least for OpenAI) but on the other hand you are outsourcing your choice of optimal model to OpenAI.”

While model selection may still occur, OpenAI could handle the process rather than users. Analysts suggest this could also be an attempt to avoid the race between model performance and cost by bundling all AI capabilities under a single system.

“Maybe the consolidation of models into a single source of intelligence is a move toward creating an intelligence platform,” Sengupta added. “Maybe that’s the differentiation they are placing their bets on. Time will tell.”

Rising competition and open-source threats

This shift could also reshape the economics of AI, giving OpenAI greater control over costs, deployment, and market positioning.

“I believe merging it has multiple benefits, not just in terms of costs related to training, go-to-market strategies, and customer delivery, but also in giving OpenAI more leverage to drive it as a ‘system’ and extract more value through a simplified business model,” said Neil Shah, partner and co-founder at Counterpoint Research. “This will change the economics on both ends, which investors will be keen to monitor and measure.”

This comes at a time when AI competition is intensifying, with DeepSeek disrupting the market with cost-effective models, highlighting the pressure on OpenAI to refine its strategy.

“One cannot rule out this move being triggered by competitive models like DeepSeek, which are highly cost-effective,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “Of course, there shall be many other models out there that will be more cost-effective and innovative, and most importantly, will be made open source and not proprietary like OpenAI.”

Importantly, not all organizations have the resources, need, or strategic planning to navigate complex, tiered pricing structures. “Despite the rise of SaaS, many large enterprises prefer EULA contracts since they are incubated from any risk associated with sudden and unplanned need for resources,” Gogia added. “In the same breath, not all organizations require a customized model and the flexibility that comes along with it. Many of their use cases are simplistic enough to use a model that keeps the billing and the use simple.”

EU pulls back – for the moment – on privacy and genAI liability compliance regulations

When the EU on Tuesday said it was not, at this time, moving ahead with critical legislation involving privacy and genAI liability issues, it honestly reported that members couldn’t agree. But the reasons why they couldn’t agree get much more complicated.

The EU decisions involved two seemingly unrelated pieces of legislation: One dealing with privacy efforts, often called the cookie law, and the other dealing with AI liability. 

The EU decisions are in the annexes to the Commission’s work programme for 2025, in Annex IV, items 29 and 32. For the AI liability section (“on adapting non-contractual civil liability rules to artificial intelligence”), the EU found “no foreseeable agreement. The Commission will assess whether another proposal should be tabled or another type of approach should be chosen.”

For the privacy/cookie item (“concerning the respect for private life and the protection of personal data in electronic communications”), the EU said, “No foreseeable agreement – no agreement is expected from the colegislators. Furthermore, the proposal is outdated in view of some recent legislation in both the technological and the legislative landscape.”

Various EU specialists said those explanations were correct, but the reasons behind the decisions from those member countries were more complex. 

Andrew Gamino-Cheong, CTO at AI company Trustible, said different countries had different, and incompatible, positions.

“The EU member states have started to split on their own attitudes related to AI. On one extreme is France, which is trying to be pro-innovation and [French President Emmanuel] Macron used the [AI summit] this past week to emphasize that,” Gamino-Cheong said. “Others, including Germany, are very skeptical of AI still and were pushing for these regulations. If France and Germany are at odds, as the economic heavyweights in the EU, nothing will get done.”

But Gamino-Cheong, along with many others, said there is a fear that the global AI arms race may hurt countries that impose too many compliance requirements. 

The EU is seen as “being too aggressive, overregulating” and “the EU takes a 2-sentence description and writes 14.5 pages about it and then contradicts itself in multiple areas,” Gamino-Cheong said. 

Ian Tyler-Clarke, an executive counselor at the Info-Tech Research Group, said he was not happy that the two proposed bills did not go forward because he fears how those moves will influence other countries. 

“Beyond the EU, this decision could have broader geopolitical consequences. The EU has long been a global leader in setting regulatory precedents, particularly with GDPR, which influenced privacy laws worldwide. Without new AI liability rules, other regions may hesitate to introduce their own regulations, leading to a fragmented global approach,” Tyler-Clarke said. “Conversely, this could trigger a regulatory race to the bottom, where jurisdictions with the least restrictions attract AI development at the cost of oversight and accountability.”

A very different perspective comes from Enza Iannopollo, a Forrester principal analyst based in London. 

Asked about the failure to move forward on the privacy bill, Iannopollo said, “Thank God that they have withdrawn that one. There are more pressing priorities to address.”

She said the privacy effort suffered from the rapid advances in web controls, including some changes made by Google. “Regulators were not convinced that they would improve things,” Iannopollo said.

Regarding the AI liability rules, Iannopollo said that she expects to see those come back in a revised form. “I don’t think this is a final call. They are just buying time.”

The critical factor is that another, much larger piece of legislation, called simply the EU AI Act, is just about to kick in, and regulators wanted to see how that enforcement went before expanding it. “They want to see how these other pieces of the framework are going to work. There are a lot of moving parts so [delaying] is wise.”

Another analyst, Anshel Sag, VP and principal analyst with Moor Insights & Strategy, said that EU members are very concerned with how they are perceived globally.

“The real challenge is that applying regulations too early, without the industry being mature enough, risks hurting European companies and European competitiveness, which I believe is a major factor in why these regulations have been paused for now,” Sag said. “Especially when you consider the current rate of change within AI, there’s just a chance that they could spend a long time on this regulation and by the time it’s out, it’s already well out of date. They will have to act fast, though, when the time is right.”

Added Vincent Schmalbach, an independent AI engineer in Munich, “The most interesting part is how this represents a major shift in EU thinking. It went from being the world’s strictest tech regulator to acknowledging they need to focus on not falling further behind in the AI race.”

Michael Isbitski, principal application security architect for genAI at ADP, the $19 billion HR and payroll enterprise, and also a former Gartner analyst, sees the two proposed EU legislative efforts as ones that could have had a massive impact on data strategies.

The proposed AI rule, he said, involved the retention of AI-generated data logs. “Everywhere there is some kind of AI transaction, you need to retain those logs, for every query, anywhere,” Isbitski said. “Think about what needs to be done to secure your requirements and controls systems, along with your cloud security. Logging seems simple, but if you look at a complete AI interaction, there are an awful lot of interconnects.”
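
To give a sense of what retaining logs for every AI transaction implies in practice, here is a minimal Kotlin sketch of an audit wrapper around a model call. It is my own illustration of the general pattern, not anything prescribed by the withdrawn EU proposal, and the record fields and helper functions are invented.

    import java.time.Instant
    import java.util.UUID

    // Illustrative audit-log record for a single AI transaction. Field names are
    // invented; real retention requirements would dictate the actual schema.
    data class AiAuditRecord(
        val requestId: String,
        val timestamp: Instant,
        val userId: String,
        val modelId: String,
        val prompt: String,
        val response: String
    )

    // Hypothetical wrapper: every call to the model goes through here, so each
    // query/response pair is captured before the result is returned to the caller.
    fun auditedCompletion(
        userId: String,
        modelId: String,
        prompt: String,
        callModel: (String) -> String,   // stand-in for the real model client call
        persist: (AiAuditRecord) -> Unit // stand-in for durable, access-controlled storage
    ): String {
        val response = callModel(prompt)
        persist(
            AiAuditRecord(
                requestId = UUID.randomUUID().toString(),
                timestamp = Instant.now(),
                userId = userId,
                modelId = modelId,
                prompt = prompt,
                response = response
            )
        )
        return response
    }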

However, Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions, said the pausing of these two EU potential rules will likely have no significant impact on enterprises.

“This means you can continue to do everything you were doing before. There will be no new constraints on top of anything you were doing,” Villanustre said.

But the broader issue of genAI liability absolutely needs to be addressed because the current mechanisms are woefully inadequate, he said. 

That is because the very nature of genAI, especially in its stochastic and probabilistic attributes, makes liability attribution virtually impossible.

Let’s say something bad happens, for example, with an LLM deployment where a company loses billions of dollars or there is a loss of life. 

There are typically going to be three possible groups to blame: the model-maker, which creates the algorithm and trains the model; the enterprise, which fine-tunes the model and adapts it to its own needs; and the user base, which would be either employees, partners, or customers who pose the queries to the model.

Overwhelmingly, when a problem happens, it will be because of the interactions of efforts by two or three of those groups. Without the new legislation being proposed by the EU, the only means of determining liability will be via legal contracts. 

But genAI is a different kind of system. It can be asked the identical question five times and offer five different answers. That being the case, if its developers cannot accurately predict what it will do in different situations, Villanustre wondered what chance attorneys have at anticipating all problems.

“That is a challenge: determining who has the responsibility,” Villanustre said. “This legislation was meant to define the liability outside of contracts.”

It’s really happening: Are you prepared for the sunsetting of Exchange Server 2016 and 2019?

Enterprises still hosting on-premises Exchange mail, it’s time to face reality: Microsoft will soon no longer support your infrastructure.

Earlier this week, the tech giant released its final roll-up patch for Exchange Server 2019 (Cumulative Update 15). On October 14, Microsoft will officially stop supporting both Exchange Server 2016 and Exchange Server 2019 — meaning no more updates, technical help, bug fixes, or security patches.

With end of life (EOL) pending, Microsoft customers must now either move to the fully hosted Exchange Online or Microsoft 365, or pay for on-premises Exchange Server Subscription Edition (SE) licenses to receive continued support and updates.

“Every software company out there — Microsoft, Oracle — are all trying to nudge gently, and sometimes not so gently, their customer base to the cloud,” said Matt Kimball, a principal data center analyst with Moor Insights & Strategy. “They’re doing everything but forcing companies to move to cloud-based or SaaS-based models.”

Upgrade to Exchange Server SE, or stay out of date at your own risk

Microsoft will roll out Exchange Server SE early in the second half of 2025, and release the first CU for the platform later in the year. Once that happens, all other versions of Exchange Server will be out of support.

It’s not unexpected. Adam Preset, VP analyst at Gartner, pointed out that there have been 10 versions of Exchange Server since 1996. “This is just what happens,” he said.

The final cumulative update for Exchange 2019 integrates all prior security patches and introduces server-side components for Feature Flighting, an optional cloud-based service that allows for immediate updating once new features are available, he said. This will help ensure stability and security up to the EOL date.

“Post-EOL, organizations can operate existing installations at their own risk,” he said. “Email is an essential and business critical workload, though, so staying on Exchange Server 2019 is unwise.”

To install Exchange Server SE CU1 (or later), organizations will have to first decommission and remove all older versions of Exchange, according to Microsoft. Organizations have two options when moving to the new subscription model: a legacy upgrade (introducing new servers and uninstalling old servers); or (only for 2019) an “in-place” upgrade (downloading and installing the latest upgrade package).

In addition to purchasing required licenses, customers must also maintain an active Microsoft subscription, which means purchasing either cloud subscription licenses for all users and devices, or buying Exchange Server SE licenses with software assurance (SA).

Preset pointed out that “there’s no substitute” for checking on licensing agreements and consulting with Microsoft if an organization needs to transition to Exchange Server SE. Also, the new model will accept Exchange 2019 product keys to help simplify the upgrade process.

It’s time for enterprises to embrace the future

To some, on-prem email hosting in 2025 seems like a quaint notion.

“Who the heck is still running Exchange on premises?” Kimball asked. “I say that jokingly but I kind of mean it too.”

Cloud computing has grown massively over the last 15 years, and email has been typically one of the first candidates to be moved up into the cloud, he noted.

“Outside of super-high privacy reasons, I can’t see the efficiency of running on-premises,” said Kimball. However, he noted, “there’s always going to be a laggard customer base that is slow to adopt not even new technology, but current technology.”

These “corner cases” typically have unique privacy or regulatory requirements — or it might simply come down to company culture. “That points to something bigger: You’re hosting your own email and managing it a certain way because you’ve always done it that way,” said Kimball.

But that’s typically not best for a business’ users and partners, not to mention its IT staff, who want to be doing more exciting and challenging kinds of work. Enterprises should be focused on staying current, quickly gaining access to the latest capabilities (in Microsoft’s case, think access to Copilot) and the “absolute resiliency” of the cloud, said Kimball.

However, he did question Microsoft’s rather abrupt end-of-service for Exchange Server 2019. In other cases, even after EOL, Microsoft has been known to continue to support legacy infrastructures. “Five years is not a long time to have a product in the market,” said Kimball. “End of support is a big thing.”

Important migration strategies

Analysts emphasize that organizations on Exchange 2019 must build a strategy for migration, performing extensive planning and assessing infrastructure for a seamless transition.

The first thing to do, Kimball said: Perform a cost-benefit analysis or ROI study that takes into account all direct and indirect costs, infrastructure, software, people costs, financial impact of downtime — and what valuable IT staff have to do to maintain legacy environments “day after day after day, the care and feeding.” Then compare that to moving to hosted Exchange.

“Unless there’s a real hard and fast regulatory requirement, I would be willing to wager that the cost benefit analysis is going to lead to a migration to the cloud,” Kimball noted.

Overall, enterprises, whether Microsoft customers or not, should consider migrating to the cloud when on-premises maintenance costs rise, advanced security features are required, or they want more (and better) cloud integration, said Preset.

“Meticulous planning is essential to ensure a smooth transition,” he said.

To reduce the risks of disruption when migrating, Preset suggested having IT personnel skilled in Exchange and cloud technologies, project managers, and support from vendors with migration experience. Enterprises also need to allocate budget for new licenses, potential third-party migration tools, training expenses for IT staff, and new systems.

“However,” Preset emphasized, “if you know you need to transition to cloud anyway, Microsoft is not the only game in town. If you’re ready for a bigger change, it might be time to look at alternatives such as Google Workspace or other vendors with email services.”

AI company Ross Intelligence loses copyright fight with Thomson Reuters

A US judge has ruled in favor of Thomson Reuters in an AI training fight against Ross Intelligence, a legal AI startup, according to The Verge. Thomson Reuters sued Ross Intelligence in 2020 for using the company’s legal research platform, Westlaw, to train Ross Intelligence’s AI without permission. Westlaw indexes large amounts of non-copyrighted material but mixes it with its own content.

Ross Intelligence argued that the training should be classified as “fair use,” but the judge disagreed. Instead, the court held that Ross Intelligence’s use of the copyrighted material harmed the value of the original because the company intended to develop a direct competitor.

The ruling is significant because it could have implications for future cases where copyrighted material is used for AI training. One wrinkle: this particular case concerned non-generative AI, which is not the same as generative AI used in large language models to create new material based on previous training data.

BBC: Chatbots distort the facts about news

It’s already known that today’s generative AI (genAI) tools often have trouble with basic facts. Now, it’s clear they don’t do well with current events, either.

That’s the upshot of a test by the BBC, which asked ChatGPT, Copilot, Gemini and Perplexity to answer 100 questions using BBC articles as a source; more than half of the answers (51%) were wrong.

One in five answers (19%) were based on directly incorrect facts — and 13% of quotes had been modified from the source. For example, the AI tools claimed that Rishi Sunak is still the UK’s Prime Minister, and they gave the wrong death date for TV personality Michael Mosley.

“The price of AI’s extraordinary benefits must not be a world where people searching for answers are served distorted, faulty content that appears to be fact,” Deborah Turness, managing director of BBC News, wrote. “In what can feel like a chaotic world, it really can’t be right that consumers seeking clarity are met with yet more confusion.”

Apple’s Chinese AI problem (perhaps) solved with Alibaba

Reflecting the erosion of universality, Apple Intelligence will now be coming to China, but rather than working with a US AI partner, the company will use Chinese-made AI tech from Alibaba.

According to The Information, Apple and Alibaba have already “submitted the co-developed features for approval to regulators.” The claim hasn’t yet been confirmed. If true, it would mean Alibaba’s Qwen model will replace OpenAI’s ChatGPT as the go-to third-party service integrated with Apple Intelligence for China.

Why does this matter?

Due to local regulations, Apple needs a Chinese partner to offer its AI services in China. It had been expected to work with Baidu, but chose not to do so. DeepSeek was also considered, but there were concerns that, as a small startup, it would not be able to scale to meet Apple-driven demand.

Achieving an AI deal in China is strategically important to Apple, as the lack of Apple Intelligence has dampened demand for its iPhones there. Morgan Stanley analyst Erik Woodring told clients: “Our survey work shows that Chinese iPhone users are not only more interested in access to genAI technology than US or European iPhone owners, but over 50% of Chinese iPhone owners cited the staggered rollout of Apple Intelligence as having a moderate to significant impact on their decision not to upgrade to a new iPhone this cycle.”

Apple still maintains strong customer loyalty and relatively consistent switching rates, and government subsidies may help stimulate sales, Morgan Stanley said. 

The hope is that the introduction of Apple Intelligence (for China) will unleash pent-up demand for iPhone among Chinese consumers who might have delayed purchases pending this introduction.

So, when is Apple Intelligence coming to China?

News of the deal emerges just weeks before April, when Apple has already suggested it will introduce Simplified Chinese localization for Apple Intelligence. While both Apple and Alibaba must now gain regulatory approval for their plan, the signal within the smoke suggests a Q2 introduction of Chinese Apple Intelligence support. This could prove a shrewd move that might yet stimulate iPhone sales there — particularly with the introduction of the more affordable iPhone SE. 

Taken together, this could give Apple twin benefits — hardware sales and Apple Intelligence proliferation. Of course, the mass-market AI that wins the platform wars will be the version people use, and Apple is likely optimistic that its huge and loyal user base will use its generative AI tools. 

Alibaba might be the first Chinese AI service supplier to gain integration on the iPhone, but perhaps not the last. “Based on Apple’s broader rhetoric about providing users AI choice, we could also see a scenario where Apple creates an initial AI partnership with Alibaba that eventually expands to other local Chinese cloud players over time,” said Morgan Stanley in a client note.

Bugs in the lotion

One thing worth watching is the extent to which Apple is able to maintain privacy in China — will the company still offer private cloud servers for some tasks or offload all complex requests to its third-party partner? Will it limit the features it makes available to Chinese users? Or has it reached a deal with Chinese regulators in which the private services it does offer are seen to be relatively non-threatening?

Whatever the cut of the cloth, the seeming simplicity of Apple Intelligence masks numerous decisions around weft and weave, and not all of these will be evident at first glance.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.