Court ban on Google AI stakes would hurt Anthropic clients, say analysts

Anthropic has asked a US court for permission to intervene in the remedy phase of an antitrust case against Google, arguing that the US government’s call for a ban on Google investing in AI developers could hurt it.

Analysts suggest the AI startup’s fears are well founded, and that it risks losing customers if the government’s proposal is adopted.

“Its enterprise clients might face uncertainties regarding the continuity of services and support, potentially affecting their operations,” said Charlie Dai, principal analyst at Forrester.

The government’s proposed remedies, including the ban on AI investments, follow the US District Court for the District of Columbia’s August 2024 finding that the search giant had illegally maintained a monopoly in the online search and text advertising markets.

The proposed investment ban is aimed at stopping Google from gaining control over products that handle or control consumer search information. In addition to preventing further investment in any AI startup, it would force Google to sell the stakes it currently holds, including its roughly $3 billion stake in Anthropic.

On Friday, Anthropic filed a request to participate in the remedy phase of the trial as an amicus curiae, or friend of the court.

“A forced, expedited sale of Google’s stake in Anthropic could depress Anthropic’s market value and hinder Anthropic’s ability to raise the capital needed to fund its operations in the future, seriously impacting Anthropic’s ability to develop new products and remain competitive in the tight race at the AI frontier,” the AI startup said in a court filing justifying the request.

It said it had contacted representatives for the plaintiffs in the case — the US government and several US states — seeking to influence the proposal.

Remedy wouldn’t just affect Google

While Anthropic’s primary concern is that the proposed investment ban could hurt the value of the company, it is also worried that it could put it on the back foot against rivals.

“This would provide an unjustified windfall to Anthropic’s much larger competitors in the AI space — including OpenAI, Meta, and ironically Google itself, which (through its DeepMind subsidiary) markets an AI language model, Gemini, that directly competes with Anthropic’s Claude line of products,” the company said in the filing.

Abhivyakti Sengar, senior analyst at Everest Group, also shares Anthropic’s view on the effect of the proposed ban.

“Forcing Google to sell its stake in Anthropic throws a wrench into one of the AI industry’s most significant partnerships,” Sengar said, adding that while it might not cause an immediate loss of customers, any disruption to the performance or reliability of Anthropic’s models or its innovation speed could drive business towards its rivals.

The AI startup additionally sought to differentiate itself from rivals such as OpenAI by pointing out that, unlike its competitors, it is not owned or dominated by a single technology giant.

“While both Amazon and Google have invested in Anthropic, neither company exercises control over Anthropic. Google, in particular, owns a minority of the company and it has no voting rights, board seats, or even board observer rights,” it said in the filing.

Further, it said that Google has no exclusive rights to any of its products, despite investing nearly $3 billion since 2022 in two forms: direct equity purchases and purchases of debt instruments that can be converted into equity.

AI was “never part of the case”

Among the arguments that Anthropic makes against the proposed remedy, it notes that neither it nor Google’s other AI investments were ever a part of the case.

“Neither complaint alleged any anticompetitive conduct related to AI, and neither mentioned Anthropic. The only mention of AI in either complaint was a passing reference in the US Plaintiffs’ complaint to AI ‘voice assistants’ as one of several ‘access points’ through which mobile-device users could access Google’s search services,” it said in the filing.

In addition, it claimed that forcing Google to sell its stake could diminish Anthropic’s “ability to fund its operations and potentially depress its market value,” because alternative investors deal in millions, not the billions Google has invested.

“Forcing Google to sell its entire existing stake in Anthropic within a short period of time would flood the market, sating investors who would otherwise fund Anthropic in the future,” it said in the filing.

Analysts, too, warned that if the proposal is adopted, the future of Anthropic’s operations and its ability to retain customers will depend on the startup’s ability to secure new investment.

That, said Everest’s Sengar, “will determine whether it will be a setback or an opportunity for greater independence in the AI race.”

Forrester’s Dai agreed, adding that if Anthropic can quickly reassure its customers and demonstrate a clear plan for continuity and innovation, it may retain their trust and loyalty.

Why enterprises are choosing smart glasses that talk — not overwhelm

Meta’s Ray-Ban smart glasses have quietly achieved a milestone that its enterprise-focused competitors could only dream of — selling over two million pairs since their debut in October 2023.

EssilorLuxottica, the eyewear giant that manufactures the glasses for Meta, recently announced the sales milestone and said it aims to produce 10 million pairs of Meta glasses annually by the end of 2026.

In contrast, Microsoft’s HoloLens and Apple’s Vision Pro have struggled to gain traction despite their advanced mixed-reality capabilities.

The reason may lie not just in features or branding but in the fundamental user interface itself: Meta’s lightweight, audio-focused design seems to align more closely with enterprise needs than fully immersive mixed-reality headsets do.

“The biggest barriers to AR headset adoption have been cost, efficiency, and battery life, all of which become more challenging with higher levels of immersivity,” said Neil Shah, VP for research and partner at Counterpoint Research. “Additionally, the lack of a standardized OS or UI has made enterprise integration more fragmented.”

“Rather than pushing an entirely new wearable concept, Meta retrofitted VR capabilities into an existing accessory that people were already comfortable with,” said Faisal Kawoosa, founder and lead analyst at Techarc. “The partnership with Ray-Ban also played a key role in making these smart glasses more socially acceptable.”

Enterprise adoption: simplicity over immersion?

While Microsoft’s HoloLens and Apple’s Vision Pro pushed the boundaries of augmented and virtual reality, their enterprise adoption remained limited due to cost, complexity, and user resistance. HoloLens found some traction in industrial training and fieldwork, and Vision Pro positioned itself as the future of spatial computing, but neither saw mass adoption.

“The failure of AR-heavy wearables such as HoloLens and Vision Pro highlights a fundamental mismatch with workplace needs,” said Riya Agrawal, senior analyst at Everest Group. “High costs, complexity of use, and extensive training requirements have slowed deployment. Furthermore, frontline workers—especially in field services—typically need quick, hands-free AI assistance rather than distracting digital overlays.”

Meta’s smart glasses, in contrast, take a different approach. They offer an audio-centric interface with a discreet camera, enabling hands-free communication, real-time guidance, and live transcription without overwhelming users with AR overlays.

This approach fits naturally into enterprise workflows where workers need digital assistance without obstructing their physical environment.

“Enterprise users ideally seek more immersion for use cases like design and development, but current AR/VR limitations make mainstream adoption difficult,” Shah pointed out. “While immersive headsets promise to overlay the digital world onto the physical, limited app integrations and power-hungry designs hinder their viability in real-world enterprise settings.”

“In the enterprise space, VR applications tend to be highly specialized and customized to specific business needs,” Kawoosa added. “Unlike consumer VR, which benefits from broad applications, enterprises see AR as a layer within their existing tech stack rather than a standalone solution. This means generic, one-size-fits-all AR/VR products may struggle in the long run.”

Why do enterprise users prefer audio-centric wearables?

Seamless integration into daily workflows has been a major reason for the success of Meta’s smart glasses. Unlike bulky AR headsets, they resemble traditional eyewear, making them more socially and professionally acceptable in meetings, fieldwork, and customer interactions. Open-ear speakers allow users to receive AI-powered insights, instructions, or language translations while staying engaged with their surroundings.

“In many enterprise use cases, HoloLens and Vision Pro offer more computational power than necessary, which only drives up costs without delivering proportional benefits,” Agrawal said. “Smart glasses or audio-driven interfaces solve this by being more cost-effective and practical, aligning better with enterprise workflows.”

Cost has been another decisive factor.

Vision Pro and HoloLens come at steep prices: Apple’s headset costs $3,499, and HoloLens 2 starts at around $3,500. Meanwhile, Meta’s Ray-Ban smart glasses start at a fraction of that price (less than $380), making them more viable for enterprise deployment at scale. Lower costs encourage broader experimentation, allowing businesses to deploy smart glasses across departments rather than limiting them to niche applications.

For field workers, hands-free assistance is critical. Remote guidance and real-time AI-driven instructions are invaluable in sectors like logistics, healthcare, and maintenance.

“For frontline agents, minimizing visual overload is key,” Agrawal said. “The lightweight design and better battery life of smart glasses make them truly wearable all day, unlike bulkier AR headsets that drain power quickly.”

Meta’s smart glasses enable professionals to stream video to remote experts without interrupting their workflow. In contrast, Vision Pro and HoloLens often require users to engage with floating screens or hand gestures, which may not be practical for workers who need to stay focused on manual tasks.

“Simple, AI-driven smart glasses—such as Meta’s Ray-Ban models—offer a hands-free and ear-free approach that feels natural,” said Shah. “Features like real-time guidance for warehouse workers, last-mile delivery directions, and field service assistance make them useful in enterprise settings without the complexity of AR overlays.”

Another key advantage is the ease of adoption. Employees are less likely to resist using audio-centric glasses compared to full-fledged AR headsets, which can feel intrusive or overwhelming.

“The appeal of smart glasses extends beyond cost—they also offer faster adoption and return on investment,” Agrawal pointed out. “Compared to full AR headsets, they require minimal training, making enterprise-wide deployment easier and more scalable.”

Training time is minimal, as users can interact naturally through voice commands and AI-based responses, making enterprise adoption smoother.

“Audio-based interfaces make even more sense in enterprise settings, where they function like an AI-powered assistant — essentially a ‘machine colleague’ that can provide real-time guidance, transcriptions, and hands-free instructions,” Kawoosa pointed out.

The future: will more enterprises embrace smart audio glasses?

With plans to scale up production to 10 million units annually by 2026, Meta’s strategy suggests that audio-first smart glasses could become a staple in enterprise environments.

Meanwhile, reports indicate that Meta is working on a version with an integrated display, potentially bringing a hybrid approach that balances visual AR with the audio-first experience that has proven successful.

“While AR and VR can augment meaningful enterprise use cases, their economic and ergonomic limitations have slowed adoption,” Counterpoint’s Shah said. “Simpler AI-powered glasses are serving as an entry point, building familiarity before AR technology matures.”

As immersive AR headsets struggle to find their footing, the rapid success of Meta’s smart glasses may signal a shift in how enterprises perceive wearable technology. Instead of seeking full virtual immersion, businesses may prioritize frictionless, real-world interactions — an area where audio-first smart glasses appear to have the upper hand. “While enterprises currently prefer augmentation over full immersion, AI-driven advancements could accelerate VR adoption in the long term,” Kawoosa said, adding, “However, we are still in the early stages of that transition.”

GenAI can make us dumber — even while boosting efficiency

Generative AI (genAI) tools based on deep learning are quickly gaining adoption, but their use is raising concerns about how they affect human thought.

A new survey and analysis by Carnegie Mellon and Microsoft of 319 knowledge workers who use genAI tools (such as ChatGPT or Copilot) at least weekly showed that while the technology improves efficiency, it can also reduce engagement in critical thinking, encourage over-reliance, and diminish problem-solving skills over time.

“A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the study found.

Overall, workers’ confidence in genAI’s abilities correlates with less effort in critical thinking. The focus of critical thinking shifts from gathering information to verifying it, from problem-solving to integrating AI responses, and from executing tasks to overseeing them. The study suggests that genAI tools should be designed to better support critical thinking by addressing workers’ awareness, motivation, and ability barriers.

The research specifically examines the potential impact of genAI on critical thinking and whether “cognitive offloading” could be harmful. Cognitive offloading, or the process of using external devices or processes to reduce mental effort, is not new; it’s been used for centuries.

For example, something as simple as writing things down, or relying on others to help with remembering, problem-solving, or decision-making is a form of cognitive offloading. So is using a calculator instead of mental math.

The paper examined how genAI’s cognitive offloading, in particular, affects critical thinking among workers across various professions. The focus was on understanding when and how knowledge workers perceive critical thinking while using genAI tools and whether the effort required for critical thinking changes with their use.

The researchers classified critical thinking into six categories: knowledge, comprehension, application, analysis, synthesis, and evaluation. Each of those six cognitive activities was scored with a one-item, five-point scale, as has been done in similar research.

The study found that knowledge workers engage in critical thinking primarily to ensure quality, refine AI outputs, and verify AI-generated content. However, time pressures, lack of awareness, and unfamiliarity with domains can hinder reflective thinking.

At college, signs of a decline in thinking abilities

David Raffo, a professor at the Maseeh College of Engineering and Computer Science at Portland State University, said he noticed over a six-year period that students’ writing skills were declining.

“Year after year, the writing got worse,” he said. “Then, during Covid, I noticed that papers started getting better. I thought, maybe staying at home had a positive effect. Maybe students were putting more energy and effort into writing their papers and getting better at their communication skills as a result.”

Raffo met with one student to discuss their A- grade on a paper. During the Zoom meeting, however, the student struggled to form grammatically correct sentences. Raffo began to question whether they had written the paper themselves, considering their communication skills didn’t match the quality of their work.

“I wondered if they had used a paid service or generative AI tools. This experience, about three years ago, sparked my interest in the role of technology in academic work and has motivated my ongoing study of this topic,” said Raffo, who is also editor-in-chief of the peer-reviewed Journal of Software Evolution and Process.

The difference between using genAI and using calculators or internet search engines lies in which brain functions are engaged and how they affect daily life, said Raffo, who was not involved in the latest study.

GenAI tools offload tasks that involve language and executive functions. The “use it or lose it” principle applies: engaging our brains in writing, communication, planning, and decision-making improves these skills.

“When we offload these tasks to generative AI and other tools, it deprives us of the opportunity to learn and grow or even to stay at the same level we had achieved,” Raffo said.

How AI rewires our brains

The use of technology, in general, rewires brains to think in new ways — some good, some not so good, according to Jack Gold, principal analyst at tech industry research firm J. Gold Associates. “It’s probably inevitable that AI will do the same thing as past rewiring from technology did,” he said. “I’m not sure we know yet just what that will be.”

As agentic AI becomes common, people may come to rely on it for problem-solving. But how will we know it’s doing things correctly? Gold asked. People might accept its results without questioning them, potentially limiting their own skills development by letting technology handle tasks.

Lev Tankelevitch, a senior researcher with Microsoft Research, said not all genAI use is bad. He said there’s clear evidence in education that it can enhance critical thinking and learning outcomes. “For example, in Nigeria, an early study suggests that AI tutors could help students achieve two years of learning progress in just six weeks,” Tankelevitch said. “Another study showed that students working with tutors supported by AI were 4% more likely to master key topics.”

The key, he said, is that it was teacher-led. Educators guided the prompts and provided context, showing how a collaboration between humans and AI can drive real learning outcomes, according to Tankelevitch.

The Carnegie Mellon/Microsoft study determined the use of genAI tools shifts knowledge workers’ critical thinking skills in three main ways: from information gathering to verification, from problem-solving to integrating AI responses, and from task execution to task stewardship.

While genAI automates tasks such as information gathering, it also introduces new cognitive tasks, such as assessing AI-generated content and ensuring accuracy. That shift changes the role of workers from doing the work of research to overseeing results, with the responsibility for quality still resting on the human.

Pablo Rivas, assistant professor of computer science at Baylor University, said that while unchecked machine output risks skipping the hard mental work that sharpens problem-solving skills, AI doesn’t have to undermine human intelligence.

“It can be a boost if individuals stay curious and do reality checks. One simple practice is to verify the AI’s suggestions with outside sources or domain knowledge. Another is to reflect on the reasoning behind the AI’s output rather than assuming it’s correct,” he said. “With healthy skepticism and structured oversight, generative AI can increase productivity without eroding our ability to think on our own.”

A right way to use genAI?

To support critical thinking, organizations training workforces should focus on information verification, response integration, and task stewardship, while maintaining foundational skills to avoid overreliance on AI. The study highlights some limitations, such as potential biases in self-reporting and the need for future research to consider cross-linguistic and cross-cultural perspectives and long-term studies to track changes in AI use and critical thinking.

Research on genAI’s impact on cognition is key to designing tools that promote critical thinking, Tankelevitch said. Deep reasoning models are helping by making AI processes more transparent, allowing users to better review, question, and learn from their insights.

“Across all of our research, there is a common thread: AI works best as a thought partner, complementing the work people do,” Tankelevitch said. “When AI challenges us, it doesn’t just boost productivity; it drives better decisions and stronger outcomes.”

The Carnegie Mellon-Microsoft study isn’t alone in its findings. Verbal reasoning and problem-solving skills in the US have been steadily dropping, according to a paper published in June 2023 by US researchers Elizabeth Dworak, William Revelle and David Condon. And while IQ scores had been increasing steadily since the beginning of the 20th century — as recently as 2012, IQ scores were rising about 0.3 points a year — a study by Northwestern University in 2023 showed a decline in three key intelligence testing categories.

All technology affects our abilities in various ways, according to Gold. For example, texting undermines the ability to write proper sentences, calculators reduce long division and multiplication skills, social media affects communication, and a focus on typing has led to neglecting cursive and signature skills, he noted.

“So yes, AI will have effects on how we problem solve, just like Google did with our searches,” Gold said. “Before Google, we had to go to the library and actually read multiple source materials to come up with a concept, which required our brain to process ideas and form an opinion. Now it’s just whatever Google search shows. AI will be the same, only accelerated.”

Net neutrality under Trump? Not so neutral

Even before President Donald J. Trump returned to office last month, net neutrality took a punch to the jaw. On Jan. 2, the US Court of Appeals for the Sixth Circuit struck down the Federal Communications Commission’s (FCC) net neutrality rules.

Oh well, it was nice while it lasted.

The latest set of rules, the FCC’s 2024 “Safeguarding and Securing the Open Internet Order,” would have established the three rules of net neutrality:

  • No blocking: Broadband providers may not block access to legal content, applications, services, or non-harmful devices.
  • No throttling: Broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
  • No paid prioritization: Broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing the content and services of their partners.

There’s nothing new about these rules. They’ve been the cornerstone of the internet you’ve known and used for decades. In 1992, the Commercial Internet Exchange (CIX) brought the first Internet Service Providers (ISPs) together to agree to let traffic run back and forth between them without restrictions. The rules they adopted would become what we call net neutrality. 

It only makes sense, right? As Jessica Rosenworcel, former FCC chairperson and a Democrat, said: “Consumers across the country have told us again and again that they want an internet that is fast, open, and fair.”

In a way, the court decision doesn’t matter. With Trump back in charge, there was no way net neutrality would survive. 

After all, the Republicans argue, we can trust ISPs to do the right thing for their customers. As Brendan Carr, current FCC chairperson and a Republican, crowed: “[The January] decision is a good win for the country. Over the past four years, the Biden Administration has worked to expand the government’s control over every feature of the Internet ecosystem. You can see it in the Biden Administration’s efforts to pressure social media companies into censoring the free speech rights of everyday Americans.”

Funny that. Since Carr took over as chairperson, he’s launched investigations of leading American media companies and organizations such as NPR, PBS, Disney, CBS, NBC, and Comcast. Why? Because they’re not kowtowing to Trump and they’ve broadcast news that annoys him.

Nothing is surprising about this. Before Trump was elected again, he and his pack of billionaire buddies were already threatening to revoke network TV broadcast licenses because they didn’t like their news coverage. Carr, of course, is all in favor of this; as he said in a pre-election interview, “The law is very clear. The Communications Act says you have to operate in the public interest. And if you don’t, yes, one of the consequences is potentially losing your license.” 

He then listed ABC, NBC, and CBS — but not Fox for some curious reason — as potentially running afoul of his take on the Communications Act of 1934, from which the FCC derives its authority. 

As Nilay Patel, editor-in-chief of The Verge, recently wrote: “The FCC is pretty much the only government agency with some authority to directly regulate speech in America because it controls the spectrum used to broadcast radio and television. Carr has started using that authority to punish broadcasters for speech Trump doesn’t like or even for having internal business practices that don’t align with the administration.”

Aside from the national networks, there’s nothing saying Carr, directed by Trump’s sidekick Elon Musk, couldn’t restrict independent social networks such as Bluesky, Counter.social, and Mastodon while leaving X, Threads, and Truth.Social to do what they want.

This could be done, for example, by abusing Section 230 of the Communications Decency Act. In Project 2025‘s FCC section, which Carr authored, he stated: “FCC should work with Congress to ensure that anti-discrimination provisions are applied to Big Tech — including ‘back-end’ companies that provide hosting services and DDoS protection. Reforms that prohibit discrimination against core political viewpoints are one way to do this.” 

Core political viewpoints, in this case, means, of course, pro-Trump speech. What this might look like is charging Universal Service Fund fees to non-Trump-friendly network owners.

Speaking of money and networks, Carr also happens to be a big satellite internet supporter. We all know, of course, that Musk’s Starlink is the only major satellite ISP.   

What all this means for you is you can expect ISP fees to go ever higher and for there to be even less choice between ISPs in your neighborhood. Of course, that’s mostly the same old, same old, I’m sorry to say. The internet under Trump will come with more restrictions on news and, in all likelihood, even what you can say about the news.

Freedom of news and speech depends on a free Internet; under the current regime, we’re already losing it. 

For February’s Patch Tuesday, Microsoft rolls out 63 updates

Microsoft released 63 patches for Windows, Microsoft Office, and developer platforms in this week’s Patch Tuesday update. The February release was a relatively light update, but it comes with significant testing requirements for networking and remote desktop environments. 

Two zero-day Windows vulnerabilities patched this month (CVE-2025-21391 and CVE-2025-21418) have been reported as exploited, and another Windows vulnerability (CVE-2025-21377) has been publicly disclosed — meaning IT admins get a “Patch Now” recommendation for this month’s Windows updates. (All other Microsoft platforms can be handled with a standard update schedule — and there were no updates for Microsoft Exchange and SQL Server.)

To navigate these changes, the team from Readiness has provided a detailed infographic exploring the deployment risks.

(For information on the last six months of Patch Tuesday releases, see our round-up here.)

Known issues 

Microsoft identified three ongoing issues this month, affecting users of Windows 10 and 11, Windows Server 2022, and Citrix environments:

  • Windows 10/11 and Server 2022: Enterprise Windows customers have been reporting SSH connection failures since the October 2024 update. Microsoft is investigating but has published no fixes or mitigations. The issue is a challenge for Microsoft because the service failure generates no logs or error messages.
  • Citrix: Microsoft’s January updates — and potentially this month’s releases — are still affected by the Citrix Session Recording Agent (SRA) preventing the successful installation of Microsoft patches. This is an ongoing issue with no fixes yet, though we expect the number of users affected is much lower than the SSH service issue.
  • Microsoft’s System Guard Runtime Monitor Broker Service (SGMBS) may be causing system-level crashes and Event Viewer telemetry issues since last month’s Patch Tuesday release. Microsoft technical support has offered a registry-level change to update the service and mitigate the issue. We expect an update from Microsoft later this month on a more permanent resolution.

Major revisions and mitigations

As of Feb. 14, the Readiness team had not received any published revisions or updates. Microsoft did offer a mitigation for a serious vulnerability in Microsoft Outlook (CVE-2025-21298), though the advice is perhaps less helpful than you’d expect: Microsoft recommends viewing emails in plain text to mitigate this critical remote code execution (RCE) vulnerability, which could otherwise grant attackers control over the target system.

Windows lifecycle and enforcement updates

Microsoft published no enforcement updates this month, but the following products are nearing their end-of-service life cycles:

  • Windows 11 Enterprise and Education, Version 22H2 — Oct. 14, 2025
  • Windows Server Annual Channel, Version 23H2 — Oct. 24, 2025
  • Windows 11 Home and Pro, Version 23H2 — Nov. 11, 2025

Each month, the Readiness team provides detailed, actionable testing guidance for the latest Patch Tuesday updates, based on assessing a large application portfolio and offering a comprehensive analysis of the patches and their potential impact on Windows and application deployments.

For this cycle, we grouped the critical updates and required testing efforts into different functional areas, including:

Networking and Remote Desktop services

  • Winsock: Microsoft advises creating a multipoint socket (type c_root) and exercising it with bind, connect, and listen operations. The socket should then close successfully.
  • DHCP: Create test scenarios to validate Windows DHCP client operations: discover, offer, request, and acknowledgment (ACK). See the sketch after this list.
  • RDP: Ensure that you can configure Microsoft RRAS servers through netsh commands.
  • ICS: Ensure that Internet Connection Sharing (ICS) can be configured over Wi-Fi.
  • FAX/Telephony: Ensure that your test scenarios include TAPI (Telephony Application Programming Interface) initialization and shutdown operations. Since these tests require an extended runtime, allocate extra time for them.
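
Where a scripted smoke test is useful, the DHCP scenario above can be approximated in a few lines. The sketch below is our own minimal example, not Microsoft’s validation procedure; it assumes a Windows host, an elevated prompt, English-language ipconfig output, and an adapter name (“Ethernet”) that is purely illustrative.

    # Minimal DHCP client smoke test: a successful renew implies the full
    # discover/offer/request/ACK (DORA) exchange completed end to end.
    # Assumes Windows, an elevated prompt, and English-locale ipconfig output.
    import subprocess

    ADAPTER = "Ethernet"  # hypothetical adapter name; adjust for your machine

    def run(args):
        return subprocess.run(args, capture_output=True, text=True, check=False)

    run(["ipconfig", "/release", ADAPTER])
    renew = run(["ipconfig", "/renew", ADAPTER])

    if renew.returncode == 0 and "IPv4 Address" in renew.stdout:
        print("DHCP renew succeeded; lease acquired.")
    else:
        print("DHCP renew failed:", renew.stdout, renew.stderr)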

Local Windows File System and storage

  • Ensure that File Explorer correctly renders URL file icons. Microsoft recommends testing the Storage Sense clean-up tool. If disk quotas are enabled, confirm that all I/O workloads function as expected.

Local and domain security

  • Domain controllers should continue to support certificate logons after applying the updates.
  • Kerberos: Microsoft recommends creating authentication scenarios for domain-joined systems, using local and encrypted login methods. A simple scripted check follows below.
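
For the Kerberos scenario, one rough post-patch check on a domain-joined test machine is to confirm that tickets are still being issued, using the built-in klist tool. This is a minimal sketch under those assumptions, not Microsoft’s test procedure.

    # Rough Kerberos health check: list cached tickets with the built-in
    # Windows klist tool and look for a ticket-granting ticket (krbtgt).
    # Assumes a domain-joined machine and an interactive domain logon.
    import subprocess

    out = subprocess.run(["klist"], capture_output=True, text=True).stdout
    if "krbtgt" in out:
        print("TGT present; Kerberos logon path looks healthy.")
    else:
        print("No TGT found; investigate certificate and Kerberos logons.")
        print(out)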

If you have the time and resources (VMs and networking), the Readiness team strongly recommends building a test Remote Desktop environment that includes a connection broker, remote desktop gateway, and remote desktops on virtual machines. After setting up each component, verify that all RDP connections are established successfully.

This month, testing Microsoft’s ICS functionality requires an extended test plan covering the following areas:

  • Usability testing: Create test scenarios to verify that the process of enabling/disabling ICS functions as expected.
  • Validation: Microsoft recommends confirming that Network Address Translation (NAT) correctly translates private IP addresses to the address of the shared connection. See the sketch after this list.
  • Security: Ensure that ICS traffic adheres to existing firewall rules and does not create unintended security risks.
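
For the NAT validation step, one lightweight approach, offered as a sketch rather than Microsoft’s procedure, is to run a script on a client behind the ICS host and compare its private interface address with the public address an external echo service reports. The api.ipify.org service used here is an illustrative choice and requires outbound internet access.

    # Sketch: confirm NAT is in effect for a client behind the ICS host by
    # comparing the private interface address with the public address seen
    # by an external echo service (api.ipify.org is illustrative).
    import socket
    import urllib.request

    def local_ip():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect(("8.8.8.8", 80))  # no traffic sent; just selects a route
            return s.getsockname()[0]
        finally:
            s.close()

    public = urllib.request.urlopen("https://api.ipify.org", timeout=10).read().decode()
    private = local_ip()
    print("private =", private, "public =", public)
    print("NAT translation observed." if private != public else "No translation observed.")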

Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings: 

  • Browsers (Microsoft IE and Edge) 
  • Microsoft Windows (both desktop and server) 
  • Microsoft Office
  • Microsoft Exchange and SQL Server 
  • Microsoft Developer Tools (Visual Studio and .NET)
  • Adobe (if you get this far) 

Browsers

Microsoft released a larger-than-normal number of patches for the Edge browser this month — 10, all rated important. These updates are a mix of Chromium (CVE-2025-0444, CVE-2025-0445, and CVE-2025-0451) and Edge patches that deal with memory-related security vulnerabilities. All of these low-profile changes can be added to your standard release calendar.

Microsoft Windows

These areas have been updated with two critical patches and 35 important patches this patch cycle:

  • Win32 and Kernel Services
  • Remote Desktop, RAS, and Internet Connection Sharing (ICS)
  • Kerberos, DHCP, and Windows Networking
  • Microsoft Active Directory and Windows Installer

Though the Windows NTLM patch (CVE-2025-21377) has been rated important, it has been publicly disclosed. Two more updates (both rated important) affecting storage (CVE-2025-21391) and networking (CVE-2025-21418) have reportedly been exploited in the wild. These reports raise the stakes for an otherwise low-profile Windows update, so the Readiness team recommends a “Patch Now” schedule for these.

Microsoft Office

Microsoft released a single critical update for Microsoft Excel and nine more rated important for Microsoft Office and the SharePoint platforms. None of these vulnerabilities have been reported as exploited or publicly disclosed, so add these Office updates to your standard release calendar.

Microsoft Exchange and SQL Server

No updates were released for either Microsoft Exchange or SQL Server this month. 

Developer Tools

Microsoft released four updates for Microsoft Visual Studio, all of which are rated important. One of these updates (CVE-2023-32002) may look a little odd because the date refers to 2023, not 2025; however, it appears legitimate. Though it has been categorized under Microsoft’s Visual Studio product grouping, the patch resolves a vulnerability in Node.js. Add these updates (even the funny-looking ones) to your standard developer release schedule.

Adobe (and 3rd party updates)

Microsoft did not push out any Adobe updates. However, HackerOne required a patch to the developer framework Node.js to resolve a network-related vulnerability (CVE-2025-21418).

Arm secures Meta as first customer in chip push, challenging industry giants

In a landmark shift, Arm has secured Meta as the first major customer for its internally designed server CPUs, a move that signals its entry into direct chip sales and places it in direct competition with its biggest customers, including Qualcomm and Nvidia.

The company, known for licensing its chip designs to industry heavyweights like Apple, Nvidia, and Qualcomm, is now stepping directly into the silicon market, a move that could put it in direct competition with the very customers it once served.

This strategic shift marks one of the most significant transformations in Arm’s history, potentially destabilizing long-standing partnerships and reshaping the power dynamics within the semiconductor industry, the Financial Times reported.

Meta bets on Arm’s first server chip, raising questions for IT buyers

Arm’s first internally designed semiconductor is expected to be a server CPU aimed at the data center market, with Meta as its first major customer. This would mark a direct challenge to Intel and AMD, the long-standing leaders in server chip manufacturing. If successful, Arm’s entry into the data center CPU space could disrupt the traditional x86-based server ecosystem, which has historically been dominated by Intel.

Arm is now directly competing with one of its biggest customers, Qualcomm, for data center CPU deals. Qualcomm was in discussions with Facebook’s parent company, Meta, to supply processors based on Arm’s architecture. However, Arm has already secured at least part of the deal, marking a major shift from its role as a neutral IP supplier to an active market player, reported Reuters.

While discussions between Meta and Qualcomm are ongoing, Arm’s move raises concerns among enterprise customers who now face the possibility of competing with the very company they rely on for chip designs.

A spokesperson for Arm declined to comment on the matter.

Hiring from customers and entering the market

Arm has started recruiting executives from its own licensees, signaling a strategic transformation. Arm is actively hiring talent to expand beyond designing processor architecture to also selling its own silicon, with a focus on AI-powered data center chips and other applications, reported Reuters, citing sources familiar with the matter.

Arm’s strategic transformation is not just about hiring from licensees; it represents a fundamental shift in its business model. The company, which has long dominated the smartphone processor market, is now focusing on high-performance computing (HPC) and AI-driven chips for data centers. While Arm will design its own semiconductors, it will continue outsourcing production to foundries like TSMC, a move that aligns with the business models of fabless chip companies like Nvidia, the FT report added.

A shift that could reshape the industry

“Near-term mass migration away from Arm seems unlikely due to its established ecosystem and the complexity of shifting architectures,” said Rachita Rao, senior analyst at Everest Group. “However, companies like Qualcomm are already exploring alternatives such as RISC-V, and some firms have begun in-house efforts to reduce reliance on Arm. While some players might transition away, Arm remains the primary architect of these chips, with differentiation largely occurring at the SoC design level.”

“SoftBank’s potential acquisition of Oracle-backed chip designer Ampere could further accelerate Arm’s efforts in this segment,” Rao added. “The Meta deal lends credibility to Arm’s push into chip manufacturing, but while the company has financial and technical backing, it will take time to reach the level of established competitors. Even existing players are struggling to keep up with Nvidia.”

Arm’s business shift mirrors Nvidia’s model, where chip designs are developed in-house but actual manufacturing is outsourced to foundries like TSMC. This approach allows Arm to enter new markets while reducing capital expenditure on chip fabrication. However, the move could create tensions with long-time partners like Apple, Qualcomm, and Nvidia, who must now consider whether their reliance on Arm’s technology puts them in direct competition with it.

“Arm already holds a near-monopoly in certain semiconductor IP segments, and regulators closely scrutinize its licensing policies and potential acquisitions,” Rao noted. “If Arm expands further into direct chip sales, regulators may require a clear separation between its IP licensing and chip manufacturing divisions. Any perceived preference for its own products or sudden licensing fee hikes could invite antitrust investigations.”

“As AI chip development accelerates, chipset makers will likely pursue both backward and forward integration, aiming to control more of the design and development process while still relying on foundries for manufacturing,” said Faisal Kawoosa, founder and lead analyst at Techarc. “Arm’s move into chipmaking is a natural response to this trend, but it also introduces challenges. Competing with firms like Nvidia and Qualcomm requires more than just strong design expertise—it demands deep market knowledge, customer relationships, and extensive front-end integration, areas where its competitors currently have an edge.”

Potential challenge to Nvidia in AI chips

Beyond server CPUs, Arm is setting its sights on the booming AI chip market, where Nvidia is currently the dominant player. Arm is also a part of SoftBank’s Stargate initiative, a large-scale project to develop AI-focused data centers in the US in collaboration with OpenAI and Oracle.

If Arm moves aggressively into AI hardware, it could challenge Nvidia’s stronghold on AI-specific GPUs, a sector currently experiencing exponential demand due to advancements in generative AI.

In addition, Arm and its parent company, SoftBank, are working with Broadcom to develop a custom-built AI chip for SoftBank’s data centers. The project is estimated to be worth as much as $30 billion in revenue for Broadcom, Reuters reported, citing a research note from JP Morgan analyst Harlan Sur.

The note further suggests that if Arm aggressively moves into AI hardware, it could position itself as a direct competitor to chip giants such as Nvidia and AMD. While Arm has not publicly confirmed these ambitions, its efforts to recruit top chip executives and win strategic deals suggest a clear intent to expand beyond its traditional licensing business.

For decades, Arm has been seen as a neutral provider of chip design technology, licensing its IP to major semiconductor firms without directly competing with them. That model is now changing, and the ripple effects could be significant.

If Arm continues to expand into chip sales, enterprise customers may need to rethink their reliance on its technology. Companies that once viewed Arm as a partner may now see it as a rival, potentially reshaping the semiconductor market and altering supply chain strategies across the industry.

Queries seeking comment from Qualcomm and Meta remain unanswered.

Apple and the big store

Apple has made a small but significant move by introducing its Apple TV app to Android. It might seem like a minor step, but it marks a major pivot in strategy as the company expands its services beyond its own ecosystem. Over the years, rumors and recruitment ads have suggested it has been putting this plan together for some time.

Outside the walled garden

That all this speculation has become reality shouldn’t be a big surprise. It’s not as if Android is the first platform to see Apple services support: Windows has many, including iCloud, Music, and TV — and Apple Music is already available on Android.

What this means for most of us is limited: It means all the movies and TV shows you’ve purchased from Apple can be accessed on your Android device, which will also stream the full TV+ catalog. It also opens the doors to potential new subscribers to Apple’s growing selection of sports content, at present including Major League Soccer and Friday Night Baseball. Given that Apple was also in the running to pick up streaming rights for key soccer leagues, you should not underestimate the breadth of its ambition in sports entertainment. 

What Apple has also done with this move is weaken arguments against its traditional “walled garden” for services.

  • It isn’t forcing vendor lock-in through your purchased movie collection anymore. 
  • It means switchers can access the Apple services they have become accustomed to. 
  • It means potential Android to iPhone switchers can dip into Apple’s content services during their migration.

Content is king — and available for a fee

Apple TV on Android also hints at the future. You see, as Apple is forced to open its own ecosystem to competitors, it is also being forced to intensify the degree to which it competes against those competitors.

That means Apple Music is now in an all-platform competition with Spotify; and in the future it will also mean Apple TV+ has to compete with other streaming services. 

While TV+ arguably lacks a deep enough library of content to compete effectively, it’s plausible Apple might choose to widen its content library now that its service is available on multiple devices and platforms.

Licensed content could bolster the company’s own unique offerings and be made immediately available to a potential audience of billions. Apple has experimented with this; it licensed a catalog of 50 movies for showing in the US last year, and now has a licensing team in place.

Roblox for tiny humans

While doing so would be highly complex from a development point of view, Apple has another service it could potentially bring across to Android: Arcade.

Apple Arcade is a collection of casual games made available free to subscribers, built to work across Apple’s platforms (including Apple TV). Its big advantages include a distinct lack of built-in data trackers and info stealers and a sensible approach to advertising that means parents aren’t forever claiming refunds or coughing up cash as their kids “accidentally” purchase in-game currencies.

Combined with a decent selection of professionally produced content, Arcade has plenty of potential — all it needs is its Ted Lasso or iPhone moment, a game so popular and pervasive that gamers on all platforms want it. Think of something better than Roblox, and more wholesome.

While Apple waits for that game to appear, it could offer up Arcade to other platforms, creating an ecosystem for game discovery and purchase that competes directly with those forcing it to open its platforms up to them. It could then be in the catbird seat once it finds its pervasive gaming hit.

One more thing

For all the criticism it gets, the enduring success of the App Store shows there is a substantial public appetite for curated apps and services. People are hungry for games, apps, and services that meet trust and quality standards.

Given this is true, perhaps Apple could expand its App Store to distribute strictly vetted software and services for other platforms, including those from competitors. 

While unlikely, one day the most popular version of Fortnite might be the one sold via the App Store with an Apple imprimatur to denote verified trust and security. Perhaps you’ll visit the Apple App Store to get your Windows and Android software, confident it has been put through strict quality and security testing. I imagine IT would be pleased with that extra layer of verification, particularly in regulated industries.

After all, as Apple’s entire history shows, if you can’t beat them, you join them.  You just do it better.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Apple’s emotional lamp and the future of robots

Pixar Animation Studios has an unusual logo. The basic logo is the word “Pixar.” But sometimes, an animated lamp named Luxo Jr. hops into the frame and jumps on the letter “i.” The lamp exudes personality and represents Pixar’s ability to turn any object into a compelling character.

Inspired by Luxo Jr., Apple’s Machine Learning Research division decided to create a personality-expressive lamp of its own. Apple’s ELEGNT research project explores what’s possible with an expressive physical user interface for non-humanoid robots.

Based on the situation and context of the user, as well as voice interaction, gestures and touch, the lamp can appear to express itself through a variety of movements, including nodding or shaking its “head,” lowering its head to convey sadness, “tail wagging” to signify excitement, “sitting down” to imply relaxation, head tilting to show curiosity, leaning forward to show interest, gazing to direct attention, adjusting speed and pausing to communicate attitudes and emotions, and moving forward or away to show interest or disinterest. 

It can do some of the things smartphone apps can do, but with a greater sense of fun. For example, smartphone apps can remind you to drink water, but the ELEGNT lamp can do this by physically pushing a cup of water toward you.

As you can see in this video, Apple’s project is fascinating. But, as far as I can tell, Apple, like every robot maker in Silicon Valley, loses the plot when dealing with any robot designed to simulate human communication.

In the paper, the researchers say: “The framework integrates function-driven and expression-driven utilities, where the former focuses on finding an optimal path to achieve a physical goal state, and the latter motivates the robot to take paths that convey its internal states — such as intention, attention, attitude, and emotion — during human-robot interactions.”
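Set the mind-reading language aside for a moment, and that sentence describes a fairly standard planning objective. In rough notation (mine, not the paper’s), the lamp picks the trajectory \pi that maximizes a weighted sum of two scores:

    \pi^{*} = \arg\max_{\pi} \big[\, U_{\mathrm{function}}(\pi) + \lambda\, U_{\mathrm{expression}}(\pi) \,\big]

Here, U_function rewards progress toward the physical goal (point the light over there), U_expression rewards paths that read legibly as gesture (the pause, the tilt, the lean), and \lambda is a trade-off weight; how the paper actually balances the two terms may differ from this sketch.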

Did you catch the lie (or worse, a possibly self-delusional claim)? They’re falsely saying that their expression-driven utilities “motivate” the lamp to convey its “internal states,” and among those internal states is “emotion.” 

They toss out the falsehood with shocking casualness, considering how big the statement is and how formal the research paper is. If Apple had actually invented a lamp that can feel emotions, that would be the computer science event of the century, a singularity of world-historic import. It would challenge our laws and our definition of sentience, reopening religious and philosophical questions that have been settled for 10,000 years.

(I’ve reached out to Apple for comment on this point, but haven’t heard back.) 

It’s clear that Apple’s lamp is programmed to move in a way that deludes users into believing that it has internal states it doesn’t actually have.

(I admire Apple’s research; I don’t understand why companies lie about humanoid robotics and play make-believe in their research papers about what’s going on with their robots. In the future, it will be hard enough for people to understand the nature of AI and robotics without the researchers lying in formal, technical research papers.)

But if you ignore the lie, Apple’s lamp research definitely sheds light on where our interaction with robots may be heading — a new category of appliance that might well be called the “emotional robot.”

A key component of the research was a user study comparing how people perceived a robot using functional and expressive movements versus one that uses only functional movements. 

The study found that movements incorporating expressive qualities boosted user “ratings,” especially during social-oriented tasks. But when users wanted some specific useful action to take place — for example, to shine light on an object so the user could take a picture of it — study participants found the lamp’s “personality” distracting. 

The researchers drew upon the concept of Theory of Mind, the human ability to attribute mental states to others, to help design the lamp’s movements. Those movements were intended to simulate intention, attention, attitude, and emotion. 

The movements aren’t specifically human; they’re the generic body language of a sentient mammal, whether a person, a monkey, or a dog.

The biggest takeaway from Apple’s ELEGNT research is likely that neither a human-like voice nor a human-like body, head, or face is required for a robot to successfully trick a human into relating to it as a sentient being with internal thoughts, feelings, and emotions. 

ELEGNT is not a prototype product; it is instead a lab and social experiment. But that doesn’t mean a product based on this research will not soon be available on a desktop near you. 

Apple’s emotional robot 

Apple is developing a desktop robot project, codenamed J595, and is targeting a launch within two years. According to reports based on leaks, the robot might look a little like Apple’s iMac G4, which had a lamp-like form factor with a screen at the end of a movable “arm.” The device would function like an Apple HomePod with a screen, but with additional intelligence courtesy of large language model-based generative AI.

The estimated $1,000 robot would provide a user interface for home smart products and doorbell cams, answer questions, display photos and incoming messages, and function as a camera and screen for FaceTime calls. 

But here’s the most interesting part. Although there’s no direct evidence for this claim, it makes sense for Apple to incorporate ELEGNT research into the desktop robot project. The robot is expected to move, lean, and tilt as part of its interaction with users. 

Apple’s next appliance might be an emotional robot. 

The consumer market for emotional robots

The idea of a consumer electronics product advertising “personality” through physical movements isn’t new. Among others, there are:

  • Jibo: A social robot with expressive movements and a rotating body.
  • Anki’s Cozmo: A small robot toy with a movable arm and LED eyes for emotional expression.
  • Sony Aibo: A robotic dog using its entire body to express emotions.
  • Kuri: A home robot using head tilts, eye expressions, and sounds for communication.
  • Lovot: A companion robot from Japan expressing affection through body movements.
  • Amazon Astro: A home robot with a periscope camera and digital eyes for engagement.

The last product on that list is worthy of an update, since I first mentioned it in 2021.

Amazon discontinued its Astro for Business program on July 3, 2024, less than a year after launch. The business robots were remotely deactivated by Amazon last Sept. 25, and now Amazon is exclusively focusing on Astro for consumers. 

The $1,599 consumer version of Astro, introduced in 2021, is still available (by invitation only).

The business market for emotional robots

No major company has tried emotional robots for business except Amazon, and it killed that program. 

Meanwhile, the European Union’s AI Act prohibits the use of AI systems for emotion recognition in workplaces or educational settings, except in cases of medical or safety necessity. This ban became effective on Feb. 2.

So, from a business, legal, and cultural standpoint, it appears that workplace appliances that can read your emotions and respond with gestures expressing fake emotions are not imminent.

We’ll see whether users bring their emoting Apple desktop robots or other emotional robots to the office. We could be facing a bring-your-own-emotional-robot movement in the workplace.

BYOER beware!

Your new Android notification superpower

It may seem like a paradox, but notifications are both the best and the worst part of owning an Android device.

On the one hand, notifications let us stay on top of important incoming info — be it a critical Slack message, a personal family text, or an email from a high-priority client or colleague.

On the other hand, man alive, can they be maddening — distracting at times, and ineffective at others, as when something significant comes in and you don’t notice it right away.

To be fair, Android’s got all sorts of smart systems for taming your notifications and making ’em more manageable and effective — both official and by way of crafty workaround. The software’s oft-overlooked notification channels make it easy to control specific sorts of notifications and turn down the noise on less important stuff. And just last week, we talked about a creative way to bring custom vibration patterns to any Android device so you can tell what type of info is alerting you without even having to glance at your screen.

But there’s still the issue of especially important info coming in and falling through the cracks. After all, it’s all too easy to miss a single incoming notification and then fail to notice it until hours later — when it might be too late.

Today, I’ve got a scrumptiously slick ‘n’ simple tool that can help. It’s a new Android notification superpower, and all you’ve gotta do is embrace it.

[Don’t stop here: Get my free Android Notification Power-Pack next and send your Android notification intelligence to soaring new heights.]

Android notifications, amplified

The tool I want to tell you about is an easy-as-can-be way to amplify especially important notifications and make sure you always see ’em right away.

It does that primarily by creating a custom alarm of sorts for your highest-priority notifications — those coming from specific apps and/or with specific keywords in their bodies. When those conditions are met, the system vibrates your phone continuously until you acknowledge it and optionally makes an ongoing sound, too. That way, there’s zero chance you’ll overlook it.
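NotiAlarm’s own source code isn’t public, but tools in this category are typically built on Android’s standard NotificationListenerService API. Purely as a hypothetical sketch of that core loop (every class name, package, and keyword below is an illustrative assumption, not NotiAlarm’s actual code):

    // Hypothetical keyword-alarm listener, in the spirit of NotiAlarm or BuzzKill.
    // Assumes API 26+; needs a <service> manifest entry guarded by the
    // android.permission.BIND_NOTIFICATION_LISTENER_SERVICE permission, plus
    // user approval under Settings > Notifications > Notification access.
    import android.app.Notification
    import android.content.Context
    import android.os.VibrationEffect
    import android.os.Vibrator
    import android.service.notification.NotificationListenerService
    import android.service.notification.StatusBarNotification

    class KeywordAlarmListener : NotificationListenerService() {

        // Illustrative rule: which apps and keywords should escalate an alert.
        private val watchedPackages = setOf("com.Slack", "com.google.android.gm")
        private val keywords = listOf("urgent", "outage")

        override fun onNotificationPosted(sbn: StatusBarNotification) {
            if (sbn.packageName !in watchedPackages) return

            // Pull the visible title and body text out of the notification.
            val extras = sbn.notification.extras
            val text = listOf(Notification.EXTRA_TITLE, Notification.EXTRA_TEXT)
                .mapNotNull { extras.getCharSequence(it) }
                .joinToString(" ")

            // Invert this check to get an "all except this keyword" filter.
            if (keywords.any { text.contains(it, ignoreCase = true) }) {
                escalate()
            }
        }

        // Vibrate on a loop until something cancels it; the repeat index of 0
        // tells Android to restart the waveform from its first element forever.
        private fun escalate() {
            val vibrator = getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
            val pattern = longArrayOf(0, 600, 400) // delay, buzz, pause (ms)
            vibrator.vibrate(VibrationEffect.createWaveform(pattern, 0))
        }
    }

That looping waveform is the whole trick: a normal notification buzzes once, while this one keeps buzzing until you deal with it.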

You can even get incredibly nuanced with how and when those actions happen, if you want, and have the alarm active only during certain days and times. If you’re really feeling saucy, you can also have the app read certain notifications aloud when they come in as another way to ensure they catch your attention.

The app that makes all of this happen is a cool little creation called, fittingly enough, NotiAlarm. It’s a free download that’ll work on any Android device.

Now, notably, NotiAlarm does overlap with another tool we’ve talked about before — an extremely versatile power-user tool called BuzzKill that lets you create all sorts of crafty custom filters for your phone’s notifications. If you’re already using BuzzKill, you can accomplish these same sorts of feats with it, and you don’t need NotiAlarm in addition.

But fantastic as it is, BuzzKill is a bit complex. It falls more in the power-user camp, and it also costs four bucks to use. So all in all, it isn’t for everyone.

NotiAlarm, in contrast, is super-simple and also free. Even if you aren’t inclined to create an entire array of custom filters for your notifications, it does this one thing and does it well — and it’s remarkably easy to get going.

The app does have some mildly annoying ads throughout its configuration interface, but that’s it. You can opt to disable those and support the developer with a one-time $10 upgrade, if you want, but you don’t have to do that in order to put it to work.

Capisce? Capisce. Lemme show you how to get it up and running now, in a matter of minutes.

Your 2-minute Android notification upgrade

All right — here’s all there is to it:

  • First, download NotiAlarm from the Play Store (obviously, right?).
  • Open ‘er up, then follow the prompts to grant the app the various forms of access it needs.
    • NotiAlarm requires permissions to manage your notifications, display over other apps, and run in the background. Those permissions should all be fairly self-explanatory, and they’re absolutely necessary for what the app needs to do. Its privacy policy is clear about the fact that it doesn’t collect or store any personal data or share any manner of info with any third parties.
  • Once you’re on its main screen, tap the circular plus icon in the lower-right corner to configure your first alarm. That’ll take you to a screen that looks a little somethin’ like this:
[Screenshot: NotiAlarm’s configuration screen doesn’t take long at all to get through. Credit: JR Raphael, IDG]

  • Tap the plus sign next to the word “Keyword,” then type in whatever keyword you want to act as a trigger for your notification alarm. Maybe it’s a specific person’s name, a specific email address, or some specific term that you know demands your immediate attention. Whatever it is, type it in there, then tap the word “Add” to confirm and save it.
    • By default, NotiAlarm will trigger your alarm for any notifications that include your keyword. You can also, however, ask it to trigger the alarm for any notifications that don’t include the keyword — so in other words, for all notifications except those containing that keyword. If you’d rather go that route, tap the toggle next to “Keyword Filter Type” to switch its behavior.
[Screenshot: The “Keyword” field is the key to making your most important notifications unmissable. Credit: JR Raphael, IDG]

  • Next, tap the plus sign alongside the word “App” and select which app or apps you want to be included — Messages, Slack, Gmail, Calendar, or whatever the case may be.
[Screenshot: Once you’ve selected an app (or multiple apps), you’ll see the final setup for your new notification rule. Credit: JR Raphael, IDG]

  • Now, in the next box down, tap the toggle next to “Alarm” and configure exactly how you want your alarm to work.
    • You can activate and select a specific sound, via the “Alarm Sound” toggle.
    • Or you can stick solely with an ongoing vibration, via the active-by-default “Vibration” toggle.
    • If you want to limit the alarm to certain times, tap the toggle next to “Do Not Disturb Time Range.” And if you want to limit it to certain days, tap the day names under “Repeat Days.” Otherwise, just ignore those fields. (Curious what those limits boil down to behind the scenes? See the quick sketch after this list.)
[Screenshot: You’ve got ample options for exactly how and when you want your notification alarm to activate. Credit: JR Raphael, IDG]
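For the curious: day and time limits like those boil down to a simple gate the app checks before escalating anything. Another hypothetical sketch, in the same illustrative spirit as before (whether a configured range allows or suppresses the alarm is an app-level detail; this shows an allow-window):

    // Hypothetical day/time gate behind options like "Repeat Days" and a
    // time range; illustrative only, not NotiAlarm's actual code.
    import java.time.DayOfWeek
    import java.time.LocalDateTime

    data class ActiveWindow(
        val days: Set<DayOfWeek>, // days on which the alarm may fire
        val startHour: Int,       // inclusive, 24-hour clock
        val endHour: Int,         // exclusive, 24-hour clock
    )

    // The alarm escalates only when the current moment falls inside the window.
    fun ActiveWindow.permits(now: LocalDateTime): Boolean =
        now.dayOfWeek in days && now.hour in startHour until endHour

    fun main() {
        val weekdays9to6 = ActiveWindow(
            days = setOf(
                DayOfWeek.MONDAY, DayOfWeek.TUESDAY, DayOfWeek.WEDNESDAY,
                DayOfWeek.THURSDAY, DayOfWeek.FRIDAY,
            ),
            startHour = 9,
            endHour = 18,
        )
        println(weekdays9to6.permits(LocalDateTime.now()))
    }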

And hey, how ’bout that? For most purposes and scenarios, you should now be set! If you want to explore some other options — such as having a notification automatically read aloud, automatically marking a notification as read, or automatically replying to a message-oriented notification with some prewritten response — look a little lower on that same screen.

Otherwise, just tap the “Save” text in the upper-right corner, and that’s it: Your new alarm is now active. And you’ll see it with an active toggle on NotiAlarm’s main screen.

[Screenshot: A NotiAlarm notification alarm in its final, fully configured state. Credit: JR Raphael, IDG]

Now, anytime a notification comes in that meets the conditions you specified, your phone will do exactly what you asked — and an important alert will never go unnoticed again.

👉 NEXT: Snag my free Android Notification Power-Pack to discover six especially awesome enhancements that’ll take your Android notification intelligence to the next level.