Month: September 2024

Now that Qualcomm’s interested, will Apple buy (a little more) Intel?

Qualcomm is allegedly sniffing at the beleaguered remains of Intel and may try to acquire parts of the company. Will Apple make its own counterproposal?

Apple’s decision to abandon Intel processors in favor of Apple Silicon reflected a wider malaise. Try as it might, Intel found itself unable to accelerate processor development to the extent Apple could with its ARM-based Apple Silicon chips, and what began as a jubilant relationship expired. Intel had saved Apple from the PowerPC chip disaster, but it couldn’t keep pace with modern mobile processors.

The rest, as they say, is history.

Intel shudders, Qualcomm ascends

Qualcomm, meanwhile, knows a good idea when it sees one and has been dancing in Apple’s shadow with its own move to manufacture ARM-based processors. Those fast chips are picking up vendor sales at Intel’s expense. 

But it seems Qualcomm wants to take things a step further, which is why it has been exploring the possibility of acquiring parts of Intel’s design business, particularly the PC design business. 

A deal has not been reached — Reuters tells us Intel says it is “deeply committed” to its PC business — and Qualcomm hasn’t approached Intel to discuss its plans. In other words, all or none of this could be true.

The Apple connection

For context, Intel is battling tough headwinds that have prompted deep layoffs and a pause in dividend payments. The company’s PC client business declined 8% last year, reflecting weak PC sales across the board.

Apple’s Mac sales bucked this trend, increasing 20.8% year over year in Q2 against average PC industry growth of 3%, growth that was itself mostly attributable to the extra million Macs Apple sold.

With Apple expected to introduce incredibly performant M4 Macs this side of Christmas, all of them capable of running Apple Intelligence with built-in AI support, Cupertino is counting on its PC sales growth to continue. The company is unique in offering a completely compatible range of AI-supporting products in every key form factor (Mac, tablet, smartphone); no one else has this.

Qualcomm competes

Qualcomm wants a slice of that market, too. Its newly introduced Snapdragon processors are winning praise across the PC media for their low power use and high performance (though these still lag behind Apple Silicon in many respects).

All the same, as a business it may well have learned from Apple’s integrated approach to product design. A strategic Intel acquisition would give it an opportunity to begin building its own platform ecosystem, or at least to make additional cash through hardware sales on its own account. It’s worth noting, though, that most AI-capable PCs cost more than some of Apple’s systems (e.g., the Mac mini) that can do the same work.

The myth that PCs are cheaper is an enduring one, but you get what you pay for, and CrowdStrike showed us the risks of that platform.

But why wouldn’t Qualcomm want to grab a larger slice of the PC industry pie?

More than a modem

There is a clear competitive relationship between Apple and Qualcomm. Not so long ago, Apple settled outstanding litigation between the two companies in order to begin using Qualcomm’s 5G chips in its devices. The iPhone manufacturer had hoped to build its own 5G modems with the help of Intel, but that plan didn’t bear fruit.

In the end, Apple acquired Intel’s modem design unit and a big bucket full of related mobile patents for a billion dollars. It’s fair to say modem development has proved a struggle, but Apple is now expected to introduce its first 5G modems as soon as 2025.

When it does, Apple will no longer be dependent on Qualcomm.

But, given that Qualcomm has its own chip design talent, will Apple want it to emerge as a hardware competitor? What is the risk that some key patents used by Apple might suddenly migrate from Intel to Qualcomm if such a deal takes place? After all, one of the key disputes between the two firms has been over patent licensing costs.

All of this is speculation, of course: Qualcomm may never bid for Intel’s PC business. But if it does, and if Apple doesn’t like it, then it may be instructional to note that Qualcomm’s market cap currently stands at $182 billion, in contrast to Intel’s $82.7 billion and Apple’s eye-watering $3.38 trillion.

That difference in financial capital hints at what could be a dramatic bidding war, but it almost certainly suggests regulatory investigation for whoever seals the deal, if such an event happens at all.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

European AI treaty adds uncertainty for CIOs, but few specifics

An AI usage treaty, negotiated by representatives of 57 countries, was unveiled Thursday, but its language is so broad that it’s unclear whether enterprise CIOs will need to do anything differently to comply.

This mostly European effort adds to a lengthy list of global AI compliance efforts, on top of many new legal attempts to govern AI in the United States. The initial signatories were Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and the United Kingdom, as well as Israel, the United States of America, and the European Union.

In its announcement, the Council of Europe said, “there are serious risks and perils arising from certain activities within the lifecycle of artificial intelligence such as discrimination in a variety of contexts, gender inequality, the undermining of democratic processes, impairing human dignity or individual autonomy, or the misuses of artificial intelligence systems by some States for repressive purposes, in violation of international human rights law.”

What the treaty says

The treaty, dubbed the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, did emphasize that companies must make it clear to users whether they are communicating with a human or an AI.

Companies under the treaty must give “notice that one is interacting with an artificial intelligence system and not with a human being” as well as “carry out risk and impact assessments in respect of actual and potential impacts on human rights, democracy and the rule of law.”

Entities must also document everything they can about AI usage and be ready to make that information available to those affected. The agreement says that entities must “document the relevant information regarding AI systems and their usage and to make it available to affected persons. The information must be sufficient to enable people concerned to challenge the decision(s) made through the use of the system or based substantially on it, and to challenge the use of the system itself” and to be able to “lodge a complaint to competent authorities.”

Double standard

One observer of the treaty negotiation process, Francesca Fanucci, a legal specialist at ECNL (European Center for Not-for-Profit Law Stichting), described the effort as having been “watered down,” mostly in its treatment of private companies and national security.

“The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability,” she told Reuters.

The final document does explicitly exclude national security matters: “Matters relating to national defence do not fall within the scope of this Convention.”

In an interview with Computerworld, Fanucci said that the final version of the treaty treats businesses very differently than governments.

The treaty “establishes obligations for State Parties, not for private actors directly. This treaty imposes on the State Parties to apply its rules to the public sector, but to choose if and how to apply them in their national legislation to the private sector. This is a compromise reached with the countries who specifically asked to have the private sector excluded, among these were the US, Canada, Israel and the UK,” Fanucci said. “They are practically allowed to place a reservation to the treaty.”

“This double standard is disappointing,” she added.

Lack of specifics

Tim Peters, an officer of compliance firm Enghouse Systems in Canada, was one of many who applauded the idea and intent of the treaty while questioning its specifics.

“The Council of Europe’s AI treaty is a well-intentioned but fundamentally flawed attempt to regulate a rapidly evolving space with yesterday’s tools. Although the treaty touts itself as technology-neutral, this neutrality may be its Achilles’ heel,” Peters said. “AI is not a one-size-fits-all solution, and attempting to apply blanket rules that govern everything from customer service bots to autonomous weapons could stifle innovation and push Europe into a regulatory straitjacket.”

Peters added that this could ultimately undermine enterprise AI efforts. 

“Enterprise IT executives should be concerned about the unintended consequences: stifling their ability to adapt, slowing down AI development, and driving talent and investment to more AI-friendly regions,” Peters said. “Ultimately, this treaty could create a competitive divide between companies playing it safe in Europe and those pushing boundaries elsewhere. Enterprises that want to thrive need to think critically about the long-term impact of this treaty, not just on AI ethics, but on their ability to innovate.”

Another industry executive, Trustible CTO Andrew Gamino-Cheong, also questioned the agreement’s lack of specifics.

“The actual contents of the treaty aren’t particularly strong and are mostly high level statements of principles. But I think it’s mostly an effort for countries to unify in asserting their rights as sovereign entities over the digital world. For some context on what I mean, I see what’s happening with Elon Musk and Brazil as a good example of the challenges governments face with tech,” Gamino-Cheong said. “It is technologically difficult to block Starlink in Brazil, which can in turn allow access to X, which is able to set its own content rules and dodge what Brazil wants them to do. Similarly, even though Clearview AI doesn’t legally operate in the EU, their having EU citizens’ data is enough for GDPR lawsuits against them there.”

Ernst & Young managing director Brian Levine addressed questions about the enforceability of the treaty, especially for companies in the United States, even though the US was one of the signatories. It is not uncommon for American companies to ignore European fines and penalties.

“One step at a time. You can’t enforce shared rules and norms until you first reach agreement on what the rules and norms are,” Levine said. “We are rapidly exiting the ‘Wild West’ phase of AI. Get ready for the shift from too little regulation and guidance to too much.”

The treaty will enter into force “on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it,” the announcement said. 

GenAI could make the Apple Watch a powerful healthcare tool

Generative AI (genAI) features added to an existing Apple Watch health app may light the path toward personalized and data-led healthcare for patients with Parkinson’s disease. The StrivePD app is made by Rune Labs, a California-based entity focused on delivering next-generation care for people with neurological disorders.

StrivePD has been enhanced with new genAI-created clinical reporting tools that provide in-depth data on a patient and the progression of their disease, and it delivers personalized educational content to patients, caregivers, and clinicians to improve outcomes.

What is StrivePD and how does it help?

The reports, which Rune Labs says are HIPAA-compliant and are shared with patients via email, are structured so patients get good insight into where they are with the disease, including summaries of their medication compliance, exercise, and symptom fluctuations. The app also delivers coaching in the form of exercise suggestions and tips around sleep patterns, drawing on data gathered by the Apple Watch (along with information shared by the patient).

In theory, the combined solution should help patients while also equipping medical professionals with deeper information they can use to guide treatment. 

It could even enable Parkinson’s patients to access care in the first place. “The unfortunate reality is there is a structural shortage of specialists who can treat Parkinson’s, and the problem is getting worse,” said Rune Labs CEO Brian Pepin. “Most Parkinson’s patients struggle to get adequate access to care.”

Changing lives, one focused LLM at a time

It should be noted that the Rune Labs solution was given the go-ahead by the US Food and Drug Administration (FDA) in 2022 to collect patient symptom data through measurements made by Apple Watch.

This makes it a recognized solution that could in the future become a poster child for the potential of genAI to deliver life-changing health benefits when deployed in such focused domains. (Turns out there’s a lot more to genAI than automating job applications and creating amusing images — data analysis at this level could yield profound benefits in terms of healthcare results and patient autonomy.)

Apple should be looking at this

I’d be very, very surprised if Apple’s health teams were not themselves already exploring ways in which to combine the data gathered by their own sensors and services with focused large language models (LLMs) to provide similar benefits. It’s a natural progression from the accurate exercise tracking tools the company has already deployed, including but not limited to swimming and wheelchair activity sensors.

The existence of that kind of highly personalized data, combined with the connection that already exists between Apple’s devices and patient medical data, opens up interesting possibilities for LLM-augmented health services that extend beyond Apple Fitness.

In that sense, the Rune Labs announcement could presage future health-related services that combine genAI with the vast quantity of personal data Apple’s ecosystem already gathers.

What’s happening in Apple R&D?

Apple CEO Tim Cook has frequently claimed that Apple will in the end be remembered for the work it is doing in health. Given that the entire company now has its shoulder to the wheel in the push to put AI in everything, it is unlikely its health teams aren’t at least trying to book some internal R&D time to explore how it can be applied in that sector.

If the Rune Labs solution actually delivers on its promises, Apple’s health teams will at least have an argument to justify that investment. But Apple aside, tools like these that empower better patient care and encourage personal autonomy are among the bright spots for a technology so many people fear may be a dystopian fin de siècle. 

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Copilot+ PCs that use Intel and AMD chips coming in November

Copilot+ PCs — special Windows computers equipped with the latest AI functions — first arrived in May. But only hardware using Qualcomm Snapdragon X Elite or X Plus chips was designated as ready for Copilot+ by Microsoft.

As of November, new PCs that run processors from Intel and AMD will also be covered. Specifically, they’ll be equipped with Intel Core Ultra 200V or AMD Ryzen AI 300 chips, which are considered powerful enough to run compute-intensive AI functions.

More information about the upcoming Copilot+ PCs can be found on the Windows Experience blog.

The source code for Android 15 is now available to developers

Google has posted the source code for Android 15 on the Android Open Source Project (AOSP), which means developers around the world can now produce their own versions of the operating system if they wish.

More information, including the latest details on the Android Studio and Jetpack Compose tools, is available on the Android Developers Blog. (For those who prefer video, Spotlight Week offers a new clip about Android 15 daily.)

Android 15 is due to roll out to Pixel users in the next few weeks, followed by devices from Samsung, Honor, iQOO, Lenovo, Motorola, Nothing, OnePlus, Oppo, Realme, Sharp, Sony, Tecno, Vivo, and Xiaomi.

Australia pushes for AI rules, focusing on oversight and accountability

Australia has outlined plans for new AI regulations, focusing on human oversight and transparency as the technology spreads rapidly across business and everyday life.

The country’s Industry and Science Minister, Ed Husic, introduced ten voluntary AI guidelines on Thursday and launched a month-long consultation to assess whether these measures should be made mandatory in high-risk areas.

Apple’s planned chatbot should have no ‘personality’

Apple is reportedly developing a new AI digital assistant expected to be integrated into its upcoming robotic devices. Based on generative AI (genAI) and more advanced than Siri, the new assistant will have a “human-like” AI “personality.”

The new assistant could replace Siri on HomePod, iPhones, iPads, or Macs and, most likely and intriguingly, on a new robotic desktop screen that follows and faces you while you interact with it or use it for a FaceTime call, according to Bloomberg’s Mark Gurman. Speech might be the main or sole interface.

The prospect fills me with dread.

The history of “personality” failures

Personal computing’s past is littered with the virtual corpses of chatbots and assistants with “personality.” Microsoft, for example, has never stopped trying.

In 1995, it introduced the Microsoft Bob assistant, which conspicuously tried too hard to be personable; users mostly found it condescending and irritating.

Microsoft tried again in 1997 with Clippy, an anthropomorphic paper clip designed to have a personality. It landed with a thud, and critics slammed it for its irritating personality and intrusive interruptions.

Microsoft engineers in China released the experimental Xiaoice (pronounced “Shao-ice,” meaning “Little Bing”) in 2014. The chatbot prioritizes “emotional intelligence” and “empathy.” It uses advanced natural language processing and deep learning to continuously improve its conversational abilities. Microsoft built Xiaoice on what the company calls an “Empathetic Computing Framework.”

As of 2020, Xiaoice had attracted over 660 million active users globally, making it the world’s most popular personality chatbot. It’s been deployed on more than 40 platforms in countries such as China, Japan, and Indonesia, as well as previously in the US and India.

Microsoft researchers modeled Xiaoice to present as a teenage girl, leading many Chinese users to form strong emotional connections with it. Disturbingly, some 25% of Xiaoice users have told the chatbot, “I love you,” with millions of users forming what they think is a “relationship” with Xiaoice — at the expense of pursuing relationships with other people.

In 2016, Microsoft launched a chatbot called “Tay.” It was targeted at 18- to 24-year-olds and trained on social media posts, mainly Twitter. Within 24 hours of launch, the chatbot started posting racist, sexist, and anti-Semitic remarks and content favoring conspiracy theories and genocidal ideologies. (Again, trained on Twitter.)

Microsoft apologized and pulled the plug on “Tay.”

Other personality-centric chatbots have emerged over the years:

  • Replika: An AI chatbot that learns from interactions to become a personalized friend, mentor, or even romantic partner. Critics have slammed Replika for sexual content, even with minors, and also for claiming bonkers experiences, such as seeing supernatural entities.
  • Kuki (Mitsuku): Known for its conversational abilities, Kuki has won multiple Loebner Prize Turing Tests. It is designed to engage users in natural dialogues, but can also spout random nonsense.
  • Rose: A chatbot with a backstory and personality developed to provide engaging user interactions, but the conversation is fake, inconsistent, and unrelated to previous conversations.
  • BlenderBot: Developed by Meta, BlenderBot is designed to blend different conversational skills and engage users in meaningful conversations, but has tended to lie and hallucinate.
  • Eviebot: An AI companion with emotional understanding capabilities designed to engage users in meaningful conversations. Responses can be cryptic, unsettling, and even manipulative.
  • SimSimi: One of the earliest chatbots, SimSimi engages users in casual conversations and supports multiple languages, but can be vulgar and highly inappropriate.
  • Chai AI: Allows users to create and interact with personalized chatbot companions, offering a stream of AI personalities based on user preferences. The chatbot has offended many users with sexualized or dark content.
  • Inworld: Provides tools for users to create distinct personality chatbots, including those based on celebrities. This tool has often been used for creative, deceptive, and harmful personas.
  • AIBliss: A virtual girlfriend chatbot that develops different characteristics as users interact. Experts have warned that, like Xiaoice, some users have obsessed over their relationship with the bot at the expense of real, human relationships.

Pi in the sky

Closer to home, AI chatbots vary in the degree to which they prioritize “personality.” You’ll find a chatbot called Pi at the maximum “personality” end of the spectrum.

You can leave Pi running on your phone and start conversations with it whenever you like. The chatbot is chatty and conversational to the extreme. It also uses a lot of natural-sounding pauses, and it even takes breaths as it speaks. Most of the time, it will respond to your question or comment and end its wordy monologue with a question of its own. Pi comes with a variety of voices you can choose from. I pick voice #4, which sounds like a very California woman, complete with vocal fry.

Though I’m amazed by Pi, I don’t use it much. While the voice is natural, the conversationality feels forced and tone-deaf. It just won’t shut up, and I end up turning it off after the 10th question it asks. In truth, I want a chatbot that answers my questions, not one that tries to get me to answer its questions.

Pi is also overly ingratiating, constantly telling me how insightful, thoughtful, or funny my inane responses are.

Why, Apple? Why?

I’m prepared to conclude that every single personality-centric chatbot ever produced has failed. So why does Apple think it can succeed?

Many already dislike Siri because of how the company has implemented the assistant’s personality. Specific prompts can elicit corny jokes and other useless content.

While writing this column, I asked Siri, “What are the three laws of robotics?” Its reply was: “Something about obeying people and never hurting them. I would never hurt anyone.”

In this case, Siri responded with a canned personality instead of answering the question. This doesn’t always happen, but it’s an example of how Apple might approach its generative AI chatbot personality.

I can’t imagine Apple thinks Siri’s personality is popular, nor do I believe the company has seen personality-focused chatbots in the wild and found something worth emulating. “Personality” in chatbots is a novelty act, a parlor trick, that can be fun for 10 minutes but then grates on the nerves after a few encounters.

What we need instead of personality

Natural, casual human conversation is far beyond the capacity of today’s most advanced AI. It requires nuance, compassion, empathy, subtlety, and a capacity for perceiving and expressing “tone.”

Writing a formal missive, letter, scientific paper, or essay is far, far easier for AI than casual chit-chat with a friend.

Another problem is that personality chatbots are liars. They express emotions they don’t have, make vocal intonations based on thoughts they don’t have, and often claim experiences they never had.

People don’t like to be lied to. What we need instead of a profane, inappropriate, ingratiating, boring liar is something useful.

The human factor in elocution and tone should be calibrated to be unnoticeable — neither overly “real” nor robotic-sounding. If you can program for empathy, empathize with my situation and objectives, not my emotions.

We want personalization, not personality. We want agency, not tone-deafness. We want a powerful tool that magnifies our abilities, not a “friend.”

Who knows? Apple might surprise everyone with a popular “personality” robot voice that doesn’t repel or confuse people. But I doubt it.

Nobody’s ever done it. Nobody should attempt it.

GenAI vendors’ self-destructive habit of overpromising

One of the ongoing issues in enterprise IT is the gap between perception and reality. But when it comes to generative AI (genAI), vendors are about to discover that there is a big price to pay for overpromising. 

Not only are corporate execs dealing with disappointment and a lack of meaningful ROI, but the same senior non-tech leaders (think CFOs, CEOs, COOs, and some board members) who pushed for the technology before it was ready are the ones who will quickly resist deployment efforts down the road. The irony is that those later rollouts will more likely deliver on long-promised benefits. A little “AI-sales-rep-who-cried-wolf” goes a long way.

There are plenty of enterprise examples of genAI ROI not happening, but perhaps the best illustration of the conundrum involves Apple’s upcoming iPhone rollout and consumers.

Apple will add AI (it’s branded Apple Intelligence) to some of its iPhone 16 line to function on-device. In theory, on-device access might accelerate AI responses (compared with the cloud), and it could allow Siri to grab information seamlessly from all installed apps. 

If you buy into the argument, this setup could eventually change the dynamics of apps. Why wait for a weather app to launch and tell you the hourly forecast if Siri can do it easier and faster? 

For example, I have an app solely to tell me the humidity level and no fewer than a half-dozen communications apps (WhatsApp, Webex, Signal, etc.), plus apps that can directly message me (LinkedIn, X, and Facebook) —  all in addition to text messages, emails, and transcribed voicemails. Why should I have to mess with all of that?

In theory, Apple Intelligence could consolidate all of those bits and bytes of information and deliver my communications and updates in a consistent format.
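
For a sense of how that consolidation would work mechanically, here is a minimal, hypothetical sketch using Apple’s App Intents framework, the existing mechanism by which apps expose data and actions to Siri. The humidity app, the type names, and the hard-coded reading are all invented for illustration:

```swift
import AppIntents

// Hypothetical intent for a humidity app. App Intents is the real Apple
// framework that lets Siri invoke app functionality; everything named
// here (the intent, its dialog, the hard-coded reading) is illustrative.
struct CurrentHumidityIntent: AppIntent {
    static var title: LocalizedStringResource = "Current Humidity"

    func perform() async throws -> some IntentResult & ProvidesDialog {
        let humidity = 54 // a real app would read this from its data store
        return .result(dialog: "The relative humidity is \(humidity)% right now.")
    }
}
```

Once apps declare intents like this, an assistant can answer “What’s the humidity?” without the app ever opening, which is precisely the dynamic that could change the economics of standalone apps.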

But this is where reality gets in the way of tech dreams. As friend and fellow tech writer Jason Perlow writes, Apple is delivering a slimmed-down version of genAI in a way that could fuel more disappointment.

“Unlike typical iOS or MacOS feature upgrades, Apple Intelligence loads a downsized version of Apple’s Foundation Models, a home-grown large language model (LLM) with approximately 3 billion parameters,” Perlow wrote. “While impressive, this is tiny compared to models like GPT-3.5 and GPT-4, which boast hundreds of billions of parameters. Even Meta’s open source Llama 3, which you can run on a desktop computer, has 8 billion parameters.”

On top of that, Apple Intelligence will grab as much as 2GB of RAM, which means users will either need more RAM than they want or have to accept performance slowdowns in other iPhone functions. Then there’s the potential drag on battery performance, which, again, threatens to undermine everything else on the device.
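
A quick back-of-envelope check (my assumptions, not published Apple figures) shows why a 3-billion-parameter on-device model plausibly lands near that 2GB mark:

```swift
import Foundation

// Illustrative memory estimate for an on-device LLM. The 4-bit weight
// quantization is an assumption, not a confirmed Apple specification.
let parameters = 3_000_000_000.0   // ~3B parameters, per Perlow
let bytesPerWeight = 0.5           // assumed 4-bit quantized weights
let weightsGB = parameters * bytesPerWeight / 1_073_741_824
print(String(format: "≈ %.1f GB for weights alone", weightsGB))
// Prints ≈ 1.4 GB before activations and caches, so a ~2GB total
// footprint on the device is plausible.
```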

Bottom line: Not only will this initial rollout likely eat battery and RAM for breakfast, but it will be smaller and therefore less powerful than most other genAI deployments. That’s a recipe for buyer remorse.

Then there is the issue of app developers. First, it will take some time for them to work with the Apple API and deliver versions of their apps that play nicely with Apple Intelligence. Other developers may question whether it is even in their interest to embrace Apple Intelligence. Once they enable Apple to effortlessly grab their data and deliver it via Siri, doesn’t the value of their standalone app diminish? And doesn’t that undermine their monetization strategies? 

Why look at ads on a movie-ticket or concert venue app when Siri can deliver the needed info directly?

Research firm IDC looked recently at those Apple-promised capabilities and predicted they could initially boost phone sales. “Initially” is the key word. People often buy based on tech promises, then talk things over with others and decide whether to buy a future phone (or keep the one they just got) based on their actual experience.

This brings us back to enterprise IT and genAI. Business execs who pushed for genAI rollouts before the technology was ready are unlikely to wait patiently and realistically for solid results to surface.

And then, just when meaningful ROI is likely to arrive (roughly two or three years from now), they’ll have moved on, feeling burned by early deployments and unwilling to be fooled again. 

GenAI has great potential to push near-term sales with unrealistic promises — a self-destructive marketing approach, whether you’re OpenAI, Microsoft, Google, or Amazon selling to enterprise CIOs or Apple selling to consumers. 

Overpromising is a dangerous and foolhardy strategy. And yet, with enterprise genAI sales these days, overpromising isn’t a side course — it’s the main course. It’s unlikely to prove appetizing for anyone.

Microsoft-Inflection deal is a merger, but that’s OK, says UK

The UK’s antitrust regulator has concluded its investigation into Microsoft’s hiring of the majority of staff from Inflection and its licensing of the company’s technology.

The Competition and Markets Authority (CMA) published a summary of its decision Wednesday, finding that while Microsoft’s actions constituted a “relevant merger situation” and thus fall under its purview, they did not result in what it called “a realistic prospect of a substantial lessening of competition (SLC).”

In other words, the deal — which didn’t involve Microsoft buying the company — is a merger, but the regulator is OK with that.

This means that the CMA will not pursue a full-scale investigation into the deal, which poured an estimated US$650 million into Inflection’s coffers.

Not this time, anyway: Similar deals may, however, come under scrutiny for their effect on competition.

This was one of many regulatory looks at investments in AI startups by big tech companies hoping to escape regulatory scrutiny with what some have dubbed a quasi-merger: strategic investments and/or the hiring of key team members that give the investor influence or control over the startup without actually buying the company.

At the same time the UK investigation into Microsoft was announced, the US Federal Trade Commission (FTC) began a look into Amazon’s hiring of key executives, including the CEO, from AI startup Adept, and its plan to license some of Adept’s technology. And in early August, the CMA announced that it is launching an inquiry into Amazon’s relationship with Anthropic to determine whether it, too, warrants a full investigation. A CMA inquiry into Google’s relationship with Anthropic is also underway.

When is a merger not a merger?

In its summary of the Microsoft-Inflection decision, the CMA said that it assessed the criteria for a relevant merger situation under the Enterprise Act 2002, noting, “There is no particular combination of assets that constitutes an enterprise. As set out in the CMA’s guidance, it may include a group of employees and their know-how where this enables a particular business activity to be continued.”

Furthermore, it said, “In addition to hiring the core former Inflection team, Microsoft also acquired additional assets, including access to Inflection IP. The combination of acquiring the core team together with these assets was key to the value of the Transaction, as it enabled the former Inflection team to continue the pre-Transaction Inflection roadmap for consumer-facing AI product development within Microsoft.

“On this basis, the CMA believes that Microsoft has substantively acquired Inflection’s pre-Transaction FM [Foundation model] and chatbot development capabilities. Accordingly, the CMA has found that at least part of the activities of pre-Transaction Inflection has been brought under the control of Microsoft and, as a result, that two enterprises have ceased to be distinct such that the Transaction falls within the CMA’s merger control jurisdiction for review.”

The CMA said that the full text of its decision will be published “shortly” on the web page for the case.

Anthropic launches the Claude Enterprise plan

Anthropic has launched the Claude Enterprise subscription plan, enabling businesses to securely leverage their own corporate data in their interactions with its Claude large language model. It’s a complement to Claude Work, Anthropic’s product aimed at small organizations, and a competitor for OpenAI’s ChatGPT Enterprise, released last year.

“The goal for us, for Claude Work and Claude for Enterprise, is really to enable and empower every team within an enterprise so that you can really become the most creative and most productive version of yourself,” said Nicholas Lin, Claude Enterprise product lead, in an interview.

Claude Enterprise features an expanded context window — 500,000 tokens, more than double the 200,000 previously offered — which Anthropic said is the equivalent of hundreds of sales transcripts, dozens of 100+ page documents, or 200,000 lines of code.

Dynamic workspaces

“Artifacts” — dynamic workspaces that, Lin said, let users “really see what’s going on in Claude’s head and to really iterate on outputs with Claude” — will assist users in creating data visualizations, documentation, presentations, and more.

“One thing I love to use artifacts for is a great way to just brainstorm with Claude, and using Claude to think about drawing diagrams and helping it visualize concepts,” he said. For example, a marketer could create an artifact in the form of a marketing calendar for a campaign, or generate content for the campaign, or a strategy document. In sales, Claude could analyze sales data, forecast trends, and generate collateral for a sales meeting.

The activity feed, he said, lets users draw inspiration from others in their organization. “Activity feed really enables you and others around you in the organization to share the most insightful pieces of feedback that you’re working with Claude on, so this is pieces of knowledge insights from your conversations with Claude in artifact outputs through the organization,” he said.

And, since manually uploading data is not sustainable at scale, the company is introducing native data integrations. The first, with GitHub, is now in public beta.

“We want to make sure that Claude is really well integrated into your everyday workflow,” Lin said. “This is the first of our native integrations. Many more will be coming in the coming months, and this is also the first of our software developer focused features. Many more will be also coming in the coming months.”

And, he promised, the uploaded data will not be used to train models.

Granular permissions

Anthropic says that, along with its AI features, Claude Enterprise contains enterprise-grade security controls including single sign-on (SSO) and domain capture, and role-based access with granular permissioning. Within a few weeks, Lin said, audit logs for security and compliance monitoring, and automated user provisioning and access control, known as the System for Cross-domain Identity Management (SCIM), will be available as well.

These features are long overdue, said Jeremy Roberts, senior research director at Info-Tech Research Group.

“It’s high time we got some general-purpose AI SaaS to compete with the likes of Microsoft Copilot,” he said. “When we think about new software, we focus a lot on its capabilities, but to be an enterprise solution, it must integrate nicely into the broader ecosystem. The announcements around SSO, RBAC, and audit logs are essential for this. Anyone worried about consumer technology in their businesses should be greatly heartened by the increasing competition in this space.”

Another analyst is curious whether the user experience will surpass that of other AI products.

“I’m excited to see the release of the Enterprise version of Anthropic,” said Terra Higginson, principal research director at Info-Tech Research Group. “Just like we saw in the search engine race of the early 2000s, the product with the best user experience and functionality dominated. Will Claude by Anthropic be the winner of the LLM race? Many of these systems are still offering subpar user experiences, and, to make matters even worse, the companies put a ton of restrictions that just make users lean towards private alternatives.”

Claude Enterprise is available today. Pricing was not announced; Lin said that each organization will be given a customized price based on its needs.