Month: July 2024

RCS now works on iPhones running the iOS 18 beta

Since the release of Apple’s iOS 18 developer beta 2, Rich Communication Services (RCS) support has come to messaging on iPhones. That means you can look forward to a more platform-agnostic messaging experience, making messaging between work colleagues, partners, and friends better than before — sometimes by satellite.

What is Rich Communication Services (RCS)?

The RCS standard, defined by the GSM Association (GSMA), aims to improve on standard SMS messaging with the addition of a suite of features you usually find on platforms like iMessage or WhatsApp. That means support for group chat, file transfers, typing notifications, and more.

Initial work by the GSMA identified some successful customer engagement, marketing, and event communications usage scenarios for enterprise users. While Apple was highly resistant to implementing the standard on its devices, it has now changed its mind, partly as regulators began to question the decision not to offer such support. 

Apple has made one recent reference to RCS. “When messaging contacts who do not have an Apple device, the Messages app now supports RCS for richer media and more reliable group messaging compared to SMS and MMS,” the company said in June.

What does RCS support on iPhone?

At present, RCS promises support for higher-quality photos and videos, audio messages, and larger file sizes for attachments. It also provides read receipts and typing indicators, cross-platform emoji reactions, and location sharing. Users can expect:

  • Group chat.
  • File transfers.
  • Typing indicators.
  • Higher-resolution photos and video.
  • Audio messages.
  • Read receipts.
  • Location sharing.
  • Cross-platform emoji reactions.

You will know when you’re in an RCS chat with an Android user because you’ll see a small gray label that says “RCS Message” in the text field.

Is RCS safe to use?

RCS is not as secure as iMessage, because the standard Apple supports does not include end-to-end encryption, but it is still a step up from unencrypted SMS. It is possible that Apple will implement a more secure version of RCS in time, but as things stand, the most secure messaging option remains iMessage because it delivers end-to-end encryption.

What this means for iPhone users

First impressions of how RCS works between iPhones and Android devices are pretty positive. The images you share will be high-res rather than deeply compressed. Read receipts and typing indicators flow between both platforms. Standard Tapback responses also work, meaning you can send reactions to messages using that system.

You won’t get access to text formatting or some of the other new iMessage features — and RCS messages remain encased in green bubbles with an accompanying label that tells you this was a Text Message in the RCS format.

Apple’s hierarchy of texts

There is a hierarchy to how messaging is handled. That means if two Apple devices are used to communicate, they will use Apple’s iMessage, which continues to be the best messaging experience on iPhones. 

If an Apple device is communicating with an Android device, the exchange will take place over RCS; if the carrier doesn’t support RCS or there is no active data connection, the messaging all takes place over SMS. At the risk of sounding obvious, SMS lacks the more advanced messaging features you will find in either of the other standards, and Apple’s approach still means iMessage is the best option.
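
To make that pecking order concrete, here is a minimal Python sketch of the fallback logic described above; the function name and the flags are hypothetical illustrations, not Apple’s actual implementation:

    # Hypothetical sketch of the iMessage -> RCS -> SMS fallback; the names
    # and checks are illustrative, not Apple's actual code.
    def pick_transport(recipient_has_imessage: bool,
                       carrier_supports_rcs: bool,
                       has_data_connection: bool) -> str:
        if recipient_has_imessage:
            return "iMessage"  # Apple-to-Apple: the richest experience
        if carrier_supports_rcs and has_data_connection:
            return "RCS"       # Apple-to-Android, carrier and data permitting
        return "SMS"           # last resort, without the richer features

    print(pick_transport(False, True, True))  # -> RCS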

What is the road map for RCS improvement?

The RCS experience will improve over time. The GSMA last month finalized the latest update to the standard, adding support for replies and reactions, along with the ability to edit, recall, and delete previously sent messages for both parties.

The update also includes a tool to report spam messages and additional support for custom reactions, which may mean Genmoji and Photomoji will become more cross-platform. Apple is working with Google and members of the GSMA to improve the standard worldwide, which implies features such as the ability to edit and delete messages should be available via RCS at some point.

How do I enable RCS on my iPhone?

If you are running the latest iOS 18 beta, you can enable RCS in Settings > Apps > Messages, where you should find an RCS toggle. If you don’t see it, it’s likely your carrier doesn’t yet support RCS on iPhones. To support the feature, carriers need to update some of their own settings, which are usually bundled within iOS updates. It is likely more carriers will introduce support by the time iOS 18 ships.

Where is RCS available?

Apple only enabled RCS support on iPhones in the second iOS 18 beta, and initially only on some US networks. That support has since been extended to some networks in other countries, including Canada, Spain, France, and Germany.

Are AI posts on social media protected free speech?

A recent landmark decision from the US Supreme Court has put content created by generative artificial intelligence (genAI) at the forefront of free speech rights as states grapple with how to regulate social media platforms.

Specifically, the decision calls into question whether textual and video content created by genAI can be considered free speech because human beings were involved in crafting the algorithms that produced that content.

Two Supreme Court cases (Moody v. NetChoice and NetChoice v. Paxton) specifically challenged the state laws passed in Florida and Texas that aimed to prevent social-media platforms from silencing conservative content. In its decision, the Supreme Court combined both cases to decide whether Florida and Texas had unfairly interfered with social media companies’ ability to remove or moderate potentially offensive content.

The cases are about a very specific type of expressive activity: content curation.

“So, in terms of AI, they’re mainly focused on recommender systems and systems that automatically identify, remove, or down-rank content for content moderation purposes,” said Tom McBrien, counsel for the Electronic Privacy Information Center (EPIC), a non-profit research agency whose aim is to protect privacy rights.

The Fifth Circuit Court of Appeals upheld a Texas law allowing the state to regulate social media platforms, while the Eleventh Circuit Court blocked the Florida statute, saying it overburdened editorial discretion. The Supreme Court ultimately ruled that the lower courts had not examined legal precedents and cases closely enough and sent the cases back for reconsideration.

At first blush, neither case appears to involve AI’s use. But the high court emphasized that current law be applied — no matter the technology at issue — and that social media platforms be treated like any other entity (such as newspapers) because they curate content, and curation is protected speech.

While the decision doesn’t give AI free rein, it did require the lower courts to fully consider all potential applications of the state statutes; the Florida law, in particular, is likely to apply to certain AI platforms, according to Daniel Barsky, an intellectual property attorney in Holland & Knight’s Miami office.

“Can genAI outputs be thought of as speech? The outputs are supposed to be unique, but they are not spontaneous, as all genAI output at present is a response to a prompt,” Barsky said.

The First Amendment cases cited by the Supreme Court all involved some sort of human involvement, whether that is writing or speaking the content, making editorial decisions, or selecting content. AI platforms that arguably have no human involvement would be less likely to be entitled to First Amendment protections, which would affect whether states or the US government can pass laws to ban certain outputs.

Conversely, the decision raises the question of whether AI can commit defamation and, if so, who would be liable. It also raises questions about whether the government can regulate social media if that content is produced and selected entirely by AI with no human involvement. And if humans are involved in creating the large language models (LLMs) behind AI, would the resulting content then be considered free speech?

“This is the critical question, but [it] has not yet been addressed by any court; this is an issue that might come up in the continued NetChoice proceedings,” Barsky said. “It is certainly an argument I would consider making if I was arguing a case involving AI and First Amendment issues.”

If AI is considered nothing more than a computer algorithm, laws could be passed to restrict or censor AI outputs; but when humans become involved in the creation of those algorithms, things become complex.

“Basically, this is a big, tangled mess,” Barsky said.

EPIC’s McBrien said it’s unlikely, even if the cases go back up to the Supreme Court, that the Justices will announce a broad rule such as “generative AI outputs are protected expression” or the opposite.

“It’s going to be situational. In the Moody/Paxton cases, NetChoice was angling for them to say that newsfeed generation is always expressive, but the Court rejected this overbroad strategy,” McBrien said. “It remanded the case for the lower courts to parse through the arguments more granularly: what exact newsfeed-construction activities are implicated by the laws, which are claimed to be expressive, are they really expressive, etc.”

The Justices, however, were open to the idea that using algorithms to do something expressive might receive less First Amendment protection, depending on the specifics of the algorithm, such as how closely and faithfully it carries out the human being’s message, according to McBrien.

Specifically, the majority thought that when content curators (social media platforms) enforce content and community guidelines, such as prohibitions on harassment or pro-Nazi content, those activities receive First Amendment protections. “So, when an algorithm is used to enforce those guidelines, the majority said it might receive First Amendment protections,” he said.

McBrien noted that Justices Amy Coney Barrett and Samuel Alito questioned whether using “black-box algorithms” should receive the same amount of protection, an issue that will be pivotal in the reexamination of the cases. “Since Justice Barrett’s vote was necessary to form the majority opinion, she will likely be the swing vote in the future,” McBrien said.

The Supreme Court also cited an earlier case, Turner Broadcasting v. FCC; decided in the 1990s, it held that cable television companies are protected under First Amendment free speech rights when determining what channels and content to carry on their networks.

“The majority and concurrences pointed to the Turner Broadcasting case where the Court found that the regulation at issue did restrict speech, but because it was passed for competition reasons, not speech-regulating reasons, it was constitutional,” McBrien said. “One could imagine something similar in the realm of generative AI.”

Where does Apple Intelligence come from?

Apple Intelligence isn’t entirely Apple’s intelligence; just like so many other artificial intelligence (AI) tools, it also leans into all the human experience shared on the internet because all that data informs the AI models the company builds.

That said, the company explained where it gets the information it uses when it announced Apple Intelligence last month: “We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot,” Apple explained.

Your internet, their product

Apple isn’t alone in doing this. In using the public internet this way, it is following the same approach as others in the business. The problem: that approach is already generating arguments between copyright holders and AI firms, as both sides grapple with questions around copyright, fair use, and the extent to which data shared online is commodified to pour even more cash into the pockets of Big Tech firms. 

Getty Images last year sued Stability AI for training its AI using 12 million images from its collection without permission. Individual creatives have also taken a stance against these practices. The concern is the extent to which AI firms are unfairly profiting from the work humans do, without consent, credit, or compensation.

In a small attempt to mitigate such accusations, Apple has told web publishers what they must do to stop their content from being used for Apple product development.
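
In practice, that means a robots.txt rule: Apple’s opt-out is handled by a separate crawler token, Applebot-Extended, which publishers can disallow to keep their content out of model training while still letting the regular Applebot crawl for search. A minimal example:

    # robots.txt: opt this site's content out of Apple's AI training
    User-agent: Applebot-Extended
    Disallow: /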

Can you unmake an AI model?

What isn’t clear is the extent to which information already scraped by Applebot for use in Apple Intelligence (or any generative AI service) can then be winnowed out of the models Apple has already made. Once the model is created using your data, to what extent can your data be subsequently removed from it? The learning — and potential for copyright abuse — has already been baked in.

But where is the compensation for those who’ve made their knowledge available online? 

In most cases, the AI firms argue that what they are doing constitutes fair use rather than any violation of copyright law. But, given that what constitutes fair use differs between nations, it seems highly probable that the evolving AI industry is heading directly toward regulatory and legal challenges around its use of content.

That certainly seems to be part of the concern coming from regulators in some jurisdictions, and we know the legal framework around these matters is subject to change. This might also be part of what has prompted Apple to say it will not introduce the service in the EU just yet.

Move fast and take things

Right now, AI companies are moving faster than government regulation. Some in the space are attempting to side-step such debates by placing constraints on the data their models are trained with. Adobe, for example, claims to train its imaging models only on legitimately licensed data.

In this case, that means licensed content from Adobe Stock and older content that is out of copyright.

Adobe isn’t just being altruistic in this — it knows customers using its generative AI (genAI) tools will be creating commercial content and recognizes the need to ensure its customers don’t end up being sued for illegitimate use of images and other creative works. 

What about privacy?

But when it comes to Apple Intelligence, it looks like the data you’ve published online has now become part of the company product, with one big exception: private data.

“We never use our users’ private personal data or user interactions when training our foundation models, and we apply filters to remove personally identifiable information like social security and credit card numbers that are publicly available on the Internet,” it said. 
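
Apple doesn’t describe how those filters work. As a rough illustration of the kind of pattern-based scrubbing the statement alludes to, a toy Python version might look like this (the patterns are simplistic and purely illustrative, not Apple’s pipeline):

    import re

    # Toy patterns only; production PII filtering is far more involved.
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g. 123-45-6789
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose card-number match

    def scrub(text: str) -> str:
        """Redact SSN- and card-like strings before text enters a training set."""
        text = SSN_RE.sub("[REDACTED-SSN]", text)
        return CARD_RE.sub("[REDACTED-CARD]", text)

    print(scrub("Card 4111 1111 1111 1111, SSN 123-45-6789."))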

Apple deserves credit for its consistent attempts to maintain data privacy and security, but perhaps it should develop a stronger and more public framework toward the protection of the creative endeavors of its customer base.

AI chip battleground shifts as software takes center stage

The AI landscape is undergoing a transformative shift as chipmakers, traditionally focused on hardware innovation, are increasingly recognizing the pivotal role of software.

This strategic shift is redefining the AI race, where software expertise is becoming as crucial as hardware prowess.

AMD’s recent acquisitions: a case study

AMD’s recent acquisition of Silo AI, Europe’s largest private AI lab, exemplifies this trend. Silo AI brings to the table a wealth of experience in developing and deploying AI models, particularly large language models (LLMs), a key area of focus for AMD.

This acquisition not only enhances AMD’s AI software capabilities but also strengthens its presence in the European market, where Silo AI has a strong reputation for developing culturally relevant AI solutions.

“Silo AI plugs important capability gap [for AMD] from software tools (Silo OS) to services (MLOps) to helping tailor sovereign and open source LLMs and at the same time expanding its footprint in the important European market,” said Neil Shah, partner & co-founder at Counterpoint Research.

AMD’s move follows its previous acquisitions of Mipsology and Nod.ai, further solidifying its commitment to building a robust AI software ecosystem. Mipsology’s expertise in AI model optimization and compiler technology, coupled with Nod.ai’s contributions to open-source AI software development, provides AMD with a comprehensive suite of tools and expertise to accelerate its AI strategy.

“These strategic moves strengthen AMD’s ability to offer open-source solutions tailored for enterprises seeking flexibility and interoperability across platforms,” said Prabhu Ram, VP of industry research group at Cybermedia Research. “By integrating Silo AI’s capabilities, AMD aims to provide a comprehensive suite for developing, deploying, and managing AI systems, appealing broadly to diverse customer needs. This aligns with AMD’s evolving market position as a provider of accessible and open AI solutions, capitalizing on industry trends towards openness and interoperability.”

Beyond AMD: A broader industry trend

This strategic shift towards software is not limited to AMD. Other chip giants like Nvidia and Intel are also actively investing in software companies and developing their own software stacks.

“If you look at the success of Nvidia, it is driven not by silicon but by software (CUDA) and services (NGC with MLOps, TAO, etc.) it offers on top of its compute platform,” Shah said. “AMD realizes this and has been investing in building software (ROCm, Ryzen AI, etc.) and services (Vitis) capabilities to offer an end-to-end solution for its customers to accelerate AI solution development and deployment.”

Nvidia’s recent acquisition of Run:ai and Shoreline.io, both specializing in AI workload management and infrastructure optimization, also underscores the importance of software in maximizing the performance and efficiency of AI systems.

But this doesn’t mean chipmakers follow similar trajectories toward their goals. Manish Rawat, semiconductor analyst at TechInsights, pointed out that, for the most part, Nvidia’s AI ecosystem has been established through proprietary technologies and a robust developer community, giving it a strong foothold in AI-driven industries.

“AMD’s approach with Silo AI signifies a focused effort to expand its capabilities in AI software, positioning itself competitively against Nvidia in the evolving AI landscape,” Rawat added.

Another relevant example in this regard is Intel’s acquisition of Granulate Cloud Solutions, a provider of real-time continuous optimization software. Granulate assists cloud and data center clients in optimizing compute workload performance while lowering infrastructure and cloud expenses.

Software to drive differentiation

The convergence of chip and software expertise is not just about catching up with competitors. It’s about driving innovation and differentiation in the AI space.

Software plays a crucial role in optimizing AI models for specific hardware architectures, improving performance, and reducing costs. Eventually, software could decide who rules the AI chip market.

“The bigger picture here is that AMD is obviously competing with NVIDIA for supremacy in the AI world,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “Ultimately, this is not just a question of who makes the better hardware, but who can actually back the deployment of enterprise-grade solutions that are high-performance, well-governed, and easy to support over time. And although Lisa Su and Jensen Huang are both among the absolute brightest executives in tech, only one of them can ultimately win this war as the market leader for AI hardware.” 

The rise of full-stack AI solutions

The integration of software expertise into chip companies’ offerings is leading to the emergence of full-stack AI solutions. These solutions encompass everything from hardware accelerators and software frameworks to development tools and services.

By offering a comprehensive suite of AI capabilities, chipmakers can cater to a wider range of customers and use cases, from cloud-based AI services to edge AI applications.

For instance, Silo AI first and foremost brings an experienced talent pool, especially one focused on optimizing AI models and tailoring LLMs, according to Shah. Second, Silo AI’s SiloOS is a particularly powerful addition to AMD’s offerings, allowing customers to leverage advanced tools and modular software components to customize AI solutions to their needs; this was a big gap for AMD.

“Thirdly, Silo AI also brings in MLOps capabilities which are a critical capability for a platform player to help its enterprise customers deploy, refine and operate AI models in a scalable way,” Shah added. “This will help AMD develop a service layer on top of the software and silicon infrastructure.”

Implications for enterprise tech

The shift of chipmakers from purely hardware to also providing software toolkits and services has significant ramifications for enterprise tech companies.

Shah stressed that these developments are crucial for enabling enterprise and AI developers to fine-tune their AI models for enhanced performance on specific chips, applicable to both training and inference phases.

This advancement not only speeds up product time-to-market but also aids partners, whether they are hyperscalers or manage on-premises infrastructures, in boosting operational efficiencies and reducing total cost of ownership (TCO) by improving energy usage and optimizing code.

“Also, it’s a great way for chipmakers to lock these developers within their platform and ecosystem as well as monetize the software toolkits and services on top of it. This also drives recurring revenue, which chipmakers can reinvest and boost the bottom line, and investors love that model,” Shah said.

The future of AI: a software-driven landscape

As the AI race continues to evolve, the focus on software is set to intensify. Chipmakers will continue to invest in software companies, develop their own software stacks, and collaborate with the broader AI community to create a vibrant and innovative AI ecosystem.

The future of AI is not just about faster chips — it’s about smarter software that can unlock the full potential of AI and transform the way we live and work.

OpenAI reportedly stopped staffers from warning about security risks

A whistleblower letter obtained by The Washington Post accuses OpenAI of illegally restricting employees from communicating with authorities about the risks their technology may pose. The letter was reportedly sent to the US Securities and Exchange Commission (SEC) — the agency that oversees the trading of securities — urging the regulators to review OpenAI.

According to the letter, OpenAI allegedly used illegal non-disclosure agreements that, among other things, forced employees to waive their rights to whistleblower incentives and required them to disclose whether they had been in contact with authorities.

OpenAI has come under previous criticism for the restrictive design of its non-disclosure agreements, which it said it would modify. In a statement to The Washington Post, OpenAI spokesperson Hannah Wong said: “Our whistleblower policy protects employees’ rights to make protected disclosures.”

OpenAI is working on new reasoning AI technology

ChatGPT developer OpenAI is developing a new kind of reasoning AI model under the project name “Strawberry” that can be used for research, according to a report by Reuters. Strawberry was apparently known earlier by the name “Q*” and is considered a potential breakthrough within OpenAI.

The plan is for the new Strawberry models to not only generate answers based on instructions, but also to plan ahead, navigating the internet independently and reliably to perform what OpenAI calls “deep research.”

How Strawberry works under the hood remains unclear; it is also unknown how far the technology is from completion. In a comment to Reuters, an OpenAI spokesperson said continued research into new AI capabilities is ongoing within the industry. However, the spokesperson did not say anything specific about Strawberry.

More OpenAI news:

OpenAI whistleblowers seek SEC probe into ‘restrictive’ NDAs with staffers

Some employees of ChatGPT-maker OpenAI have reportedly written to the US Securities and Exchange Commission (SEC) seeking a probe into some employee agreements, which they term restrictive non-disclosure agreements (NDAs).

These staffers-turned-whistleblowers have written to the SEC alleging that the company forced its employees to sign agreements that were not in compliance with the SEC’s regulations.

“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” read the letter shared with Reuters by the office of Senator Chuck Grassley.

The same letter alleges that OpenAI made employees sign agreements that curb their federal rights to whistleblower compensation and urges the financial watchdog to impose individual penalties for each such agreement signed.

Further, the whistleblowers allege that OpenAI’s agreements with employees restricted them from making any disclosure to authorities without checking with management first, and that any failure to comply with these agreements would attract penalties for the staffers.

The company, according to the letter, also did not create any separate or specific exemptions in the employee non-disparagement clauses for disclosing securities violations to the SEC.

An email sent to OpenAI about the letter went unanswered.

The Senator’s office also cast doubt on the practices at OpenAI. “OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” the Senator was quoted as saying.

Experts in the field of AI have been warning against the use of the technology without proper guidelines and regulations.

In May, more than 150 leading artificial intelligence (AI) researchers, ethicists, and others signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems to maintain basic protection against the risks of using large-scale AI.

Last April, a who’s who of the technology industry called for AI labs to stop training the most powerful systems for at least six months, citing “profound risks to society and humanity.”

That open letter, which now has more than 3,100 signatories including Apple co-founder Steve Wozniak, called out San Francisco-based OpenAI’s recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards were in place. OpenAI, for its part, in May formed a safety and security committee led by board members as it started researching its next large language models.

Analysts expect weak demand for Apple Vision Pro

On Friday, Apple Vision Pro launched in Europe. But analysts do not expect any major sales success.

According to research firm IDC, fewer than 500,000 units of the mixed-reality headset will be sold in 2024, partly because of the high price. Apple’s headset costs $3,499; that corresponds to almost 50,000 Swedish kronor once VAT and other fees are included.
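
A rough sanity check on that figure, assuming an exchange rate near 10.8 kronor to the dollar and Sweden’s 25% VAT: $3,499 × 10.8 ≈ 37,800 kronor before tax, and 37,800 × 1.25 ≈ 47,200 kronor, which indeed lands just under 50,000.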

By comparison, Facebook’s parent company Meta sells its Meta Quest 3 headset for $499, and its predecessor, the Meta Quest 2, retails for $299.

According to rumors, a cheaper variant of the Apple Vision Pro will be launched in 2025, but release dates and details remain unclear.

The promise and peril of ‘agentic AI’

Amazon last week made an unusual deal with a company called Adept in which Amazon will license the company’s technology and also poach members of its team, including the company’s co-founders.

The e-commerce, cloud computing, online advertising, digital streaming, and artificial intelligence (AI) giant is no doubt hoping the deal will propel Amazon forward; the company is lagging behind the likes of Microsoft, Google, and Meta in the all-important area of AI. (In fact, Adept had previously been in acquisition talks with both Microsoft and Meta.)

Adept specializes in “agentic AI,” the hottest area of AI that hardly anyone is talking about, but one that some credibly claim is the next leap forward for AI technology.

But wait, what exactly is agentic AI? The easiest way to understand it is by comparison to LLM-based chatbots. 

How agentic AI differs from LLM chatbots

We know all about LLM-based chatbots like ChatGPT. Agentic AI systems are based on the same kind of large language models, but with important additions. While LLM-based chatbots respond to specific prompts, trying to deliver what’s asked for literally, agentic systems take that further by incorporating autonomous goal-setting, reasoning, and dynamic planning. They’re also designed to integrate with applications, systems, and platforms. 

While LLMs such as ChatGPT reference huge quantities of data, and hybrid systems like Perplexity AI combine that with real-time web searches, agentic systems go further, incorporating changing circumstances and contexts to pursue goals, reprioritizing tasks and changing methods to achieve them.

While LLM chatbots have no ability to make actual decisions, agentic systems are characterized by advanced contextual reasoning and decision-making. Agentic systems can plan, “understand” intent, and more fully integrate with a much wider range of third-party systems and platforms. 
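
One way to picture the difference is as a loop: a chatbot maps one prompt to one reply, while an agentic system repeatedly plans a step, acts through a tool, observes the result, and re-checks its goal. Here is a toy Python sketch, with a stand-in model call and tool set rather than any vendor’s real API:

    # Toy agent loop; call_model() and TOOLS are hypothetical stand-ins,
    # not any specific vendor's API.
    def call_model(history: list) -> dict:
        """Pretend LLM: inspects the history and returns the next action."""
        return {"tool": "done", "arg": None}  # a real model would plan here

    TOOLS = {"search": lambda query: f"results for {query}"}

    def run_agent(goal: str, max_steps: int = 5) -> list:
        history = [f"goal: {goal}"]
        for _ in range(max_steps):        # the agent iterates; a chatbot stops after one turn
            action = call_model(history)  # plan the next step from goal + context
            if action["tool"] == "done":  # the agent decides the goal is met
                break
            result = TOOLS[action["tool"]](action["arg"])  # act via a tool
            history.append(result)        # feed the observation back in
        return history

    print(run_agent("schedule a meeting with a client"))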

What’s it good for?

One obvious use for agentic AI is as a personal assistant. Such a tool could — based on natural-language requests — schedule meetings and manage a calendar, change times based on others’ and your availability, and remind you of the meetings. And it could be useful in the meetings themselves, gathering data in advance, creating an agenda, taking notes and assigning action items, then sending follow-up reminders. All this could theoretically begin with a single plain-language, but vague, request.

It could read, categorize and answer emails on your behalf, deciding which to answer and which to leave for you to respond to. 

You could tell your agentic AI assistant to fill out forms for you or subscribe to services, entering the requested information and even processing any payment. It could even theoretically surf the web for you, gathering information and creating a report.

Like today’s LLM chatbots, agentic AI assistants could use multimodal input and could receive verbal instructions along with audio, text, and video inputs harvested by cameras and microphones in your glasses. 

Another obvious application for agentic AI is for customer service. Today’s interactive voice response (IVR) systems seem like a good idea in theory — placing the burden on customers to navigate complex decision trees while struggling with inadequate speech recognition so that a company doesn’t need to pay humans to interface with customers — but fail in practice. 

Agentic AI promises to transform automated customer service. Such technology should be able to function as if it not only understands the words but also the problems and goals of a customer on the phone, then perform multi-step actions to arrive at a solution.

Agentic AI systems can do all kinds of things a lower-level employee might do: qualify sales leads, do initial outreach for sales calls, automate fraud detection and loan application processing at a bank, autonomously screen candidates applying for jobs, and even conduct initial interviews, among other tasks.

Agentic AI should be able to achieve very large-scale goals as well — optimize supply chains and distribution networks, manage inventory, optimize delivery routes, reduce operating costs, and more.

The risk of agentic AI

Let’s start with the basics. The idea of AI that can operate “creatively” and autonomously — capable of doing things across sites, platforms and networks, directed by a human-created prompt with limited human oversight — is obviously problematic.

Let’s say a salesperson directs agentic AI to set up a meeting with a hard-to-reach potential client. The AI understands the goal and has vast information about how actual humans do things, but no moral compass and no explicit direction to conduct itself ethically.  

One way to reach that client (based on the behavior of real humans in the real world) could be to send an email tricking the person into clicking on self-executing malware, which would open a trojan on the target’s system, exfiltrate personal data, and use that data to find out where the person would be at a certain time. The AI could then place a call to that location and say there’s an emergency. The target would take the call, and the AI would try to set up a meeting.

This is just one small example of how an agentic AI without coded or prompted ethics could do the wrong thing. The possibilities for problems are endless. 

Agentic AI could be so powerful and capable that there’s no way this ends well without a huge effort to develop and maintain AI governance frameworks that include guidelines, safety measures, and constant oversight by well-trained people.

Note: The rise of LLMs, starting with ChatGPT, engendered fears that AI could take jobs away from people; agentic AI is the technology that could really do that at scale.

The worst-case scenario would be for millions of people to be let go and replaced by agentic AI. The best case is that the technology on its own would remain inferior to a human partnering with it. With such a tool, human work could be made far more efficient and less error-prone.

I’m pessimistic that agentic AI can benefit humanity if the ethical considerations remain completely in the hands of Silicon Valley tech bros, investors, and AI technologists. We’ll need to combine expertise from AI, ethics, law, academia, and specific industry domains and move cautiously into the era of agentic AI.

It’s reasonable to feel both thrilled by the promise of agentic AI and terrified about the potential negative effects. One thing is certain: It’s time to pay attention to this emerging technology. 

With a giant, ambitious, capable, and aggressive company like Amazon making moves to lead in agentic AI, there’s no ignoring it any longer.