
Apple’s shifting Vision Pro priorities

Reports that Apple is winding down Vision Pro production seem counterintuitive, given that in the last few days Cisco and Vimeo have both released software for the high-tech headset, which is also now being used for pilot training.

At $3,500, the device was never expected to be a mass-market product, a point Apple CEO Tim Cook himself concedes. But it certainly continues to make its mark in enterprise computing — as well as offering big promise for entertainment.

The Information reports that Apple has told its Vision Pro assembler, Luxshare, that it might need to wind down production in November. The report also cites sources at component suppliers who claim parts production for the device has been reduced. The implication is that Apple believes it has enough inventory on hand to meet demand, at least for a while.

Are sales slowing? The report suggests maybe, and the data points it provides include:

  • Production scale-backs began this past summer.
  • Enough components have been manufactured to create just over half a million headsets.
  • Some component suppliers ceased production in May.
  • Luxshare makes around 1,000 of the headsets each day, half its peak production rate.

All in all, the picture painted maintains the narrative we’ve seen since before the launch of Vision Pro — that it’s too expensive for the mass market. But, as Cook said just before The Information’s report was published, Apple knows this already: “At $3,500, it’s not a mass-market product,” he said. “Right now, it’s an early-adopter product. People who want to have tomorrow’s technology today — that’s who it’s for. Fortunately, there’s enough people who are in that camp that it’s exciting.”

Who’s excited now?

We know the enterprise market is most excited about the potential of Vision Pro. We’ve heard multiple reports explaining its use in various industries, including during surgery. Just this week, Cisco introduced Cisco Spatial Meetings for the Vision Pro, which builds greatly on the Webex support the company already provides for the Apple product. 

For consumers, Vimeo this week did what YouTube has not, introducing an app that lets its users create, view, and share spatial videos. “This kind of spatial content is the future of storytelling, and we’re proud to be at the forefront of this revolution,” said Vimeo CEO Philip Moyer. 

Apple does seem to have succeeded in igniting interest in some unique usage scenarios. One that most caught my eye this week comes from CAE: an immersive pilot training app for the device, designed so pilots can familiarize themselves with in-flight controls before they begin training flights. In recent weeks, we’ve also seen implementations in training, sales and marketing, medicine, engineering, and beyond. Those are the people using tomorrow’s technology today. While there are developers, Apple enthusiasts, and bored rich people spending on these devices, the biggest interest in them, as I’ve always said, is coming from the enterprise.

As the recent multi-million-dollar investment in a content delivery network for mass-market spatial reality experiences shows, that’s going to change…

Waiting for tomorrow

Apple’s first calendar quarter is traditionally its slowest. With that in mind, it makes sense for Apple to slow manufacturing of its edgiest device in preparation for a muted sales season. Potentially shuttering production in November makes sense through that lens — particularly as Apple is expected to introduce a lower-cost device that also runs visionOS, something we’ve all anticipated for months. The Information once again supports this theory — and also says development of a second-generation “Pro” device has been delayed by a year, as previously rumored.

The report, however, also claims Apple might release what is described as an “incremental” update to Vision Pro with limited changes, “such as a chip upgrade.”

Given that Vision Pro runs Apple’s M2 processor, it makes sense to gloss it up a little with an M4, particularly as Apple is likely to introduce new Apple Intelligence features to visionOS next year. But 2025 is also when Apple is expected to introduce a more compact, less expensive visionOS-powered system, one that potentially uses an iPhone as the CPU.

In other words, while there’s little doubt that introducing Vision Pro to a world battered by savage conflicts, accelerating energy costs, and political instability means Apple might not have met the lofty sales targets it originally aspired to, the idea that Apple is abandoning the product is far-fetched.

Production targets may have been lowered for now, but this is only the lull before the rollout of a more affordable product more of us can explore. I expect intimations of this as soon as WWDC 2025. 

Please follow me on LinkedIn or Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

2025: The year of the AI PC

As PC, chip, and other component makers unveil products tailored to generative artificial intelligence (genAI) needs on edge devices, users can expect to see far more task automation and copilot-like assistants embedded on desktops and laptops next year.

PC and chip manufacturers — including AMD, Dell, HP, Lenovo, Intel, and Nvidia — have all been touting AI PC innovations to come over the next year or so. Those announcements come during a crucial timeframe for Windows users: Windows 10 will reach its end of support next October.

Forrester Research defines an AI PC as one that has an embedded AI processor and algorithms specifically designed to improve the experience of AI workloads across the central processing unit (CPU), graphics processing unit (GPU), and neural processing unit, or NPU. (NPUs allow the PCs to run AI algorithms at lightning-fast speeds by offloading specific functions.)

“While employees have run AI on client operating systems (OS) for years — think background blur or noise cancellation — most AI processing still happens within cloud services such as Microsoft Teams,” Forrester explained in a report. “AI PCs are now disrupting the cloud-only AI model to bring that processing to local devices running any OS.”

Forrester has tagged 2025 “the Year of the AI PC” — and if the number of recent product announcements is any indication, that’s likely to be the case.

Gartner Research projects PC shipments will grow by 1.6% in 2024 and by 7.7% in 2025. The biggest growth driver will be not the arrival of AI PCs, but the need for many companies and users to refresh their computers and move to Windows 11.

“Our assumption is that [AI PCs] will not drive shipment growth, meaning that most end users won’t replace their PCs because they want to have the AI. They will happen to select [an AI PC] if they will replace their PCs for specific reasons — e.g., OS upgrade, aging PCs, or a new school or job, and most importantly, the price is right for them,” said Gartner analyst Mika Kitagawa.

The biggest impact of AI PCs on the industry will be revenue growth due to changes in components, such as the addition of an NPU core and larger memory requirements. AI PCs are also likely to boost application service provider and end-user spending. Gartner predicts end-user spending will rise 5.4% this year and jump 11.6% in 2025, a growth rate that will outpace AI PC shipment growth.

“In five years, the [AI PC] will become a standard PC configuration, and the majority of PCs will have an NPU core,” Kitagawa said.

Tom Mainelli, an IDC Research group vice president, noted that across the silicon ecosystem there are already systems-on-chips (SoCs) with NPUs from Apple, AMD, Intel, and Qualcomm. “Apple led the charge with its M Series, which has included an NPU since arriving on the Mac in 2020,” he said.

“To date, neither the operating systems — Windows or macOS — nor the apps people use have really leveraged the NPU,” Mainelli said. “But that is beginning to change, and we will see a big upswing in OSes and apps beginning to leverage the benefits of running AI locally, versus in the cloud, as the installed base of systems continues to grow.”

Windows 11 has several genAI features built in, and Apple is slated to roll out “Apple Intelligence” features next week.

Nvidia chips, particularly the company’s GPUs, are already widely used in PCs. They’re popular for gaming, graphic design, video editing, and machine learning applications. Its GeForce series is especially well-known among gamers, while the Quadro and Tesla series are often used in professional and scientific computing. Many PC builders and gamers choose Nvidia processors for their performance and advanced features such as ray tracing and AI-enhanced graphics.

Nvidia isn’t the only manufacturer trying to get into the AI PC game. Samsung Electronics has started mass production of its most powerful SSD for AI PCs — the PM9E1. Intel earlier this year announced its line of “Ultra” chips, which are also aimed at genAI PC operations, and Lenovo just introduced its “Smarter AI” line of tools that include agents and AI assistants across a number of devices. And AMD has touted new CPUs offering greater processing power tailored for AI operations.

“AMD is aggressively pursuing its strategy of being a full-breadth processor provider with a heavy emphasis on the emerging AI market,” said Jack Gold, principal analyst with tech industry research firm J. Gold Associates. “It is successfully positioning itself as an alternative to both Intel and Nvidia, as well as an innovator in its own right.”

Upcoming genAI features need sophisticated hardware

“Leaders foresee many use cases for genAI, from content creation to meeting transcription and code development,” Forrester said in its report. “While corporate-approved genAI apps such as Microsoft Copilot often run as cloud services, running them locally enables them to interact with local hardware, such as cameras and microphones, with less latency.”

For his part, Mainelli will be watching to see how Apple rolls out Apple Intelligence on the Mac — and how users respond to the new features.

Like Microsoft’s cloud-based Copilot chatbot, Apple Intelligence has automated writing tools including rewrite and proofread functionality. The onboard genAI tools can also generate email and document summaries, pull out key points and lists from an article or document, and generate images through the Image Playground app.

“And by the end of this year, we will see Microsoft’s Copilot+ features land on Intel and AMD systems with newer 40-plus TOPS NPUs in addition to systems already shipping with Qualcomm’s silicon,” Mainelli said.

Independent software vendors (ISVs) will also use AI chips to enable new use cases, especially for creative professionals. For example, the team behind Audacity, an open-source audio editor, is working with Intel to deliver AI audio production capabilities for musicians, such as text-to-audio creation, instrument separation, and vocal-to-text transcription.

Dedicated AI chipsets are also expected to improve the performance of classic collaboration features, such as background blur and noise suppression, by sharing resources across CPUs, GPUs, and NPUs.

“Upset that your hair never looks right with a blurred background? On-device AI will fix that, rendering a much finer distinction between the subject and the blurred background,” Forrester said. “More importantly, the AI PC will also enable new use cases, such as eye contact correction, portrait blur, auto framing, lighting adjustment, and digital avatars.”

From the cloud to the edge

Experts see AI features and tools moving more to the edge — being embedded on smartphones, laptops and IoT devices — because AI computation is done near the user at the edge of the network, close to where the data is located, rather than centrally in a cloud computing facility or private data center. That means less lag time and better security.

Lenovo, for example, just released AI Now, an AI agent that leverages a local large language model (LLM) built on Meta’s Llama 3, enabling a chatbot that can run locally on PCs without an internet connection. And just last month, HP announced two AI PCs: the OmniBook Ultra Flip 2-in-1 laptop and the HP EliteBook X 14-inch Next-Gen AI laptop. The two PCs come with three engines (CPU, GPU, and NPU) to accelerate AI applications and include either an Intel Core Ultra processor with a dedicated AI engine or an AMD Ryzen PRO NPU processor enabling up to 55 TOPS (tera operations per second) of performance.

The HP laptops come with features such as a 9-megapixel AI-enhanced webcam for more accurate presence detection and adaptive dimming, auto audio tuning with AI noise reduction, HP Dynamic Voice Leveling to optimize voice clarity, and AI-enhanced security features.

AI could do for productivity what search engines like Google once did for finding content online, according to Gold. “With AI and neural processing units, mundane tasks will get easier, with things like trying to find that email or document I remember creating but can’t figure out where it is,” he said. “It will also make things like videoconferencing much more intuitive and useful as it takes out many of the ‘settings’ we now have to do.”

With the arrival of specific AI “agents,” PC users will soon have the ability to have tasks done for them automatically because the operating system will be much smarter with AI assistance. “[That means] I don’t have to try and find that setting hidden six layers below the OS screen,” Gold said. “In theory, security could also get better as we watch and learn about malware practices and phishing.

“And we can use AI to help with simple tasks like writing better or summarizing the 200 emails I got yesterday I haven’t had time to read,” he said.

Apple, Samsung, and other smartphone and silicon manufacturers are rolling out AI capabilities on their hardware, fundamentally changing the way users interact with edge devices like smartphones, tablets, and laptops.

“While I am very excited to see how the OS vendors and software developers add AI features to existing apps and features, I’m most excited to see how they leverage local AI features to evolve how we interact with our devices and bring to market new experiences that we didn’t know we needed,” Mainelli said.

But caution remains the watchword

Gold cautioned there are also downsides to the sudden arrival of genAI and how quickly the technology continues to evolve.

“With AI looking at or recording everything that we do, is there a privacy concern? And if the data is stored, who has access?” he said. “This is the issue with Microsoft Recall, and it’s very concerning to track everything I do in a database that could be exposed. It also means that if we rely on AI, how do we know the models were properly trained and not making some critical mistakes?”

AI errors and hallucinations remain commonplace, meaning users will have to deal with what corporate IT has been wrestling with for two years.

“On-device generative AI could be susceptible to the same issues we encounter with today’s cloud-based generative AI,” Mainelli said. “Consumer and commercial users will need to weigh the risks of errors and hallucinations against the potential productivity benefits.”

“From the hardware side, more complex processing means we’ll have more ways for the processors to have faults,” Gold added. “This is not just about AI, but as you increase complexity, the chances of something going wrong gets higher. There is also an issue of compatibility. Do software vendors have programs specific to Intel, AMD, Nvidia, Arm? In all likelihood, at least for now, yes they do.”

As genAI tools and features increase, the level of software support will also have to grow — and companies will need to face the possibility of compatibility issues. AI features will also take a lot of processing power, and it’s not clear how heavy use of it on PCs and other devices might affect battery life, Gold noted.

“If I’m doing heavy AI workloads, what does that do to battery life — much like heavy graphics workloads affect battery life dramatically,” he said.

Traditional on-device security might not be able to prevent attacks that target AI applications and tools, which could result in data privacy vulnerabilities. Those cyberattacks can come in a variety of forms: prompt injection, AI model tampering, knowledge base poisoning, data exfiltration, personal data exposure, local file vulnerability, and even the malicious manipulation of specific AI apps.

“Regarding security, AI indeed represents some risk (as is true with any new technology),” Mainelli said, “but I expect it to be mostly positive when it comes to securing the PC. By leveraging the low-power NPU to run security persistently and pervasively on the system, security vendors should be able to use AI to make security less intrusive and bothersome to the end user, which means they’ll be less likely to try to circumvent it.”

Qualcomm’s license battle with Arm puts many AI-enabled Copilot+ PCs in peril

As its long-running dispute with Arm turned into a war of words this week, the stakes for chip giant Qualcomm and its technology partners, including Microsoft, couldn’t be higher.

Along with MediaTek and Apple, Qualcomm is one of the biggest suppliers of chips for use in smartphones and tablets. As PCs, smartphones, and automobiles acquire more AI capabilities, the increasingly powerful Snapdragon Elite platform is supposed to be Qualcomm’s big move into those arenas.

Now the dispute with Arm, which has been rumbling on since 2022 over Qualcomm’s right to develop the ARM-based Oryon CPU core in its chips, threatens to derail the whole project. To clarify, ARM is the architecture, Arm is the company.

According to Bloomberg, and independently verified by PC World, Arm recently issued a 60-day notice of cancellation of Qualcomm’s Oryon license.

Disputes over intellectual property (IP) and licensing are common in tech, but the report garnered an unusually spiky response from a Qualcomm spokesperson, which suggests the confrontation might be more serious.

“This is more of the same from Arm — more unfounded threats designed to strongarm a longtime partner, interfere with our performance-leading CPUs, and increase royalty rates regardless of the broad rights under our architecture license.”

According to the spokesperson, the timing of Arm’s move was connected to an impending court date in December.

“Arm’s desperate ploy appears to be an attempt to disrupt the legal process, and its claim for termination is completely baseless. We are confident that Qualcomm’s rights under its agreement with Arm will be affirmed. Arm’s anticompetitive conduct will not be tolerated.”

Of course, it probably didn’t help Qualcomm’s mood that the news emerged smack in the middle of the company’s major Snapdragon Summit 2024, held in Maui this week.

Why does Oryon matter?

Qualcomm’s Snapdragon Elite platform comprises four system-on-a-chip (SoC) microprocessors aimed at different market segments: X Elite for Windows PCs, the 8 Elite for smartphones and tablets, the Elite Cockpit for automotive systems, and the Elite Ride for automated driving.

All use ARM-based cores in different configurations: the Oryon CPU as a general microprocessor, the Hexagon neural processing unit (NPU) to accelerate AI capabilities, and the Adreno graphics processing unit (GPU) for graphics.

It is the first of those — the Oryon CPU core, originally acquired when Qualcomm bought Nuvia in 2021 — that is at the heart of the dispute between the two companies. Arm claims that the agreement it had with Nuvia to develop Oryon did not transfer to Qualcomm when Qualcomm bought the company, and that any agreements Arm had with Qualcomm itself were separate.

Could this affect business PCs?

The dispute is complicated by the involvement of Microsoft, which heads a list of PC makers using Qualcomm’s X Elite platform to push AI-enabled PCs.

That should be good news. After a long dip, the PC market has looked up in the last year, increasing 3.4% year-over-year in Q2 2024, according to figures from Canalys, the third quarter in a row to register growth.

Previous attempts to get Windows running on ARM foundered, but this time Qualcomm has made good with X Elite. Having decided that AI-enabled PCs are a new paradigm, Microsoft will want the dispute resolved as soon as possible.

The same goes for Qualcomm’s other PC partners, including Acer, Asus, Dell, HP, Lenovo, and Samsung, all of which have developed models using the same platform. In addition to Microsoft’s Surface Pro and Surface Laptop, models based on X Elite include Lenovo’s Yoga Slim 7x 14, HP’s OmniBook X, Acer’s Swift 14 AI, and various laptops in Dell’s XPS, Latitude, and Inspiron ranges.  

All feature Copilot+, each promotes the claimed performance boost and hugely improved battery life of the X Elite, and most are at the more expensive business end of the price charts.

PCs with Intel and AMD processors also qualify as Copilot+ certified, so a failure to resolve the dispute — or a defeat for Qualcomm — won’t stop AI capabilities from appearing in Windows laptops. But the Oryon core is also used in high-end smartphones and tablets due for release in the coming weeks.

The market assumption is that the companies will resolve the dispute before it reaches court, not least because of the uncertainty its continuation would create for both. That’s how numerous other licensing and IP disputes in tech end up being quietly forgotten.

That didn’t stop the pair’s share prices from taking a dive as news of the squabble’s latest development became public, a sign that some think this dispute is bound to hurt at least one of the companies.

Ericsson’s return-to-office policy is causing trouble

During the pandemic, working from home was the order of the day at Ericsson. But as employees started to return, the company decided two years ago on 50 percent office attendance — a policy that was never really enforced, according to Jessica Nygren, vice chairman of the Swedish Association of Graduate Engineers’ local branch at Ericsson.

Now the company wants to see more people in the office, and at the end of the summer it announced a new policy: 60 percent attendance.

The company’s press officer Ralf Bagner described it as “a minor adjustment in the guidelines to increase clarity.”

“Today, Ericsson has a hybrid guideline based on the fact that we believe in the human encounter. We also believe that there should be a purpose to where an individual or team chooses to work. This results in an office-first mindset among managers and employees,” he said via email.

Jessica Nygren describes the change very differently.

“The decision came very suddenly, without warning, which meant that many managers took it literally. Every day we see horror examples where managers state that employees should come in three days a week, full days. But the policy actually says 60 percent of working time over a year, which makes a fairly big difference. Following it to the letter strangles flexibility,” she said.

Rules without reason

Bagner wrote, “Ericsson’s hybrid guideline has always given every manager, employee and team the opportunity to work in dialogue on how and where they work best and that everyone understands the importance and benefits of meeting, from an individual and team perspective, and from a social and cultural perspective.”

But this is not what it looks like in practice, according to the union.

“We also believe that the company needs a greater presence in the office — developers need to brainstorm to find new products going forward — but no motive has been presented for why this particular model was chosen.”

As it stands, many are in a bind, according to Jessica Nygren. Some employees come from locations where Ericsson previously had operations that have since downsized or disappeared. They were offered positions in Kista but remained in Örebro or Gävle. Now they are suddenly required to commute five or six hours a day, three days a week.

And when they arrive, the office may be full already.

“If you’ve been commuting for several hours, you want an adequate workplace: We’re supposed to be in the office 60 percent of the time, but there are only seats for 50 percent of the staff, so you have to scramble for a desk. In some places there is a lot of space left over, in others not. Yesterday, for example, two of my colleagues sat in the restroom taking meetings because there was no room. Others go out and sit in their cars,” she says.

Not opposed

At the same time, Jessica Nygren emphasizes that the union is not against more people coming into the office, but that it is a process that should be allowed to take some time and be better adapted to different individuals.

“If we had been told that they wanted to increase presence in the offices, we would have given the thumbs up. But then perhaps they should have announced that it would launch after the turn of the year and taken in feedback in the meantime. Are there parking spaces? What about commuting? How can we attract people? Instead of ‘push,’ as today, it would be better with ‘pull.’”

Managers should also avoid having a harsh policy imposed on them and be able to decide what works in their particular work group, according to Jessica Nygren — someone may need to be in the office for more than three days while someone else can work more from home.

The union, however, believes it has a good dialogue with CEO Börje Ekholm, who has clarified in his weekly newsletters that “one size” does not fit all.

“Now the company management just has to get the other managers to understand it — as it looks now, the employees do not feel safe and they do not feel welcome in the office in cases where they are squeezed by the new policy,” says Jessica Nygren.

Cisco’s new AI agents and assistants aim to ease customer service headaches

Even today, call center customer service can be a nightmare. Wait times are long, customers get frustrated when caught in loops with limited, robotic chatbots, human agents are overworked and don’t always have visibility into the information they need, calls are dropped, information is lost … and the list goes on.

But AI agents are growing ever more sophisticated, showing promise in nearly every area of the enterprise, including customer service.

Cisco is angling to be a leader in AI-powered call center support, and Wednesday at its WebexOne 2024 event, it announced new Webex AI agents and assistants that will work alongside humans to streamline processes and address common headaches and snag points.

“In this age of AI, there’s one belief that we hold close to our hearts, and that is the value of human interaction,” Anurag Dhingra, SVP and GM of Cisco collaboration, said in a pre-briefing. “Experiences matter, and in fact matter more in the age of AI.”

AI helping humans

Over the last 18 to 24 months, Cisco has been working to improve the support experience for both customers and employees, said Dhingra. He pointed to a survey the company did with 1,000 customer experience leaders in 12 industries and 10 global markets. More than half of respondents (60%) said self-service isn’t working, and 60% also reported that human agents are overworked. Further, customer experience leaders reported that 1 out of 3 of those agents lack the customer context needed to deliver the best possible customer experiences.

“A lot of this is rooted in fragmented and siloed investments,” he said (indeed, 88% of respondents to the Cisco survey reported technology silos).

Webex AI Agent, available in early 2025, will bring together conversational intelligence with generative AI to help improve self-service. Dhingra pointed out that, while customers want self-service, 55% avoid it because they find it “rigid and unhelpful.”

Existing AI bots “tend to be scripted, robotic, quite limited in what they can do,” said Dhingra.

The AI Agent platform will include a new design tool, AI Agent Studio, which allows contact centers to build, customize, and deploy voice or digital agents in minutes using the model of their choice, Cisco said. This could help improve customer interactions and resolve issues more quickly and easily.

Dhingra pointed out that well-functioning self-service options can result in a 39% improvement in customer satisfaction (CSAT) scores. Furthermore, predicted Jeetu Patel, Cisco’s EVP and chief product officer, in the near future, 90% to 95% of customer calls will be handled by automated agents. But, he said, “it won’t feel like you’re talking to a robot, it will feel like you’re talking to a human being.”

Human interaction still critical

However, Dhingra pointed out, “there is no substitute for humans; sometimes customers need to talk to humans.”

AI Assistant for Webex Contact Center (to be generally available in Q1 2025) will work alongside humans to quickly supply information. New tools will include:

  • Suggested responses that appear directly in the platform.
  • Context summaries that help facilitate handoffs from AI to human agents, including the background information a human agent needs to get up to speed without customers having to repeat themselves.
  • Dropped-call summaries that capture and document all interactions so agents and customers can pick up where they left off when a call resumes.
  • Automatic CSAT scoring based on data and transcripts of customer interactions.

In addition, to support overworked humans, Cisco plans to release an agent wellness platform that will schedule automatic breaks and shift which channels agents support to increase or decrease capacity based on need.

“It’s around helping humans,” said Dhingra.

Able to pivot, understand limitations

Prior to launch, Cisco is performing an internal pilot of Webex AI Agent in its human resources department. In a demo video shared by Dhingra, a worker interacted on her phone with an AI agent and asked the agent to book time off for her in Workday. The agent asked how many days off she was planning to take, and when, then booked it.

The worker then asked the AI agent to send her a new laptop, to which it replied: “I’m sorry but this is not something I’ve been trained on. I’m still learning as an AI agent, thanks for your understanding. If you need a new laptop, please contact your IT support team.”

This ability of AI to understand its limitations is critical, said Dhingra. “There are guardrails in place to stop it from going off rails.”

In the demo, the employee also interrupted the agent a few times, asking one question, then quickly switching to another. After a quick pause, the agent was able to switch gears. “[The employee] pivoted, asked about one thing, before an answer was given, she changed gears,” said Dhingra. “The AI agent was able to deal with that quite effectively.”

Further, he pointed out, the voice was “a lot more natural sounding, not as robotic as it has been in the past.”

Responsible AI ‘table stakes’

Responsible AI is top of mind for nearly every enterprise leader today, and Dhingra explained that Cisco has developed a set of principles it uses to review its AI systems, looking at transparency, fairness, accountability, reliability, security, and privacy.

The goal is to provide customers with information on how the models work and how they were trained, to “describe how the AI works under the covers,” he said. “We break down our responsible AI by use cases.”

Customers can also provide feedback that Cisco treats “just like security incidents,” he said. If a customer flags an issue, there is what he called an “urgent process in place” that will look into it.

“I do think this is table stakes for everyone right now,” said Dhingra.

Are humans reading your AI conversations?

Generative AI (genAI) is taking over the tech industry. From Microsoft’s genAI-assistant-turned-therapist Copilot being pinned to the Windows taskbar to Google’s Android operating system being “designed with AI at its core,” you can’t install a software update anymore without getting a new whizz-bang AI feature that promises to boost your productivity.

But, when you talk to AI, you’re not just talking to AI. A human might well look at your conversations, meaning they aren’t as private as you might expect. This is a big deal for both businesses working with sensitive information as well as individuals asking questions about medical issues, personal problems, or anything else they might not want someone else to know about.

Some AI companies train their large language models (LLMs) based on conversations. This is a common concern — that your business data or personal details might become part of a model and leak out to other people. But there’s a whole other concern beyond that, and it could be an issue even if your AI provider promises never to train its models on the data you feed it.

Want more about the future of AI on PCs? My free Windows Intelligence newsletter delivers all the best Windows tips straight to your inbox. Plus, you’ll get free in-depth Windows Field Guides as a special welcome bonus!

Why humans are reviewing AI conversations

So why are humans looking at those conversations? It’s all about quality assurance and spotting problems. GenAI companies may have humans review chat logs to see how the technology is performing. If an error occurs, they can identify it. Think of it as performing “spot checks” with feedback from human reviewers then used to train the genAI model and improve how it will respond in the future.

Companies also review conversations when they suspect abuse of their service. It’s easy to imagine that the companies could also use AI tools themselves to dig through the masses of chat logs and find ones where there seems to be some sort of problem or a safety issue.

This isn’t new to AI. For example, Microsoft has had contractors listening to people’s Skype audio conversations for quality assurance purposes as well. Yikes.

A real privacy concern

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day.

But people might share data that’d be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots, with many banning their employees from using ChatGPT at work. They might have specific AI tools they require employees to use. Clearly, they realize that any data fed to a chatbot gets sent to that AI company’s servers. Even if it isn’t used to train genAI models in the future, the very act of uploading data could be a violation of privacy laws such as HIPAA in the US.

For many knowledge workers, it’s tempting to give ChatGPT a big data set of customer details or company financial documents and have it do some of that informational grunt work. But, again, a human reviewer might see that data. The same is true when these tools are put to personal use.

Copilot on Windows 11
Humans may review your conversations with Microsoft Copilot.

Chris Hoffman, IDG

Do ChatGPT, Copilot, and Gemini use human reviewers?

To be clear, all signs suggest humans are not actively reading the vast majority of conversations with AI chatbots. There are far too many conversations to make that possible. Still, the main genAI tools you’ve heard of do at least occasionally use human reviews.

For example:

  • ChatGPT lets you turn off chat history by activating a “temporary chat.” With chat history on, the conversations will be used to train OpenAI’s models. With a temporary chat, your conversations won’t be used for model training, but they will be stored for 30 days for possible review by OpenAI “for safety purposes.” ChatGPT’s Enterprise plans provide more data protections, but human reviewers are still involved at times.
  • Microsoft says Copilot conversations are also reviewed by humans in some situations: “We include human feedback from AI trainers and employees in our training process. For example, human feedback that reinforces a quality output to a user’s prompt, improving the end user experience.”
  • Google’s Gemini also uses human reviewers. Google spells it out: “Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.”
ChatGPT Temporary Chat
ChatGPT’s Temporary Chat option provides more privacy, but humans may still review some of your conversations.

Chris Hoffman, IDG

How to ensure no one is reading your AI conversations

Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.

In the long run, AI models that run locally could prove to be the ideal answer. The dream is that an AI chatbot would run entirely on your own computer or mobile device and wouldn’t ever need to “phone home.” Companies could run their own AI software in their own data centers, if they so chose — keeping all of their data entirely under their own control.

Despite all the criticism of Microsoft’s Recall tool, which will let you search through your Windows 11 desktop usage on a Copilot+ PC when it launches, Recall had the right idea in many ways. It will do everything on your own PC without sending things to Microsoft. Human reviewers won’t see it.

On the flip side, Google recently launched AI history search for Chrome — and, again, human reviewers might examine your browser history searches if you try it out.

Chrome AI History Search
Google warns that humans may see your browsing history if you turn on AI history search in Chrome.

Chris Hoffman, IDG

Two sides of the AI-human question

Let’s come back to earth. I don’t mean to be Chicken Little here: The average person’s ChatGPT conversations or Copilot chats probably aren’t being reviewed. But what’s important to remember is that they could be. That’s part of the deal when you sign up to use these services. And now more than ever, that’s something critical for everyone to keep in mind — from businesses using AI professionally to people chatting with Copilot about their hopes and dreams.

Let’s stay in touch! My free Windows Intelligence newsletter delivers all the best Windows advice straight to your inbox. Plus, get free Windows Field Guides just for subscribing!

5 advanced Gboard tricks for smarter Android typing

QWERTY, QWERTY, QWERTY. QWERTY.

Oh — hi there! Sorry for the slightly nonsensical greeting. I’ve been thinking a lot about keyboards this week, y’see, and how that trusty ol’ QWERTY surface has evolved in our lives.

Also, saying “QWERTY” over and over again is surprisingly fun to do. Go ahead and try it. I’ll wait.

Back? Cool. So, about that QWERTY contemplation: ‘Twas a time not so long ago that our QWERTY interactions on the mobile-tech front revolved almost entirely around actual physical keys. (Drooooooid, anyone?) Then, even when we started relying on on-screen QWERTY surfaces, we were hunting and pecking and doing an awful lot of correcting.

I remember when Google bought a promising, now-forgotten Android keyboard project called BlindType. BlindType’s entire premise was that it was smart enough to figure out what you were trying to type, even when your fingers didn’t hit all the right letters.

The concept seemed downright revolutionary at the time — which is funny now, of course, ’cause that feels like such a common and expected feature in the land o’ Android keyboards. But my goodness, have we come a long way.

These days, you can absolutely type like a clumsy caribou and still see your thoughts come out mostly the way you’d intended. You can seamlessly switch between tapping and swiping, too, and you can even speak what you want to write with surprisingly decent reliability (…most of the time).

But when it comes to Google’s Gboard keyboard, your options for intelligent text input don’t end there. In addition to its many useful shortcuts and shape-shifting form choices, Gboard has some out-of-sight options for advanced text interactions that’ll save you time and make your days significantly easier.

They aren’t things you’ll use all the time, in place of the now-standard sloppy-tappin’, wild-swipin’, and hopeful-speaking methods. Rather, they’re specific tools you’ll use alongside those other Android text input options — like smart supplements for the especially enlightened among us.

Check ’em out for yourself and see which of these Gboard goodies are either new to you or maybe just gems you’ve gotten out of the habit of using.

[Psst: Love shortcuts? My free Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Get your first lesson this instant!]

Gboard Android trick #1: The on-demand scan

First up is a super-handy way to import text from the real world and then use it as a starting point for whatever it is you’re typing.

It’s all too easy to overlook or forget, but Gboard has a simple built-in trick for snagging text from a monitor, document, or anything else around you and then importing it directly into your current text field.

Just find the icon that looks like a document with arrows on its corners — either in Gboard’s top row or in the menu of options that comes up when you tap the four-square menu button in the keyboard’s upper-left corner. (And remember: You can always change what’s in that Gboard top row by touching and dragging any icons in that full menu area and placing ’em in whatever position you want.)

Tap that bad boy, point your phone at the text in question — and hey, how ’bout that?!

Google Gboard Android text input: Scan
Scanned words, in a text field and ready — with precisely three taps in Google’s Gboard Android keyboard.

JR Raphael, IDG

You’ve got words from the real world right in front of you — ready to write around or edit as you see fit.

Gboard Android trick #2: Write right

Few mere mortals realize it, but in addition to tapping, swiping, and talking, you can also enter text into any field on Android with some good old-fashioned handwriting on your fancy phone screen.

It’s an interesting option to keep in mind for moments when you feel like your own scribbly scrawling might be more efficient than any other text input method.

This one takes a little more legwork to get going the first time, but once you do that, it’ll never be more than a quick tap away:

  • First, head into Gboard’s settings by tapping the four-square menu icon in the keyboard’s upper-left corner and then tapping the gear-shaped Settings option in the full Gboard menu.
  • Select “Languages” followed by the Add Keyboard button. Type “English (US)” (or whatever language you prefer), then make sure “Handwriting” is active and highlighted at the top of the screen.
  • Tap the Done button to apply the changes.

Now, make your way to any open text field to pull up Gboard, and you should be able to either press and hold the space bar or hit the newly present globe icon next to it to toggle between the standard keyboard setup and your snazzy new handwriting recognition system.

And once you’ve got that handwriting canvas open, all that’s left is to write, write, write away and watch Google’s virtual genie translate your illegible squiggles into regular text almost instantly.

Google Gboard Android text input: Handwriting
Gboard’s handy handwriting option in action. (Clarity not required.)

JR Raphael, IDG

As you can see above, it works even if your handwriting resembles the harried scrawls of a clumsy caribou. (No offense intended to my caribou comrades.)

Gboard Android trick #3: Quick clips

One of my all-time favorite Gboard tricks is the keyboard’s intelligent integration of the Android system clipboard — and some incredibly helpful tricks that come along with that.

Look for the clipboard-shaped icon either in the keyboard’s top row or within the main Gboard menu to get started. The first time you tap it, you might have to activate the system (via the toggle in the upper-right corner of its interface) and also grant Gboard permission to access your system clipboard. You may also need to mosey back into the Gboard settings to find the “Clipboard” section and enable all the options there to get every piece of the puzzle up and running.

Once you do, though, good golly, is this thing amazing. It’ll automatically show every snippet of text and any images you’ve copied recently, for one-tap inserting into whatever text field you’re working in — and it’ll show your recently captured screenshots for the same purpose, too.

Google Gboard Android text input: Clipboard
Gboard’s clipboard integration makes it easy to find anything you’ve copied and insert it anywhere.

JR Raphael, IDG

Perhaps most useful of all, though, is the Gboard clipboard’s capability to store commonly used items and then make ’em readily available for you to insert anytime, anywhere. You could use that for email addresses, physical addresses, Unicode symbols, snippets of code, or even just phrases you find yourself typing out often in Very Important Work-Related Emails™.

Whatever the case may be, just copy the item in question once, then pull up the Gboard clipboard and press and hold your finger onto the thing you copied. Tap the “Pin” option that pops up, and poof: That text (or image) will be permanently stored in the bottom area of your Gboard clipboard for easy retrieval whenever you need it.

Google Gboard Android text input: Clipboard pin
Pinned items in the Gboard clipboard are like your own on-demand scratchpad for easy inserting anywhere.

JR Raphael, IDG

As an extra bonus, Gboard also now syncs your pinned clipboard data and continues to make any pinned items available on any Android device where you sign in.

Gboard Android trick #4: Your personal editor

When you’re banging out a Very Important Business Email And/Or Document™ on your phone, it’s all too easy to mix up a word or inadvertently accept an errant autocorrect. We’ve all been there — and all had the same ducking reaction — right?

You may not always have a second set of human (or even caribou) eyes to look over your words whilst composing on the go, but Gboard’s recently added proofreading feature can at least give you a second layer of assurance before you hit that daunting Send button.

To find it, tap the four-square menu icon in Gboard’s upper-left corner and look for the Proofread button — with an “A” and a checkmark on it.

Tap that bad boy and tap it good, and in a split second, Gboard will analyze whatever text you’ve entered and offer up suggestions to improve it.

Google Gboard Android text input: Proofread
Need a quick confirmation that your text makes sense? Gboard’s proofreading feature’s got your back.

JR Raphael, IDG

Not bad, Gboard. Not bad. You is clearly the one who is gooder at typings today.

Gboard Android trick #5: The translation station

Last but not least in our text input improvement list is a serious time-saver and communication-booster, and that’s the Gboard Android app’s built-in translation engine.

Hit that four-square menu icon in your keyboard’s upper-left corner once more, find the Translate button, and tap it — then select whatever languages you want and type directly into the Gboard translate box.

Gboard will translate your text in real time and insert the result, in whatever language you selected, directly into the text field you’re working in.

Google Gboard Android text input: Translate
Any language, anytime, with Gboard’s on-demand translation system.

JR Raphael, IDG

Pas mal, eh? 

Keep all these advanced input tricks in mind, and you’ll be flyin’ around your phone’s keyboard like a total typing pro — with or without a caribou at your side.

Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for your phone!

Asana launches AI Studio, a no-code tool for building AI agents

Asana has launched AI Studio, a no-code tool for building generative AI agents that can automate work planning tasks. 

The company first unveiled its “AI teammate” plans in June, promising to enable customers to build autonomous agents that can perform tasks independently within the work management app.  

On Tuesday, Asana said that the AI agent builder — renamed Asana AI Studio — is now out of beta and available to customers on its Enterprise and Enterprise+ plans in “early access.” There are two options for accessing AI Studio at this stage: a free plan with daily limits on usage, and a paid add-on. (Asana declined to provide specifics on pricing.)

Customers trialing AI Studio during the beta noted several advantages when deploying AI agents, said Paige Costello, head of AI at Asana. “The key benefits we’re seeing are the speed of decision-making and the overall acceleration of work and reduction in administrative and busy work,” she said.

“There is tremendous potential in AI-based agents to expedite workflow,” said Wayne Kurtzman, research vice president covering social, communities and collaboration at IDC. “The ability to deploy agents in the stream of work, where teams work, and without code becomes a powerful proposition.”  

With the launch, Asana also announced additional features for AI Studio. These include a wider variety of potential AI agent actions, more control over smart workflow capabilities such as data access and costs, and an increase in the amount of context the AI agent can reference in its decision-making. 

Users can also view a record of an AI agent’s actions and decisions. “You can actually dig into work that has happened and understand why the custom agent that you’ve built made a specific choice and undo the selection that it’s done,” said Costello. 

Users can choose from four language models to power AI agents: Anthropic’s Claude 3.5 Sonnet and Claude 3 Haiku, and OpenAI’s GPT-4o and GPT-4o mini. 

With AI agents able to complete tasks autonomously, the propensity for language models to “hallucinate” and provide inaccurate outputs could be a concern for businesses. Costello said there are safeguards in place to help reduce the likelihood of AI agents generating and acting on incorrect information, and argued that those designing the AI workflows are “in the driver’s seat.” 

For example, a user can require an AI agent to seek human approval before carrying out actions deemed higher risk, such as sending an external email to a customer. “People are the decision makers –– they’re the ones ultimately accountable for work,” said Costello.

Adoption of AI agents is at an early stage for most organizations, but it’s accelerating, said Margo Visitacion, vice president and principal analyst at Forrester, covering application development and delivery. Successful deployments will require “experimentation, failing fast, and learning from those experiments,” she said.

“It takes the right level of oversight, focus on the problems you’re solving, and gathering feedback to ensure you’re using the right model that suits your needs,” said Visitacion.

AI dominates Gartner’s 2025 predictions

Artificial Intelligence continues to dominate this week’s Gartner IT Symposium/Xpo, as well as the research firm’s annual predictions list. 

“It is clear that no matter where we go, we cannot avoid the impact of AI,” Daryl Plummer, distinguished vice president analyst, chief of research, and Gartner Fellow, told attendees. “AI is evolving as human use of AI evolves. Before we reach the point where humans can no longer keep up, we must embrace how much better AI can make us.”

Continue reading on Network World for Gartner’s top predictions for 2025.

The iPad mini is still the best small tablet around

We waited three years for Apple to update the iPad mini, and what we got is more of the same — boosted with a much better processor. Apple’s iPad mini has consistently been positioned as either the most versatile or the most underpowered tablet in the company’s lineup. That’s changed with the latest iteration, which remains small enough to take anywhere but is also more powerful than ever, thanks to a new A17 Pro chip. 

Whole lotta processor

If you’ve used an iPad mini before, you know what you’re getting: a reliable, powerful, and eminently mobile device with a large display you can slip inside an inner coat pocket and use just about anywhere, except in the rain.

The tablet is available in two new colors (four in all: space gray, starlight, purple, and blue), but the chip is the real substance here. Apple promises a 30% faster CPU, a 25% increase in GPU performance, and double the machine-learning performance of the last-generation iPad mini. Compared to the last model, that processor means you really will feel the difference in performance — and the internal memory has been beefed up to 8GB, which also makes a big difference.

Geekbench 6 tests on the review device generated single-core results of 2,520 and 6,440 on multi-core performance. In comparison, the iPad mini 6 with an A15 Bionic chip achieved 2,121 single-core, and 5,367 multi-core scores. While aggregate test results will be more accurate, my own testing confirms substantial improvement in the processor. What you are getting is an iPad that will (a) run Apple Intelligence, and (b) deliver the performance you require to run apps for the next few years. 
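For the curious, the quoted scores work out to roughly a 19% single-core and 20% multi-core uplift over the iPad mini 6. A quick back-of-the-envelope check, using only the numbers above:

```python
# Geekbench 6 scores quoted in the review:
# iPad mini 7 (A17 Pro) vs. iPad mini 6 (A15 Bionic)
a17_single, a17_multi = 2520, 6440
a15_single, a15_multi = 2121, 5367

single_gain = (a17_single / a15_single - 1) * 100
multi_gain = (a17_multi / a15_multi - 1) * 100

print(f"Single-core gain: {single_gain:.0f}%")  # 19%
print(f"Multi-core gain: {multi_gain:.0f}%")    # 20%
```

That falls short of Apple’s claimed 30% CPU improvement, but it is still a clear generational step.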

That’s great for consumer users, but also important to education and enterprise deployments. This is, after all, an iPad that seems a good fit for real-world business applications such as retail, hospitality, warehousing, or distribution. You’ve probably already seen it used in one of those fields, so the processor upgrade will be a significant benefit for firms seeking to deploy AI solutions within their digital transformation efforts.

I recently learned that pilots sometimes fly with an iPad mini so they can use it to check flight maps and flight support apps while they are in the air. They (and their passengers) should be happy with the lack of lag and hardware-accelerated ray tracing they get while using those apps in flight. 

The song remains the same

The display remains the same — in this case, the well-received Liquid Retina panel we saw last time around. You can expect a wide range of supported colors at 2255×1488-pixel resolution (326ppi) and a nice and bright 500 nits. What changes is that you now also gain Wide Color (P3) and True Tone support. (The latter uses advanced sensors to adjust the color and intensity of the display to match the ambient light, so images appear more natural.)
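Those numbers are internally consistent, too: a 2255×1488 panel at 326ppi implies the mini’s roughly 8.3-inch diagonal. A quick sanity check using only the figures quoted above:

```python
import math

# Display specs quoted above: 2255 x 1488 pixels at 326 pixels per inch
diagonal_px = math.hypot(2255, 1488)  # diagonal length in pixels
diagonal_in = diagonal_px / 326       # convert to inches via pixel density

print(f"Diagonal: {diagonal_in:.1f} inches")  # 8.3
```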

I’ve been testing the iPad mini for a few days and have not experienced any cases of so-called “jelly-scrolling,” when one side of the screen refreshes at a different rate than the other when scrolling up and down.

When it comes to most of the hardware, the music hasn’t changed. You’ll still find it an easy-to-hold device because it is light, thin, and small. Dimensions remain 7.7 x 5.3 x 0.25 inches; the weight is 0.65 pounds, the same as the last generation. You get USB-C (which now supports data transfers at 10Gbps, twice as fast as before), Touch ID, and no headphone port.

Under-the-hood changes are all about networking — Wi-Fi 6E, 5G support, and Bluetooth 5.3. The cameras remain more or less the same, though images captured on the device use machine intelligence to optimize the results. One older image Apple Intelligence selected as my Home screen picture really pops thanks to the AI tweaks.

Good times, AI times

Now, for most mortals, Apple Intelligence remains something that’s nice to have, rather than something essential. That’s how it will remain until the first services under that moniker appear next week. On this device, one thing Apple Intelligence is already good at is summarizing emails and helping write better ones, while the Siri improvements in iOS 18.1 bode well for additional contextual intelligence expected to appear next year.

Given that Apple Intelligence isn’t available yet and won’t be available for some time in China or Europe, it’s too early to surmise how much it will change the user experience. But even while that jury remains out, the iPad mini continues to keep its promise of a good balance between mobility and usable display space. Its processor also gives third-party AI developers a platform on which to build other, non-Apple AI-augmented experiences. That could be useful to some businesses. (And business and education users might appreciate that the camera takes excellent document scans.)

Bad times

One complaint concerns the screen refresh rate. While other tablets have already jumped to 120Hz, the iPad mini remains confined to 60Hz. I don’t know if that decision is based on costs, heat dissipation, energy management, or parsimony, but I have seen enough people commenting on this to know that it’s something Apple will need to deal with in the future. 

If you are using an Apple Pencil with your current iPad mini, I have bad news: the new model will not support any Apple Pencil other than the newish Apple Pencil Pro and the USB-C Apple Pencil. If you’ve been using a second-generation Apple Pencil, you’re out of luck. (First-generation pencil support in iPad mini disappeared with iPad mini 6.)

It is also interesting that Apple continues to avoid kitting out its iPad mini or standard iPad with the M-series chips found inside the iPad Air and iPad Pro. The decision makes it clear that Apple is ultimately using the built-in processor as its way to compete with itself, as no one else in the tablet market is really in its league. Want a more performant tablet? Apple has them, but not in the mini range.

Finally, when it comes to the built-in camera, you probably have a better camera in your smartphone than the one here. It’s good enough, it has been updated to deliver better results, and Apple’s image-intelligence software means it takes good pictures, if you want to take them with a tablet. It is more than suitable for video conferencing, of course.

What else to know

There is one more thing: the price. Just as in 2021, the entry-level model still costs $499 (Wi-Fi only; add $150 for the cellular model), but it now ships with 128GB of storage rather than 64GB; 256GB and 512GB models are also available. More storage is always a win, but don’t forget that Apple Intelligence requires at least 4GB of that space for its own use. 

Should you buy it?

I’ve used iPads since the beginning. Over that time, my preferences have kind of coalesced around the iPad Air, which I think provides a brilliant balance between power and affordability, and the iPad mini, which I rate highly for its inherent portability, but always deserved a little more power. 

The move to boost internal storage and memory while pepping things up with an A17 Pro chip means the small device can handle most tasks; you probably won’t use it extensively for video processing in the field or advanced image editing, but you could use it for some of both. You’ll also enjoy reading books, gaming, watching videos, or listening to music. (Apple’s audio teams really know how to create great soundscapes in these devices.)

The processor upgrade means that if you are using an iPad mini 6 or earlier, a move to this model makes sense — particularly if you want to explore what Apple Intelligence is all about. While it’s mostly the same, it is overall better, and if you’ve convinced yourself you have a solid reason to invest in a smaller tablet, the iPad mini 7 remains the best small tablet around.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.