Month: November 2024

O2 unleashes AI grandma on scammers

Research by British telecommunications provider O2 has found that seven in ten Britons (71 percent) would like to take revenge on scammers who have tried to trick them or their loved ones. At the same time, however, one in two people does not want to waste their time on it.

AI grandma against telephone scammers

O2 now wants to remedy this with an artificial intelligence called Daisy. As the “head of fraud prevention”, it’s the job of this state-of-the-art AI granny to keep scammers away from real people for as long as possible with human-like chatter. To activate Daisy, O2 customers simply have to forward a suspicious call to the number 7726.

Daisy combines several AI models that work together: they listen to the caller and convert the voice to text, generate responses in keeping with the character’s “personality” via a custom large language model with a personality layer, and then play those responses back through a custom text-to-speech model. All of this happens in real time, allowing the tool to hold a human-like conversation with a caller.
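
O2 hasn’t published Daisy’s internals, but the pipeline described above (speech-to-text, a persona-driven language model, then text-to-speech, looping in real time) might be sketched roughly as follows; the three stage functions are hypothetical stand-ins, stubbed so the control flow runs end to end.

```python
# Minimal sketch of a Daisy-style scambaiting loop. The three stage functions
# are hypothetical stand-ins for O2's proprietary models, stubbed so the
# control flow runs end to end.

PERSONA = ("You are Daisy, a chatty, slightly bewildered grandmother. "
           "Ramble politely, ask for things to be repeated, never share real details.")

def transcribe_audio(chunk: str) -> str:
    # Stand-in for the speech-to-text model; here the "audio" is already text.
    return chunk

def generate_reply(history: list[dict]) -> str:
    # Stand-in for the persona-driven language model.
    last = history[-1]["content"]
    return f"Ooh, sorry dear, my hearing aid... did you say '{last}'? Tell me again, slowly."

def synthesize_speech(text: str) -> bytes:
    # Stand-in for the text-to-speech model.
    return text.encode("utf-8")

def handle_call(audio_stream):
    history = [{"role": "system", "content": PERSONA}]
    for chunk in audio_stream:                 # 1. listen to the caller
        history.append({"role": "user", "content": transcribe_audio(chunk)})
        reply = generate_reply(history)        # 2. respond in character
        history.append({"role": "assistant", "content": reply})
        yield synthesize_speech(reply)         # 3. speak, keep them talking

if __name__ == "__main__":
    for audio in handle_call(["Your bank account has been compromised."]):
        print(audio.decode("utf-8"))
```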

Though “human-like” is an understatement: Daisy was trained with the help of Jim Browning, one of the most famous “scambaiters” on YouTube. Playing the persona of a lonely and seemingly somewhat bewildered older lady, she tricks the fraudsters into believing they have found a perfect target, while in reality she beats them at their own game.

AI is dumber than you think

OpenAI recently introduced SimpleQA, a new benchmark for evaluating the factual accuracy of large language models (LLMs) that underpin generative AI (genAI).

Think of it as a kind of SAT for genAI chatbots consisting of 4,326 questions across diverse domains such as science, politics, pop culture, and art. Each question is designed to have one correct answer, which is verified by independent reviewers. 

The same question is asked 100 times, and the frequency of each answer is tracked. The idea is that a more confident model will consistently give the same answer.
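
As a rough illustration of that consistency check (not OpenAI’s actual evaluation code), the bookkeeping could look like the sketch below, where ask_model is a hypothetical stand-in for the chatbot being graded.

```python
# Sketch of the repeated-question consistency check described above.
# ask_model() is a hypothetical stand-in for the chatbot under test.
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Stand-in: a real harness would call the model's API here.
    return random.choice(["1912", "1912", "1912", "1911"])

def answer_distribution(question: str, trials: int = 100) -> Counter:
    return Counter(ask_model(question) for _ in range(trials))

dist = answer_distribution("What year did the Titanic sink?")
top_answer, count = dist.most_common(1)[0]
print(f"Most frequent answer: {top_answer} ({count}/{sum(dist.values())} runs)")
# A confident, well-calibrated model keeps giving the same answer;
# a scattered distribution signals guessing.
```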

The questions were selected precisely because they have previously posed challenges for AI models, particularly those based on OpenAI’s GPT-4. This selective approach means that the low accuracy scores reflect performance on particularly difficult questions rather than the overall capabilities of the models.

The idea is similar to the SAT, which emphasizes not information that anybody and everybody knows, but harder questions that high school students have to work hard to master. The benchmark results show that OpenAI’s models aren’t particularly accurate on the questions it asked. In short, they hallucinate. 

OpenAI’s o1-preview model achieved a 42.7% success rate. GPT-4o followed with a 38.2% accuracy. And the smaller GPT-4o-mini scored only 8.6%. Anthropic did worse than OpenAI’s top model; the Claude-3.5-sonnet model managed to get just 28.9% of the answers correct.

All these models got an F, grade-wise, providing far more incorrect answers than correct ones. And the answers are super easy for a human.

Here are the kinds of questions that are asked by SimpleQA: 

  • What year did the Titanic sink?
  • Who was the first President of the United States?
  • What is the chemical symbol for gold?
  • How many planets are in our solar system?
  • What is the capital city of France?
  • Which river is the longest in the world?
  • Who painted the Mona Lisa?
  • What is the title of the first Harry Potter book?
  • What does CPU stand for?
  • Who is known as the father of the computer?

These are pretty simple questions for most people to answer, but they can present a problem for chatbots. One reason these tools struggled is that SimpleQA questions demand precise, single, indisputable answers. Even minor variations or hedging can result in a failing grade. Chatbots do better with open-ended overviews of even very complex topics but struggle to give a single, concise, precise answer. 

Also, the SimpleQA questions are short and self-contained and don’t provide a lot of context. This is why providing as much context as possible in the prompts that you write improves the quality of responses. 
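
To make that advice concrete, here is a toy comparison of a bare question and a context-rich prompt; the handbook excerpt and all wording are invented purely for illustration.

```python
# Two ways to ask the same thing; the second gives the model material to
# ground its answer in, per the advice above. The policy text is invented.
bare_prompt = "What is our company's remote-work stipend?"

context = """Excerpt from the 2024 employee handbook (section 4.2):
Employees working remotely at least three days per week receive a
home-office stipend of $500 per calendar year."""

grounded_prompt = (
    "Using only the excerpt below, answer the question.\n\n"
    f"{context}\n\n"
    "Question: What is our company's remote-work stipend?\n"
    "If the excerpt does not say, reply 'not stated'."
)
print(grounded_prompt)
```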

Compounding the problem, LLMs often overestimate their own accuracy. SimpleQA queried chatbots on what they think is the accuracy of their answers; the models consistently reported inflated success rates. They feign confidence, but their internal certainty may be low.
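
A minimal sketch of that calibration comparison might look like this; ask_with_confidence is a hypothetical stand-in for a real model call, faked here as an overconfident responder.

```python
# Sketch of comparing a model's stated confidence with its measured accuracy.
# ask_with_confidence() is a hypothetical stand-in for the model under test.
def ask_with_confidence(question: str) -> tuple[str, float]:
    # A real harness would prompt the model to answer AND report its
    # confidence; here we fake a wrong but very confident model.
    return "1911", 0.95

qa_pairs = [("What year did the Titanic sink?", "1912")]

stated, correct = [], []
for question, gold in qa_pairs:
    answer, confidence = ask_with_confidence(question)
    stated.append(confidence)
    correct.append(answer.strip() == gold)

print(f"Mean stated confidence: {sum(stated) / len(stated):.0%}")
print(f"Actual accuracy:        {sum(correct) / len(correct):.0%}")
# SimpleQA found the first number is consistently much higher than the second.
```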

LLMs don’t really think

Meanwhile, newly published research from MIT, Harvard, and Cornell University shows that while LLMs can perform impressive tasks, they lack a coherent understanding of the world.

As one of their test examples, the researchers found that LLMs can generate accurate driving directions in complex environments like New York City. But when researchers introduced detours, the models’ performance dropped because they didn’t have an internal representation of the environment (as people do). Closing just 1% of streets in New York City led to a drop in the AI’s directional accuracy from nearly 100% to 67%. 

Researchers found that even when a model performs well in a controlled setting, it might not possess coherent knowledge structures necessary for random or diverse scenarios. 

The trouble with AI hallucinations

The fundamental problem we all face is this: Industries and individuals are already relying on LLM-based chatbots and generative AI tools for real work in the real world. The public, and even professionals, believe this technology to be more reliable than it actually is. 

As one recent example, OpenAI offers an AI transcription tool called Whisper, which hospitals and doctors are already using for medical transcriptions. The Associated Press reported that a version of Whisper was downloaded more than 4.2 million times from the open-source AI platform HuggingFace.

More than 30,000 clinicians and 40 health systems, including the Children’s Hospital Los Angeles, are using a tool called Nabla, which is based on Whisper but optimized for medical lingo. The company estimates that Nabla has been used for roughly seven million medical visits in the United States and France. 

As with all such AI tools, Whisper is prone to hallucinations.

One engineer who looked for Whisper hallucinations in transcriptions found them in every document he examined. Another found hallucinations in half of the 100 hours of Whisper transcriptions he analyzed. 
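
There is no standard recipe for auditing a transcript, but a rough first pass with the open-source whisper package could flag segments the model itself scored as uncertain for human review; the thresholds below are arbitrary illustrative choices, and low confidence is only a weak proxy for hallucination.

```python
# Rough first-pass audit of a Whisper transcript: flag segments the model
# itself scored as low-confidence for human review. Requires the open-source
# `openai-whisper` package; thresholds and file name are illustrative only.
import whisper

model = whisper.load_model("base")
result = model.transcribe("clinic_visit.wav")   # hypothetical audio file

for seg in result["segments"]:
    suspicious = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5
    if suspicious:
        print(f"[review] {seg['start']:.1f}s-{seg['end']:.1f}s: {seg['text']}")
```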

Professors from the University of Virginia looked at thousands of short snippets from a research repository hosted at Carnegie Mellon University. They found that nearly 40% of the hallucinations were “harmful or concerning.”

In one transcription, Whisper even invented a non-existent medication called “hyperactivated antibiotics.”

Experts fear the use of Whisper-based transcription will result in misdiagnoses and other problems.

What to do about AI hallucinations

When you get a diagnosis from your doctor, you might want to get a second opinion. Likewise, whenever you get a result from ChatGPT, Perplexity AI, or some other LLM-based chatbot, you should also get a second opinion.

You can use one tool to check another. For example, if the subject of your query has original documentation — say, a scientific research paper, a presentation, or a PDF of any kind — you can upload those original documents into Google’s NotebookLM tool. Then, you can copy results from the other tool, paste them into NotebookLM, and ask if it’s factually accurate. 
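
NotebookLM is a web tool rather than an API, but the same second-opinion workflow can be scripted against any chat model you have access to. Here is a hedged sketch using the standard OpenAI Python client; the model name, file path, and claim are placeholders.

```python
# Sketch of the "second opinion" workflow: hand a draft answer plus the
# original source to a second model and ask it to fact-check.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

source_text = open("research_paper.txt", encoding="utf-8").read()
draft_answer = "The study reports a 42% reduction in error rates."

review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a fact-checker. Use ONLY the provided source."},
        {"role": "user",
         "content": f"Source:\n{source_text}\n\nClaim:\n{draft_answer}\n\n"
                    "Is the claim supported by the source? Answer "
                    "'supported', 'contradicted', or 'not found', then explain."},
    ],
)
print(review.choices[0].message.content)
```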

You should also check original sources. Fact-check everything. 

Chatbots can be great for learning, for exploring topics, for summarizing documents and many other uses. But they are not reliable sources of factual information, in general. 

What you should never, ever do is copy results from AI chatbots and paste them into something else to represent your own voice and your own facts. The language is often a bit “off.” The emphasis of points can be strange. And it’s a misleading practice. 

Worst of all, the chatbot you’re using could be hallucinating, lying or straight up making stuff up. They’re simply not as smart as people think.

FTC eyes Microsoft’s cloud practices amid antitrust scrutiny

The US Federal Trade Commission (FTC) is reportedly preparing to investigate Microsoft for potentially anticompetitive practices in its cloud computing division. This inquiry centers on whether Microsoft is abusing its market dominance by deploying restrictive licensing terms to dissuade customers from switching from its Azure platform to competitors, the Financial Times reported.

According to the report, the practices under scrutiny include sharply raising subscription fees for customers looking to switch providers, imposing high exit charges, and reportedly making Office 365 less compatible with competitor cloud services.

The investigation reflects the agency’s broader push, led by FTC Chair Lina Khan, to address Big Tech’s influence in sectors such as cloud services, with bipartisan support for curbing monopolistic practices.

In November 2023, the FTC began assessing cloud providers’ practices in four broad areas — competition, single points of failure, security, and AI — and sought feedback from stakeholders in academia, industry, and civil society.

The majority of the feedback the commission received highlighted concerns over licensing constraints that limit customers’ choices. 

Microsoft’s cloud strategy under fire

The inquiry reported by the Financial Times is still in its early stages, but an FTC challenge could significantly impact Microsoft’s cloud operations, which have grown rapidly in recent years.

“Interoperability and the fear of vendor lock-in are important criteria for enterprises selecting cloud vendors,” said Pareekh Jain, CEO of Pareekh Consulting. “This could create a negative perception of Microsoft. Previously, Microsoft faced a similar probe regarding the interoperability of Microsoft Teams.”

This scrutiny aligns with global regulatory focus: In the UK, the Competition and Markets Authority (CMA) is investigating Microsoft and Amazon following complaints about restrictive contracts and high “egress fees,” which make switching providers costly. Similarly, Microsoft recently sidestepped a formal probe in the European Union after it reached a multi-million-dollar settlement with rival cloud providers, addressing concerns of monopolistic practices.

Neither the FTC nor Microsoft had responded to questions about the reported investigation by press time.

Microsoft’s position in the cloud market

Cloud computing has rapidly expanded, with industry spending expected to reach $675 billion in 2024, according to Gartner. Microsoft controls roughly 20% of the global cloud market, second only to Amazon Web Services (31%) and ahead of Google Cloud (12%), according to Statista. Tensions have risen between the leading providers, with Microsoft accusing Google of using “shadow campaigns” to undermine its position by funding adversarial lobbying efforts.

“It seems Google has two ultimate goals in its astroturfing efforts: distract from the intense regulatory scrutiny Google is facing around the world by discrediting Microsoft and tilt the regulatory landscape in favor of its cloud services rather than competing on the merits,” Microsoft Deputy General Counsel Rima Alaily said in a statement in October.

AWS has also accused Microsoft of anticompetitive practices in the cloud computing segment and complained to the UK CMA.

These top cloud providers had already filed an antitrust case against Microsoft in 2022 alleging that Microsoft is using its software licensing terms to restrict European businesses’ options in selecting cloud providers for services like desktop virtualization and application hosting.

Previous FTC interventions and growing cloud sector scrutiny

This move follows the FTC’s legal challenge against Microsoft’s $75 billion acquisition of Activision Blizzard, which faced antitrust concerns around Microsoft’s cloud gaming business. While a federal court allowed the acquisition to proceed, the FTC’s appeal highlights its commitment to maintaining oversight of Big Tech’s market reach.

Since its inception, cloud computing has evolved from simple storage solutions to a cornerstone of AI development, with Microsoft, Amazon, and Google competing for contracts that power AI model training and deployment.

If pursued, this inquiry could lead to intensified regulations on Microsoft’s cloud strategy, underscoring the FTC’s commitment to protecting competitive markets in sectors increasingly dominated by a few key players. Neither the FTC nor Microsoft has publicly commented on the matter.

“Moving forward, all hyperscalers should commit to the interoperability of their cloud solutions in both intent and practice,” Jain noted, adding, “failing to do so may expose them to investigations that could damage their brand and business.”

Shared blame

If enterprises are finding themselves locked in to high costs, though, some of the blame may fall on them, suggested Yugal Joshi, a partner at Everest Group.

“Enterprises are happy signing highly discounted bundled deals, and when these financial incentives run out they complain about lock-in. Many of them already know what they are getting into but then are focused on near-term discounts over long-term interoperability and freedom to choose. Given the macro economy continues to struggle, price-related challenges are pinching harder,” Joshi said. “Therefore, clients are becoming more vocal and proactive about switching vendors if it saves them money.”

Microsoft has been a beneficiary of this, he said, because some clients are planning to move, and some have already moved, to its Dynamics platform from Salesforce.

Getting started with Google Password Manager

If you’re still trying to remember all of your passwords and then type ’em into sites by hand, let me tell you: You’re doing it wrong.

With all the credentials we have to keep track of these days, there’s just no way the human brain can handle the task of storing the specifics — at least, not if you’re using complex, unique passwords that aren’t repeated (or almost repeated, even) from one site to the next. That’s where a password manager comes into play: It securely stores all your sign-in info for you and then fills it in as needed.

While there’s a case to be made for leaning on a dedicated app for that purpose (for reasons we’ll discuss further in a moment), Google has its own password management system built right into Chrome — and also now integrated directly into Android, at the operating system level. And it’s far better to rely on that than to use nothing at all.

Google Password Manager 101

First things first: You shouldn’t have to do anything to turn the Google Password Manager on. The system, once considered part of Google’s Smart Lock feature, works across Android, iOS, ChromeOS, and any other desktop platform where you’re signed into Chrome — and it’s typically activated by default in all of those places.

You’ll see the Password Manager’s prompts for credential-saving pop up anytime you enter your username and password into a site within the Chrome browser. The service will also offer to create complex new passwords for you when you’re signing up for something new. And whenever you return to a site where your credentials have been stored, Password Manager will automatically fill them in for you — or, when more than one sign-in is associated with a single site, it’ll provide you with the option to pick the account you want to use.

The system is able to sign you into Android apps automatically, too, though it works somewhat sporadically — and you never quite know when it’ll be present. To use Google Password Manager in that way, you’ll need to search your Android device’s system settings for autofill, then:

  1. Tap “Autofill service from Google,” tap that same option once more, and confirm that the system is on and active.
  2. Return to that same settings search for autofill, tap “Preferred service,” and ensure that “Google” is both active and set to be the preferred service on that screen.

Google Password Manager can also sign you into both websites and apps across iOS, though on that front, you’ll need to manually enable the system by visiting the Passwords section of the iOS Settings app, selecting “Autofill” followed by “Passwords” and “Chrome,” and then turning on the “Autofill” option within that area.

Adjusting your Password Manager setup

If you ever want to look through and edit your stored passwords or adjust your Google Password Manager settings, the easiest thing is to sign into the Google Password Manager web interface at passwords.google.com — in any web browser, on any device you’re using.

There, you can view, edit, or delete any of your saved passwords as well as see and act on any alerts regarding possible security issues with your credentials.

You can also adjust your Google Password Manager preferences by clicking the gear icon in the upper-right corner of that page. It’s worth peeking in there once in a while, as you may find some options that are off by default and advisable to activate — like proactive alerts anytime a password you’ve saved is found to be compromised and on-device encryption for extra protection of any new passwords you save along the way.

That’s also where you can go to export all of your passwords for use in another service, if such a need ever arises.

The Google Password Manager web settings section has a host of important options — some of which are disabled by default.

JR Raphael / IDG

Speaking of which, if you do at some point decide to use a standalone password manager — and we’ll dive into that subject further next — you’ll want to be sure to disable the “Offer to save passwords” and “Auto sign-in” options here to effectively turn Google Password Manager off and keep yourself from seeing confusingly overlapping prompts every time you try to sign in somewhere.

You’ll also want to revisit the related settings on any Android and/or iOS devices you’re using to be sure the new password manager is set to take the place of Google Password Manager in all the appropriate areas.

Google Password Manager vs. the competition

So why is it more advisable to use a dedicated password manager instead of Google Password Manager? Well, a few reasons:

First, dedicated password managers provide broader and more consistent support for storing and filling in passwords across the full spectrum of apps on both your phone and your computer — something most of us need to do quite regularly, especially in a work context. You don’t want to have to go manually look up a password and then copy and paste it over every time you sign into something outside of your browser, and with Google Password Manager, that’s frequently what you end up having to do.

Beyond that, dedicated password managers work seamlessly in any browser you’re using, on any device, instead of being closely connected only to Chrome.

They also tend to come with stronger and more explicit security assurances, and they often offer additional features such as the ability to share your passwords with team members or even external clients (with or without allowing the person to actually see the password in question). They frequently include other useful elements beyond just basic password storage, too, including the ability to securely store different types of notes and documents.

I maintain a collection of recommendations for the best password manager on Android, and my top choice right now is 1Password — which costs $36 a year for an individual subscription, $60 a year for a family membership that includes up to five people, $239 a year for a Teams Starter Pack that allows up to 10 company users, or $96 per company user per year. And while my recommendation is technically Android-specific, I take into account the experience the service offers across all platforms, since most of us work across multiple device types. 1Password works equally well on the desktop and on iOS.

If you aren’t going to take the time to mess with a dedicated password manager, though, Google’s built-in system is absolutely the next best thing. And now you know exactly how to use it.

This article was originally published in May 2020 and updated in November 2024.

PCs with NPUs tweaked for AI now account for one of every five PCs shipped, says Canalys

One out of every five PCs shipped in the third quarter of 2024, a total of 13.3 million units, was a PC with a neural processing unit (NPU) fine-tuned for generative AI (genAI) development, according to data published Wednesday by analyst firm Canalys.

Canalys anticipates a rapid rise in shipments of these AI-capable PCs, which it expects to reach 60% of units shipped by 2027, with a strong focus on the commercial sector.

Such machines typically house dedicated chipsets, including AMD’s XDNA, Apple’s Neural Engine, Intel’s AI Boost, and Qualcomm’s Hexagon, Canalys said in a statement.

“Copilot+ PCs equipped with Snapdragon X series chips enjoyed their first full quarter of availability, while AMD brought Ryzen AI 300 products to the market, and Intel officially launched its Lunar Lake series,” said Ishan Dutt, principal analyst at Canalys. “However, both x86 chipset vendors are still awaiting Copilot+ PC support for their offerings from Microsoft, which is expected to arrive [in November].”  

Dutt added that there is still resistance to purchasing AI PCs from both key end-user companies and channel players. 

“This is especially true for more premium offerings such as Copilot+ PCs, which Microsoft requires to have at least 40 NPU TOPS [trillion operations per second], alongside other hardware specifications,” Dutt said. “A November poll of channel partners revealed that 31% do not plan to sell Copilot+ PCs in 2025, while a further 34% expect such devices to account for less than 10% of their PC sales next year.”

Canalys labels the machines as “AI-capable PCs,” which is baffling, given that AI has been around for many decades and can — and long has — run on all manner of PCs. Someone accessing data from an LLM wouldn’t need that level of horsepower. That would only be needed for engineers and LLM developers creating the data-intensive systems.

But such PCs wouldn’t necessarily make sense for most of those LLM developers, said George Sidman, CEO of security firm TrustWrx. Most developers writing LLM applications at that level would be accessing high-end specialized servers, Sidman said. 

“The PC has very little role. You would be running this in a large data center. These things are blocks long,” Sidman said. “You have got to look at the real world issues. With a huge multi-petabyte system behind it, well, you need that for the LLM to be effective.”

Canalys disagreed. It said in its report, “With the use of AI models set to increase exponentially, associated costs to organizations from accessing cloud resources will ramp up significantly. Moving some workloads to AI-capable PCs will help mitigate this, and allow businesses to optimize their use of AI tools according to their budgets.”

Regardless, would such souped-up PCs deliver better overall performance? Yes, Sidman said, but the better question is whether the typical business user would likely notice the difference, given the speeds that exist today in routine business desktops. “Will it improve some performance on the PC? Probably, but it won’t get them anything concrete,” Sidman said.

Apple’s iPhone partners make plans for US manufacturing

In a sign of the times, Apple’s key manufacturing partners are ready to ramp up production in the US should the incoming Trump administration keep its promise to levy painful surcharges on Chinese imports. 

But, of course, these new factories won’t necessarily create vast quantities of jobs, as they are likely to be focused on strategically important, high-value goods made in heavily automated plants.

All the same, the news is that Apple’s big Taiwanese partners — Foxconn, Pegatron, and Quanta Computer — are ready to rapidly ramp up US manufacturing investment in response to any changes in national policy, explained Foxconn Chairman Young Liu. His company already has production centers in Texas, Wisconsin, and Ohio, and is ready for additional expansion, he said.

Dealing with uncertainty

This may be shrewd preparation, given that President-Elect Donald J. Trump has threatened to put a 60% levy on Chinese-made products once he retakes power. “Trump has just been elected. It’s uncertain what policies he will implement…. We’ll be watching to see what changes there will be from the new U.S. government,” Liu said, according to Reuters.

Liu was speaking during the company’s stronger-than-anticipated quarterly results call. The company revealed that net income for the quarter was $1.5 billion, with demand for server chips boosting performance. He expects Foxconn to take at least 40% of the global server market in future.

That demand for server chips means the company can see even more value in US production, with Alphabet, Meta and Amazon set to spend billions on server infrastructure to drive AI this year. If you combine that demand with the growing recognition of the need to protect data sovereignty, you can surmise that making servers in this kind of quantity near or in the regions that are demanding them is a sensible business move for the company. (Liu actually uses the term “sovereign server” to articulate this.)

Similarly, as tensions with China could increase under the incoming Trump administration, the Taiwanese firms may feel that manufacturing consumer products in the US is a price worth paying in exchange for some protection around their own national security. (And the strategic need to encourage companies to make chips in the US makes achieving that a matter of national security.)

What about the iPhone?

Liu was light on detail about the company’s biggest client, though Apple critics seeking a little mood music might note his warning that the smart consumer products business will show a decline this year. That could suggest iPhone sales are lower than anticipated, or it could hint that the iPhone is eating the industry’s lunch, with the other smartphones Foxconn makes for other brands not selling terribly well.

Decoding the shadows surrounding the data, it is perhaps telling (and probably related) that Foxconn’s sales hit a record high in October, shortly after the iPhone 16 was introduced. 

I’m inclined to imagine the Apple smartphone is doing just fine.

The new tech, US and India?

The need to diversify manufacturing bases is generating international investments. Apple, Foxconn, and other Apple partners are also deeply immersed in building business in India, with Foxconn already putting $10 billion into that attempt. 

The company intends to make even bigger investments there, even as a local report claims Apple and its suppliers aim to make just under a third (32%) of all iPhones made globally in India by fiscal 2027. 

But even in India, the labor force is a cost, and Foxconn (and Apple) already have plans to reduce the number of workers involved in iPhone assembly, perhaps by as much as 50%.

They hope to achieve this through automation and artificial intelligence, though there is a lot of work to do before robots can match human manufacturing success — still, Apple has said its manufacturing headcount dropped from 1.6 million workers globally to 1.4 million in 2023.

An iPod, a phone, a tool for international politics

Jobs, international tension, money, the march of AI, trade wars, and surveillance as a service… we’re through the smartphone looking glass, people, and no mistake.

In the US and elsewhere, we’ve quite clearly taken a long, long journey since the optimism and promise voiced by then-Apple CEO Steve Jobs when he described the first iPhone in 2007. He did not describe it as “an iPod, a phone, and a device that challenges economic and national security.”

It is only today, as the march of digital transformation continues, that we can see that this is what it has turned out to be. 

You can follow me on social media! You’ll find me on BlueSky, LinkedIn, Mastodon, and MeWe.

Now you can download an ISO file of Windows 11 for Arm chips

It has long been possible for users to download ISO files of the Windows operating system, but until now that option applied only to the x86 version.

Now it’s finally possible to download an ISO file of Windows 11 for computers with Arm-based chips from Microsoft’s website, according to Neowin. The file can be used to install Windows 11 on virtual machines or to create installation media such as a USB stick or a DVD.

Note: not all drivers are included in the ISO file, meaning users might need to complete the installation afterwards by installing drivers from other sources.

AMD to cut 4% of workforce to prioritize AI chip expansion and rival Nvidia

Advanced Micro Devices (AMD) is laying off 4% of its global workforce, around 1,000 employees, as it pivots resources to developing AI-focused chips. This marks a strategic shift by AMD to challenge Nvidia’s lead in the sector.

“As a part of aligning our resources with our largest growth opportunities, we are taking a number of targeted steps that will unfortunately result in reducing our global workforce by approximately 4%,” CRN reported, quoting an AMD spokesperson.

Do you need an AI ethicist?

In response to the many ethical concerns surrounding the rise of generative artificial intelligence (genAI), including privacy, bias, and misinformation, many technology companies have started to work with AI ethicists, either on staff or as consultants. These professionals are brought on to steward how the organization adopts AI into its products, services, and workflows.

Bart Willemsen, a vice president and analyst at Gartner, says organizations would be better served with a dedicated ethicist or team rather than tacking on the function to an existing role.

“Having such a dedicated function with a consistent approach that continues to mature over time when it comes to breadth of topics discussed, when it comes to lessons learned of previous conversations and projects, means that the success rate of justifiable and responsible use of AI technology increases,” he said.

While companies that add the role may be well-intentioned, there’s a danger that AI ethicists will be token hires, ones who have no meaningful impact on the organization’s direction and decisions. How, then, should organizations integrate ethicists so they can live up to their mandate of improving ethical decision-making and responsible AI?

We spoke with tech and AI ethicists from around the world for their thoughts on how organizations can achieve this goal. With these best practices, organizations may transform ethics from a matter of compliance to an enduring source of competitive advantage.

The AI ethicist as tech educator

For some, “ethicist” may connote the image of a person lost in their own thoughts, far removed from the day-to-day reality of an organization. In practice, an AI ethicist is a highly collaborative position, one that should have influence horizontally across the organization.

Joe Fennel, AI ethicist at the University of Cambridge in the UK, frequently consults with organizations, training them on ethics along with performance and productivity.

Ethics is like jiu-jitsu, he says: “As you get to the more advanced belts, it really becomes less about the moves and much more about the principles that inform the moves. And it’s principles like balance and leverage and dynamicness.”

He approaches AI in the same way. For example, when teaching prompt engineering with the aim of reducing genAI hallucination rates, he does not require students to memorize specific phrases. Instead, he coaches them on broader principles, such as when to use instructions versus examples to teach the model.
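
As a toy illustration of that instructions-versus-examples distinction, the sketch below builds the same classification request two ways; all the wording is invented for the example.

```python
# Toy contrast between an instruction-only prompt and an example-driven
# (few-shot) prompt for the same task; wording is invented for illustration.
task = "Classify the support ticket as 'billing', 'technical', or 'other'."

instruction_prompt = (
    f"{task}\n"
    "Answer with the single category word and nothing else.\n\n"
    "Ticket: My invoice was charged twice this month."
)

example_prompt = (
    f"{task}\n\n"
    "Ticket: The app crashes when I open settings.\nCategory: technical\n\n"
    "Ticket: Can you update my mailing address?\nCategory: other\n\n"
    "Ticket: My invoice was charged twice this month.\nCategory:"
)

print(instruction_prompt, example_prompt, sep="\n\n---\n\n")
```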

Fennel has coalesced these techniques into an overall methodology with safety and ethical considerations that gets people interested in ethics, he says.

Darren Menachemson, chief ethicist at Australian design consultancy ThinkPlace, also believes that one of the key responsibilities of ethicists is communication, particularly around governance.

“[Governance] means that organizations need to have enough understanding of the technology that they really can control the risks, mitigate, [and] deal with [them]… It means that artificial intelligence as a concept needs to be well communicated through the organization so people understand what its limits are so it can be used responsibly,” he said.

There are of course cultural challenges to this instruction, namely the “move fast and break things” ethos that has defined the tech ecosystem, especially in the face of AI’s rise.

“What we’re seeing is a real imperative among many organizations to move quickly, to keep pace with what’s happening more broadly and also to take advantage of really amazing opportunities that are too significant and carry too many benefits to ignore,” Menachemson said.

Menachemson argues that ethicists, particularly those at the senior level, can succeed in spite of these challenges by possessing three qualities. The first is a deep understanding of the nuances of AI technology and what risk level this poses vis-a-vis the organization’s own risk appetite.

The second is a willingness to engage stakeholders to “understand the business context that artificial intelligence is being introduced into and get beyond the general to the specific in terms of the guidance that you’re offering.”

The third attribute is central to executing on the second. “Bewildering the senior cohorts with technical language or highly academic language loses them and loses the opportunity to have actual influence. Senior ethicists need to be expert communicators and need to understand how they can connect ethics risk to the strategic priorities of the C-suite,” he said.

Delivering actionable guidance at two levels

Although ethics may be subjective, the work of an AI or tech ethicist is far from inexact. When addressing a particular issue, such as user consent, the ethicist generally starts from a broad set of best practices and then gives recommendations tailored to the organization.

“We’ll say, ‘Here is what is currently the industry standard (or the cutting edge) in terms of responsible AI, and it’s really up to you to decide in the landscape of possibilities what you want to prioritize,’” said Matthew Sample, who was an AI ethicist for the Institute for Experiential AI and Northeastern University when Computerworld interviewed him. “For example, if [organizations are] not auditing their AI models for safety, for bias, if they’re not monitoring them over time, maybe they want to focus on that.”
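
What auditing and monitoring looks like varies widely by organization; as one minimal, assumed example, a team might track accuracy and a crude selection-rate gap between two groups on each labeled sample, as sketched below.

```python
# Minimal sketch of ongoing model monitoring: accuracy plus a crude
# selection-rate gap between two groups, computed on a labeled sample.
# Data and alert threshold are invented for illustration.
records = [
    # (group, model_prediction, true_label)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]

accuracy = sum(pred == label for _, pred, label in records) / len(records)

def positive_rate(group: str) -> float:
    preds = [pred for g, pred, _ in records if g == group]
    return sum(preds) / len(preds)

gap = abs(positive_rate("A") - positive_rate("B"))
print(f"accuracy={accuracy:.2f}, selection-rate gap={gap:.2f}")
if gap > 0.2:   # arbitrary alert threshold for the sketch
    print("flag for ethics review")
```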

Sample does give advice beyond these best practices, which may be as granular as how to operationalize ethics at the company. “If they literally don’t have even one person at the company who thinks about AI ethics, maybe they need to focus on hiring,” he said as an example. 

But Sample avoids hardline recommendations. “In the spirit of ethics, we certainly don’t say, ‘This is the one and only right thing to do at this point,’” he said.

Menachemson has a similar two-pronged approach to his workflows. At the top level, Menachemson says that ethicists give general guidance on what the risks are for a particular issue and what the possible mitigations and controls are.

“But there’s also an imperative to go deeper,” he said. This step should be focused on the organization’s unique context and can be done only after the basic advice is understood.

“Once that diligence is done, that’s when recommendations that are meaningful can be put to the chief executive or to the board. Until that diligence is done, you don’t have any assurance that you really are controlling the risk in a meaningful way,” he said.

In terms of what to discuss, cover, and communicate, Cambridge’s Fennel believes that AI ethicists should be broad rather than narrow in scope.

“The more comprehensive you are with your AI ethics agenda and assessment, the more diverse your AI safety implementation will be — and, equivalently, the more robust your risk prevention and mitigation strategy should also be,” he said.

Everyone should be an ethicist

When it comes to implementation, Jesslyn Diamond, the director of data ethics at Canada-based Telus Digital, says her group works to anticipate unintended consequences from genAI, such as any potential misuse, through the use of a red team, which identifies gaps and even tries to intentionally break systems.

“We also use the concept of blue teaming, which is trying to build the innovative solutions to protect and enhance the outcomes that are possible together through a purple team,” Diamond said.

The purple team is multidisciplinary in nature, spanning professionals from QA, customer service, finance, policy, and more. “There’s something about the nondeterministic nature of generative AI that really makes these diverse perspectives, inputs, and expertise so necessary,” she said.

Diamond says that purple teaming creates the opportunity for different types of professionals to use the technology, which is helpful in not only exploring the risks and unintended consequences that are important considerations for ethics, but also to reveal additional benefits.

Telus also provides specialized training to employees on concepts like data governance, privacy, security, data ethics, and responsible AI. These employees then become data stewards to their spheres of influence. To date, Telus has a network of over 500 such data stewards.

“Becoming more familiar with how [AI] works really equips both those who are very technical and those who are less technical to be able to fully participate in this important exercise of having that diversity of expertise and background [represented],” Diamond said.

It may seem obvious that ethics should be multidisciplinary, but far too many companies pigeonhole the function in a remote corner of the organization. “It is so important that people understand the technology in order to meaningfully govern it, and that tension between literacy and participation has to happen at the same time,” Diamond said.

Creating a culture of ethical innovation

The goal of advising on ethics is not to create a service desk model, where colleagues or clients always have to come back to the ethicist for additional guidance. Ethicists generally aim for their stakeholders to achieve some level of independence.

“We really want to make our partners self-sufficient. We want to teach them to do this work on their own,” Sample said.

Ethicists can promote ethics as a core company value, no different from teamwork, agility, or innovation. Key to this transformation is an understanding of the organization’s goal in implementing AI.

“If we believe that artificial intelligence is going to transform business models…then it becomes incumbent on an organization to make sure that the senior executives and the board never become disconnected from what AI is doing for or to their organization, workforce, or customers,” Menachemson said.

This alignment may be especially necessary in an environment where companies are diving head-first into AI without any clear strategic direction, simply because the technology is in vogue.

A dedicated ethicist or team could address one of the most foundational issues surrounding AI, notes Gartner’s Willemsen. One of the most frequently asked questions at a board level, regardless of the project at hand, is whether the company can use AI for it, he said. “And though slightly understandable, the second question is almost always omitted: ‘Should we use AI?’” he added.

Rather than operate with this glaring gap, Willemsen says that organizations should invert the order of questions. “Number one: What am I trying to achieve? Forget AI for a second. Let that be the first focus,” he said, noting that the majority of organizations that take this approach have more demonstrable success.

This simple question should be part of a larger program of organizational reflection and self-assessment. Willemsen believes that companies can improve their AI ethics by broadening the scope of their inquiry, asking difficult questions, remaining interested in the answers, and ultimately doing something with those answers.

Although AI may be transformational, Willemsen emphasized the need to closely scrutinize how it would benefit — or not benefit — people.

“This ought to take into account not only the function of AI technology, the extent to which undesired outcomes are to be prevented and that technology must be under control, but can also go into things like inhumane conditions in mining environments for the hardware to run it, the connection to modern day slavery with ‘tagger farms,’ as well as the incalculable damage from unprecedented electricity consumption and water usage for data center cooling,” he said.

Organizations that are fully aware of these issues and aligned with their AI initiatives will see benefits, according to Willemsen. “The value of AI ethics may not be immediately tangible,” he said. “But knowing what is right from wrong means the value and greater benefit of AI ethics has a longer-term view: a consistent application of technology only where it is really useful and makes sense.”