Month: July 2024

CrowdStrike CEO apologizes for crashing IT systems around the world, details fix

CrowdStrike’s CEO has apologized to the company’s customers and partners for crashing their Windows systems, and the company has described the error that caused the disaster.

“I want to sincerely apologize directly to all of you for today’s outage. All of CrowdStrike understands the gravity and impact of the situation,” CrowdStrike founder and CEO George Kurtz wrote in a blog post on the company’s website titled “Our Statement on Today’s Outage.”

He reiterated the company’s earlier message that the incident, which brought down computers around the world on Friday, July 19, was not the result of a cyberattack.

Blue screen of death strikes crowd of CrowdStrike servers

CrowdStrike has admitted to pushing out a bad software update, causing many Windows machines running the affected software to crash. The problem, apparently affecting its Falcon platform, brought down servers at airlines, locked up computers at banks, and hurt healthcare services.

“CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts,” the company said Friday in a post to its blog titled “Statement on Windows Sensor Update.”

Mac and Linux versions of the software are unaffected, and the incident was not the result of a cyberattack, it said.

Get ready for Olympic-size threats during the Paris games

The run-up to the Olympic Games in Paris this month has been electrifying — and terrifying. The games begin July 26. 

Images shared online showed routes through the city completely painted purple. Beautiful!

And did you see the new Tom Cruise movie, “Olympics Has Fallen”? It’s about a terrorist attack that disrupts the Olympics. That movie echoes reports that Paris residents are buying property insurance because experts predict terrorism during the games.

A recent France24 report claimed that 24% of tickets were returned out of fear of terrorism. And both French and American authorities, including the CIA, warn travelers to avoid the Games because of the risk of terrorism.

Except there’s a catch: none of this happened. It’s all fake. The purple paint story was a set of AI-generated images first circulated on a Chinese social network called Xiaohongshu by a Chinese user. 

The Tom Cruise “movie” was made using AI by an organization called Storm-1679, a Russian disinformation group identified by Microsoft’s Threat Analysis Center that is actively engaged in a sophisticated influence campaign aimed at disrupting the 2024 Paris Olympics. The same organization generated and propagated the fake news about terrorist concerns. AI-generated images created and distributed by Storm-1679 showed fake graffiti in Paris threatening violence against Israeli visitors attending the games.

A Microsoft Threat Intelligence report says Russia’s aims are to discredit the International Olympic Committee, France and the city of Paris, and to create fear around terrorism to reduce attendance.

(The Russian government wants to wreck the games presumably because Russian athletes are banned from competing under their national flag and can only participate as “Individual Neutral Athletes” due to Russia’s state-sponsored doping program and Russia’s invasion of Ukraine.)

In Paris, fake…everything, it seems

Scammers galore are looking to fake their way into Olympic gold (or Bitcoin). Researchers at threat intelligence provider QuoIntelligence found that sophisticated fraudulent websites are selling fake tickets to the Olympics, mainly to Russian customers seeking to bypass sanctions imposed on them in the wake of the invasion of Ukraine. Organizers have identified 77 fake ticket resale sites.

In the run-up to the Olympics, Paris authorities have been cracking down on counterfeit luxury goods like fake Nike shoes and copies of Louis Vuitton bags. They’re especially targeting sellers in high-counterfeit areas near Olympic venues, where they’ve shut down 10 stores. The Seine-Saint-Denis suburb, where the Olympic Aquatic Center will host various events and the closing ceremony, has been a particular focus of these efforts. French authorities have been working diligently with UNIFAB (Union des Fabricants), providing extensive training to more than 1,200 customs agents to help them identify counterfeit Olympic-related merchandise, including clothing and the official mascot, Phryges.

Even in a normal year, France’s counterfeit luxury goods problem is big. Last year, customs seized 20.5 million knockoffs, a 78% increase over 2022.

French authorities deployed 70 agents specifically to track illegal online activity related to counterfeit goods. And they’re going all in on AI tools: French anti-counterfeit authorities use AI and advanced computer vision to combat fake goods in online marketplaces, enabling efficient scanning and analysis of large volumes of product listings, looking at both images and text-based information.
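As a rough illustration of what that kind of automated screening involves, here is a minimal Python sketch that scores marketplace listings with simple text heuristics and a price-anomaly check. The keyword list, reference prices and weights are invented for the example, and the computer-vision step real systems rely on is deliberately omitted; this is not the tooling French customs actually uses.

```python
# Illustrative sketch: flag suspicious marketplace listings by combining
# simple text heuristics with a price-anomaly check. Production systems add
# computer-vision similarity against genuine product imagery; that step is
# omitted here because it depends on proprietary models and data.
from dataclasses import dataclass

SUSPICIOUS_TERMS = {"replica", "aaa quality", "mirror copy", "factory direct"}   # assumed
REFERENCE_PRICES = {"louis vuitton bag": 1500.0, "nike air max": 150.0}          # assumed averages

@dataclass
class Listing:
    title: str
    description: str
    price: float

def keyword_score(listing: Listing) -> float:
    text = f"{listing.title} {listing.description}".lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    return min(hits / 2.0, 1.0)          # cap the contribution at 1.0

def price_anomaly_score(listing: Listing) -> float:
    for product, reference in REFERENCE_PRICES.items():
        if product in listing.title.lower():
            return 1.0 if listing.price < 0.3 * reference else 0.0
    return 0.0

def flag_for_review(listing: Listing, threshold: float = 0.5) -> bool:
    score = 0.6 * keyword_score(listing) + 0.4 * price_anomaly_score(listing)
    return score >= threshold

if __name__ == "__main__":
    sample = Listing("Louis Vuitton bag AAA quality replica",
                     "factory direct, ships fast", 89.0)
    print("flag for review:", flag_for_review(sample))
```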

Olympic-sized cybersecurity

While Paris officials are coping with a deluge of fake movies, news, clothes and tickets, they’re also dealing with an extraordinary cybersecurity landscape.

The Olympics brings together the world’s government officials — especially French leaders — in close physical proximity to random, unvetted international visitors. This unusual scenario is a big opportunity for cyber spies to steal confidential data, which could include strategic plans, personal information and government communications. 

Given the intent of Russian and other state-sponsored actors — and probably cybercriminals and hacktivists as well — to disrupt the games, official Olympics infrastructure will be heavily targeted with distributed denial-of-service attacks, website defacement, wiper malware and other attacks.

We can also expect to see new kinds of AI-powered synthetic identity fraud attacks. 

Paris is expecting around 3 million visitors for the games. Wi-Fi hotspots will likely be targeted with man-in-the-middle attacks that attempt to intercept data.

Cyber attackers are also using the Olympics as a backdrop for techniques like domain spoofing, URL shortening, HTML email spoofing, and lookalike Unicode domains to trick users into providing sensitive information.
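To make the lookalike-domain trick concrete, here is a minimal Python sketch that flags domains visually imitating a short allow-list of legitimate sites. The brand list and the confusable-character table are tiny, hand-picked assumptions; real tooling uses Unicode’s full confusables data and far larger brand inventories.

```python
# Illustrative sketch: flag lookalike (homoglyph) domains that imitate a small
# allow-list of legitimate brands. The confusables table below is a hand-picked
# sample, not the full Unicode confusables dataset.
import unicodedata

LEGITIMATE = {"paris2024.org", "olympics.com"}   # assumed allow-list

# A few Cyrillic characters that render like Latin letters (sample only).
CONFUSABLES = str.maketrans({"а": "a", "е": "e", "о": "o", "р": "p", "с": "c", "і": "i"})

def skeleton(domain: str) -> str:
    """Normalize a domain so visually similar strings compare equal."""
    if "xn--" in domain:
        try:
            domain = domain.encode("ascii").decode("idna")   # punycode -> Unicode
        except UnicodeError:
            pass                                             # leave malformed labels alone
    folded = unicodedata.normalize("NFKC", domain).casefold()
    return folded.translate(CONFUSABLES)

def is_lookalike(domain: str) -> bool:
    return skeleton(domain) in LEGITIMATE and domain.casefold() not in LEGITIMATE

if __name__ == "__main__":
    for candidate in ["paris2024.org", "оlympics.com"]:   # second uses a Cyrillic 'о'
        print(candidate, "->", "lookalike!" if is_lookalike(candidate) else "ok")
```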

French authorities are relying heavily on advanced threat intelligence, as well as AI-powered tools for predictive analytics, biometric authentication and advanced liveness detection.

Why we should pay close attention to the Olympics

The Paris Olympic Games represent a glimpse of the future. While past Olympic Games had challenges on a number of fronts, this year’s Olympics are the first to take place in the era of widely available generative AI.

They’re also the only Olympic Games in history to take place in Europe while an international war was being fought nearby. Russia is the aggressor in that war, is largely banned from the Olympics, and is the world’s leading power in disinformation and information warfare. Russia’s motivation and capability to wreck the games are high.

This confluence of meta-factors most likely represents some future global status quo. We are entering a new era of AI-generated fake everything — fake sites, fake identities, fake news, fake products, fake services and more. And we’re already well into an era where geopolitical conflict plays out locally in ways that affect corporations, schools, hospitals and citizens directly.

And, of course, we’re definitely at the beginning of an age where advanced AI is required both for cyberattack and for cyber defense.

So, enjoy the Olympic Games. Root for your favorite countries and athletes and watch the drama unfold. But also watch how Paris performs with the enormous challenges of fakes and cyberattacks. And, in the aftermath, look for lessons learned. The Olympics this year will prove to be a laboratory for threats — and defenses against those threats — that are coming our way in the next few years. 

13 fast fixes for common Android problems

Confession time: I know embarrassingly little about car repair, and I couldn’t fix a misbehaving house appliance if my life depended on it (which, on at least a couple occasions, it almost has). Heck, I can barely hang a piece of wall art without screwing something up along the way. When it comes to Android phones, though, well — I’m practically a modern-day mechanic.

Now, hang on a sec: It isn’t nearly as impressive as it sounds. I don’t have any fancy power tools or even a pair of cool-looking coveralls with my name on ’em (not yet, anyway). I’ve mostly just been using and studying Android for a long time now — since somewhere in the mid-1800s, give or take — and when you pay close enough attention to something for a long enough period, you start to see the same basic patterns popping up time and time again.

The truth is that for as “magical” as they may occasionally appear, our sleek and shiny smartphones are ultimately just appliances. And more often than not, the issues most folks have with their phones are pretty darn consistent. That means whether you’re troubleshooting your own device or trying to come to a struggling co-worker’s rescue, the odds are good that your problem can be addressed without too much trouble.

Consider this your guide — a collection of some of the most common complaints I hear about Android phones and the simplest solutions I suggest in those scenarios. Apply the knowledge to your own ailing device or pass it on to someone else who needs it, and you, too, can experience the joy of feeling like a mobile-tech mechanic (with or without the coveralls).

Android problem #1: Low storage

Ah, yes — the age-old problem of finite space. When you see a phone’s storage starting to run low, just remember this catchy little adage: “Stop hoarding stuff, you unruly digital packrat.” (Okay, so maybe it wasn’t quite as catchy as I had hoped.)

In all seriousness, though, most of us really don’t need much stored locally on our smartphones these days — especially on Android, where cloud syncing is simple and automated management is easy. Start by installing the Google Photos app and setting it up to back up all photos and videos as they’re taken. That’ll let you delete the local copies (as well as have a great way to get to all your memories from any device, anytime, even if you lose or break your current Android phone), and that alone is bound to free up tons of room.

Second, install the Files by Google app. It’ll show you all the unnecessary space-takers lurking within your phone’s storage — including those now-redundant local copies of cloud-synced images along with junk files, duplicate files, and other easily eliminated things — and it’ll give you simple one-tap buttons to clear any of that crud away.

Just tap the three-line menu icon in the app’s upper-left corner and then select “Clean” to get started.


The Files by Google app identifies areas where you can free up space and gives you a quick ‘n’ easy way to zap unneeded items away.

JR Raphael / IDG

Finally, if you’re using one of Google’s Pixel phones, tap the Files app’s menu icon once more and this time select “Settings” — then look for the option labeled “Smart Storage.” Activating the toggle alongside it will cause your phone to automatically remove any redundant copies of already-backed-up photos and videos anytime your storage starts to get low.

Android problem #2: Subpar stamina

We could talk about Android battery life all day, but the fastest way to make an immediate difference in your phone’s longevity is to adjust your screen settings.

First, turn down the screen’s brightness (either in the Quick Settings panel that comes up when you swipe down twice from the top of your screen or in the Display section of your system settings). The display burns through more power than anything else on your device, and the lower you can comfortably set it, the longer your phone will last with each charge. On most reasonably recent devices, you can also look for an Adaptive Brightness option that’ll automatically adjust the brightness level for you based on your current environment.

Second, set your “Screen timeout” setting (also in the Display section of your system settings) to as low of a value as you can tolerate. The less time your screen stays on when you aren’t using it, the less unnecessary battery power your phone will burn through.

If you want to manage that more intelligently — with a powerful system that lets you configure your screen to stay on automatically when you’re holding the phone at certain angles (thus indicating that you’re actively looking at it) or when you’ve got specific apps open but then to shut itself off quickly in any other scenarios — consider this advanced screen timeout trick.

Last but not least, look in that same Display area of your system settings for the Dark Theme option. Darker colors tend to consume less power than the bright hues present in most interfaces by default, so switching to the Dark Theme either all the time or on a sunset-to-sunrise schedule should extend your phone’s battery a fair bit.

The other common culprit in a stamina shortcoming is a random app that’s misbehaving and using far more resources than it should — often because it’s getting overly aggressive with background updates and completely unnecessary refreshing (paging Facebook…).

If you look in the Battery section of your system settings, you should be able to find a battery usage breakdown that’ll help you do some detective work and suss out any such offenders — then either uninstall them entirely or clamp down on their ability to do stuff when you aren’t actively using them.

Android problem #3: Too much bloatware

Unless you’re using Google’s Pixel phones, your Android device likely came loaded with lots of junk you don’t want — ranging from superfluous manufacturer-provided services (hi, Samsung!) to carrier-added crapola (to use the highly technical term). But fear not, for most of that can at the very least be hidden out of sight, if not eliminated entirely.

The simplest way to do that is to look in the Apps section of your system settings to find the complete list of installed applications. When you see an app that you don’t want, tap its name and then look for either the Uninstall button — or, if that isn’t present, the Disable command. You may not be able to get rid of absolutely everything that way (paging Bixby…), but you’ll be able to clear out a fair amount of clutter.

Android problem #4: A home screen mess

From built-in search bars you don’t use to silly news streams you’d rather not see, Android phones’ home screens are often anything but optimal out of the box. But you don’t have to live with what your device-maker gives you. Android has a huge array of third-party launchers — alternate environments that completely replace your phone’s stock home screen setup and app drawer arrangement. And there’s something available for practically every preference and style of working.


Third-party launchers such as Nova Launcher and the Microsoft Launcher, seen here, can clean up your home screen and make it custom-suited to your work style.

JR Raphael / IDG

Look through my Android launcher recommendations to find what’s right for you — then check out these Android productivity tips for making the most of your spiffy new setup.

Android problem #5: A slow-running phone

Just like us mortals, smartphones are prone to slowing down over time as their virtual wits become worn. Unlike our mushy mammal brains, though, your phone’s response time can actually be improved.

Some of the things we just went over, in fact, should make a noticeable difference: cleaning up your storage, uninstalling unused apps (both ones that came pre-installed on your phone and ones you installed yourself but no longer use), and trying out a custom launcher for a more optimal home screen environment.

Beyond that, some of the same steps I describe in my Android data-saving guide can bring a meaningful boost to your overall device speed — things like eliminating unnecessary background activity, compressing your mobile web experience, and shifting to lightweight versions of apps. (See that article for a step-by-step breakdown in each of those areas.)

And finally, allow me to direct your attention to a tucked-away Android system setting that may make the most perceptible impact of all. It’s buried within the Android accessibility settings, but it can be beneficial for just about anyone.

So head into the Accessibility section of your system settings, then — on a Pixel phone:

  • Tap “Color and motion” within the “Display” area of the screen.
  • Find the line labeled “Remove animations.”
  • Flip the toggle next to it into the on and active position.

On a Samsung Galaxy device, meanwhile:

  • Tap “Vision enhancements.”
  • Find the line labeled “Reduce animations.”
  • Flip the toggle next to it into the on and active position.


If you’re using another type of Android device beyond that, the exact placement and phrasing of the option may vary somewhat — but you should be able to find it either by hunting around in that same Accessibility section of your system settings or by searching your settings area for the word animations.


Android’s animation-disabling switch, hidden away in the system’s accessibility options, can make any phone feel instantly faster.

JR Raphael / IDG

Now, just head back to your home screen and try moving around your phone — opening your app drawer, swiping down the notification panel, going in and out of apps, and so on. Everything should feel significantly snappier than it did before.

Android problem #6: Lasting lag (or other odd behaviors)

If your phone woes extend beyond slight slowness — and/or you’re seeing strange things happen in general, with no obvious explanation — a rogue app is almost always the answer.

The easiest way to confirm this is to boot your phone into a special state called safe mode. That puts the device into a stock-like environment, without any added apps present, so you can see if things are working normally without any of those extra variables. If they are, you can be pretty confident that an app is causing your issue.

To enter safe mode on your phone, press and hold the on-screen power icon — the same one you usually use to turn the phone off. That’ll reveal the option. (And don’t worry: It’s a temporary state. Everything will go back to normal the next time you reboot after using it!)


The exact interface may vary from one Android device to the next, but long-pressing the on-screen power icon (at left) should always reveal an option to enter the system’s safe mode (at right) for advanced troubleshooting.

JR Raphael / IDG

Provided things seem fine while in safe mode, the best next step is to perform a full factory reset on your phone (after making sure all important data is synced or backed up somewhere, of course). Then, when you sign back into the device anew, do not accept the option to restore your apps and instead opt to start with a completely blank slate.

At that point, your phone should be working flawlessly. And you can then slowly add apps back into the mix one by one, manually, as you need them.

If your issues return at any point along the way, you should be able to narrow down the app that’s causing them quite easily — since you’ll know it’s the most recent thing you reinstalled.

And as a side perk, the reset in and of itself will likely lead to your phone feeling faster and smoother than it has in ages. It’s a good thing to do periodically to clear out the cobwebs and give yourself a fresh start.

Android problem #7: Too much rotation

Our phones are designed to work in both a portrait and a landscape orientation — but sometimes, the sensors get a little oversensitive and end up flipping between views more often than you’d want.

There are some easy answers, though:

First, on any recent Pixel device, march into the Display section of your system settings and tap the line labeled “Auto-rotate screen.” Make sure you tap the words and not the toggle.

On the screen that comes up next, confirm that you have the toggles for both “Use auto-rotate” and “Face Detection” in the on and active positions. That’ll allow your phone to use its front-facing camera to detect which way your face is positioned relative to the screen at any given moment and then have the screen’s orientation match that.

If you’re using any phone other than a Pixel — or if that Face Detection setup isn’t working the way you want — you can instead just disable the auto-rotate function entirely and then decide for yourself how you want your screen to be oriented on a moment-to-moment basis.

  • On any phone that follows Google’s core Android interface, simply turn off the toggle next to “Use auto-rotate” within the Display section of the system settings.
  • On a Samsung phone, the feature curiously isn’t present at all in the system settings — but you can find a toggle for it in the Quick Settings area that comes up when you swipe down twice from the top of your phone. Look for the icon labeled “Auto rotate” and tap it once to disable it (which will change its title to “Portrait,” somewhat confusingly — but that’ll do the trick).

From there on out, anytime you rotate your device, it won’t automatically change the screen’s orientation and will instead place a small icon in the corner of the screen. You can then tap that icon to change the rotation or ignore it to leave it as-is.

Android problem #8: Tiny text

Stop squinting, would ya? If the words on your phone are too damn small, head into the Accessibility section of your system settings and try out two options: “Font size,” which will increase the size of text all throughout your phone, and “Display size” or “Screen zoom,” which will increase the size of everything on your screen.

On a Pixel phone or another device that follows Google’s standard Android interface, both options will be within the “Display size and text” area of the system accessibility settings. On Samsung devices, you’ll need to tap “Vision enhancements” and then select either “Font size and style” or “Screen zoom” to make the two adjustments.

Android problem #9: Annoying notifications

Whether it’s an overly aggressive app or, ahem, an overly aggressive texter, stop notification nuisances at their source by pressing and holding your finger to the next unwanted alert that pops up. That’ll pull up a control panel of sorts that lets you turn off the associated type of notification entirely — or just silence it so that it still shows up but doesn’t actively demand your attention.


Less annoying notifications are always just a long-press and a tap away.

JR Raphael / IDG

And if you’re really feeling crafty, you can create powerful filters for your Android notifications to customize and control exactly how different types of alerts behave — and even, if you’re so inclined, to summarize similar notifications and make your list of pending items less overwhelming.

Android problem #10: The contacts conundrum

It’s 2024, for cryin’ out loud. Your contacts shouldn’t be accessible only on your phone — and you shouldn’t have to jump through hoops to “transfer” them from one device to another.

If you’re using a phone made by anyone other than Google, go into its Contacts app and make sure it’s set to sync your info with your Google account — not with the manufacturer’s own proprietary syncing service.

This is particularly pertinent for Samsung owners, as the company tends to sync contacts with its own self-contained service by default. That’s fine if you only want to access that info from that one phone and if you plan to purchase only phones made by Samsung in the future — eternally — but in any other scenario, that setup is not going to serve you well.

Once you make this change, your contacts will be synced with Google Contacts — which means they’ll always be immediately available within the Google Contacts website on any computer where you’re signed in and on any phone where you install the Google Contacts Android app.

Android problem #11: Call-ending challenges

Ever find yourself scrambling to end a call — but then your screen won’t come back on fast enough? Or maybe the screen comes on, but the command to hang up isn’t right there and ready?

An Android accessibility option can make your life infinitely easier by empowering you to press your phone’s physical power button anytime you’re ready to say goodbye. No need to hunt around for the right icon or even look down at your phone at all — just one button press along the device’s edge, and the person on the other end will be gone (thank goodness!).

All you’ve gotta do is look for the “Power button ends call” option in the Accessibility section of your system settings. On Pixel phones and other devices that follow Google’s standard Android interface, it’ll be within a “System controls” submenu in that area. On Samsung products, you’ll have to tap “Interaction and dexterity” and then “Answering and ending calls” to find it — and it’ll be labeled “Press Side button to end calls” (even though the “Side button” is more often than not actually just the power button).


If you dig around enough in Android’s settings — on a Pixel phone, at left, and a Samsung device, at right — you can find a switch that’ll make it much easier to end a call.

JR Raphael / IDG

However you get there, flip that switch on — and get ready to get off a call more easily than ever.

Android problem #12: A frozen phone

One of the most frustrating Android problems of all is a phone that’s either stuck on some process and not responding or stuck in a powered-off state and refusing to turn on. But no matter how dire things may seem, there’s almost always a solution.

The simplest one is a hard reboot: Depending on your device, you’ll want to press and hold either the power button by itself for 30 seconds to a minute — or press and hold the power button and either volume-down or volume-up button together for that same amount of time (or until you feel a vibration and see something show up on the screen). If you see a strange-looking menu that says “Start” and has a picture of an Android robot, don’t worry: Just press the power button again, and your phone should boot up normally.

If nothing happens with either of those processes, try leaving your phone plugged in for a solid few hours, just to make sure the battery isn’t depleted. Then try again.

If things still aren’t coming up — and if you aren’t seeing even the standard battery indicator graphic appear on the display when you plug the phone in — well, my friend, it’s time to make your way to our final Android issue.

Android problem #13: A non-charging phone

Last but not least is the problem to end all Android problems: an Android phone that simply won’t charge (and thus also won’t power up, once its battery has been run all the way down). I’ve been there. And while it’s certainly possible that you could be facing some sort of hardware-related defect, it’s also quite likely that this is something you can fix in a jiff.

So try this: Take something like a toothpick or the end of a paper clip and very carefully and very gently dig around a little in the phone’s charging port to clear out any lint or debris that’s built up in there. It sounds crazy, I know, but sometimes, enough gunk gets collected in that area that the power cable isn’t able to establish a good connection and charge the device (or charge it consistently, without the connection coming in and out and making it difficult for much charging to happen).

Once you’ve cleared out a good amount of gunk, plug the phone in again and see if something happens. If the battery was totally dead, you might have to leave it plugged in for a while before you see any results. But there’s a decent chance this will work — and then, in a matter of minutes, you’ll be back in business.

Sometimes, the simplest fix is the most satisfying one of all.

This article was originally published in August 2020 and updated in July 2024.

Apple Intelligence doesn’t use YouTube, but does it matter?

Apple has confirmed recent claims that it used subtitle data from YouTube videos to train one of its artificial intelligence (AI) tools, but says that tool is not used in Apple Intelligence.

Apple confirmed a report from Proof News that it had used this YouTube data to train one of its models. The company explained that it did so to train the open-source OpenELM models released earlier this year. The information was included within a larger collection maintained by EleutherAI, a non-profit organization that supports AI research.

Apple used YouTube once, but not now

However, Apple told 9to5Mac that models trained using that information don’t power any of its own AI or machine learning tools, including Apple Intelligence. This was a research project originally created by Apple’s AI teams and then shared, including via the company’s own Machine Learning Research site.

What’s important is that it shows the extent to which Apple wants to be seen as keeping its promise that Apple Intelligence models are trained on licensed data.

But that’s not the big picture. As mentioned earlier in the week, Apple Intelligence also trains its models using “publicly available data collected by our web-crawler.” That admission reflects the extent to which tech companies are using information published online to create new AI products from which they subsequently profit.

Making public data private

The issue is that by turning other people’s creative works into data, and then profiting from that data, tech firms aren’t playing fair. 

Speaking to Proof News, Dave Farina, the host of “Professor Dave Explains,” put it this way: “If you’re profiting off of work that I’ve done [to build a product] that will put me out of work or people like me out of work, then there needs to be a conversation on the table about compensation or some kind of regulation.”

To some extent, the focus on YouTube data distracts from that critical argument, which is that the generative AI (genAI) tools coming into common use today are likely to have been trained by information created by humans and shared online. That’s the kind of information picked up by webcrawlers, including Apple’s.

But data quality is a real issue here, and the search for the best data inevitably means that the best data sources become the highest-octane fuel for training AI.

The drive for quality means content is king

Consider just two of the challenges AI researchers face.

  • Automated data grading systems might reject old, out-of-date, or false information, but some still gets through, which is why AI systems so often develop hallucinations (the current descriptor for fake information) or exhibit questionable morality (racist or gender-biased language).
  • Data also has a finite lifespan. Facts can and do change over time, and maintaining high-quality data is an essential bulwark against the classic “garbage in, garbage out” problem caused by irrelevant information, or by high-grade information that becomes irrelevant over time. (A minimal filtering sketch follows this list.)
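Here is what such a data-grading pass might look like in miniature: a Python sketch that drops candidate training documents that are stale or fail a couple of crude quality heuristics. The thresholds and markers are assumptions invented for the example, not any vendor’s actual pipeline.

```python
# Illustrative sketch: a crude data-grading pass over candidate training
# documents, dropping items that are stale or fail simple quality checks.
# All thresholds and markers are assumptions for the example.
from dataclasses import dataclass
from datetime import date

MAX_AGE_DAYS = 3 * 365                         # assumed freshness window
MIN_WORDS = 50                                 # drop near-empty documents
BANNED_MARKERS = ("lorem ipsum", "click here to subscribe")

@dataclass
class Document:
    text: str
    published: date

def passes_grading(doc: Document, today: date) -> bool:
    """Return True if the document clears the (very rough) quality gate."""
    if (today - doc.published).days > MAX_AGE_DAYS:
        return False                           # stale facts age out of the corpus
    if len(doc.text.split()) < MIN_WORDS:
        return False                           # too thin to be useful
    lowered = doc.text.lower()
    return not any(marker in lowered for marker in BANNED_MARKERS)

if __name__ == "__main__":
    docs = [
        Document("A reasonably long article about current events. " * 20, date(2024, 5, 1)),
        Document("click here to subscribe", date(2024, 6, 1)),
        Document("An accurate but decade-old explainer. " * 20, date(2013, 1, 1)),
    ]
    kept = [d for d in docs if passes_grading(d, date(2024, 7, 1))]
    print(f"kept {len(kept)} of {len(docs)} documents")
```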

What this means is that in their quest for high-quality information, AI companies inevitably seek high-quality data sources. When you translate that into activity picked up from the open public web, that in itself implies that creatives currently battling against tech firms for compensation for use of their material in training AI systems have a good point.

The best and most current information they create is worth something to the creators, to those who consume it, and to the people who own and train the machines that harvest data from it. Indeed, given that AI by its nature becomes a tool directly available to everyone and across every supported language, it seems plausible to think the value of that information might actually grow once it is used to train an AI model.

So, while Apple might not be using YouTube data for its Apple Intelligence models, it will be using other data curated across the public web. And while Apple might at least try to avoid using data it should not exploit this way — and is honest enough to have responded to the current YouTube controversy — not every AI firm does the same. And once the machine is trained it cannot be untrained.  

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Want ROI from genAI? Rethink what both terms mean

When generative AI popularity and marketing hype went into overdrive last year, just about every enterprise launched a wide range of genAI projects. And for various reasons, very few of them delivered the kind of return on investment that CEOs and board members had expected.

As a result, 2024 has become the year of AI postmortems and recriminations about why projects went sour and who was to blame. What can IT leaders do now to make sure that genAI projects launched later this year and throughout 2025 fare better? Experts are suggesting a radical rethinking of how ROI should be measured in genAI deployments, as well as the kinds of projects where generative AI belongs at all.

“We have an AI ROI paradox in our sector, and we have to overcome it,” said Atefeh “Atti” Riazi, CIO for media enterprise Hearst, which reported $12 billion in revenue last year. “Although we have [years of experience] measuring the ROI for IT on lots of other projects, AI is so disruptive that we don’t really yet understand its impacts. We don’t understand the implications of it long term.”

When boards push down genAI mandates — and LOBs go rogue

After OpenAI captured the attention of the industry when consumer fascination with ChatGPT surged in early 2023, Conor Twomey observed a “wave of euphoria and fear that swept over every boardroom.” AI vendors tried to take advantage of this euphoria by marketing their own version of FUD (fear, uncertainty, and doubt), said Twomey, head of AI strategy at data management firm KX.

“Every organization went down the same path and said, ‘We don’t know what this thing is capable of.’”

That sparked a flood of genAI deployments ordered by boards of directors and, to a lesser extent, CEOs. This was happening to an extent not seen since the early days of web euphoria around 1994.

“That was something different with generative AI, where a lot of the motion came top-down,” said Rajiv Shah, who manages AI strategy for Snowflake, a cloud data storage and analytics service provider. “Deep learning, for example, was certainly hyped up, but it didn’t have the same top-down pushing.”

Shah says this top-down approach colored and often complicated the traditional requirements for ROI analysis prior to major rollouts. Little wonder that those rollouts failed to meet expectations.

And mandates from above weren’t the only source of pressure IT leaders faced to push through genAI projects. Many business units also brought AI ideas to IT; when IT pointed out why they would be unlikely to succeed, those departments often said, “Thanks for the input. We are doing it anyway.”

Such projects tend to shift focus away from companies’ true priorities, notes Kelwin Fernandes, CEO at AI consultant NILG.AI.

“I see genAI being applied in non-core processes that won’t directly affect the core business, such as chatbots or support agents. These projects lack support and long-term engagement from the organization,” Fernandes said. “I see genAI not bringing the promised ROI because people moved their priorities from making better decisions to building conversational interfaces or chatbots.”

Inflated expectations, underestimated costs

Early genAI apps often delivered breathtaking results in small pilots, setting expectations that didn’t carry over to larger deployments. “One of the primary culprits of the cost versus value conundrum is lack of scalability,” said KX’s Twomey.

He points to an increasing number of startup companies using open-source genAI technology that is “sufficient for introductory deployments, meaning they work nicely with a couple hundred unstructured documents. Once enterprises feel comfortable with this technology and begin to scale it up to hundreds of thousands of documents, the open-source system bloats and spikes running costs,” he said.

“Same goes for usage,” he added. “When genAI is inserted into a workflow ideal for a subset of users and then exponentially more users are added, it doesn’t work as hoped.”

Patrick Byrnes, formerly senior consultant for AI at Deloitte and now an AI consultant for DataArt, attributes some of the inflated ROI expectations for generative AI projects to the impressive performance delivered by the earliest genAI applications.

“If you go into Gemini or ChatGPT and ask it something basic, you can get an incredible response right away,” he said. Expecting similar results on a larger scale, “some enterprises did not start small. Right out of the gate, they went with high-impact customer-facing efforts.”

Indeed, many of the ROI shortcomings with genAI deployments are a result of executives not thinking through the rollout implications sufficiently, according to an executive in the AI field who asked that her name and affiliation not be used.

“Automation driven by AI leads to productivity gains, but often the cost to enable it is overlooked,” she said. “Enterprises focus on model development, training, and system infrastructure but don’t accurately account for cost of data prep. They spin up massive data sets for AI, but small errors can make it useless, which also leads employees to mistrust outputs, leading to costs without ROI.”

Another overlooked factor, she noted, is that many AI vendors are currently focused on customer acquisition, keeping costs down in the short term. “Then they will ratchet up prices with an eye toward profitability, which will lead to higher costs for enterprise users in the future.”

Those costs are not likely to get meaningfully better by 2025. IDC has noted that the costs associated with generative AI efforts are extensive.

“Generative AI requires enormous levels of compute power. NVIDIA’s workhorse chip that powers the GPUs for datacenters and the AI industry costs ~$10,000 per chip,” the analyst firm said in a September 2023 report. “Operational costs are in the range of $4 million to $5 million monthly, and businesses expect model training costs to exceed $5 million. Added to this are electricity costs and datacenter management.”

The hallucination challenge

On top of all this is the fact that genAI periodically hallucinates, meaning that the system makes things up. That will deliver a bitter surprise if the company is trusting it to analyze critical data in healthcare, finance, or aerospace — and even if it is simply relying on genAI to accurately summarize what happened during a meeting.

For business managers who are used to trusting the numbers generated by a spreadsheet projecting revenue growth, that can be unsettling. Those executives are used to the projections failing because an employee’s assumptions turned out to be too optimistic, but they are not used to Excel lying about the mathematical result of 800 numbers being multiplied.

And it cuts into ROI because all generative AI output must be closely fact-checked by a human, erasing many of the perceived productivity gains.

Hearst’s Riazi sees the genAI hallucination issue as temporary. “Hallucinations do not bother me. Eventually, it will address itself,” she said.

More importantly, she argues that business simply needs to apply the same supervision and oversight to genAI that it has for decades with its human employees, stressing that “people hallucinate as well” and coders have been known to write “buggy code.”

“Human error is already a big issue in medicine and patient care,” Riazi said. “There is a lot of bad data out there, but there is no difference [in managing hallucinations] from what we are already doing today. We see a lot of data cleansing going on.”

NILG.AI’s Fernandes is doubtful that genAI hallucinations will ever go away, but he says that shouldn’t necessarily be a dealbreaker for any application. It is simply a matter of enterprises adjusting their thinking to deal with an imperfect reality, something they already have experience doing.

“We have quality assurance to reduce production errors, but errors still exist, and that’s why we have return policies and warranties. We use the QA process as a fallback plan of the factory errors and the warranty as a fallback plan of the QA,” he said. “All those actions reduce the probability of failure to a certain point. They can still exist; we have learned to do business with those errors. We need to understand — on each application — what the right fallback action is for an AI error.”

Looking for ROI in all the wrong places

Even when genAI succeeds, its results are sometimes less valuable than anticipated. For example, generative AI is an effective tool for producing the kind of content generally handled by lower-level staffers or contractors — simple tweaks to existing material for social media posts or e-commerce product descriptions. That output still needs to be verified by humans, but it has the potential to cut costs in creating low-level content.

But because it often is low level, some have questioned whether that is really going to deliver any meaningful financial advantages.

“Even before AI, the market for mediocre written and visual content was already fairly saturated, so it’s no surprise that some enterprises have discovered there is limited ROI in similar mediocre content generated by AI,” said Brian Levine, a managing director at consultant Ernst & Young.

What ROI should look like for enterprise genAI

KX’s Twomey questioned whether many senior enterprise executives have a realistic handle on what ROI should mean in a generative AI rollout, especially in the first year where it is mostly an experiment rather than a traditional deployment.

“Enterprise deployment of genAI has slowed down — and will continue to do so — as enterprises experience an increase in costs that exceeds the value they are getting,” Twomey said. “When this happens, it tells me that enterprises aren’t understanding the ROI and they’re not appropriately controlling TCO.”

And therein lies the conundrum: How can executives appropriately control the total cost of ownership and appropriately interpret the return on investment if they have no idea what either should look like in a generative AI reality?

This gets even more difficult when secondary ROI factors are considered, such as market and customer/prospect perceptions, Twomey points out.

“This complexity with transiting — and scaling — AI workflows in production has been prohibitive for many enterprise deployments,” he said. “The repercussions are clear losses in time, money, and effort that can also result in competitive disadvantages, reputational damage, and stalled future innovation initiatives.”

It may even be premature to measure ROI monetarily for genAI. “The value for enterprises today is to practice, to experiment,” said DataArt’s Byrnes. “That is one of the things that people don’t really appreciate. There is a strong learning component to all of this.”

Focusing genAI

But while experimentation is important, it should be done intelligently. EY’s Levine notes that some companies are inclined to trust generative AI too much when it comes to methodology, allowing the software to figure out how to obtain the desired information. 

Consider the example of a large and growing retail chain that turned to genAI to figure out the best locations for its next 50 stores. Given insufficient guidelines, the AI went off the rails and returned completely unusable results, according to inside sources.

Instead of simply telling the AI to make recommendations for the best places to launch stores, Levine suggests that the retailer would be better served by coding very extensive and very specific lists of how it currently evaluates new locations. That way, the software can follow those instructions, and the chances of it making errors are somewhat reduced.

Would an enterprise ever tell a new employee, “Figure out where our next 50 stores should be. Bye!”? Unlikely. The business would spend days training that employee on what to look for and where to look, and the employee would be shown lots of examples of how it had been done before. If a manager wouldn’t expect a new employee to figure out how to answer the question without extensive training, why would that manager expect genAI to fare any better?

Given that ROI simply means value delivered minus cost, the best way to improve value is to increase the accuracy and usability of the answers provided. Sometimes, that means not giving genAI broad requests and seeing what it chooses to do. That might work in machine learning, but genAI is a different animal.
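For a sense of what that arithmetic looks like in practice, here is a deliberately simple back-of-the-envelope calculation. Every figure is a made-up assumption used only to show how quickly human review time and hidden costs eat into the value side of the equation.

```python
# Illustrative back-of-the-envelope ROI check for a hypothetical genAI pilot.
# All figures are invented assumptions, not benchmarks.
hours_saved_per_month = 400        # assumed staff time freed up by the tool
review_hours_per_month = 120       # human fact-checking claws some of that back
loaded_hourly_rate = 75.0          # assumed fully loaded cost per hour

monthly_value = (hours_saved_per_month - review_hours_per_month) * loaded_hourly_rate
monthly_cost = 12_000 + 4_000      # assumed model/API spend plus data prep and support

roi = monthly_value - monthly_cost # "value delivered minus cost"
print(f"net monthly ROI: ${roi:,.0f}")   # (400 - 120) * 75 = $21,000 value; $5,000 net
```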

To be fair, there absolutely are situations where it makes sense to set genAI loose and see where it chooses to go. But for the overwhelming majority of situations, IT will see far better results if it takes the time to train genAI appropriately.

Reining in genAI projects

Now that the initial hype over genAI has died down, it’s important for IT leaders to protect their organizations by focusing on deployments that will bring true value to the company, say AI strategists.

One suggestion for trying to better control generative AI efforts is for enterprises to create AI committees consisting of specialists in various AI disciplines, Snowflake’s Shah said. That way, every generative AI proposal originating anywhere in the enterprise would have to be run past this committee, which could veto or approve any idea.

“With security and legal, there are so many things that can go wrong with a generative AI effort. This would make executives go in front of the committee and explain exactly what they wanted to do and why,” he said.

Shah sees these AI approval committees as short-term placeholders. “As we mature our understanding, the need for those committees will go away,” he said.

Another suggestion comes from NILG.AI’s Fernandes. Instead of flashy, large-scale genAI projects, enterprises should focus on smaller, more controllable objectives such as “analyzing a vehicle’s damage report and estimating costs, or auditing a sales call and identifying if the person follows the script, or recommending products in e-commerce based on the content/description of those products instead of just the interactions/clicks.”

And instead of implicitly trusting genAI models, “we shouldn’t use LLMs on any critical task without a fallback option. We shouldn’t use them as a source of truth for our decision-making but as an educated guess, just like you would deal with another person’s opinion.”
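Here is a minimal sketch of that fallback idea in Python: the model’s answer is treated as an educated guess, validated, and routed to a human when it can’t be trusted. The call_llm function is a placeholder standing in for whatever model client an organization actually uses, and the plausibility range is an invented assumption.

```python
# Illustrative sketch of the "fallback option" idea: treat an LLM answer as an
# educated guess, validate it, and escalate to a human reviewer otherwise.
from typing import Optional

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (assumption for this example)."""
    return "estimated repair cost: 2,450 EUR"

def parse_cost(answer: str) -> Optional[float]:
    digits = "".join(ch for ch in answer if ch.isdigit() or ch == ".")
    try:
        return float(digits)
    except ValueError:
        return None

def estimate_damage_cost(report: str) -> dict:
    answer = call_llm(f"Estimate the repair cost from this damage report:\n{report}")
    cost = parse_cost(answer)
    # Fallback: anything unparsable or implausible goes to a human reviewer
    # instead of being written downstream as fact.
    if cost is None or not (50 <= cost <= 50_000):
        return {"status": "needs_human_review", "raw_answer": answer}
    return {"status": "auto_estimated", "cost_eur": cost}

if __name__ == "__main__":
    print(estimate_damage_cost("Rear bumper dented, tail light cracked."))
```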

Want ROI from genAI? Rethink what both terms mean

When generative AI popularity and marketing hype went into overdrive last year, just about every enterprise launched a wide range of genAI projects. And for various reasons, very few of them delivered the kind of return on investment that CEOs and board members had expected.

That meant that 2024 has become the year of AI postmortems and recriminations about why projects went sour and who was to blame. What can IT leaders do now to make sure that genAI projects launched later this year and throughout 2025 fare better? Experts are suggesting a radical rethinking of how ROI should be measured in genAI deployments, as well as the kinds of projects where generative AI belongs at all.

“We have an AI ROI paradox in our sector, and we have to overcome it,” said Atefeh “Atti” Riazi, CIO for media enterprise Hearst, which reported $12 billion in revenue last year. “Although we have [years of experience] measuring the ROI for IT on lots of other projects, AI is so disruptive that we don’t really yet understand its impacts. We don’t understand the implications of it long term.”

When boards push down genAI mandates — and LOBs go rogue

After OpenAI captured the attention of the industry when consumer fascination with ChatGPT surged in early 2023, Conor Twomey observed a “wave of euphoria and fear that swept over every boardroom.” AI vendors tried to take advantage of this euphoria by marketing their own version of FUD (fear, uncertainty, and doubt), said Twomey, head of AI strategy at data management firm KX.

“Every organization went down the same path and said, ‘We don’t know what this thing is capable of.’”

That sparked a flood of genAI deployments ordered from boards of directors and, to a lesser extent, CEOs. This was happening to an extent that has not been seen since the early days of web euphoria around 1994.

“That was something different with generative AI, where a lot of the motion came top-down,” said Rajiv Shah, who manages AI strategy for Snowflake, a cloud data storage and analytics service provider. “Deep learning, for example, was certainly hyped up, but it didn’t have the same top-down pushing.”

Shah says this top-down approach colored and often complicated the traditional requirements for ROI analysis prior to major rollouts. Little wonder that those rollouts failed to meet expectations.

And mandates from above weren’t the only source of pressure IT leaders faced to push through genAI projects. Many business units also brought AI ideas to IT, and IT pointed out why they would be unlikely to be successful. And those departments often said, “Thanks for the input. We are doing it anyway.”

Such projects tend to shift focus away from companies’ true priorities, notes Kelwin Fernandes, CEO at AI consultant NILG.AI.

“I see genAI being applied in non-core processes that won’t directly affect the core business, such as chatbots or support agents. These projects lack support and long-term engagement from the organization,” Fernandes said. “I see genAI not bringing the promised ROI because people moved their priorities from making better decisions to building conversational interfaces or chatbots.”

Inflated expectations, underestimated costs

Early genAI apps often delivered breathtaking results in small pilots, setting expectations that didn’t carry over to larger deployments. “One of the primary culprits of the cost versus value conundrum is lack of scalability,” said KX’s Twomey.

He points to an increasing number of startup companies using open-source genAI technology that is “sufficient for introductory deployments, meaning they work nicely with a couple hundred unstructured documents. Once enterprises feel comfortable with this technology and begin to scale it up to hundreds of thousands of documents, the open-source system bloats and spikes running costs,” he said.

“Same goes for usage,” he added. “When genAI is inserted into a workflow ideal for a subset of users and then exponentially more users are added, it doesn’t work as hoped.”

Patrick Byrnes, formerly senior consultant for AI at Deloitte and now an AI consultant for DataArt, attributes some of the inflated ROI expectations for generative AI projects to the impressive performance delivered by the earliest genAI applications.

“If you go into Gemini or ChatGPT and ask it something basic, you can get an incredible response right away,” he said. Expecting similar results on a larger scale, “some enterprises did not start small. Right out of the gate, they went with high-impact customer facing efforts.”

Indeed, many of the ROI shortcomings with genAI deployments are a result of executives not thinking through the rollout implications sufficiently, according to an executive in the AI field who asked that her name and affiliation not be used.

“Automation driven by AI leads to productivity gains, but often the cost to enable it is overlooked,” she said. “Enterprises focus on model development, training, and system infrastructure but don’t accurately account for cost of data prep. They spin up massive data sets for AI, but small errors can make it useless, which also leads employees to mistrust outputs, leading to costs without ROI.”

Another overlooked factor, she noted, is that many AI vendors are currently focused on customer acquisition, keeping costs down in the short term. “Then they will ratchet up prices with an eye toward profitability, which will lead to higher costs for enterprise users in the future.”

Those costs are not likely to get meaningfully better by 2025. IDC noted that the costs with generative AI efforts are extensive.

“Generative AI requires enormous levels of compute power. NVIDIA’s workhorse chip that powers the GPUs for datacenters and the AI industry costs ~$10,000 per chip,” the analyst firm said in a September 2023 report. “Operational costs are in the range of $4 million to $5 million monthly, and businesses expect model training costs to exceed $5 million. Added to this are electricity costs and datacenter management.”

The hallucination challenge

On top of all this is the fact that genAI periodically hallucinates, meaning that the system makes things up. That will deliver a bitter surprise if the company is trusting it to analyze critical data in healthcare, finance, or aerospace — and even if it is simply relying on genAI to accurately summarize what happened during a meeting.

For business managers who are used to trusting the numbers generated by a spreadsheet projecting revenue growth, that can be unsettling. Those executives are used to the projections failing because an employee’s assumptions turned out to be too optimistic, but they are not used to Excel lying about the mathematical result of 800 numbers being multiplied.

And it cuts into ROI because all generative AI output must be closely fact-checked by a human, erasing many of the perceived productivity gains.

Hearst’s Riazzi sees the genAI hallucination issue as temporary. “Hallucinations do not bother me. Eventually, it will address itself,” she said.

More importantly, she argues that business simply needs to apply the same supervision and oversight to genAI that it has for decades with its human employees, stressing that “people hallucinate as well” and coders have been known to write “buggy code.”

“Human error is already a big issue in medicine and patient care,” Riazzi said. “There is a lot of bad data out there, but there is no difference [in managing hallucinations] from what we are already doing today. We see a lot of data cleansing going on.”

NILG.AI’s Fernandes is doubtful that genAI hallucinations will ever go away, but he says that shouldn’t necessarily be a dealbreaker for any application. It is simply a matter of enterprises adjusting their thinking to deal with an imperfect reality, something they already have experience doing.

“We have quality assurance to reduce production errors, but errors still exist, and that’s why we have return policies and warranties. We use the QA process as a fallback plan of the factory errors and the warranty as a fallback plan of the QA,” he said. “All those actions reduce the probability of failure to a certain point. They can still exist; we have learned to do business with those errors. We need to understand — on each application — what the right fallback action is for an AI error.”

Looking for ROI in all the wrong places

Even when genAI succeeds, its results are sometimes less valuable than anticipated. For example, generative AI is a very effective tool for creating information that is generally handled by lower-level staffers or contractors, where it is simply tweaking existing material for use in social media or e-commerce product descriptions. It still needs to be verified by humans, but it has the potential for cutting costs in creating low-level content.

But because it often is low level, some have questioned whether that is really going to deliver any meaningful financial advantages.

“Even before AI, the market for mediocre written and visual content was already fairly saturated, so it’s no surprise that some enterprises have discovered there is limited ROI in similar mediocre content generated by AI,” said Brian Levine, a managing director at consultant Ernst & Young.

What ROI should look like for enterprise genAI

KX’s Twomey questioned whether many senior enterprise executives have a realistic handle on what ROI should mean in a generative AI rollout, especially in the first year, when it is mostly an experiment rather than a traditional deployment.

“Enterprise deployment of genAI has slowed down — and will continue to do so — as enterprises experience an increase in costs that exceeds the value they are getting,” Twomey said. “When this happens, it tells me that enterprises aren’t understanding the ROI and they’re not appropriately controlling TCO.”

And therein lies the conundrum: How can executives control total cost of ownership and interpret return on investment if they have no idea what either should look like in a generative AI reality?

This gets even more difficult when secondary ROI factors are considered, such as market and customer/prospect perceptions, Twomey points out.

“This complexity with transitioning — and scaling — AI workflows in production has been prohibitive for many enterprise deployments,” he said. “The repercussions are clear losses in time, money, and effort that can also result in competitive disadvantages, reputational damage, and stalled future innovation initiatives.”

It may even be premature to measure ROI monetarily for genAI. “The value for enterprises today is to practice, to experiment,” said DataArt’s Byrnes. “That is one of the things that people don’t really appreciate. There is a strong learning component to all of this.”

Focusing genAI

But while experimentation is important, it should be done intelligently. EY’s Levine notes that some companies are inclined to trust generative AI too much when it comes to methodology, allowing the software to figure out how to obtain the desired information. 

Consider the example of a large and growing retail chain that turned to genAI to figure out the best locations for its next 50 stores. Given insufficient guidelines, the AI went off the rails and returned completely unusable results, according to inside sources.

Instead of simply telling the AI to make recommendations for the best places to launch stores, Levine suggests that the retailer would be better served by coding extensive, specific lists of how it currently evaluates new locations. That way, the software can follow those instructions, and the chances of it making errors are reduced.

Would an enterprise ever tell a new employee, “Figure out where our next 50 stores should be. Bye!”? Unlikely. The business would spend days training that employee on what to look for and where to look, and the employee would be shown lots of examples of how it had been done before. If a manager wouldn’t expect a new employee to figure out how to answer the question without extensive training, why would that manager expect genAI to fare any better?
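
To make the contrast concrete, here is a minimal sketch, in Python, of what “coding the evaluation criteria” might look like in practice: the company’s existing site-selection rules are spelled out explicitly, and the model is asked to score a single candidate location against them, rather than being asked open-endedly to pick the next 50 stores. The criteria, field names, and thresholds are illustrative assumptions, not the retailer’s actual methodology.

```python
# Illustrative sketch: constrain the model to fixed, company-defined criteria.
SITE_CRITERIA = [
    "daytime foot traffic above the company's defined threshold",
    "no existing store within a 5 km radius",
    "median household income within the target band",
    "lease cost per square meter below the regional cap",
]

def build_scoring_prompt(candidate_site: dict) -> str:
    """Ask the model to score one candidate against fixed criteria,
    rather than letting it invent its own methodology."""
    criteria = "\n".join(f"- {c}" for c in SITE_CRITERIA)
    return (
        "Score the following candidate retail site against each criterion "
        "on a 1-5 scale, explaining each score in one sentence.\n"
        f"Criteria:\n{criteria}\n"
        f"Candidate site data: {candidate_site}\n"
        "Score only this site; do not propose other locations."
    )

print(build_scoring_prompt({"city": "Lisbon", "foot_traffic": 12000, "lease_eur_m2": 28}))
```

The narrower the task and the more explicit the criteria, the easier it is for a human reviewer to check whether the model’s answer is usable.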

Given that ROI is essentially value delivered minus cost, the best way to improve it is to increase the accuracy and usability of the answers genAI provides. Often that means not handing genAI a broad request and waiting to see what it chooses to do. That approach might work in machine learning, but genAI is a different animal.

To be fair, there absolutely are situations where it makes sense to set genAI loose and see where it chooses to go. But for the overwhelming majority of situations, IT will see far better results if it takes the time to train genAI appropriately.

Reining in genAI projects

Now that the initial hype over genAI has died down, it’s important for IT leaders to protect their organizations by focusing on deployments that will bring true value to the company, say AI strategists.

One suggestion for better controlling generative AI efforts is for enterprises to create AI committees consisting of specialists in various AI disciplines, Snowflake’s Shah said. That way, every generative AI proposal originating anywhere in the enterprise would have to be run past the committee, which could approve or veto any idea.

“With security and legal, there are so many things that can go wrong with a generative AI effort. This would make executives go in front of the committee and explain exactly what they wanted to do and why,” he said.

Shah sees these AI approval committees as short-term placeholders. “As we mature our understanding, the need for those committees will go away,” he said.

Another suggestion comes from NILG.AI’s Fernandes. Instead of flashy, large-scale genAI projects, enterprises should focus on smaller, more controllable objectives such as “analyzing a vehicle’s damage report and estimating costs, or auditing a sales call and identifying if the person follows the script, or recommending products in e-commerce based on the content/description of those products instead of just the interactions/clicks.”

And instead of implicitly trusting genAI models, “we shouldn’t use LLMs on any critical task without a fallback option. We shouldn’t use them as a source of truth for our decision-making but as an educated guess, just like you would deal with another person’s opinion.”

New UK government downplays AI regulation in program for the next year

As Britain’s King Charles III stood up in the Houses of Parliament on Wednesday to present the new Labour government’s proposed legislative program, technology experts were primed for any mention of artificial intelligence (AI).

In the event, amid the colorful pomp and arcane ceremony for which the state opening of Parliament is famous, the speech delivered mostly a promise of future legislation, shorn of any detail about the form it will take.

Talking head

The King’s Speech is where Britain’s elected government, in this case the recently elected Labour administration, lays out bills it plans to enact into law in the coming year.

The monarch delivers the speech, but it is written for him by the government. His role is purely constitutional and ceremonial.

It is hard to imagine a greater contrast than that between a ceremony whose origins date back hundreds of years and a topic such as AI, which embodies the promise and peril of 21st-century technology.

The government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models,” announced King Charles.

Beyond the focus on regulating models used for generative AI, though, that leaves the government’s plans and their timing open to interpretation. Still, even the willingness to act marks a change of direction from the outgoing Conservative administration’s policy of legislating on AI only within narrow constraints.

Everyone wants to regulate AI

There had been an expectation that the new government would go further, primed by general statements of intent in the Labour Party Manifesto 2024.

“We will ensure our industrial strategy supports the development of the Artificial Intelligence (AI) sector, removes planning barriers to new datacentres,” the manifesto stated, before turning to the need for regulation.

“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”

The disappearance of these modest ambitions could signal that the government has yet to work out what “binding regulation” should look like at a time when other legislation seems more pressing.

The previous government worried that too much regulation risked stifling development. Equally, no regulation at all carries the risk that by the time it becomes necessary it will be too late to act.

The EU, of course, already has its AI Act while the US is still working through a mixture of proposed legislation bolstered by the Biden administration’s executive orders describing first principles.

Still too early?

A comment by open-source industry advocate OpenUK in advance of the King’s Speech sums up the dilemma.

“There are lessons the UK can learn from the EU’s AI Act that will likely prove to be an overly prescriptive and unwieldy cautionary tale of regulatory capture with only the largest companies able to comply, stifling innovation in the EU,” said the organization’s CEO, Amanda Brock.

It was still too early, Brock argued, to legislate in a way that creates walls and legal restrictions.

“For the UK to stay relevant globally, and to build successful AI companies, openness is crucial. This will allow the UK ecosystem to grow its status as a world leader in open-source AI, behind only the US and China,” she added.

But not everyone is convinced that the wait-and-see approach is the right one.

“Regulation is not just about setting restrictions on AI development; it’s about providing the clarity and guidance needed to promote safe and sustainable innovation,” said Bruna de Castro e Silva of AI governance specialist Saidot.

“As the EU moves forward with publishing its official AI Act, UK businesses have been left waiting for clear guidance on how to develop and deploy AI safely and ethically.”

This is why AI regulation is seen as a thankless task. Take an interventionist approach and experts will line up to say you’re stifling a technology with huge economic and social potential. Take a more cautious approach and others will say you’re not doing enough.

Last November, the previous Conservative administration of Rishi Sunak jumped on the theme of AI, hosting a global AI Safety Summit with symbolic flourish at the famous Second World War code-breaking facility just outside London, Bletchley Park.

At that event, several big AI names — OpenAI, Google DeepMind, Anthropic — undertook to give a new Frontier AI Taskforce early access to their models to conduct safety evaluations.

The new government inherits that promise, even if to many observers certainty about the UK’s AI legislative regime will seem no nearer than it was then.
