Month: October 2024

5 ways to stop Windows Update from rebooting your PC

Windows Update can be a pain. Rebooting for updates is one thing — but a forced reboot that shuts down your running applications when you’re trying to get work done? Now, that’s obnoxious.

Windows Update hit rock bottom in the early years of Windows 10. Back then, lots of people I know complained to me that Windows Update had automatically rebooted their PC and messed up their work — often to install a major update that made that reboot take an especially long time!

The good news is that Windows Update is less irritating now; whether you’ve upgraded to Windows 11 or are still using Windows 10, Windows Update has learned some restraint. The bad news is that Windows Update still reserves the right to reboot your PC when it wants to automatically install updates. If you leave your computer running overnight, Windows Update might automatically reboot it.

But there are some ways you can take control.

Want more PC advice? My free Windows Intelligence newsletter delivers all the best Windows tips straight to your inbox. Plus, you’ll get in-depth Windows Field Guides as a special welcome bonus!

Why Windows Update forces automatic reboots

Like any operating system, Windows has security vulnerabilities that need to be fixed when they’re discovered. And, even after Windows Update installs those updates, they often don’t take effect until your PC reboots, leaving it vulnerable.

Because many people ignore updates, Windows Update takes things into its own hands and reboots for you, ensuring your PC has all the latest active security patches.

Hopefully, this will become less necessary in the future. Microsoft appears to be working on “hotpatching” for Windows 11, which would let Windows install some security updates and make them take effect immediately — no reboot necessary. That’s something to look forward to.

Windows Update workaround #1: Set your active hours

The best thing you can do to prevent Windows Updates from interrupting your work (or play) is to change your PC’s “active hours.” These are the hours you generally use your computer; Windows Update won’t restart your PC during these times.

You can set up to 18 hours of the day as active hours. For example, you could set the hours of 6 a.m. to midnight as your active hours. Windows Update would then only restart automatically for updates between midnight and 6 a.m.

This works on both Windows 11 and Windows 10. Changing it may not be necessary: Windows will learn when you generally use your PC and attempt to automatically set hours that make sense for you. But you can set them yourself.

To change your PC’s active hours:

  • On Windows 11, open the Settings window from the Start menu, select “Windows Update,” select “Advanced options,” and then click “Active hours.”
  • On Windows 10, open the Settings window, select “Update & Security,” and click “Change active hours.”
The Active Hours setting will let you banish unexpected reboots to a time of day when they won’t interrupt your work.

Chris Hoffman, IDG
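
If you’d rather check this setting from a script than dig through Settings, the values behind it live in the registry. Here’s a minimal read-only sketch in Python, assuming the ActiveHoursStart and ActiveHoursEnd values commonly found under the Windows Update UX key; the exact layout can vary between builds, so treat this as an inspection aid, not a supported API.

```python
import winreg

# Assumed location of the Active Hours values (may vary by build).
KEY_PATH = r"SOFTWARE\Microsoft\WindowsUpdate\UX\Settings"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    # Both values are DWORD hours in 24-hour time (0-23).
    start, _ = winreg.QueryValueEx(key, "ActiveHoursStart")
    end, _ = winreg.QueryValueEx(key, "ActiveHoursEnd")
    print(f"Active hours: {start}:00 to {end}:00")
```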

Windows Update workaround #2: Reboot on your own schedule

Active hours aren’t the ideal solution if you need your PC to run for days on end. Perhaps you’re performing an important long-running task overnight and need to ensure Windows Update doesn’t get in the way and start rebooting things.

Personally, I like taking control of matters: I choose when to reboot for any updates. That’s why I set Windows Update to tell me when it needs a restart; I then reboot at a time that’s convenient for me.

To have Windows Update notify you before rebooting your PC:

  • On Windows 11, head to Settings > Windows Update > Advanced options. Ensure that “Notify me when a restart is required to finish updating” is toggled On.
  • On Windows 10, go to Settings > Update & security > Advanced options. Ensure that “Show a notification when your PC requires a restart to finish updating” is on.

Then, when an update is necessary — which you’ll know when you see that nagging system tray icon — you can choose to restart and update. Just use the power menu in the Start menu and select “Update and restart.”

This works hand in hand with active hours. Windows Update won’t reboot during your active hours. Then, if you plan on leaving your PC on overnight to perform an important task, you can choose to restart it before you step away. (That’s what I do.)

This isn’t a total escape from Windows Update’s automatic reboots. If you ignore the notification, Windows Update might automatically reboot outside of active hours. But at least you can choose to do it at a convenient time.

Windows Update’s notifications are the key to rebooting on your own schedule.

Chris Hoffman, IDG
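
If you want to verify the toggle from a script, the sketch below reads the RestartNotificationsAllowed2 value that recent Windows builds appear to keep under the same Windows Update UX key. The value name is an assumption on my part and may differ by build, so use it only as a quick sanity check alongside the Settings app.

```python
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\WindowsUpdate\UX\Settings"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        # Assumed value name; 1 appears to mean notifications are enabled.
        value, _ = winreg.QueryValueEx(key, "RestartNotificationsAllowed2")
        print("Restart notifications:", "on" if value else "off")
except FileNotFoundError:
    print("Value not found on this build; check the Settings app instead.")
```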

Windows Update workaround #3: Stop automatic update downloads

Windows has a well-disguised way to stop Windows Update from automatically downloading and installing updates. And, if it won’t install them, it won’t automatically reboot your computer, either.

To do this, you have to set a connection as “metered.” This is what you would do if you were using a cellular data connection without much data, for example. Windows Update will respect this and won’t automatically download updates on metered networks.

To get updates, you’ll have to open the Windows Update pane in Settings and click a button to download them. To do this on either Windows 11 or Windows 10, head to Settings > Network & internet. If you’re connected to a Wi-Fi network, click “Wi-Fi” and then the name of the network. If you’re connected to a wired network, click “Ethernet.” Then, toggle on the “Metered connection” or “Set as metered connection” option.

You’ll want to check this setting to ensure Windows Update respects the “metered connection” option:

  • On Windows 11, head to Settings > Windows Update > Advanced options and ensure “Download updates over metered connections” is set to Off.
  • On Windows 10, go to Settings > Update & Security > Advanced options and ensure “Download updates over metered connections (extra charges may apply)” is set to Off.

Be sure to visit the Windows Update settings screen and install updates regularly if you do this. You can choose to install the updates when a reboot is convenient.

Bear in mind that Windows Update will automatically download updates when it connects to a connection that isn’t marked metered. So, if you mark your home Wi-Fi connection as metered and then take your laptop to a coffee shop, it will automatically begin downloading updates when you connect it to the coffee shop’s Wi-Fi hotspot.

The buried “metered connection” switch stops Windows Update’s automatic update process, putting it under your control.

Chris Hoffman, IDG
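
For the curious, Windows also keeps a default cost table per connection type in the registry. The read-only sketch below inspects it; the key path and the meaning of the values (1 for unrestricted, 2 for metered) are assumptions based on commonly documented behavior, and this table reflects type-wide defaults rather than the per-network toggle described above.

```python
import winreg

# Assumed location of the default media-cost table.
KEY_PATH = (r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
            r"\NetworkList\DefaultMediaCost")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for name in ("Ethernet", "WiFi", "4G"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name}: {'metered' if value == 2 else 'unrestricted'}")
        except FileNotFoundError:
            print(f"{name}: not set")
```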

Windows Update workaround #4: Pause updates

There’s another way to take control over updates: While Windows 11 and Windows 10 don’t offer any built-in options for turning off automatic updates, they do offer a way to pause automatic updates for up to five weeks.

This isn’t something I recommend to most people, as you will be going without security updates. But it’s a way to ensure Windows won’t install any updates — and reboot — for a period of time, if you have a pressing reason to do so.

To pause updates:

  • On Windows 11, head to Settings > Windows Update. Use the “Pause updates” drop-down box and select the number of weeks you want to pause updates for.
  • On Windows 10, head to Settings > Update & security > Advanced options. Use the box under “Pause updates” to choose how long you want to pause updates for.

After you unpause updates, Windows Update must check for and install updates before it lets you pause again.

Windows Update lets you stop getting updates — but only for a few weeks at a time.

Chris Hoffman, IDG
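
If you script your maintenance, here’s a minimal sketch for checking whether a pause is in effect, assuming the PauseUpdatesExpiryTime value that some builds store under the Windows Update UX key. The value name and its timestamp format are assumptions; a missing value is treated as “not paused.”

```python
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\WindowsUpdate\UX\Settings"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        # Assumed value name; typically an ISO-8601 timestamp string.
        expiry, _ = winreg.QueryValueEx(key, "PauseUpdatesExpiryTime")
        print(f"Updates are paused until {expiry}.")
except FileNotFoundError:
    print("Updates are not currently paused.")
```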

Windows Update workaround #5: Configure group policy (for businesses, mostly)

If you’re using a PC managed by your employer, it may be updated on your employer’s schedule. It’s up to the IT department to configure automatic update behavior. Businesses have a number of group policy options to control just how these automatic restarts work.

If you’re running a Pro or Enterprise edition of Windows, you can configure some of these policies yourself on your own PC. But you shouldn’t need to do so — the above options will let you take control.
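
As one concrete example, the long-standing “No auto-restart with logged on users” policy can be set through the registry as well as the Group Policy editor. Here’s a sketch that writes the documented NoAutoRebootWithLoggedOnUsers value; it requires an elevated prompt, Home editions may not honor it, and you should prefer the Settings options above unless you know you need a policy.

```python
import winreg

# Windows Update policy key (writing it requires administrator rights).
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                         winreg.KEY_SET_VALUE)
# 1 = don't auto-restart while a user is signed in.
winreg.SetValueEx(key, "NoAutoRebootWithLoggedOnUsers", 0,
                  winreg.REG_DWORD, 1)
winreg.CloseKey(key)
print("Policy set; it applies to scheduled automatic update installs.")
```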

One final word of wisdom…

However you go about handling Windows Update activity, it’s a good idea to use applications that automatically save your work so they can recover from unexpected reboots.

Luckily, this applies to most modern Windows applications. Then, if your PC suddenly has to shut down — whether due to Windows Update, a blue screen of death, or a power outage — you won’t lose any data.

Get even more Windows tips and tricks with my Windows Intelligence newsletter — three things to try every Friday. Plus, get free copies of Paul Thurrott’s Windows 11 and Windows 10 Field Guides (a $10 value) for signing up.

How Microsoft became a Big Tech choirboy

The US Federal Trade Commission (FTC) and the Department of Justice (DOJ) have aggressively targeted Big Tech, suing Meta, Google, Amazon, and Apple for antitrust violations. And they’re not doing so in a small way — the government has filed multiple lawsuits against Google, for example. In August, Judge Amit Mehta ruled the company violated antitrust law by actions it took to protect its search business.

He was direct and blunt in his ruling: “Google is a monopolist, and it has acted as one to maintain its monopoly.” 

No decision has yet been made about what action will arise from the ruling. But the DOJ could well recommend the nuclear option: break up Google by forcing it to spin off part or all of its search business.

That’s on the heels of other Big Tech suits. In March, the DOJ filed an antitrust suit against Apple, claiming the company has taken a variety of actions to make it more difficult for people to switch from their iPhones to competitors’ devices.

In 2020, the FTC sued Meta for antitrust violations, claiming Meta created a monopoly in social media when it bought Instagram and WhatsApp. There have been some twists and turns, but the suit still stands, though it hasn’t yet been brought to court. If the government prevails, it might try to force Meta to sell off Instagram and WhatsApp.

In September 2023, Amazon got its turn when the FTC sued it for taking “interlocking anticompetitive and unfair strategies to illegally maintain its monopoly power.”

Notice anything missing from that group? 

How about Microsoft, valued at more than $3 trillion and the world’s leading AI company? It’s true the FTC went after Microsoft when the company announced it was buying the gaming company Activision for $69 billion. But the feds lost that suit. And even if they had won, there’s a big difference between that fight and the ones against Meta, Google, Apple, and Amazon. Those lawsuits represent existential threats to the way the companies do business, possibly including breaking them up. The Activision suit, if successful, would only have stopped Microsoft from increasing its presence in gaming.

How has Microsoft managed to avoid being targeted? After all, the company has a virtual monopoly on desktop and laptop operating systems, is the global leader in AI, and has a massive presence almost everywhere in the tech world, from cloud computing to productivity software suites and beyond.

This didn’t happen by accident. Here’s how the onetime biggest shark in technology, a company that was set on its heels by a DOJ antitrust suit decades ago, has managed to stay on the right side of the feds — at least so far.

Becoming the tech world’s choirboy

Back in 1998, Microsoft faced its own existential crisis: The DOJ sued it for illegally using its Windows monopoly to kill its competition. The company lost the lawsuit, and a judge ordered the company be broken up. After an appeal, in 2001 Microsoft and the DOJ reached an agreement in which Microsoft had to share code with other companies and had to allow non-Microsoft browsers access to Windows.

It was little more than a slap on the wrist. Despite that, the company went into a tailspin because it was so focused on defending itself rather than aggressively going after the mobile market, expanding into internet search, focusing on social media, or jumping into online retail.

When Satya Nadella became CEO of Microsoft in 2014, he was aware the DOJ lawsuit had set the company on its heels. He was determined to do whatever he could to avoid similar suits in the future. So, he changed the company’s old predatory culture and focused on technologies and behavior less likely to invite the wrath of regulators and law enforcement.

What he did above all was focus on a variety of technologies, rather than a single one. And he did so without trying to gain a monopoly. For example, Nadella bet big on the cloud, growing the company’s cloud-based business and revenue dramatically. Amy Hood, executive vice president and chief financial officer of Microsoft, said of the company’s recent quarter, “Microsoft Cloud quarterly revenue of $36.8 billion, [was] up 21% (up 22% in constant currency) year-over-year.”

Microsoft also gets big revenue from Windows, its office suite, AI, gaming, and more. 

The key is that none of those technologies comes close to being a monopoly. Amazon is the leader in the cloud, not Microsoft. Thanks to iOS and Android, Microsoft doesn’t have a monopoly on operating systems. Google has a sizable office suite business, so Microsoft doesn’t have a monopoly there. And while Microsoft has become big in gaming, the courts have already ruled it doesn’t have a monopoly.

Google, Meta, Apple, and Amazon are each tied to technologies in which they have monopolies. Those monopolies have been their strength — but with the FTC and DOJ targeting them, they could become their downfall.

How about AI?

That’s not to say Microsoft will avoid government action forever. It wouldn’t be a surprise for the DOJ or the FTC to eventually go after it for its AI dominance. Not only is it now the largest AI company in the world, but it has deep ties to OpenAI, another dominant player in the field. 

At the moment, there’s plenty of competition, with Google, Meta, Apple, and Amazon jumping in, and with other large companies like Anthropic in the running. But if things shake out and Microsoft becomes the runaway leader, it might find itself in regulators’ crosshairs again.

Intel, AMD unite in new x86 alliance to tackle AI, other challenges

Semiconductor rivals Intel and AMD announced the formation of an x86-processor advisory group that will try to address ever-increasing AI workloads, custom chiplets, and advances in 3D packaging and system architectures.

Members of the x86 Ecosystem Advisory Group include Broadcom, Dell, Google, Hewlett Packard Enterprise, HP, Lenovo, Meta, Microsoft, Oracle, and Red Hat. Notably missing: TSMC — the world’s largest chipmaker. Linux creator Linus Torvalds and Epic Games CEO Tim Sweeney are also members.

The mega-tech companies plan to collaborate on architectural interoperability and hope to “simplify software development” across the world’s most widely used computing architecture, according to a news announcement.

“We are on the cusp of one of the most significant shifts in the x86 architecture and ecosystem in decades — with new levels of customization, compatibility and scalability needed to meet current and future customer needs,” Intel CEO Pat Gelsinger said in a statement.

Generative AI (genAI) is moving into smartphones, PCs, cars, and Internet of Things (IoT) devices because edge devices can process data locally, return results faster, and keep data more secure.

That’s why, over the next several years, silicon makers are turning their attention to fulfilling the promise of AI at the edge, which will allow developers to essentially offload processing from data centers — giving genAI app makers a free ride as the user pays for the hardware and network connectivity.

Apple, Samsung, and other smartphone and silicon manufacturers are rolling out AI capabilities on their hardware, fundamentally changing the way users interact with edge devices. On the heels of Apple rolling out an early preview of iOS 18.1 with its first genAI tools, IDC released a report saying nearly three in four smartphones will be running AI features within four years.

The release of the next version of Windows — perhaps called Windows 12 — later this year is also expected to be a catalyst for genAI adoption at the edge; the new OS is expected to have AI features built in.

At the Consumer Electronics Show in January 2024, PC vendors and chipmakers showcased advanced AI-driven functionalities. But despite the enthusiasm generated by those selling or making genAI tools and platforms, enterprises are expected to adopt a more measured approach over the next year, according to one Forrester Research report.

“CIOs face several barriers when considering AI-powered PCs, including the high costs, difficulty in demonstrating how user benefits translate into business outcomes, and the availability of AI chips and device compatibility issues,” said Andrew Hewitt, principal analyst at Forrester Research.

Apple powers up the iPad mini for Apple Intelligence

As expected, Apple has introduced a much faster Apple Intelligence-capable iPad mini equipped with the same A17 Pro chip used in the iPhone 15 Pro series. That’s a good improvement from the A15 Bionic in the previous model, and makes for faster graphics, computation, and AI calculation. 

It also sets the scene for the public release of the first Apple Intelligence features on Oct. 28, when I expect Apple’s heavily promoted wave of current hardware ads to at last make more sense. (We can also expect new Macs before the end of October.)

The iPad mini turns 7

By announcing the new mini by press release, Apple broke with tradition twice with this heavily telegraphed (we all expected it) product iteration.

First, in what, from memory, seems a fairly rare move, Apple unveiled the new hardware right after a US holiday; second, the release wasn’t flagged by Apple industry early-warning system Mark Gurman, though he did anticipate an October update. The introduction of a highly performant Apple tablet is likely to further accelerate Apple’s iPad sales, which increased 14% in Q2 2024, according to Counterpoint. Apple will remain the world’s leading tablet maker, and earlier reports about the death of this particular model in Apple’s tablet range proved unfounded.

What’s new in iPad mini?

At first glance, the new iPad mini will seem familiar to most users. The biggest change is pretty much an updated chip inside a similar device, with the same height, width, and weight as the model it replaces. Available in blue, purple, starlight, and space gray, the iPad mini has an 8.3-in. Liquid Retina display, similar to before. Remarkably, pricing on the new models starts at $499 for 128GB storage — which is twice the storage at the same starting price as the 2021 iPad mini this one replaces. 

There are other highlights here.

A better, faster AI processor

The A17 Pro processor means the iPad mini now has a 6-core CPU, which makes for a 30% boost in CPU performance in comparison to the outgoing model. You also get a 25% boost to graphics performance, along with the necessary AI-based computation capability enhancements required to run Apple Intelligence. Of course, the chip is far more capable of handling the kind of professionally focused apps used by designers, pilots, or doctors.

While we all recognize at this stage that Apple’s decision to boost all its products with more powerful chips is because it wants to ensure support for Apple Intelligence, this also means you get better performance for other tasks as well. All the same, it will be interesting to discover the extent to which a far more contextually-capable Siri and the many handy writing assistance tools offered by Apple’s AI will boost existing tablet-based workflows in enterprise, education, and domestic use.

Better for conferencing

If you use your iPad for work, it’s likely good news that the new iPad mini has a 12-megapixel (MP) back camera and a 12MP conferencing camera. The last-generation model also boasted 12MP cameras, but the 5x digital zoom is a welcome enhancement, and the 16-core Neural Engine inside the iPad mini’s chip means the images you capture are augmented on the fly by AI to improve picture and video quality. Overall, you’ll get better results when taking images or capturing video.

What Apple said

“There is no other device in the world like iPad mini, beloved for its combination of powerful performance and versatility in our most ultraportable design,” said Bob Borchers, Apple’s vice president of Worldwide Product Marketing. “iPad mini appeals to a wide range of users and has been built for Apple Intelligence, delivering intelligent new features that are powerful, personal, and private.

“With the powerful A17 Pro chip, faster connectivity, and support for Apple Pencil Pro, the new iPad mini delivers the full iPad experience in our most portable design at an incredible value.”

In common with all its latest products, Apple is putting every possible focus on AI tools, making crystal clear its plans to continue investing in its unique blend of privacy and the personal augmentation promised by its human-focused AI. The current selection of tools the company is providing should really be seen as the beginning of this part of its journey.

What else stands out?

Additional improvements in the new iPad mini include:

  • Wi-Fi 6E support, which increases bandwidth if you happen to be on a compatible wireless network; 5G cellular connectivity is also available.
  • A 12MP wide back camera with Smart HDR 4 support and a built-in document scanner in the Camera app.
  • Apple Pencil Pro support.
  • Available for pre-order today, shipping on Oct. 23.
  • Apple Intelligence arrives with its first wave of features five days later.

There’s an environmental mission visible in the product introduction, too. The new iPad uses 100% recycled aluminum in its enclosure, along with 100% recycled rare earth elements in all its magnets and recycled gold and tin in the printed circuit boards.

Please follow me on LinkedIn or Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Google bets on nuclear power to drive AI expansion

Google has signed its first corporate deal to purchase power from multiple small modular reactors (SMRs) to meet the energy needs of its AI systems, marking a key step as AI companies shift toward nuclear power.

In a blog post, Google announced an agreement with Kairos Power to source nuclear energy, aiming to bring the first SMR online by 2030, with more reactors planned by 2035.

Continue reading on Network World.

Microsoft’s AI research VP joins OpenAI amid fight for top AI talent

Sebastien Bubeck, Microsoft’s vice president of GenAI research, is leaving the company to join OpenAI, the maker of ChatGPT.

Bubeck, a 10-year veteran at Microsoft, played a significant role in driving the company’s generative AI strategy, with a focus on designing more efficient small language models (SLMs) to rival OpenAI’s GPT systems.

His work culminated in the creation of the compact and cost-effective Phi models, which have since been incorporated into key Microsoft products like the Bing chatbot and Office 365 Copilot, gradually replacing OpenAI’s models in specific functions. His contributions helped enhance AI efficiency while reducing operational costs.

Microsoft confirmed the news but has not disclosed the exact role Bubeck will assume at the AI startup, Reuters reported.

“We appreciate the contributions Sebastian has made to Microsoft and look forward to continuing our relationship through his work with OpenAI,” Reuters reported, quoting a Microsoft statement. Most of Bubeck’s co-authors on Microsoft’s Phi LLM research are expected to remain at the company and continue advancing the technology.

Bubeck is expected to contribute his expertise toward OpenAI’s mission of developing AGI, which refers to autonomous systems capable of outperforming humans in most economically valuable tasks, the report added.

Bubeck’s move comes as OpenAI focuses on achieving AGI, a key goal for the company. As per the report, while Microsoft has heavily invested in OpenAI, the company expressed no concerns about Bubeck’s departure.

“Sebastien Bubeck leads the Machine Learning Foundations group at Microsoft Research Redmond. He joined MSR in 2014, after three years as an assistant professor at Princeton University,” reads Bubeck’s profile on Microsoft’s yet-to-be-removed “About” page.

Bubeck’s X profile still shows him as “VP AI and Distinguished Scientist, Microsoft.”

Queries to Microsoft, OpenAI, and Bubeck did not elicit any response.

The great migration at OpenAI

Sebastien Bubeck’s departure from Microsoft to join OpenAI adds to a growing list of high-profile executive shifts in the AI industry, underscoring the intense competition for top talent as tech giants race to develop artificial general intelligence (AGI). While talent mobility is common in the fast-evolving AI landscape, OpenAI has been hit particularly hard with several key figures leaving in recent months.

Of the 11 founding members of OpenAI, only CEO Sam Altman and Wojciech Zaremba, head of the Codex and Research team, remain with the company. In September, Mira Murati, OpenAI’s high-profile CTO, stepped down, shortly after co-founder John Schulman left to join Anthropic, a public benefit corporation focused on ethical AI development. These exits came on the heels of the departure of another co-founder, Ilya Sutskever, who resigned earlier this year to start his own venture, Safe Superintelligence Inc. (SSI), dedicated to developing responsible AI systems.

Earlier in the year, Jan Leike, another leading OpenAI researcher, also left to join Anthropic, publicly expressing concerns that OpenAI’s “safety culture and processes have taken a backseat.” This wave of exits has raised questions about the company’s internal dynamics as it navigates the highly competitive AI landscape.

Despite these setbacks, OpenAI and its key collaborator, Microsoft, remain steadfast in their pursuit of AGI. Microsoft, which has heavily invested in OpenAI, has integrated its AI technology into core products like Bing and Office 365, while OpenAI continues to push the boundaries of AGI development.

“Leaders at big tech companies have either explicitly stated or signaled that they are deliberately working towards AGI,” said Anil Vijayan, partner at Everest Group. “There’s clearly strong belief that it will end up in a winner-take-all scenario, which is heating up the race to be first to the post.”

The race to AGI has intensified the demand for top-tier AI talent, with larger companies having a clear advantage. “We will see these handful of executives move between the big tech companies that can afford to attract high-profile AI executives. Smaller organizations and startups will struggle to retain high-quality AI talent,” Vijayan said.

For executives, the allure of AGI goes beyond compensation. “Top-tier talent is likely to be attracted by alignment to vision, stated goals, and the chance to be part of history — whether that’s AGI or otherwise,” said Vijayan.

This explains why many top AI professionals gravitate toward companies like OpenAI and Anthropic, which push the boundaries of AI and AGI development.

As the AI landscape continues to evolve, the talent war will likely shape the future of AGI, with big tech companies remaining at the forefront of the race.

How Ernst & Young’s AI platform is ‘radically’ reshaping operations

Multinational consultancy Ernst & Young (EY) said generative AI (genAI) is “radically reshaping” the way it operates, and the company boasts a 96% adoption rate of the technology by employees.

After spending $1.4 billion on a customized generative AI platform called EY.ai, the company said the technology is creating new efficiencies and allowing its employees to focus on higher-level tasks. Following an initial pilot with 4,200 EY tech-focused team members in 2023, the global organization released its large language model (LLM) to its nearly 400,000 employees.

Even so, the company’s executive leadership insists it’s not handing off all of its business functions and operations to an AI proxy and that humans remain at the center of innovation and development. Looking to the future, EY sees the next evolution as artificial general intelligence (AGI) — a neural network that will be able to think for itself and perform any intellectual task a human can. At that point, it will become a “strategic partner, shifting the focus from task automation to true collaboration between humans and machines,” according to Beatriz Sanz Saiz, EY global consulting data and AI leader.

Computerworld interviewed Saiz about how genAI is changing the way the company operates and how its employees perform their jobs.


You launched EY.ai a year ago. How has that transformed your organization? What kinds of efficiencies and/or productivity gains have you seen? “Over the past year, we’ve harnessed AI to radically reshape the way we operate, both internally and in service to our clients. We’ve integrated AI into numerous facets of our operations, from enhancing client service delivery to improving our internal efficiencies. Teams are now able to focus more on high-value activities that truly drive innovation and business growth, while AI assists with complex data analysis and operational tasks.

“What is fascinating is the level of adoption: 96.4% of EY employees are users of the platform, which is enriching our collective intelligence. EY.ai is a catalyst for changing the way we work and re-skilling EY employees at pace.

“We’ve approached this journey by using ourselves as a perfect test case for the many ways in which we can provide transformational assistance to clients. This is central to our Client Zero strategy, in which we refine solutions and demonstrate their effectiveness in real-world settings — then adapt the crucial learnings from that process and apply them to driving innovation and growth for clients.”

How has EY.ai changed over the past year? “EY.ai has evolved in tandem with the rapid pace of technological advancement. Initially, we focused on testing and learning, but now we’re deeply embedding AI across every function of our business. This shift from experimentation to full-scale implementation is enabling us to be more agile, efficient, and responsive to our clients’ needs. In this journey, we’ve learned that AI’s potential isn’t just about isolated use cases — its true power lies in how it enables transformation at scale.

“The platform’s integration has been refined to ensure that it aligns with our core strategy — especially around making AI fit for purpose within the organization. It evolved from Fabric — an EY core data platform — to EY.ai, which incorporates a strong knowledge layer and AI technology ecosystem. In that sense, we’ve put a lot of effort into understanding the nuances of how AI can best serve each business, function and industry. We are rapidly building industry verticals that challenge the status quo of traditional value chains. We are constantly evolving its ethical framework to ensure the responsible use of AI, with humans always at the heart of the decision-making process.”

Can you describe EY.ai in terms of the model behind it, its size, and the number of instances you have (i.e., an instance for each application, or one model for all applications)? “EY.ai isn’t a one-size-fits-all solution; it operates as a flexible ecosystem tailored to the unique needs of different functions within our organization. We deploy a combination of models, ranging from [LLMs] to smaller, more specialized models designed for specific tasks. This multi-model approach allows us to leverage both open-source and proprietary technologies where they best fit, ensuring that our AI solutions are scalable, efficient, and agile across different applications.”

What advice do you have for other enterprises considering implementing their own AI instances? Go big with LLMs or choose small language models based on both open-source or proprietary (such as Llama-3 type) models? What are the advantages of each? “My advice is to start with a clear understanding of your business goals. Large language models are incredibly powerful, but they’re resource-intensive and can sometimes feel like a sledgehammer for tasks that require a scalpel. Smaller models offer more precision and can be fine-tuned to specific needs, allowing for greater efficiency and control. It’s all about finding the right balance between ambition and practicality.”

What is knowledge engineering and who’s responsible for that role? “Knowledge engineering involves structuring, curating, and governing the knowledge that feeds AI systems, ensuring that they can deliver accurate, reliable, and actionable insights. Unlike traditional data science, which focuses on data manipulation, knowledge engineering is about understanding the context in which data exists and how it can be transformed into useful knowledge.

“Responsibility for this role often falls to Chief Knowledge Officers or similar roles within organizations. These individuals ensure that AI is not only ingesting high-quality data, but also making sense of it in ways that align with the organization’s goals and ethical standards.”

What kind of growth are you seeing in the number of Chief Knowledge Officers, and why are they growing in numbers? “The rise of the Chief Knowledge Officer (CKO) is directly tied to the increasing importance of knowledge engineering in today’s AI-driven world. We are witnessing a fundamental shift where data alone isn’t enough. Businesses need structured, actionable knowledge to truly harness AI’s potential.

“CKOs are becoming indispensable, because in the scenario of agent-based workflows in the enterprise, it is knowledge, not just data, that agents will deploy to accomplish an outcome: i.e. customer service, back-office operations, etc. The CKO’s role is pivotal in aligning AI’s capabilities with business strategy, ensuring that insights derived from AI are both accurate and actionable. It’s not just about managing information, it’s about driving strategic value through knowledge.”

What kind of decline are you seeing in data science roles, and why? “We’re seeing a decline in roles focused purely on data wrangling or basic analytics, as these functions are increasingly automated by AI. However, this shift doesn’t mean data science is becoming obsolete — it means it’s evolving.

“Today, the focus is on data architects, knowledge engineering, agent development and AI governance — roles that ensure AI systems are deployed responsibly and aligned with business goals. We’re also seeing a greater emphasis on roles that do the vital job of managing the ethical dimensions of AI, ensuring transparency and accountability in its use and compliance as the new EU AI Act obligations become effective.”

Many companies have invested resources in cleaning up their unstructured and structured data lakes so the data can be used for generating AI responses. Why then do you see fewer and not more investments in data scientists? “Companies are prioritizing AI tools that can automate much of the data preparation and curation process. The role of the data scientist, over time, will evolve into one that’s more about overseeing these automated processes and ensuring the integrity of the knowledge being generated from the data, rather than manually analyzing or cleaning it. This shift also highlights the growing importance of knowledge engineering over traditional data science roles.

“The focus is shifting from manual data analysis to systems that can automatically clean, manage, and analyze data at scale. As AI takes on more of these tasks, the need for traditional data science roles diminishes. Instead, the emphasis is on data architects, knowledge engineering — understanding how to structure, govern, and utilize knowledge in ways that enhance AI’s performance and inform AI agent developers.”

What do you see as the top AI roles emerging as the technology continues to be adopted? “We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs.

“Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills —  they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.”

How will artificial general intelligence (AGI) transform the enterprise long term? “AGI will revolutionize the enterprise in ways we can barely imagine today. Unlike current AI, which is designed for specific tasks, AGI will be capable of performing any intellectual task a human can, which will fundamentally change how businesses operate. AGI has the potential to be a strategic partner in decision-making, innovation, and even customer engagement, shifting the focus from task automation to true collaboration between humans and machines. The long-term impact will be profound, but it’s crucial that AGI is developed and governed responsibly, with strong ethical frameworks in place to ensure it serves the broader good.”

Many believe AGI is the more frightening AI evolution. Do you believe AGI has a place in the enterprise, and can it be trusted or controlled? “I understand the concerns around AGI, but with the right safety controls, I believe it has enormous potential to bring positive change if it’s developed responsibly. AGI will certainly have a place in the enterprise. It will fundamentally transform the way companies achieve outcomes. This technology is driven by goals, outcomes — not by processes. It will disrupt the pillar of process in the enterprise, which will be a game changer.

“For that reason, trust and control will be key. Transparency, accountability, and rigorous governance will be essential in ensuring AGI systems are safe, ethical, and aligned with human values. At EY, we strongly advocate for a human-centered approach to AI, and this will be even more critical with AGI. We need to ensure that it’s not just about the technology, but about how that technology serves the real interests of society, businesses, and individuals alike.”

How do you go about ensuring “a human is at the center” of any AI implementation, especially when you may some day be dealing with AGI? “Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.

“At EY, we believe that AI implementation should always be framed by ethics, human oversight, and long-term societal impacts. We actively work to embed trust and transparency into every AI system we deploy, ensuring that human wellbeing and ethical considerations remain paramount at all times. AGI will be no different: its success will depend on how well we can align it with human values, protect individual rights, and ensure that it enhances, rather than detracts from, our collective future.”

Adobe lets customers test Firefly AI video generator


Adobe’s AI model for video generation is now available in a limited beta, enabling users to create short video clips from text and image prompts.

The Firefly Video model, first unveiled in April, is the latest generative AI model Adobe has developed for its Creative Cloud products — the others cover image, design and vector graphic generation.

From Monday, there are two ways to access the Firefly Video model as part of the beta trial.

One is text- and image-to-video generation, which Adobe previewed last month and is now accessible in the Firefly web app at firefly.adobe.com. This enables users to create five-second, 720p-resolution videos from natural-language text prompts. These can contain realistic video footage and 2D or 3D animations. It’s also possible to generate video using still images as a prompt, meaning a photograph or illustration could be used to create b-roll footage.

To provide greater control over the output, there are options for different camera angles, shot size, motion and zoom, for example, while Adobe says it’s working on more ways to direct the AI-generated video.

Waiting list

Adobe said it only trains the video model on stock footage and public domain data that it has rights to use for training its AI models. It won’t use customer data or data scraped from the internet, it said.

To access the beta, you’ll need to join the waitlist. It’s free for now, though Adobe said in a news release that it will reveal pricing information once the Firefly Video model gets a full launch.

Adobe is one of several technology companies working on AI video generation capabilities. OpenAI’s Sora promises to let users create minute-long video clips, while Meta recently announced its Movie Gen video model and Google unveiled Veo back in May. However, none of these tools are publicly available at this stage.

Extended remix

The other way to access the Firefly Video model is with the Generative Extend tool, available in beta in the video editing app Premiere Pro. Generative Extend can be used to create new frames to lengthen a video clip — although only by a couple of seconds, enabling an editor to hold a shot longer to create smoother transitions. Footage created with Generative Extend must be 1920×1080 or 1280×720 during the beta, though Adobe said it’s working on support for higher resolutions.

With Generative Extend (now in beta), Adobe Premiere Pro users can generate up to two extra seconds on video clips to help with editing.

Background audio can also be extended for up to 10 seconds, thanks to Adobe’s AI audio generation technology, though spoken dialogue can’t be generated.

At its MAX conference on Monday, Adobe also announced that its GenStudio for Performance Marketing app, designed to help businesses manage the influx of AI-generated content, is now generally available.

Adobe makes GenStudio app generally available

Adobe’s GenStudio content supply chain platform is now generally available, with the ability to publish content directly to social media channels such as Instagram, Snap and TikTok coming soon.

Adobe launched GenStudio for Performance Marketing — as the standalone GenStudio application is now called — in preview at the Adobe MAX conference in 2023. At this year’s MAX event, it made a slight change in branding: GenStudio now refers to both the GenStudio for Performance Marketing app and the various Adobe applications it integrates with, such as Adobe Experience Manager, Adobe Express, and Workfront.

Adobe has been quick to integrate its Firefly generative AI models across Creative Cloud apps such as Photoshop and Illustrator, enabling designers to increase their output significantly, the company says. (IDC analysts also predict genAI will boost marketing team productivity by 40% in the next five years.)

The aim of GenStudio for Performance Marketing is to help marketers access and use the AI-generated content created within their organization while respecting brand guidelines and legal compliance policies.

“The challenge facing most brands out there is that they have an inefficient content supply chain, where bottlenecks appear in areas like planning, content development and measurement,” said Varun Parmar, general manager for GenStudio at Adobe, in a news briefing. This is where GenStudio for Performance Marketing can help, he said, providing a “seamless way for brands and agencies to deliver on-brand and personalized content that is compliant with brand standards.”

GenStudio for Performance Marketing performs several functions. First, it serves as a content repository where users can access pre-approved assets such as images, logos, and videos for use in the creation of marketing content. This could be anything from display ads to banners and emails. To enable reuse of content across campaigns, GenStudio for Performance Marketing integrates with Adobe Experience Manager Assets, Adobe’s digital asset management app.

Firefly video

Users can also edit and adapt existing assets from the app using the Firefly AI models. This could mean creating variations of email ads tailored to a specific geographic region, for instance.

Those models will soon include new video capabilities, including text-to-video and image-to-video, now available as beta versions.

In GenStudio for Performance Marketing, an AI-powered “brand check” feature can automatically inspect assets before they are used in marketing campaigns, comparing them with pre-defined templates and alerting marketing and design teams where content may be out of step with a firm’s brand compliance guidelines. Here, each asset is given a score out of 100, with detailed recommendations for changes: an email headline that’s too lengthy, for example, or an inappropriate tone of voice. An integration with Adobe’s Workfront also enables automated “multi-step review workflows” to provide additional oversight of the approval process.

Adobe also plans to let users publish content directly from GenStudio for Performance Marketing to social media channels from the likes of Meta, TikTok, and Snap, as well as display ad campaigns with Google’s Campaign Manager 360, Amazon Ads, and Microsoft Advertising. This campaign activation feature is “coming soon,” Adobe said, without providing further details. It will also be possible for customers to publish content via their own email and web channels through Adobe Journey Optimizer in the future, the company said.

Finally, GenStudio for Performance Marketing will provide analytics on the performance of content that’s live on platforms owned by Meta (such as click-through rate, cost per click and spend), with integrations with others such as Microsoft Advertising, Snap and TikTok also available “soon.”

“All companies have to ramp up their genAI knowledge and its impact on brand content/assets,” said Jessica Liu, principal analyst at Forrester.

“Solutions like GenStudio present compelling opportunities for companies to alter their creative development and production process — such as creating more content, accelerating workflows, streamlining workflows, or shifting workforce skillsets.”

Ad customization comes at a customized price

Adobe hasn’t published a list price for GenStudio and GenStudio for Performance Marketing. A company representative said, “As this is enterprise software, there isn’t a one size fits all pricing as it’s based on the customer need/requirement.”