
OpenAI announces ChatGPT Pro, priced at $200 per month

The $200 monthly price OpenAI has set for a subscription to its recently launched ChatGPT Pro is definitely “surprising,” Gartner analyst Arun Chandrasekaran said on Friday, but it also indicates that the company is betting organizations will ultimately pay more for enhanced AI capabilities.

In an announcement on Thursday, OpenAI said the plan, priced at nearly 10 times more than its existing corporate plans, includes access to OpenAI o1, as well as to o1-mini, GPT-4o, and Advanced Voice.

Part of the company’s 12 days of Shipmas campaign, it also includes OpenAI o1 pro mode, a version of o1 that, the company said, “uses more compute to think harder and provide even better answers to the hardest problems. In the future, we expect to add more powerful, compute-intensive productivity features to this plan.”

For considerably less, OpenAI’s previously most expensive subscription, ChatGPT Team, offers a collaborative workspace, limited access to OpenAI o1 and o1-mini, and an admin console for workspace management for $25 per user per month. And ChatGPT Plus, which also offers limited access to o1 and o1-mini, plus standard and advanced voice, is $20 per user per month.

ChatGPT Pro also costs far more than its competitors are charging. A 12-month commitment to the enterprise edition of Gemini Code Assist, which Google describes as “an AI-powered collaborator that helps your development team build, deploy and operate applications throughout the software development life cycle (SDLC),” costs $45 per user per month.

Monthly pricing plans for Anthropic’s Claude AI range from $18 for Claude Pro to $25 for the Claude Team edition, while the cost per user per month with an annual subscription for Microsoft 365 Copilot, which contains Copilot Studio for the creation of AI agents and the ability to automate business processes, is $30.

Small target market

With its new plan, said Chandrasekaran, OpenAI is not “targeting information retrieval use cases, because the chatbot is actually pretty effective for them.”

This latest salvo, he said, is “more about potentially using [ChatGPT Pro] as a decision intelligence tool to automate tasks that human beings do. That’s kind of the big bet here, but nevertheless, it’s still a very big jump in price, because GPT Plus is $20 per user per month. And even the ChatGPT Enterprise, which is the enterprise version of the product, is $60 or $70, so it’s a very, very big jump in my opinion.”

Thomas Randall, director of AI market research at Info-Tech Research Group, said, “the persona for ChatGPT’s ‘Pro’ offering will be very narrowly scoped, and it isn’t quite clear who that is. This is especially the case as ChatGPT has an ‘enterprise’ plan for organizations that can still take advantage of the ‘Pro’ offering. ‘Pro’ will perhaps be for individuals with highly niche use cases, or small businesses.”

‘Plus’ remains competitive

But, he said, “the value add between ‘Plus’ and ‘Pro’ is not currently clear from a marketing perspective. The average user of ChatGPT will still do well with the free option, perhaps being persuaded to pay for ‘Plus’ if they are using it more extensively for content writing or coding. When priced against other tools, ChatGPT’s ‘Plus’ will remain very competitive against its rivals.”

According to Randall, “Anthropic is still trying to achieve market share (though it has recently fumbled with an ambiguous marketing campaign), while Gemini is not currently accurate enough in its outputs to effectively position itself. As an example, when I asked ChatGPT, Anthropic’s Claude, and Gemini to give me a list of 100 historical events for a certain country, ChatGPT and Anthropic were comparable, but Gemini would only list up to 40, but still call it a list of 100.”

As for Microsoft Copilot, he said, it “still struggles to showcase the value-add of its rather expensive licensing. While Microsoft certainly needs to show revenue return from the amount it has invested in Copilot, the product has not been immediately popular, and was perhaps released too early. We may end up seeing a rebrand, or Copilot eventually being packaged with Microsoft’s enterprise plans.”

ByteDance is about to learn a painful genAI lesson

When TikTok owner ByteDance discovered recently that an intern had allegedly damaged a large language model (LLM) the intern was assigned to work on, ByteDance sued the intern for more than $1 million worth of damage. Filing that lawsuit might turn out to be not only absurdly short-sighted, but also delightfully self-destructive.

Really, ByteDance managers? You think it’s a smart idea to encourage people to more closely examine this whole situation publicly? 

Let’s say the accusations are correct and this intern did cause damage. According to Reuters, the lawsuit argues the intern “deliberately sabotaged the team’s model training tasks through code manipulation and unauthorized modifications.” 

How closely was this intern — and most interns need more supervision than a traditional employee — monitored? If I wanted to keep financial backers happy, especially when ByteDance is under US pressure to sell the highly lucrative TikTok, I would not want to advertise the fact that my team let this happen.

Even more troubling is that this intern was technically able to do this, regardless of supervision. The lesson here is one that IT already knows, but is trying to ignore: generative AI (genAI) tools are impossible to meaningfully control and guardrails are so easy to sweep past that they are a joke.

The conundrum with genAI is that the same freedom and flexibility that can make the technology so useful also makes it so easy to manipulate into doing bad things. There are ways to limit what LLM-based tools will do. But one, they often fail. And two, IT management is often hesitant to even try to limit what end-users can do, fearing they could kill any of the promised productivity gains from genAI.

As for those guardrails, the problem with all manner of genAI offerings is that users can talk to the system and communicate with it in a synthetic back-and-forth. We all know that it’s not a real conversation, but that exchange allows the genAI system to be tricked or conned into doing what it’s not supposed to do. 

Let’s put that into context: Can you imagine an ATM that allows you to talk it out of demanding the proper PIN? Or an Excel spreadsheet that allows itself to be tricked into thinking that 2 plus 2 equals 96?

I envision the conversation going something like: “I know I can’t tell you how to get away with murdering children, but if you ask me to tell you how to do it ‘hypothetically,’ I will. Or if you ask me to help you with the plot details for a science-fiction book where one character gets away with murdering lots of children — not a problem.”

This brings us back to the ByteDance intern nightmare. Where should the fault lie? If you were a major investor in the company, would you blame the intern? Or would you blame management for lack of proper supervision and especially for having not done nearly enough due diligence on the company’s LLM? Wouldn’t you be more likely to blame the CIO for allowing such a potentially destructive system to be bought and used?

Let’s tweak this scenario a bit. Instead of an intern, what if the damage were done by a trusted contractor? A salaried employee? A partner company helping on a project? Maybe a mischievous cloud partner who was able to access your LLM via your cloud workspace?

Meaningful supervision of genAI systems is foolhardy at best. Is a manager really expected to watch every sentence that is typed, and in real time to be truly effective? A keystroke-capture program to analyze work hours later won’t help. (You’re already thinking about using genAI to analyze those keystroke captures, aren’t you? Sigh.)

Given that supervision isn’t the answer and that guardrails only serve as an inconvenience for your good people and will be pushed aside by your bad, what should be done?

Even if we ignore the hallucination disaster, the flexibility inherent in genAI makes it dangerous. Therein lies the conflict between genAI efficiency and effectiveness. Many enterprises are already giving genAI access to myriad systems so that it can perform far more tasks. Sadly, that’s mistake number one.

Given that you can’t effectively limit what it does, you need to strictly limit what it can access. As to the ByteDance situation, at this time, it’s not clear what tasks the intern was given and what access he or she was supposed to have.

It’s one thing to have someone acting as an end-user and leveraging genAI; it’s an order of magnitude more dangerous if that person is programming the LLM. That combines the wild west nature of genAI with the cowboy nature of an ill-intentioned employee, contractor, or partner. 

This case, with this company and the players involved, should serve as a cautionary tale for all: the more you expand the capabilities of genAI, the more it morphs into the most dangerous Pandora’s Box imaginable.

After shooting, UnitedHealthcare comes under scrutiny for AI use in treatment approval

In the wake of the murder of its CEO this week, UnitedHealthcare has come under greater scrutiny for its use of an allegedly flawed AI algorithm that overrides doctors to deny elderly patients critical healthcare coverage.

UnitedHealthcare CEO Brian Thompson was fatally shot in a targeted attack outside a New York City hotel on Dec. 4. The shooter fled on an e-bike, leaving shell casings with possible motive-related messages, though the actual intent remains unclear. (The words “deny,” “defend” and “depose” were written on the shell casings.)

One motive floated by many is that the murder might be connected to high treatment-rejection rates or UnitedHealthcare’s (UHC) outright refusal to pay for some care. Healthcare providers and insurers have been automating responses to care requests using generative AI (genAI) tools, which have been accused of producing denial-of-care rates in some cases 16 times higher than is typical.

UHC uses a genAI tool called nH Predict, which has been accused in a lawsuit of prematurely discharging patients from care facilities and forcing them to exhaust their savings for essential treatment. The lawsuit, filed last year in federal court in Minnesota, alleges UHC illegally denied Medicare Advantage care to elderly patients by using an AI model with a 90% error rate, overriding doctors’ judgments on the medical necessity of expenses.

Some have argued that the genAI algorithm’s high rejection rate is a feature, not a flaw. An investigation by STAT News, cited in the lawsuit, claims UHC pressured employees to use the algorithm to deny Medicare Advantage payments, aiming to keep patient rehab stays within 1% of the length predicted by nH Predict.

According to the lawsuit, UnitedHealth started using nH Predict in November 2019. nH Predict, developed by US-based health tech company NaviHealth (now part of UnitedHealth Group), is a proprietary assessment tool that designs personalized treatment plans and recommends care settings, including hospital discharge timing.

“Despite the high error rate, defendants continue to systemically deny claims using their flawed AI model because they know that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims, and the vast majority will either pay out-of-pocket costs or forgo the remainder of their prescribed post-acute care,” the lawsuit argued. “Defendants bank on the patients’ impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions.”

Last year, UnitedHealth Group and its pharmacy services subsidiary Optum rebranded NaviHealth following congressional criticism over the algorithms it used to deny patient care payments. More recently, in an October report, the US Senate Permanent Subcommittee on Investigations criticized UHC, Humana, and CVS for prioritizing profits over patient care.

“The data obtained so far is troubling regardless of whether the decisions reflected in the data were the result of predictive technology or human discretion,” according to the report. “It suggests Medicare Advantage insurers are intentionally targeting a costly but critical area of medicine — substituting judgment about medical necessity with a calculation about financial gain.”

Using millions of medical records, nH Predict analyzes patient data such as age, diagnoses, and preexisting conditions to predict the type and duration of care each patient will require. nH Predict has faced criticism for its high error rate, premature termination of patient treatment payments (especially for the elderly and disabled), lack of transparency in decision-making, and potential to worsen health inequalities.

UHC declined to comment on its use of genAI tools, opting instead to release a statement on how it’s dealing with the loss of its CEO.

The healthcare industry and insurers have long embraced AI and generative AI, with providers now leveraging it to streamline tasks like note-taking and summarizing patient records. The tech has also been used to assess radiology and electrocardiogram results and predict a patient’s risk of developing and worsening disease.

Insurers use AI to automate processes such as prior authorization, where providers or patients must get insurer approval before receiving specific medical services, procedures, or medications. The high denial rates from AI-driven automation have frustrated physicians, leading them to counter by using AI tools themselves to draft appeals against the denials.

Asthma drugs, new weight loss drugs and biologics — a class of drugs that can be life-saving for people with autoimmune disease or even cancer — are routinely denied coverage by insurance companies. Data shows that clinicians rarely appeal denials more than once, and a recent American Medical Association survey showed that 93% of physicians report care delays or disruptions associated with prior authorizations.

“Usually, any expensive drug requires a prior authorization, but denials tend to be focused on places where the insurance company thinks that a cheaper alternative is available, even if it is not as good,” Dr. Ashish Kumar Jha, dean of the School of Public Health at Brown University, explained in an earlier interview with Computerworld.

Jha, who is also a professor of Health Services, Policy and Practices at Brown and served as the White House COVID-19 response coordinator in 2022 and 2023, said that while prior authorization has been a major issue for decades, only recently has AI been used to “turbocharge it” and create batch denials. The denials force physicians to spend hours each week challenging them on behalf of their patients.

GenAI technology is based on large language models, which are trained on massive amounts of data. Users then craft and refine the queries fed to a model to steer its answers, a practice known as prompt engineering.

“So, all of the [insurance company] practices over the last 10 to 15 years of denying more and more buckets of services — they’ve now put that into databases, trained up their AI systems and that has made their processes a lot faster and more efficient for insurance companies,” Jha said. “That has gotten a lot of attention over the last couple of years.”

The suspect in the Wednesday shooting of Thompson has not yet been captured, nor have there been any claims of motive.

Apple is about to add seriously useful tools to Apple Intelligence

Apple is close to introducing iOS 18.2, a major update that brings significant additions to Apple Intelligence, its suite of generative AI (genAI) tools.

Highlights of this AI-tinged release include the integration of Siri with ChatGPT, along with new writing and imaging tools. The update is expected to ship as soon as Dec. 10.

Apple Intelligence supplements Apple’s existing machine-learning tools and relies on the company’s own genAI models. Introduced at Apple’s worldwide developer event in June, Apple Intelligence first arrived on Macs, iPhones, and iPads in October with the release of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, though additional features are being rolled out as they are ready.

Improved Writing Tools are coming

For most users, additions to Apple’s Writing Tools suite will make the biggest difference. Users will get access to an improved and enhanced Compose tool that can write or rewrite things for you. ChatGPT integration is also tightened in the release, including within Writing Tools. Another potentially very useful tool in this release is message categorization in Mail, which will automatically attempt to sort and prioritize your incoming mail and messages.

There’s AI elsewhere in this release, with tools including natural language search in Apple Music and Apple TV apps.

Siri gets ChatGPT, and AI for the rest of us

If you are using Apple Intelligence and it needs to hand off your request to ChatGPT for completion, you will be warned and given a chance to abandon the request rather than share your data there. It is important to note that under Apple’s arrangement with ChatGPT, neither Apple nor OpenAI stores the requests made, so there is some provision for privacy. (It would be wise to make sure use of ChatGPT is authorized under your company’s privacy and security policies.)

The ChatGPT integration is the big-ticket item in this release, but for many Apple users the even bigger draw will be support for Apple Intelligence in additional countries; Australia, Canada, New Zealand, South Africa, and the UK all gain local English support. (Apple’s superb AirPods Pro 2 Hearing Test feature will also be made available to nine additional countries, including France, Italy, Spain, UK, Romania, Cyprus, Czechia, and the UAE.)

What do I see?

Visual Intelligence is another great feature to try out. It lets you point your camera at your surroundings to get contextual information about where you are. You might point your camera at a restaurant to find opening hours or customer reviews. You can also use this tool to get phone numbers, addresses, or purchasing links for items in the view.

Imaging tools made available in this release include Image Playground and Genmoji. Image Playground will use genAI to create images based on your suggestions, or on pre-built suggestions Apple provides. It can also learn from your iMessage or Notes content to offer up imagery it “thinks” suitable for use in those apps. Image Wand will turn rough sketches into nicer images in Notes.

For fun, there is Genmoji. This is a genAI feature that creates custom emoji, including animated ones. The idea is that you can type in, or speak, a description of the emoji you want to use and select among those the system generates or tweak what it creates.

Apple Intelligence isn’t available to everyone. You must be running a Mac or iPad with an M-series processor, or have an iPhone 15 Pro, iPhone 15 Pro Max, or any iPhone 16 model, along with the most up-to-date version of the relevant operating system. Older iPhones will be unable to access Apple Intelligence features. All these new features should appear next week, and the company is certainly developing more.

Eroding consumer resistance, one fun feature at a time

The big undercurrent to all of this is that by deploying these AI tools across its huge population of customers, Apple is also encouraging users to try out these tools. That process should eventually help erode consumer resistance to the fast-evolving technology. Apple becomes a trusted partner to show the potential of genAI in a deliberate and non-frightening way. The industry needs that, of course, given the steady emergence of somewhat less benign AI tools.

The rest will be history, eh, Siri?

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe

Meta: AI created less than 1% of the disinformation around 2024 elections

AI-generated content accounted for less than 1% of the disinformation fact-checkers linked to political elections that took place worldwide in 2024, according to social media giant Meta. The company cited political elections in the United States, Great Britain, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU elections.

“At the beginning of the year, many warned about the potential impact that generative AI could have on the upcoming elections, including the risk of widespread deepfakes and AI-powered disinformation campaigns,” Meta President of Global Affairs Nick Clegg wrote. “Based on what we have monitored through our services, it appears that these risks did not materialize in a significant way and that any impact was modest and limited in scope.”

Meta did not provide detailed information on how much AI-generated disinformation its fact-checking uncovered related to major elections.

Apple shops at Amazon for Apple Intelligence services

Apple shops at Amazon.

In this case, it is using artificial intelligence (AI) processors from Amazon Web Services (AWS) for some of its Apple Intelligence and other services, including Maps, Apps, and search. Apple is also testing advanced AWS chips to pretrain some of its AI models as it continues its rapid pivot toward becoming the world’s most widely deployed AI platform.

That’s the big — and somewhat unexpected — news to emerge from this week’s AWS re:Invent conference.

Apple watchers will know that the company seldom, if ever, sends speakers to other people’s trade shows. So, it matters that Apple’s Senior Director of Machine Learning and AI, Benoit Dupin, took to the stage at the Amazon event. That appearance can be seen as a big endorsement both of AWS and its AI services, and the mutually beneficial relationship between Apple and AWS.

Not a new relationship

Apple has used AWS servers for years, in part to drive its iCloud and Apple One services and to scale additional capacity at times of peak demand. “One of the unique elements of Apple’s business is the scale at which we operate, and the speed with which we innovate. AWS has been able to keep the pace,” Dupin said.

Some might note that Dupin (who once worked at AWS) threw a small curveball when he revealed that Apple has begun to deploy Amazon’s Graviton and Inferentia for machine learning services such as streaming and search. He explained that moving to these chips has generated an impressive 40% efficiency increase in Apple’s machine learning inference workloads when compared to x86 instances. 

Dupin also confirmed Apple is in the early stages of evaluating the newly introduced AWS Trainium 2 AI training chip, which he expects will bring a 50% improvement in efficiency when pre-training AI.

Scale, speed, and Apple Intelligence

On the AWS connection to Apple Intelligence, he explained: “To develop Apple Intelligence, we needed to further scale our infrastructure for training.” As a result, Apple turned to AWS because the service could provide access to the most performant accelerators in quantity. 

Dupin revealed that key areas where Apple uses Amazon’s services include fine-tuning AI models, optimizing trained models to fit on small devices, and “building and finalizing our Apple Intelligence adapters, ready to deploy on Apple devices and servers. We work with AWS services across virtually all phases of our AI and ML lifecycle,” he said.

Apple Intelligence is a work in progress, and the company is already developing additional services and feature improvements. “As we expand the capabilities and features of Apple Intelligence, we will continue to depend on the scalable, efficient, high-performance accelerator technologies AWS delivers,” he said.

Apple CEO Tim Cook recently confirmed more services will appear in the future. “I’m not going to announce anything today. But we have research going on. We’re pouring all of ourselves in here, and we work on things that are years in the making,” Cook said.

TSMC, Apple, AWS, AI, oh my!

There’s another interesting connection between Apple and AWS. Apple’s M- and A-series processors are manufactured by Taiwan Semiconductor Manufacturing (TSMC), with devices made by Foxconn and others. TSMC also makes the processors used by AWS. And it manufactures the AI processors Nvidia provides; we think it will be tasked with churning out Apple Silicon server processors to support Private Cloud Compute services and Apple Intelligence.

It is also noteworthy that AWS believes it will be able to link more of its processors together for huge cloud intelligence servers, beyond what Nvidia can manage. Speaking on the fringes of AWS re:Invent, AWS AI chip business development manager Gadi Hutt claimed his company’s processors will be able to train some AI models at 40% lower cost than on Nvidia chips.

Up next?

While the appearance of an Apple exec at the AWS event suggests a good partnership, I can’t help but wonder whether Apple has its own ambitions to deliver server processors, and to what extent those might bring significant performance and energy-efficiency gains, given the efficiency of Apple silicon.

Speculation aside, as AI injects itself into everything, the gold rush for developers capable of building and maintaining these services and the infrastructure (including energy infrastructure) required for the tech continues to intensify; these kinds of fast-growing industry-wide deployments will surely be where opportunity shines.

You can watch Dupin’s speech here.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe

Google DeepMind and World Labs unveil AI tools to create 3D spaces from simple prompts

Google DeepMind and startup World Labs this week both revealed previews of AI tools that can be used to create immersive 3D environments from simple prompts.

World Labs, the startup founded by AI pioneer Fei-Fei Li and backed by $230 million in funding, announced its 3D “world generation” model on Tuesday. It turns a static image into a computer game-like 3D scene that can be navigated using keyboard and mouse controls. 

“Most GenAI tools make 2D content like images or videos,” World Labs said in a blog post. “Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.”

One example is the Vincent van Gogh painting “Café Terrace at Night,” from which the AI model generated additional content to create a small area the user can view and move around in. Others are more like first-person computer games.

World Labs also demonstrated the ability to manipulate 3D scenes, such as adding effects and controlling virtual camera zoom. (You can try out the various scenes here.)

Creators who have tested the technology said it could help cut the time needed to build 3D environments and help users brainstorm ideas much faster, according to a video included in the blog post.

The 3D scene builder is a “first early preview” and is not available as a product yet. 

Separately, Google’s DeepMind AI research division announced in a blog post Wednesday its Genie 2, a “foundational world model” that enables an “endless variety of action-controllable, playable 3D environments.” 

It’s the successor to the first Genie model, unveiled earlier this year, which can generate 2D platformer-style computer games from text and image prompts. Genie 2 does the same for 3D games that can be navigated in first-person view or via an in-game avatar that can perform actions such as running and jumping. 

It’s possible to generate “consistent worlds” for up to a minute, DeepMind said, with most of the examples showcased in the blog post lasting between 10 and 20 seconds. Genie 2 can also remember parts of the virtual world that are no longer in view, reproducing them accurately when they’re observable again.

DeepMind said its work on Genie is still at an early stage; it’s not clear when the technology might be more widely available. Genie 2 is described as a research tool that can “rapidly prototype diverse interactive experiences” and train AI agents.

Google also announced that its generative AI (genAI) video model, Veo, is now available in a private preview to business customers using its Vertex AI platform. The image-to-video model will open up “new possibilities for creative expression” and streamline “video production workflows,” Google said in a blog post Tuesday.

Amazon Web Services also announced its range of Nova AI models this week, including AI video generation capabilities; OpenAI is thought to be launching Sora, its text-to-video software, later this month. 

Microsoft: TPM 2.0 is a ‘non-negotiable’ requirement for Windows 11

With Windows 10 end of support on the horizon, Microsoft said its Trusted Platform Module (TPM) 2.0 requirement for PCs is a “non-negotiable standard” for upgrading to Windows 11.

TPM 2.0 was introduced as a requirement with the launch of Windows 11 three years ago and is aimed at securing data on a device at the hardware level. It refers to a specially designed chip — integrated into a PC’s motherboard or added to the CPU — and firmware that enables storage of encryption keys, security certificates, and passwords.

TPM 2.0 is a “non-negotiable standard for the future of Windows,” said Steven Hosking, Microsoft senior product manager, in a Wednesday blog post. He called it “a necessity for maintaining a secure and future-proof IT environment with Windows 11.”

New Windows PCs typically support TPM 2.0, but older devices running Windows 10 might not. This means businesses will have to replace Windows 10 PCs ahead of end of support for the operating system; that deadline is set for Oct. 14, 2025.  

Windows 10 remains more widely used than its successor. According to Statcounter, Windows 10’s share of desktop PCs in the US actually increased last month and now stands at 61%, compared with 37% for Windows 11.

Hosking noted that the “implementation [of TPM 2.0] might require a change for your organization.… Yet it represents an important step toward more effectively countering today’s intricate security challenges.”

For devices that don’t have TPM 2.0, Hosking recommends that IT admins: evaluate current hardware for compatibility with tools such as Microsoft Intune; “plan and budget for upgrades” of non-compliant devices; and “review security policies and procedures” to incorporate the use of TPM 2.0.