Google DeepMind and startup World Labs this week both revealed previews of AI tools that can be used to create immersive 3D environments from simple prompts.
World Labs, the startup founded by AI pioneer Fei-Fei Li and backed by $230 million in funding, announced its 3D “world generation” model on Tuesday. It turns a static image into a computer game-like 3D scene that can be navigated using keyboard and mouse controls.
“Most GenAI tools make 2D content like images or videos,” World Labs said in a blog post. “Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.”
One example is the Vincent van Gogh painting “Café Terrace at Night,” which the AI model used to generate additional content, creating a small area to view and move around in. Others are more like first-person computer games.
World Labs also demonstrated the ability to add effects to 3D scenes and control virtual camera zoom, for instance. (Various example scenes are available to try on the World Labs site.)
Creators who have tested the technology said it could help cut the time needed to build 3D environments and help users brainstorm ideas much faster, according to a video included in the blog post.
The 3D scene builder is a “first early preview” and is not available as a product yet.
Google DeepMind, meanwhile, previewed Genie 2, the successor to the first Genie model unveiled earlier this year, which can generate 2D platformer-style computer games from text and image prompts. Genie 2 does the same for 3D games that can be navigated in first-person view or via an in-game avatar that can perform actions such as running and jumping.
It’s possible to generate “consistent worlds” for up to a minute, DeepMind said, with most of the examples showcased in the blog post lasting between 10 and 20 seconds. Genie 2 can also remember parts of the virtual world that are no longer in view, reproducing them accurately when they’re observable again.
DeepMind said its work on Genie is still at an early stage; it’s not clear when the technology might be more widely available. Genie 2 is described as a research tool that can “rapidly prototype diverse interactive experiences” and train AI agents.
Google also announced that its generative AI (genAI) video model, Veo, is now available in a private preview to business customers using its Vertex AI platform. The image-to-video model will open up “new possibilities for creative expression” and streamline “video production workflows,” Google said in a blog post Tuesday.
Amazon Web Services also announced its range of Nova AI models this week, including AI video generation capabilities; OpenAI is thought to be launching Sora, its text-to-video software, later this month.
With Windows 10 end of support on the horizon, Microsoft said its Trusted Platform Module (TPM) 2.0 requirement for PCs is a “non-negotiable standard” for upgrading to Windows 11.
TPM 2.0 was introduced as a requirement with the launch of Windows 11 three years ago and is aimed at securing data on a device at the hardware level. It refers to a specially designed chip — integrated into a PC’s motherboard or added to the CPU — and firmware that enables storage of encryption keys, security certificates, and passwords.
TPM 2.0 is a “non-negotiable standard for the future of Windows,” said Steven Hosking, Microsoft senior product manager, in a Wednesday blog post. He called it “a necessity for maintaining a secure and future-proof IT environment with Windows 11.”
New Windows PCs typically support TPM 2.0, but older devices running Windows 10 might not. This means businesses will have to replace Windows 10 PCs ahead of end of support for the operating system; that deadline is set for Oct. 14, 2025.
Windows 10 remains widely used — more so than its successor. According to Statcounter, Windows 10’s share of US desktops actually increased last month and now stands at 61%, compared to 37% for Windows 11.
Hosking noted that the “implementation [of TPM 2.0] might require a change for your organization.… Yet it represents an important step toward more effectively countering today’s intricate security challenges.”
For devices that don’t have TPM 2.0, Hosking recommends that IT admins: evaluate current hardware for compatibility with tools such as Microsoft Intune; “plan and budget for upgrades” of non-compliant devices; and “review security policies and procedures” to incorporate the use of TPM 2.0.
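For the hardware-evaluation step, the TPM check itself can be scripted. Below is a minimal Python sketch (not from Microsoft’s guidance) that queries the Win32_Tpm WMI class on a Windows PC; it assumes the third-party wmi package (pip install wmi) and administrator rights on the device.

```python
# Minimal sketch: report whether a Windows PC's TPM meets the 2.0 requirement.
import wmi

def tpm_status() -> str:
    """Query the Win32_Tpm WMI class for the local machine's TPM details."""
    conn = wmi.WMI(namespace="root\\CIMV2\\Security\\MicrosoftTpm")
    tpms = conn.Win32_Tpm()
    if not tpms:
        return "No TPM detected -- device is not Windows 11 ready"
    # SpecVersion is a comma-separated string such as "2.0, 0, 1.38"
    version = tpms[0].SpecVersion.split(",")[0].strip()
    if version.startswith("2"):
        return f"TPM {version} present -- meets the Windows 11 requirement"
    return f"TPM {version} present -- below the required 2.0 spec"

if __name__ == "__main__":
    print(tpm_status())
```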
Generative artificial intelligence (genAI) has evolved quickly during the past two years from prompt engineering and instruction fine-tuning to the integration of external knowledge sources aimed at improving the accuracy of chatbot answers.
GenAI’s latest big step forward has been the arrival of autonomous agents, or AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals. The key word here is “agency,” which allows the software to take action on its own. Unlike genAI tools — which usually focus on creating content such as text, images, and music — agentic AI is designed to emphasize proactive problem-solving and complex task execution.
The simplest definition of an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task.
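That definition can be made concrete in a few lines of code. The sketch below is illustrative only: call_llm() is a hypothetical stand-in for any chat-completion API, and the lone get_weather() tool is ordinary application code the model can invoke on its own.

```python
# Minimal agent loop: an LLM decides when to call traditional software (a tool)
# and when it has enough information to answer.
import json

def get_weather(city: str) -> str:
    return f"Sunny and 20C in {city}"  # placeholder for a real API call

TOOLS = {"get_weather": get_weather}

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: returns either a JSON tool call or a final answer.
    raise NotImplementedError("plug in any LLM provider here")

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        try:
            action = json.loads(reply)  # e.g. {"tool": "get_weather", "arg": "Oslo"}
        except json.JSONDecodeError:
            return reply  # plain text means the agent is done
        result = TOOLS[action["tool"]](action["arg"])
        transcript += f"\nObservation: {result}"  # feed the result back in
    return "Step limit reached"
```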
In 2025, 25% of companies that use genAI will launch agentic AI pilots or proofs of concept, according to a report by professional services firm Deloitte. By 2027, that number will grow to half of all companies. “Some agentic AI applications…could see actual adoption into existing workflows in 2025, especially by the back half of the year,” Deloitte said. “Agentic AI could increase the productivity of knowledge workers and make workflows of all kinds more efficient. But the ‘autonomous’ part may take time for wide adoption.”
Agentic AI operates in two key ways. First, it offers specialized agents capable of autonomously completing tasks across the open web, in mobile apps, or as an operating system. A specific type of agentic AI, called conversational web agents, functions much like chatbots. In this case, the agentic AI engages users through multimodal conversations, extending beyond simple text chats to accompany them as they navigate the open web or use apps, according to Larry Heck, a professor at Georgia Institute of Technology’s schools of Electrical and Computer Engineering and Interactive Computing.
“Unlike traditional virtual assistants like Siri, Alexa, or Google Assistant, which operate within restricted ecosystems, conversational web agents empower users to complete tasks freely across the open web and apps,” Heck said. “I suspect that AI agents will be prevalent in many arenas, but perhaps the most common uses will be through extensions to web search engines and traditional AI Virtual Assistants like Siri, Alexa, and Google Assistant.”
Other uses for agentic AI
A variety of tech companies, cloud providers, and others are developing their own agentic AI offerings, making strategic acquisitions, and increasingly licensing agentic AI technology from startups and hiring their employees rather than buying the companies outright for the tech. Investors have poured more than $2 billion into agentic AI startups in the past two years, focusing on companies that target the enterprise market, according to Deloitte.
AI agents are already showing up in places you might not expect. For example, most self-driving vehicles today use sensors to collect data about their surroundings, which is then processed by agentic AI software to create a map and navigate the vehicle. AI agents play several other critical roles in autonomous vehicle route optimization, traffic management, and real-time decision-making — they can even predict when a vehicle needs maintenance.
Going forward, AI agents are poised to transform the overall automated driving experience, according to Ritu Jyoti, a group vice president for IDC Research. For example, earlier this year, Nvidia released Agent Driver, an LLM-powered agent for autonomous vehicles that offers more “human-like autonomous driving.”
These AI agents are also finding their way into myriad industries and uses, from financial services (where they can collect information as part of know-your-client, or KYC, applications) to healthcare (where an agentic AI can survey members conversationally and refill prescriptions). The tasks they can tackle include:
Autonomous diagnostic systems (such as Google’s DeepMind for retinal scans), which analyze medical images or patient data to suggest diagnoses and treatments.
Algorithmic trading bots in financial services that autonomously analyze market data, predict trends, and execute trades with minimal human intervention.
AI agents in the insurance industry that collect key details across channels and analyze the data to give status updates; they can also ask pre-enrollment questions and provide electronic authorizations.
Supplier communications agents that help customers optimize supply chains and minimize costly disruptions by autonomously tracking supplier performance, and detecting and responding to delays; that frees up procurement teams from time-consuming manual monitoring and firefighting tasks.
Sales qualification agents that allow sellers to focus their time on high-priority sales opportunities while the agent researches leads, helps prioritize opportunities, and guides customer outreach with personalized emails and responses, according to IDC’s Jyoti.
Customer intent and customer knowledge management agents that can make a first impression for customer care teams facing high call volumes, talent shortages, and high customer expectations, according to Jyoti.
“These agents work hand in hand with a customer service representative by learning how to resolve customer issues and autonomously adding knowledge-based articles to scale best practices across the care team,” she explained.
And for developers, Cognition Labs in March launched Devin AI, a DIY agentic AI tool that autonomously works through tasks that would typically require a small team of software engineers. The agent can build and deploy apps end-to-end, independently find and fix bugs in codebases, and even train and fine-tune its own AI models.
Devin can even learn how to use unfamiliar technologies by performing its own research on them.
Notably, AI agents also have the ability to remember past interactions and behaviors. They can store those experiences and even perform “self-reflection” or evaluation to inform future actions, according to IDC. “This memory component allows for continuity and improvement in agent performance over time,” the research firm said in a report.
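The memory-and-reflection pattern IDC describes might look something like the following sketch; the AgentMemory class and the call_llm() helper are hypothetical illustrations, not any vendor’s actual implementation.

```python
# Sketch of agent memory plus "self-reflection": log each interaction, ask the
# model to critique its own performance, and carry the lessons into new tasks.
class AgentMemory:
    def __init__(self):
        self.episodes = []   # raw past interactions
        self.lessons = []    # distilled "self-reflection" notes

    def record(self, task: str, outcome: str) -> None:
        self.episodes.append({"task": task, "outcome": outcome})

    def reflect(self, call_llm) -> None:
        summary = "\n".join(f"{e['task']} -> {e['outcome']}" for e in self.episodes)
        lesson = call_llm(f"What should be done differently next time?\n{summary}")
        self.lessons.append(lesson)

    def context_for(self, task: str) -> str:
        # Prepend distilled lessons so past experience informs the new task.
        return "\n".join(self.lessons) + f"\nNew task: {task}"
```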
Other agentic AI systems (such as AlphaGo, AlphaZero, OpenAI’s Dota 2 bot) can be trained using reinforcement learning to autonomously strategize and make decisions in games or simulations to maximize rewards.
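As a toy illustration of that reinforcement-learning loop (not the actual AlphaZero or Dota 2 training setup), here is tabular Q-learning on a five-state corridor “game,” where the agent learns by trial and error which moves maximize reward.

```python
# Toy Q-learning: the agent learns to walk right along a corridor to the goal.
import random

N_STATES, GOAL = 5, 4          # states 0..4; reaching state 4 pays reward 1
ACTIONS = [-1, +1]             # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):           # 500 training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update: reward plus discounted best future value
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the learned policy should point right (+1) from every state.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)})
```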
Agentic AI software development
Evans Data Corp., a market research firm that specializes in software development, conducted a multinational survey of 434 AI and machine learning developers. When asked what they most likely would create using genAI tools, the top answer was software code, followed by algorithms and LLMs. They also expect genAI to shorten the development lifecycle and make it easier to add machine-learning features.
GenAI-assisted coding allows developers to write code faster — and often, more accurately — using digital tools to create code based on natural language prompts or partial code inputs. (Like some email platforms, the tools can also suggest code for auto-completion as it’s written in real time.)
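In practice, prompt-driven code generation is a single API call. The sketch below uses the OpenAI Python SDK as one example provider; the model name and prompt are illustrative, and any chat-completion API would work the same way.

```python
# Sketch: turn a natural-language description into code via a chat-completion API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_code(description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Return only runnable Python code."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content

print(generate_code("A function that validates an email address with a regex"))
```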
By 2027, 70% of professional developers are expected to be using AI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain — a significant increase from approximately 15% early last year, Gartner said.
One of the top tools used for genAI-automated software development is GitHub Copilot. It’s powered by genAI models developed by GitHub, OpenAI (the creator of ChatGPT), and Microsoft, and is trained on all natural languages that appear in public repositories.
GitHub has combined multiple AI agents so they can work hand in hand to solve coding tasks; such multi-agent AI systems allow specialized agents to collaborate and communicate, solving complex problems more efficiently than a single agent could. For example, GitHub earlier this year launched Copilot Workspace, a technical preview of its Copilot-native developer environment.
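A stripped-down version of that coder-plus-reviewer pattern might look like the sketch below; the two “agents” are just differently prompted LLM calls, with call_llm() again a hypothetical stand-in rather than GitHub’s actual implementation.

```python
# Sketch of a two-agent loop: a coder drafts, a reviewer critiques, repeat
# until the reviewer approves or the round limit is hit.
def run_pair(task: str, call_llm, max_rounds: int = 3) -> str:
    code, feedback = "", ""
    for _ in range(max_rounds):
        # the coder agent drafts (or revises) a solution
        code = call_llm(f"Write code for: {task}\nReviewer feedback: {feedback}")
        # the reviewer agent critiques it from a different role
        feedback = call_llm(f"Review this code for bugs. Say APPROVED if clean:\n{code}")
        if "APPROVED" in feedback:
            break
    return code
```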
With agentic AI coding tools like Copilot Workspace and code-scanning autofix, developers will be able to more efficiently build software that’s more secure, according to a GitHub blog.
The technology could also give rise to less positive results. AI agents might, for example, be better at figuring out online customer intent — a potential red flag for users who have long been concerned about security and privacy when searching and browsing online, since detecting their intent could reveal sensitive information. According to Heck, AI agents could help companies understand a user’s intent more precisely, making it easier to “monetize this data at higher rates.
“But this increased granularity of knowledge of the user’s intent can also be more likely to cause security and privacy issues if safeguards are not put in place,” he said.
And while most agentic AI tools claim to be safe and secure, a lot depends on the information sources they use, which can range from more limited corporate data to the wide-open internet. (The latter has a tendency to affect genAI outputs and can introduce errors and hallucinations.)
Setting guardrails around information access can put limits on what actions an agentic AI is allowed to take. That’s also why user education and training are critical to the secure implementation and use of AI agents and copilots, according to Andrew Silberman, director of marketing at Zenity, an AI security company.
“Users need to understand not just how to operate these tools, but also their limitations, potential biases, and security implications,” Silberman wrote in a blog post. “Training programs should cover topics such as recognizing and reporting suspicious AI behavior, understanding the appropriate use cases for AI tools, and maintaining data privacy when interacting with AI systems.”
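One concrete form such guardrails can take is a policy layer that checks every proposed agent action against an allow-list before executing it. The sketch below is illustrative only; the action names and the policy itself are invented for the example.

```python
# Sketch of an action guardrail: block anything not on the approved list.
ALLOWED_ACTIONS = {"read_docs", "search_corporate_wiki"}  # no open-web access

def execute(action: str, handler, *args):
    """Run an agent-proposed action only if policy allows it."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Agent action '{action}' blocked by policy")
    return handler(*args)
```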
South Korea’s sudden political upheaval has raised fresh concerns for its economy and global supply chains, with analysts warning of potential disruptions to its critical technology exports.
As a major producer of memory chips, displays, and other critical tech components, South Korea plays an essential role in global supply chains for products ranging from smartphones to data centers.
In other news, an OECD report examines how genAI will reshape work. Automation in the past mainly affected industrial jobs in rural areas; genAI, on the other hand, can be used for non-routine cognitive tasks, which is expected to affect more highly skilled workers and the big cities where those workers are often based. The report estimates that up to 70% of these workers will be able to get half of their tasks done twice as fast with the help of genAI. The industries likely to be affected include education, IT, and finance.
The OECD notes that even if work tasks disappear, unemployment won’t necessarily increase. The overall number of jobs could increase, but those new positions might not directly benefit those who lost work because of automation and new efficiencies.
Apple has faced an unequal battle in recent years as some lawmakers, the FBI, and regulators insist that the company create backdoors through which to access messages and other parts of its platform.
Apple and others have always insisted that there is no such thing as a safe backdoor, and that if one person has access, then it’s only a matter of time until others gain access, too.
Use encryption for all your communications
Now, the FBI seems to agree.
In a recent security warning, the FBI and the US Cybersecurity and Infrastructure Security Agency (CISA) urged people to use encrypted apps such as iMessage and FaceTime for communication in order to retain security resilience against foreign hackers.
They also warned people to avoid using Rich Communication Services (RCS) when sharing messages between iPhones and Android devices, as RCS does not yet provide end-to-end encryption. (That support is allegedly coming eventually, according to the GSMA, the standards body behind RCS.) What this means is that Android and iPhone users should probably consider installing Signal, which does provide end-to-end encryption across both platforms.
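For the curious, end-to-end encryption of the kind Signal and iMessage provide boils down to the two endpoints agreeing on a key that the network never sees. The sketch below shows the idea with the Python cryptography package (an X25519 key agreement plus authenticated encryption); it is a teaching example, not either app’s actual protocol.

```python
# Sketch of end-to-end encryption: Alice and Bob derive a shared key from an
# X25519 key exchange; only public keys ever cross the network.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
# Each side combines its own private key with the other's public key to reach
# the same shared secret; an eavesdropper sees only the public halves.
shared = alice.exchange(bob.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo").derive(shared)

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"meet at 6", None)
# Only someone holding the derived key (Alice or Bob) can decrypt the message:
assert ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None) == b"meet at 6"
```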
Apple also continues to invest in encryption technologies to protect its customers, and recently introduced upgraded protection against future high-level attacks that use quantum computers to break into your communications.
An about-face?
What’s noteworthy about the FBI warning is that the agency has been battling Apple for years to convince it to put backdoors into its encryption — ostensibly to enable law enforcement. Apple has resisted so far, arguing that once you leave any form of vulnerability in any platform you are automatically placing customers at risk.
Knowledge of these backdoors will inevitably slip outside the control of law enforcement into the hands of nation-state attackers and — eventually — criminal groups, making everybody far less secure and placing personal, commercial, and national interests at risk. Not only does such weakened encryption directly threaten personal privacy, it also undermines national security.
A former head of UK national security agency MI5 warned of this almost a decade ago, while Apple senior vice president of software engineering Craig Federighi has similarly warned: “Weakening security makes no sense when you consider that customers rely on our products to keep their personal information safe, run their businesses or even manage vital infrastructure like power grids and transportation systems.”
All the same, demands that Apple weaken platform security by diluting device encryption have remained. But with the attack environment now in a red zone, the FBI issued its warning about encryption.
It comes after a CISA warning concerning ongoing attacks by China-based hackers.
So, what is the FBI saying?
“Our suggestion, what we have told folks internally, is not new here: Encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication,” said Jeff Greene, executive assistant director for cybersecurity at CISA. “Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible [to use].”
The FBI also shared a recipe for security that should be on the desk of every IT purchaser. It recommends using mobile devices that automatically receive timely OS updates, have encryption built in, and support multi-factor authentication for collaboration tools. In other words, use a higher-end smartphone in preference to a low-end, landfill-bound wannabe. Or, given that the best way to ensure security in your tech is to invest in secure products, use an iPhone, which has built-in encryption and is designed with a security-first agenda.
That focus on security likely reflects how Apple approaches the topic.
The next big war
After all, it was almost a decade ago that Apple CEO Tim Cook warned: “I think some of the top people predict that the next big war is fought on cybersecurity. With hacking getting more and more sophisticated, the hacking community has gone from the hobbyist in the basement to huge, sophisticated companies that are essentially doing this, or groups of people or foreign agents inside and outside the United States. People are running huge enterprises off of hacking and stealing data.
“So yes, every software release we do, we get more and more secure,” he said at the time.
Now, at last, the FBI seems to agree that encryption makes us safer. We really should keep using it, and reject arguments against doing so.
Mark Zuckerberg has consistently championed Meta’s Llama AI model as a leader in generative AI technology, positioning it as a strong competitor to OpenAI and Google. However, behind the scenes, Meta is complementing Llama with a rival AI model to meet its internal needs.
Meta’s internal AI-powered coding assistant, Metamate, uses both Meta’s Llama model and OpenAI’s GPT-4 to help developers and employees with coding tasks, Fortune reported. The tool, which has been operational since early 2024, dynamically switches between the two models depending on the query, according to a current and a former Meta employee who spoke anonymously to Fortune.