AI-generated content accounted for less than 1% of the election-related disinformation flagged by fact-checkers worldwide in 2024, according to social media giant Meta. The company cited elections in the United States, Great Britain, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico, and Brazil, as well as the EU elections.
“At the beginning of the year, many warned about the potential impact that generative AI could have on the upcoming elections, including the risk of widespread deepfakes and AI-powered disinformation campaigns,” Meta President of Global Affairs Nick Clegg wrote. “Based on what we have monitored through our services, it appears that these risks did not materialize in a significant way and that any impact was modest and limited in scope.”
Meta did not provide detailed information on how much AI-generated disinformation its fact-checking uncovered related to major elections.
Apple is using artificial intelligence (AI) processors from Amazon Web Services (AWS) for some of its Apple Intelligence and other services, including Maps, Apps, and search. Apple is also testing advanced AWS chips to pretrain some of its AI models as it continues its rapid pivot toward becoming the world’s most widely deployed AI platform.
That’s the big — and somewhat unexpected — news to emerge from this week’s AWS re:Invent conference.
Apple watchers will know that the company seldom, if ever, sends speakers to other people’s trade shows. So, it matters that Apple’s Senior Director of Machine Learning and AI, Benoit Dupin, took to the stage at the Amazon event. That appearance can be seen as a big endorsement both of AWS and its AI services, and the mutually beneficial relationship between Apple and AWS.
Not a new relationship.
Apple has used AWS servers for years, in part to drive its iCloud and Apple One services and to scale additional capacity at times of peak demand. “One of the unique elements of Apple’s business is the scale at which we operate, and the speed with which we innovate. AWS has been able to keep the pace,” Dupin said.
Some might note that Dupin (who once worked at AWS) threw a small curveball when he revealed that Apple has begun to deploy Amazon’s Graviton and Inferentia chips for machine learning services such as streaming and search. He explained that moving to these chips has generated an impressive 40% efficiency increase in Apple’s machine learning inference workloads when compared to x86 instances.
Dupin also confirmed Apple is in the early stages of evaluating the newly introduced AWS Trainium 2 AI training chip, which he expects will bring a 50% improvement in efficiency when pre-training AI models.
Scale, speed, and Apple Intelligence
On the AWS connection to Apple Intelligence, he explained: “To develop Apple Intelligence, we needed to further scale our infrastructure for training.” As a result, Apple turned to AWS because the service could provide access to the most performant accelerators in quantity.
Dupin revealed that key areas where Apple uses Amazon’s services include fine-tuning AI models, optimizing trained models to fit on small devices, and “building and finalizing our Apple Intelligence adapters, ready to deploy on Apple devices and servers. We work with AWS services across virtually all phases of our AI and ML lifecycle,” he said.
Apple Intelligence is a work in progress, and the company is already developing additional services and feature improvements. “As we expand the capabilities and features of Apple Intelligence, we will continue to depend on the scalable, efficient, high-performance accelerator technologies AWS delivers,” he said.
Apple CEO Tim Cook recently confirmed more services will appear in the future. “I’m not going to announce anything today. But we have research going on. We’re pouring all of ourselves in here, and we work on things that are years in the making,” Cook said.
TSMC, Apple, AWS, AI, oh my!
There’s another interesting connection between Apple and AWS. Apple’s M-series and A-series processors are manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC), with devices made by Foxconn and others. TSMC also makes the processors used by AWS. And it manufactures the AI processors Nvidia provides; we think it will be tasked with churning out Apple Silicon server processors to support Private Cloud Compute services and Apple Intelligence.
It is also noteworthy that AWS believes it will be able to link more of its processors together for huge cloud intelligence servers beyond what Nvidia can manage. Speaking on the fringes of AWS re:Invent, AWS AI chip business development manager Gadi Hutt claimed his company’s processors will be able to train some AI models at 40% lower cost than on Nvidia chips.
Up next?
While the appearance of an Apple exec at the AWS event suggests a good partnership, I can’t help but be curious about whether Apple has its own ambitions to deliver server processors, and the extent to which these might deliver significant performance/energy efficiency gains, given the performance efficiency of Apple silicon.
Speculation aside, as AI injects itself into everything, the gold rush for developers capable of building and maintaining these services and the infrastructure (including energy infrastructure) required for the tech continues to intensify; these kinds of fast-growing industry-wide deployments will surely be where opportunity shines.
Google DeepMind and startup World Labs this week both revealed previews of AI tools that can be used to create immersive 3D environments from simple prompts.
World Labs, the startup founded by AI pioneer Fei-Fei Li and backed by $230 million in funding, announced its 3D “world generation” model on Tuesday. It turns a static image into a computer game-like 3D scene that can be navigated using keyboard and mouse controls.
“Most GenAI tools make 2D content like images or videos,” World Labs said in a blog post. “Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.”
One example is the Vincent van Gogh painting “Café Terrace at Night,” which the AI model used to generate additional content, creating a small area to view and move around in. Others are more like first-person computer games.
World Labs also demonstrated the ability to add effects to 3D scenes and control virtual camera zoom, for instance. (You can try out the various scenes here.)
Creators who have tested the technology said it could help cut the time needed to build 3D environments, according to a video posted in the blog post, and help users brainstorm ideas much faster.
The 3D scene builder is a “first early preview” and is not available as a product yet.
Google DeepMind’s Genie 2, meanwhile, is the successor to the first Genie model, unveiled earlier this year, which can generate 2D platformer-style computer games from text and image prompts. Genie 2 does the same for 3D games that can be navigated in first-person view or via an in-game avatar that can perform actions such as running and jumping.
It’s possible to generate “consistent worlds” for up to a minute, DeepMind said, with most of the examples showcased in the blog post lasting between 10 and 20 seconds. Genie 2 can also remember parts of the virtual world that are no longer in view, reproducing them accurately when they’re observable again.
DeepMind said its work on Genie is still at an early stage; it’s not clear when the technology might be more widely available. Genie 2 is described as a research tool that can “rapidly prototype diverse interactive experiences” and train AI agents.
Google also announced that its generative AI (genAI) video model, Veo, is now available in a private preview to business customers using its Vertex AI platform. The image-to-video model will open up “new possibilities for creative expression” and streamline “video production workflows,” Google said in a blog post Tuesday.
Amazon Web Services also announced its range of Nova AI models this week, including AI video generation capabilities; OpenAI is thought to be launching Sora, its text-to-video software, later this month.
With Windows 10 end of support on the horizon, Microsoft said its Trusted Platform Module (TPM) 2.0 requirement for PCs is a “non-negotiable standard” for upgrading to Windows 11.
TPM 2.0 was introduced as a requirement with the launch of Windows 11 three years ago and is aimed at securing data on a device at the hardware level. It refers to a specially designed chip — integrated into a PC’s motherboard or added to the CPU — and firmware that enables storage of encryption keys, security certificates, and passwords.
TPM 2.0 is a “non-negotiable standard for the future of Windows,” said Steven Hosking, Microsoft senior product manager, in a Wednesday blog post. He called it “a necessity for maintaining a secure and future-proof IT environment with Windows 11.”
New Windows PCs typically support TPM 2.0, but older devices running Windows 10 might not. This means businesses will have to replace Windows 10 PCs ahead of end of support for the operating system; that deadline is set for Oct. 14, 2025.
Windows 10 remains widely used — more so than its successor. According to Statcounter, the proportion of Windows 10 desktop PCs actually increased last month in the US and now accounts for 61% of desktops, compared to 37% for Windows 11.
Hosking noted that the “implementation [of TPM 2.0] might require a change for your organization.… Yet it represents an important step toward more effectively countering today’s intricate security challenges.”
For devices that don’t have TPM 2.0, Hosking recommends that IT admins: evaluate current hardware for compatibility with tools such as Microsoft Intune; “plan and budget for upgrades” of non-compliant devices; and “review security policies and procedures” to incorporate the use of TPM 2.0.
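The triage step Hosking describes — checking which devices meet the TPM 2.0 bar and budgeting replacements for the rest — can be sketched as a short script. The inventory format and field names here are illustrative, not from any Microsoft tool:

```python
# Hypothetical inventory triage: flag devices that lack TPM 2.0 ahead of
# the Windows 10 end-of-support deadline. Field names are illustrative.

def plan_upgrades(inventory):
    """Split a device inventory into Windows 11-ready and must-replace
    lists based on the TPM 2.0 requirement."""
    ready, needs_action = [], []
    for device in inventory:
        # Devices with no reported TPM default to 0.0 (non-compliant).
        if device.get("tpm_version", 0.0) >= 2.0:
            ready.append(device["name"])
        else:
            needs_action.append(device["name"])
    return ready, needs_action

fleet = [
    {"name": "desk-001", "tpm_version": 2.0},
    {"name": "desk-002", "tpm_version": 1.2},  # older hardware
    {"name": "desk-003"},                      # no TPM reported
]
ready, needs_action = plan_upgrades(fleet)
print(ready)         # ['desk-001']
print(needs_action)  # ['desk-002', 'desk-003']
```

In practice the TPM version would come from a hardware-inventory tool such as Intune rather than a hand-written list; the point is that the compliance check itself is a simple threshold.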
Generative artificial intelligence (genAI) has evolved quickly during the past two years from prompt engineering and instruction fine-tuning to the integration of external knowledge sources aimed at improving the accuracy of chatbot answers.
GenAI’s latest big step forward has been the arrival of autonomous agents, or AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals. The key word here is “agency,” which allows the software to take action on its own. Unlike genAI tools — which usually focus on creating content such as text, images, and music — agentic AI is designed to emphasize proactive problem-solving and complex task execution.
The simplest definition of an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task.
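That definition — an LLM plus a traditional application that acts on its own — boils down to a loop: the model picks an action, the application executes it, and the result feeds back until the task is done. The sketch below stubs out the model call; `llm()`, the tool names, and the registry are illustrative, not any vendor's API:

```python
# Minimal agent loop: an LLM decides which tool to call, the application
# executes it, and the result is fed back until the model signals "finish".
# llm() is a stand-in for a real model call; the tools are illustrative.

def llm(goal, history):
    """Stub model: chooses the next action given the interaction so far."""
    if not history:
        return ("search_flights", goal)
    return ("finish", history[-1])

def search_flights(query):
    return f"3 flights found for {query!r}"

TOOLS = {"search_flights": search_flights}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):  # bound the loop: agency still needs limits
        action, arg = llm(goal, history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))
    return history[-1]

print(run_agent("NYC to SFO on Friday"))
# 3 flights found for 'NYC to SFO on Friday'
```

The "agency" lives in the loop: the software, not the user, decides which tool to call next and when the goal is met.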
In 2025, 25% of companies that use genAI will launch agentic AI pilots or proofs of concept, according to a report by professional services firm Deloitte. By 2027, that number will grow to half of all companies. “Some agentic AI applications…could see actual adoption into existing workflows in 2025, especially by the back half of the year,” Deloitte said. “Agentic AI could increase the productivity of knowledge workers and make workflows of all kinds more efficient. But the ‘autonomous’ part may take time for wide adoption.”
Agentic AI operates in two key ways. First, it offers specialized agents capable of autonomously completing tasks across the open web, in mobile apps, or as an operating system. A specific type of agentic AI, called conversational web agents, functions much like chatbots. In this case, the agentic AI engages users through multimodal conversations, extending beyond simple text chats to accompany them as they navigate the open web or use apps, according to Larry Heck, a professor at Georgia Institute of Technology’s schools of Electrical and Computer Engineering and Interactive Computing.
“Unlike traditional virtual assistants like Siri, Alexa, or Google Assistant, which operate within restricted ecosystems, conversational web agents empower users to complete tasks freely across the open web and apps,” Heck said. “I suspect that AI agents will be prevalent in many arenas, but perhaps the most common uses will be through extensions to web search engines and traditional AI Virtual Assistants like Siri, Alexa, and Google Assistant.”
Other uses for agentic AI
A variety of tech companies, cloud providers, and others are developing their own agentic AI offerings, making strategic acquisitions, and increasingly licensing agentic AI technology from startups and hiring their employees rather than buying the companies outright for the tech. Investors have poured more than $2 billion into agentic AI startups in the past two years, focusing on companies that target the enterprise market, according to Deloitte.
AI agents are already showing up in places you might not expect. For example, most self-driving vehicles today use sensors to collect data about their surroundings, which is then processed by agentic AI software to create a map and navigate the vehicle. AI agents play several other critical roles in autonomous vehicle route optimization, traffic management, and real-time decision-making — they can even predict when a vehicle needs maintenance.
Going forward, AI agents are poised to transform the overall automated driving experience, according to Ritu Jyoti, a group vice president for IDC Research. For example, earlier this year, Nvidia released Agent Driver, an LLM-powered agent for autonomous vehicles that offers more “human-like autonomous driving.”
These AI agents are also finding their way into myriad industries and uses, from financial services (where they can collect information as part of know-your-client (KYC) applications) to healthcare (where an agentic AI can survey members conversationally and refill prescriptions). The tasks they can tackle include:
Autonomous diagnostic systems (such as Google’s DeepMind for retinal scans), which analyze medical images or patient data to suggest diagnoses and treatments.
Algorithmic trading bots in financial services that autonomously analyze market data, predict trends, and execute trades with minimal human intervention.
AI agents in the insurance industry that collect key details across channels and analyze the data to give status updates; they can also ask pre-enrollment questions and provide electronic authorizations.
Supplier communications agents that help customers optimize supply chains and minimize costly disruptions by autonomously tracking supplier performance, and detecting and responding to delays; that frees up procurement teams from time-consuming manual monitoring and firefighting tasks.
Sales qualification agents that allow sellers to focus their time on high-priority sales opportunities while the agent researches leads, helps prioritize opportunities, and guides customer outreach with personalized emails and responses, according to IDC’s Jyoti.
Customer intent and customer knowledge management agents that can make a first impression for customer care teams facing high call volumes, talent shortages, and high customer expectations, according to Jyoti.
“These agents work hand in hand with a customer service representative by learning how to resolve customer issues and autonomously adding knowledge-based articles to scale best practices across the care team,” she explained.
And for developers, Cognition Labs in March launched Devin AI, a DIY agentic AI tool that autonomously works through tasks that would typically require a small team of software engineers to tackle. The agent can build and deploy apps end-to-end, independently find and fix bugs in codebases, and train and fine-tune its own AI models.
Devin can even learn how to use unfamiliar technologies by performing its own research on them.
Notably, AI agents also have the ability to remember past interactions and behaviors. They can store those experiences and even perform “self-reflection” or evaluation to inform future actions, according to IDC. “This memory component allows for continuity and improvement in agent performance over time,” the research firm said in a report.
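The memory component IDC describes — storing past interactions and "reflecting" on them to inform future actions — can be sketched as a small class. The structure below is illustrative, not any specific vendor's implementation:

```python
# Sketch of an agent memory component: past interactions are recorded,
# and a simple "self-reflection" step surfaces the failures that should
# be handled differently next time. Illustrative structure only.

class AgentMemory:
    def __init__(self):
        self.episodes = []  # stored past interactions

    def record(self, task, outcome, success):
        self.episodes.append(
            {"task": task, "outcome": outcome, "success": success}
        )

    def reflect(self):
        """Self-evaluation: which past tasks failed and warrant a
        different approach on the next attempt?"""
        return [e["task"] for e in self.episodes if not e["success"]]

memory = AgentMemory()
memory.record("refill prescription", "completed", True)
memory.record("update address", "form rejected", False)
print(memory.reflect())  # ['update address']
```

Real systems typically persist these episodes (often as embeddings in a vector store) and feed the reflection back into the model's prompt, which is what gives the "continuity and improvement over time" IDC describes.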
Other agentic AI systems (such as AlphaGo, AlphaZero, OpenAI’s Dota 2 bot) can be trained using reinforcement learning to autonomously strategize and make decisions in games or simulations to maximize rewards.
Agentic AI software development
Evans Data Corp., a market research firm that specializes in software development, conducted a multinational survey of 434 AI and machine learning developers. When asked what they most likely would create using genAI tools, the top answer was software code, followed by algorithms and LLMs. They also expect genAI to shorten the development lifecycle and make it easier to add machine-learning features.
GenAI-assisted coding allows developers to write code faster — and often, more accurately — using digital tools to create code based on natural language prompts or partial code inputs. (Like some email platforms, the tools can also suggest code for auto-completion as it’s written in real time.)
By 2027, 70% of professional developers are expected to be using AI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain — a significant increase from approximately 15% early last year, Gartner said.
One of the top tools used for genAI-automated software development is GitHub Copilot. It’s powered by genAI models developed by GitHub, OpenAI (the creator of ChatGPT), and Microsoft, and is trained on all natural languages that appear in public repositories.
GitHub combined multiple AI agents to enable them to work hand in hand to solve coding tasks; multi-agent AI systems allow multiple applications to work together on a common purpose. For example, GitHub earlier this year launched Copilot Workspace, a technical preview of its Copilot-native developer environment. The multi-agent system allows specialized agents to collaborate and communicate, solving complex problems more efficiently than a single agent.
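The collaborate-and-communicate pattern can be sketched with two specialized agents — one that drafts code and one that reviews it — iterating until the review passes. The agents below are plain functions and the feedback loop is deliberately trivial; this is an illustration of the multi-agent pattern, not GitHub's implementation:

```python
# Illustrative multi-agent pattern: a "coder" agent drafts, a "reviewer"
# agent critiques, and the pair iterates until the review passes.

def coder(task, feedback=None):
    """Drafting agent: produces code, revising if given feedback."""
    draft = f"def {task}(): ..."
    if feedback:
        draft += "  # revised: " + feedback
    return draft

def reviewer(draft):
    """Reviewing agent: returns (approved, feedback)."""
    if "revised" in draft:
        return True, None
    return False, "add error handling"

def collaborate(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = coder(task, feedback)
        approved, feedback = reviewer(draft)
        if approved:
            return draft
    return draft  # best effort if no round was approved

print(collaborate("parse_config"))
# def parse_config(): ...  # revised: add error handling
```

In a real system each role would be a separate LLM call with its own prompt and tools; the orchestration loop — pass the draft, collect the critique, revise — is what lets specialized agents outperform a single generalist agent.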
With agentic AI coding tools like Copilot Workspace and code-scanning autofix, developers will be able to more efficiently build software that’s more secure, according to a GitHub blog.
The technology could also give rise to less positive results. AI agents might, for example, be better at figuring out online customer intent — a potential red flag for users who have long been concerned about security and privacy when searching and browsing online; detecting their intent could reveal sensitive information. According to Heck, AI agents could help companies understand a user’s intent more precisely, making it easier to “monetize this data at higher rates.”
“But this increased granularity of knowledge of the user’s intent can also be more likely to cause security and privacy issues if safeguards are not put in place,” he said.
And while most agentic AI tools claim to be safe and secure, a lot depends on the information sources they use. That’s because the source of data used by the agents could vary — from more limited corporate data to the wide open internet. (The latter has a tendency to affect genAI outputs and can introduce errors and hallucinations.)
Setting guardrails around information access can limit the actions an agentic AI is allowed to take. That’s why user education and training are critical to the secure implementation and use of AI agents and copilots, according to Andrew Silberman, director of marketing at Zenity, a security software provider.
“Users need to understand not just how to operate these tools, but also their limitations, potential biases, and security implications,” Silberman wrote in a blog post. “Training programs should cover topics such as recognizing and reporting suspicious AI behavior, understanding the appropriate use cases for AI tools, and maintaining data privacy when interacting with AI systems.”
Generative artificial intelligence (genAI) has evolved quickly during the past two years from prompt engineering and instruction fine-tuning to the integration of external knowledge sources aimed at improving the accuracy of chatbot answers.
GenAI’s latest big step forward has been the arrival of autonomous agents, or AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals. The key word here is “agency,” which allows the software to take action on its own. Unlike genAI tools — which usually focus on creating content such as text, images, and music — agentic AI is designed to emphasize proactive problem-solving and complex task execution.
The simplest definition of an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task.
In 2025, 25% of companies that use genAI will launch agentic AI pilots or proofs of concept, according to report by professional services firm Deloitte. In 2027, that number will grow to half of all companies. “Some agentic AI applications…could see actual adoption into existing workflows in 2025, especially by the back half of the year,” Deloitte said. “Agentic AI could increase the productivity of knowledge workers and make workflows of all kinds more efficient. But the ‘autonomous’ part may take time for wide adoption.”
Agentic AI operates in two key ways. First, it offers specialized agents capable of autonomously completing tasks across the open web, in mobile apps, or as an operating system. A specific type of agentic AI, called conversational web agents, functions much like chatbots. In this case, the agentic AI engages users through multimodal conversations, extending beyond simple text chats to accompany them as they navigate the open web or use apps, according to Larry Heck, a professor at Georgia Institute of Technology’s schools of Electrical and Computer Engineering and Interactive Computing.
“Unlike traditional virtual assistants like Siri, Alexa, or Google Assistant, which operate within restricted ecosystems, conversational web agents empower users to complete tasks freely across the open web and apps,” Heck said. “I suspect that AI agents will be prevalent in many arenas, but perhaps the most common uses will be through extensions to web search engines and traditional AI Virtual Assistants like Siri, Alexa, and Google Assistant.”
Other uses for agentic AI
A variety of tech companies, cloud providers, and others are developing their own agentic AI offerings, making strategic acquisitions, and increasingly licensing agentic AI technology from startups and hiring their employees rather than buying the companies outright for the tech. Investors have poured more than $2 billion into agentic AI startups in the past two years, focusing on companies that target the enterprise market, according to Deloitte.
AI agents are already showing up in places you might not expect. For example, most self-driving vehicles today use sensors to collect data about their surroundings, which is then processed by AI agentic software to create a map and navigate the vehicle. AI agents play several other critical roles in autonomous vehicle route optimization, traffic management, and real-time decision-making — they can even predict when a vehicle needs maintenance.
Going forward, AI agents are poised to transform the overall automated driving experience, according to Ritu Jyoti, a group vice president for IDC Research. For example, earlier this year, Nvidia released Agent Driver, an LLM-powered agent for autonomous vehicles that offers more “human-like autonomous driving.”
IDC
These AI agents are also finding their way into a myriad number of industries and uses, from financial services (where they can collect information as part of know-your-client (KYC) applications) to healthcare (where an agentic AI can survey members conversationally and refill prescriptions). The variety of tasks they can tackle can include:
Autonomous diagnostic systems (such as Google’s DeepMind for retinal scans), which analyze medical images or patient data to suggest diagnoses and treatments.
Algorithmic trading bots in financial services that autonomously analyze market data, predict trends, and execute trades with minimal human intervention.
AI agents in the insurance industry that collect key details across channels and analyze the data to give status updates; they can also ask pre-enrollment questions and provide electronic authorizations.
Supplier communications agents that help customers optimize supply chains and minimize costly disruptions by autonomously tracking supplier performance, and detecting and responding to delays; that frees up procurement teams from time-consuming manual monitoring and firefighting tasks.
Sales qualification agents that allow sellers to focus their time on high-priority sales opportunities while the agent researches leads, helps prioritize opportunities, and guides customer outreach with personalized emails and responses, according to IDC’s Ryoti.
Customer intent and customer knowledge management agents that can make a first impression for customer care teams facing high call volumes, talent shortages and high customer expectations, according to Ryoti.
“These agents work hand in hand with a customer service representative by learning how to resolve customer issues and autonomously adding knowledge-based articles to scale best practices across the care team,” she explained.
And for developers, Cognition Labs in March launched Devin AI, a DIY agentic AI tool that autonomously works through tasks that would typically require a small team of software engineers to tackle. The agent can build and deploy apps end-to-end, independently find and fix bugs in codebases, and it can train and fine tune its own AI models.
Devin can even learn how to use unfamiliar technologies by performing its own research on them.
Notably, AI agents also have the ability to remember past interactions and behaviors. They can store those experiences and even perform “self-reflection” or evaluation to inform future actions, according to IDC. “This memory component allows for continuity and improvement in agent performance over time,” the research firm said in a report.
Other agentic AI systems (such as AlphaGo, AlphaZero, OpenAI’s Dota 2 bot) can be trained using reinforcement learning to autonomously strategize and make decisions in games or simulations to maximize rewards.
Agentic AI software development
Evans Data Corp., a market research firm that specializes in software development, conducted a multinational survey of 434 AI and machine learning developers. When asked what they most likely would create using genAI tools, the top answer was software code, followed by algorithms and LLMs. They also expect genAI to shorten the development lifecycle and make it easier to add machine-learning features.
GenAI-assisted coding allows developers to write code faster — and often, more accurately — using digital tools to create code based on natural language prompts or partial code inputs. (Like some email platforms, the tools can also suggest code for auto-completion as it’s written in real time.)
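The prompt-to-code workflow can be sketched as below. `fake_code_model` is an invented stand-in for a real completion endpoint (a hosted LLM API); its response is canned so the example runs offline, but the shape — natural-language prompt in, source code out, then load and use the result — mirrors the real flow.

```python
# Hypothetical sketch of prompt-to-code generation.
def fake_code_model(prompt: str) -> str:
    """Stand-in for a genAI code model; returns canned code for known prompts."""
    canned = {
        "function that adds two numbers":
            "def add(a, b):\n    return a + b",
    }
    return canned.get(prompt, "# model could not generate code")

def generate_code(prompt: str) -> str:
    """Send a natural-language prompt, get source code back."""
    return fake_code_model(prompt)

source = generate_code("function that adds two numbers")
namespace = {}
exec(source, namespace)          # load the generated function
print(namespace["add"](2, 3))    # 5
```

In practice the generated code would be reviewed (and never blindly `exec`'d) — which is exactly where the human-in-the-loop and testing-tool figures below come in.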
By 2027, 70% of professional developers are expected to be using AI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain — a significant increase from approximately 15% early last year, Gartner said.
One of the top tools used for genAI-assisted software development is GitHub Copilot. It's powered by genAI models developed by GitHub, OpenAI (the creator of ChatGPT), and Microsoft, and is trained on natural language and source code that appear in public repositories.
GitHub has also combined multiple AI agents so they can work hand in hand to solve coding tasks; such multi-agent AI systems allow several specialized agents to collaborate and communicate, solving complex problems more efficiently than a single agent could. For example, GitHub earlier this year launched Copilot Workspace, a technical preview of its Copilot-native developer environment.
With agentic AI coding tools like Copilot Workspace and code-scanning autofix, developers will be able to more efficiently build software that’s more secure, according to a GitHub blog.
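The multi-agent pattern can be sketched as a pipeline of specialized agents handing off a shared work item. The agents here are plain functions with invented names; in real systems like Copilot Workspace each would be an LLM-backed service.

```python
# Hypothetical multi-agent pipeline: planner -> coder -> reviewer.
def planner(task: str) -> dict:
    """Break the task into steps the other agents can act on."""
    return {"task": task, "plan": ["write function", "review it"]}

def coder(item: dict) -> dict:
    """Produce code for the planned task (canned here for illustration)."""
    item["code"] = "def greet(name):\n    return f'hello, {name}'"
    return item

def reviewer(item: dict) -> dict:
    """Apply a trivial acceptance check before sign-off."""
    item["approved"] = "return" in item["code"]
    return item

def run_pipeline(task, agents):
    item = task
    for agent in agents:
        item = agent(item)       # each agent enriches the shared work item
    return item

result = run_pipeline("implement greet()", [planner, coder, reviewer])
print(result["approved"])  # True
```

The key design idea is the shared, structured work item: each agent reads what earlier agents produced and adds its own contribution, rather than one monolithic model doing everything.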
The technology could also give rise to less positive results. AI agents might, for example, be better at figuring out online customer intent — a potential red flag for users who have long been concerned about security and privacy when searching and browsing online; detecting their intent could reveal sensitive information. According to Heck, AI agents could help companies understand a user’s intent more precisely, making it easier to “monetize this data at higher rates.
“But this increased granularity of knowledge of the user’s intent can also be more likely to cause security and privacy issues if safeguards are not put in place,” he said.
And while most agentic AI tools claim to be safe and secure, a lot depends on the information sources they use. That’s because the source of data used by the agents could vary — from more limited corporate data to the wide open internet. (The latter has a tendency to affect genAI outputs and can introduce errors and hallucinations.)
Setting guardrails around information access can limit the actions agentic AI is allowed to take. That's why user education and training are critical to the secure implementation and use of AI agents and copilots, according to Andrew Silberman, director of marketing at Zenity, an AI agent security provider.
“Users need to understand not just how to operate these tools, but also their limitations, potential biases, and security implications,” Silberman wrote in a blog post. “Training programs should cover topics such as recognizing and reporting suspicious AI behavior, understanding the appropriate use cases for AI tools, and maintaining data privacy when interacting with AI systems.”
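One simple form such a guardrail can take is an allowlist the agent must clear before reading any data source. The source names and exception type below are invented for illustration; real deployments would enforce this in a policy layer outside the agent's own code.

```python
# Hypothetical allowlist guardrail for agent data access.
ALLOWED_SOURCES = {"corporate_wiki", "crm_records"}

class GuardrailViolation(Exception):
    """Raised when an agent tries to read outside its approved sources."""

def fetch(source: str, query: str) -> str:
    """Let the agent query only pre-approved data sources."""
    if source not in ALLOWED_SOURCES:
        raise GuardrailViolation(f"agent may not read from {source!r}")
    return f"results for {query!r} from {source}"

print(fetch("crm_records", "order status"))   # allowed
try:
    fetch("open_internet", "anything")        # blocked by the guardrail
except GuardrailViolation as err:
    print("blocked:", err)
```

Keeping the agent off the open internet by default also addresses the hallucination risk noted above: the narrower and better-curated the sources, the fewer opportunities for bad data to steer the agent.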
South Korea’s sudden political upheaval has raised fresh concerns for its economy and global supply chains, with analysts warning of potential disruptions to its critical technology exports.
As a major producer of memory chips, displays, and other critical tech components, South Korea plays an essential role in global supply chains for products ranging from smartphones to data centers.
Automation in the past mainly affected industrial jobs in rural areas. GenAI, on the other hand, can be used for non-routine cognitive tasks, which is expected to affect more highly skilled workers and big cities where these workers are often based. The report estimates that up to 70% of these workers will be able to get half of their tasks done twice as fast with the help of genAI. The industries likely to be affected include education, IT, and finance.
The OECD notes that even if work tasks disappear, unemployment won’t necessarily increase. The overall number of jobs could increase, but those new positions might not directly benefit those who lost work because of automation and new efficiencies.