Author: Security – Computerworld

The next AI wave — agents — should come with warning labels

The next wave of artificial intelligence (AI) adoption is already under way, as AI agents — AI applications that can function independently and execute complex workflows with minimal direct human oversight — are being rolled out across the tech industry.

Unlike large language models (LLMs) and other generative AI (genAI) tools, which usually focus on creating content such as text, images, and music, agentic AI is designed to emphasize proactive problem-solving and complex task execution, much as a human would. The key word is “agency”: software that can act on its own.

AI agents can combine multiple capabilities (such as language understanding, reasoning, decision-making, and planning), and execute actions in a broader context, such as controlling robots, managing workflows, or interacting with APIs. They can even be grouped together, allowing multiple agents to work as a multi-agent system that solves tasks in a distributed, collaborative way. (OpenAI unveiled “Swarm,” an experimental multi-agent framework, last fall.)
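The orchestration pattern described above can be sketched in a few lines of plain Python. Everything here — the agent names, capabilities, and the dispatcher — is an illustrative stand-in, not the actual API of Swarm or any other framework; real systems would wrap LLM calls where these handlers use simple functions.

```python
# Minimal multi-agent sketch: each agent owns one capability, and an
# orchestrator routes each subtask to the agent that can handle it.
# All names and handlers are invented for illustration.

class Agent:
    def __init__(self, name, capability, handler):
        self.name = name
        self.capability = capability   # e.g. "planning", "api_call"
        self.handler = handler         # function that does the work

    def run(self, task):
        return self.handler(task)

def orchestrate(agents, tasks):
    """Dispatch each (capability, payload) task to a matching agent."""
    registry = {a.capability: a for a in agents}
    results = []
    for capability, payload in tasks:
        agent = registry[capability]
        results.append((agent.name, agent.run(payload)))
    return results

agents = [
    Agent("planner", "planning", lambda t: f"plan for {t}"),
    Agent("caller", "api_call", lambda t: f"called API with {t}"),
]
tasks = [("planning", "quarterly report"), ("api_call", "sales data")]
print(orchestrate(agents, tasks))
```

In a production framework the registry lookup would be replaced by an LLM deciding which agent (or tool) to hand each step to, but the control flow — decompose, route, collect — is the same.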

Agents can also use LLMs as part of their decision-making or interaction strategy. For example, while OpenAI’s LLM-based ChatGPT can generate a poem and Google’s BERT can classify sentiment in a sentence, an AI agent such as Siri or Alexa can control smart devices and set reminders.

Benjamin Lee, a professor of engineering and computer science at the University of Pennsylvania, said agentic AI is poised to represent a “paradigm shift.” That’s because agents could boost productivity by enabling humans to delegate entire jobs, rather than individual tasks, to an agent.

Specialized models could compute answers with fewer calculations and less energy, with agents efficiently choosing the right model for each task — a challenge for humans today, according to Lee.

“Research in artificial intelligence has, until recently, focused on training models that perform well on a single task,” Lee said, “but a job is often composed of many interdependent tasks. With agentic AI, humans no longer provide the AI an individual task but rather provide the AI a job. An intelligent AI will then strategize and determine the set of tasks needed to complete that job.”
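Lee’s job-versus-tasks distinction, along with his point about routing work to cheaper specialized models, can be illustrated with a toy sketch. The job name, task types, model names, and energy "costs" below are all invented; a real agent would plan the decomposition with an LLM rather than a hard-coded table.

```python
# Illustrative sketch: a "job" is decomposed into tasks, and the agent
# routes each task to a specialized model rather than sending everything
# to one large general model. Names and costs are invented.

MODELS = {
    # task_type -> (model_name, relative_energy_cost)
    "summarize": ("small-summarizer", 1),
    "classify":  ("tiny-classifier", 1),
    "reason":    ("large-generalist", 10),
}

def decompose(job):
    """A real agent would plan this step with an LLM; here the
    decomposition is hard-coded for a single example job."""
    if job == "publish market brief":
        return ["summarize", "classify", "reason"]
    return ["reason"]  # unknown jobs fall back to the general model

def run_job(job):
    plan = decompose(job)
    steps, total_cost = [], 0
    for task in plan:
        model, cost = MODELS[task]
        steps.append((task, model))
        total_cost += cost
    return steps, total_cost

steps, cost = run_job("publish market brief")
print(steps, cost)  # two cheap specialists plus one general call: cost 12
```

Sending all three steps to the large general model would cost 30 in this toy accounting; routing to specialists brings it to 12, which is the efficiency argument Lee is making.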

According to Capgemini, 82% of organizations plan to adopt AI agents over the next three years, primarily for tasks such as email generation, coding, and data analysis. Similarly, Deloitte predicts that enterprises using AI agents this year will grow their use of the technology by 50% over the next two years.

“Such systems exhibit characteristics traditionally found exclusively in human operators, including decision-making, planning, collaboration, and adapting execution techniques based on inputs, predefined goals, and environmental considerations,” Capgemini explained.

A warning against unsupervised AI

Capgemini also warned that organizations planning to implement AI agents should establish safeguards to ensure transparency and accountability for any AI-driven decisions. That’s because AI agents that use unclean data can introduce errors, inconsistencies, or missing values that make it difficult for the model to make accurate predictions or decisions. If the dataset has missing values for certain features, for instance, the model might incorrectly assume relationships or fail to generalize well to new data.

An agent could also draw data from individuals without consent or use data that’s not anonymized properly, potentially exposing personally identifiable information. Large datasets with missing or poorly formatted data can also slow model training and cause it to consume more resources, making it difficult to scale the system.

In addition, while AI agents must also comply with the European Union’s AI Act and similar regulations, innovation will quickly outpace those rules. Businesses must not only ensure compliance but also manage various risks, such as misrepresentation, policy overrides, misinterpretation, and unexpected behavior.

“These risks will influence AI adoption, as companies must assess their risk tolerance and invest in proper monitoring and oversight,” according to a Forrester Research report — “The State Of AI Agents” — published in October.

Matt Coatney, CIO of business law firm Thompson Hine, said his organization is already actively experimenting with agents and agentic systems for both legal and administrative tasks. “However, we are not yet satisfied enough with their performance and accuracy to consider them for real-world workflows,” he said, adding that the firm is focused on agent use in contract review, billing, budgeting, and business development.

Thompson Hine employs more than 400 attorneys, operates in nine US states and promotes its use of advanced technologies, including AI, in providing legal services.

Coatney stressed that research and development around AI agents is still evolving. Most commercially available tools come from fledgling startups or open-source projects such as Microsoft’s AutoGen. Established players such as Salesforce and ServiceNow highlight AI agents as key features, but the term “agent” remains loosely defined and is often overused in marketing, he said.

For example, Salesforce Einstein is designed to enhance customer relationship management using predictive analytics and automation. And Auto-GPT enables users to create an autonomous assistant to complete complex tasks by analyzing a text prompt with GPT-4 or GPT-4o, then breaking the goal into manageable subtasks.

[Chart: AI agents. Source: Forrester Research]

“AI agents are still largely experimental, but looking at where enterprises have historically invested in automation is instructive,” Coatney said. “Time-consuming, frequent tasks are ripe for this type of solution: finance, operations, administrative processes, etc. Additionally, AI agents are being explored for tasks where genAI is specifically strong, such as writing.

“For instance, one could imagine a multi-agent system involving an AI project manager, blog writer, brand manager, editor, and SEO specialist working in concert to automatically create on-brand marketing material,” he said.

“These agents leverage the strengths of multiple paradigms while mitigating risk by using more deterministic techniques when appropriate,” Coatney said. “I am particularly excited about the potential of integrating systems and data both within and beyond the enterprise. I see great potential in unlocking value still largely isolated in departmental and vendor silos.”

[Chart: AI uses. Source: Forrester Research]

Limited capabilities today

Tom Coshow, a senior director analyst at Gartner, said many agents today have limited independence, making few decisions and often requiring human review of their actions. Additionally, one of the bigger challenges with deploying agents is ensuring they’re grounded with quality data that produce consistent results, he said.

“AI agents are tricky to deploy and require extensive testing and monitoring,” Coshow said. “The AI agent market is bubbling with startups, the hyperscalers, former RPA [robotic process automation] companies, former conversational AI companies, and data and analytics firms.”

Yet businesses are optimistic about AI broadly, hoping automation will drive efficiency and better business outcomes. According to Forrester Research, 70% of tech decision-makers who work in services expect their organization to increase spending on third-party RPA and automation services in the next 12 months.

Among digital business strategy decision-makers, 92% say their firm is investing in chatbots or plans to do so in the next two years; 89% said the same about autonomy, will, and agency technologies — the three main facets that allow AI agents to act with varying levels of independence and intentionality.

“Businesses must navigate a convoluted landscape of standalone solutions with piecemeal applications lacking an overarching framework for effective coordination or orchestration,” Forrester explained in a September report, “AI Agents: The Good, The Bad, And The Ugly.”

The challenge is that AI agents must both make decisions and execute processes, which requires integrating automation tools like iPaaS and RPA with AI’s flexible decision-making, Forrester said.

Last year, companies such as Salesforce, ServiceNow, Microsoft, and Workday introduced AI agents to streamline tasks such as recruiting, contacting sales leads, creating marketing content, and managing IT.

At Johnson & Johnson, AI agents now assist in drug discovery by optimizing chemical synthesis, including determining the best timing for solvent switches to crystallize molecules into drugs. While effective, the company remains cautious about potential risks, like biased outputs or errors, according to CIO Jim Swanson.

“Like other cutting-edge AI solutions, agents require significant technical and process expertise to effectively deploy,” Thompson Hine’s Coatney said. “Since they are so new and experimental, the jury is still out as to whether the increased value is worth the complexity of setting them up and thoroughly testing them. ROI, as it always has been, is highly project dependent.”

Matt Mullenweg: WordPress developer hours cutback may or may not slow innovation

Automattic CEO Matt Mullenweg said his decision to reduce his team’s weekly hours working on WordPress by 99%, from 4,000 hours to 45, was designed to pressure WP Engine to drop its lawsuit against Mullenweg and Automattic.

“They don’t actually make WordPress. They just resell it,” Mullenweg told Computerworld Friday evening. “If what they are reselling is no longer getting all of the free updates, they have less stuff to sell.” 

“It doesn’t make sense for Automattic to pay people to work on all of these things,” he said. “We are under attack and we are circling the wagons. Our number one goal is for WP Engine to drop their expensive lawsuits against me and Automattic.”

WP Engine was asked for comment, but did not respond.

Asked whether the move would also hurt users of WordPress, which is behind more than 40% of the world’s websites, Mullenweg said that he didn’t think it would.

“WordPress is great software. It doesn’t change anything that WordPress already does,” Mullenweg said. “How does this affect the timeline? For new stuff, it might slow it down, it might not. It depends on who shows up and commits code. In terms of new functionality, the scope will be smaller.”

He added, “I love WordPress and will continue to put in hours, nights, and weekends to help however possible.”

Mullenweg also stressed that the 45 hours his team will continue to work on WordPress will make sure that security updates/patches are maintained. 

“Security is never going to be an issue. We will always maintain security,” he said. “No one would ever stop a security update.”

Automattic controls WordPress.com, while the project site, WordPress.org, is controlled solely by Mullenweg.

The cutback in hours had been considered last month when Automattic announced a holiday shutdown of some WordPress services and Mullenweg later said that the shutdown might last all of 2025. Instead, Automattic management opted to implement this severe development hours cutback.

On Thursday, Automattic announced, “we’ve observed an imbalance in how contributions to WordPress are distributed across the ecosystem, and it’s time to address this. Additionally, we’re having to spend significant time and money to defend ourselves against the legal attacks started by WP Engine and funded by Silver Lake, a large private equity firm.”

“Automatticians who contributed to core will instead focus on for-profit projects within Automattic, such as WordPress.com, Pressable, WPVIP, Jetpack, and WooCommerce,” the statement said. “As part of this reset, Automattic will match its volunteering pledge to those made by WP Engine and other players in the ecosystem, or about 45 hours a week that qualify under the Five For the Future program as benefitting the entire community and not just a single company. These hours will likely go towards security and critical updates.”

The implication is that the labor reallocations would be reversed were WP Engine to drop its lawsuit. Mullenweg said recent changes that WP Engine has made have altered his demands. He is no longer asking for money, for example.

His original demand had been for payment; in late October, Mullenweg said WP Engine “could have avoided all of this for $32 million. This should have been very easy,” and he then accused WP Engine of having engaged in “18 months of gaslighting” and said, “that’s why I got so crazy.” 

But on Friday, Mullenweg said he is no longer seeking money because WP Engine made extensive changes to its web site and is no longer violating Automattic trademarks, which was apparently what the payment was for.

“They have stopped violating the trademark. They have cleaned up,” Mullenweg said. “To use someone else’s trademark, you typically license it. For more than 18 months, we were trying to do a deal there. They obviously never did one. I realized that they were just stringing me along.”

Analysts and members of the WordPress user community, who made their comments to Computerworld prior to Mullenweg’s interview, were mixed. Some said they were worried that these latest WordPress changes might exacerbate enterprise IT worries about sticking with WordPress.

“This is a massive number of hours that they are planning on cutting back. The community is not likely to make up those hours. They are going to direct their resources to a legal battle and the platform will not be stable,” said Melody Brue, VP/principal analyst at Moor Insights & Strategy. “Users have to plan for the likelihood that they cannot take up the slack. WordPress users are already panicking. They can’t trust him now. They will turn off automatic [WordPress] updates.”

Brue said that Mullenweg’s tactics have yet to work. 

“This has become a spiteful game that he is playing. Part of his whole game is that he makes these big tantrums and threats to get attention,” Brue said. “So far, that hasn’t worked.”

Michelle Rosen, an IDC research manager, said that she was not sure whether this move would ultimately hurt WordPress.

“Automattic has been the largest contributor to WordPress by far, so this decision has to hurt the project’s ability to evolve and improve,” Rosen said. “That said, WordPress has been around for a long time and many users rely on it only as the core of their CMS solution, with other components built on top. In this context, the impact may be lower, especially if Automattic continues to handle security issues.”

Users’ reactions were also mixed.

Jack Prenter, the CEO at WordPress site Dollarwise, said he was somewhat concerned. 

“There is a general loss of confidence. I don’t know if there’s a lot you can do. That’s why the situation is so painful,” Prenter said. “There is such a large ecosystem built around it that people are not going to let it fall apart. It can technically continue to function, but you can cancel all of the future roadmap. Nothing new is going to happen.”

Another WordPress user, Ben May, managing director of The Code Co in Australia, is less concerned. “I suspect this latest statement is ratcheting up the WPE campaign, I guess in an effort to change the hearts and minds of people sympathetic to WPE. I don’t see it as an existential threat to WordPress and am not losing any sleep over it for the time being,” May said. “From what I’ve seen online already, the community is big enough and willing enough to step in and fill in the gaps that would be left with the reduced contributions.”

Tech unemployment in the US drops to lowest level in more than two years

Tech hiring rose in December, dropping the IT unemployment rate to 2% — its lowest since November 2023, according to an analysis of the latest jobs data published today by the US Bureau of Labor Statistics (BLS). The overall national unemployment rate held steady at 4.1%, according to the BLS.

Tech occupations grew by a net 7,000 jobs, bringing the total core tech workforce to nearly 6.5 million, according to CompTIA, a nonprofit association for the IT industry and workforce. The group found that the unemployment rate last month among tech professionals fell a full half a percentage point from November.

[Chart: IT jobs. Source: CompTIA]

And as 2025 gets under way, IT employment and hiring appear to be on a positive track, according to staffing agencies. According to ManpowerGroup, the net employment outlook for Q1 2025 is two percentage points higher than it was for the same period last year — 37% this year compared to 35% in early 2024.

ManpowerGroup recently published its Q1 2025 report on hiring, which claimed hiring in IT fields will beat all other professions in the US. Still, the firm also predicted employers will pull back on hiring in the months ahead because of “economic uncertainty.”

[Chart: IT employment. Source: ManpowerGroup]

“As we move into 2025, we’re seeing stable year-over-year hiring trends, with employers holding onto the talent they have and planning muted hiring for the quarter ahead,” said Jonas Prising, ManpowerGroup chair and CEO.

Overall, studies by ManpowerGroup, online hiring platform Indeed, and Deloitte Consulting showed that IT hiring will increasingly be based on finding workers with flexible skills that can meet changing demands.

In fact, employment within the tech sector, encompassing all types of workers, declined by 6,117 jobs in December, according to CompTIA’s data. Positions in PC, semiconductor, and components manufacturing accounted for the bulk of those cuts.

The tech sector employs nearly 5.6 million people, so the December decline translates to roughly 0.1%.

[Chart: Quarterly IT employment rates. Source: ManpowerGroup]

“Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” Prising said.

Ger Doyle, ManpowerGroup’s US country manager, said the December BLS jobs report delivered “a strong finish to 2024 and is a promising sign of what’s to come in the new year. However, the labor market may still face challenges until inflation is under more control, which is necessary to prevent slower hiring, layoffs, and reduced job growth. Our real-time data shows that open positions have decreased by 8% month-over-month, but increased by 3% year-over-year.”

Overall, job postings have remained steady since November, up 13% year-over-year, reflecting growing demand in digital services, healthcare, and convenience retail, according to ManpowerGroup’s data.

The temp job market was also a bright spot, with open job postings reaching their highest levels since September 2023 and new job postings at their peak since March 2022, according to Doyle. “This surge is driven by an increased demand for IT roles as organizations turn to project work to develop artificial intelligence and machine learning,” Doyle said.

Kye Mitchell, head of Experis North America — a ManpowerGroup tech recruiting business — said demand increased among tech employers in December, particularly related to the “gig economy.” Uber led the surge in such jobs with a remarkable 4,150% increase in job postings, while Outlier Inc., a platform that connects experts to advance generative AI, saw a 342% rise in demand.

“This trend was also evident in the temp job market, where the demand for computer and information research scientists skyrocketed by 2,000% as organizations focused on developing artificial intelligence and machine learning, increasingly relying on temp workers,” Mitchell said.

In December, there were 434,415 active tech job postings, including 165,189 newly added (both down from November). Roles in software development, IT project management, cybersecurity, data science, and tech support saw the most activity, according to CompTIA.

Top hiring companies included Amazon, Accenture, Deloitte, PwC, GovCIO, Robert Half, Lumen Technologies, and Insight Global. Job postings spanned all career levels: 22% required 0-3 years of experience, 28% wanted 4-7 years, and 16% sought 8+ years, CompTIA’s data showed.

Notably, 45% of postings across tech roles didn’t require a four-year degree, according to CompTIA. Network support specialists (85%), tech support specialists (72%), and computer programmers (54%) had the highest percentages of degree-optional roles.

For more historical data, here’s a rundown of tech unemployment data dating back to mid-2020.

4 in 10 companies plan to replace employees with AI, WEF says

Forty-one percent of companies intend to cut their workforce in the next five years as many tasks are automated with AI, according to the World Economic Forum (WEF) Future of Jobs Report 2025.

At the same time, 70% of companies say they expect to hire people with knowledge of the new AI tools, reports CNN Business.

The WEF sees advances in AI and renewable energy as reshaping the labor market, driving demand for a variety of technical or specialist roles while leading to a decline for others. The shifts will also likely push companies to upskill their own employees.

There’s good news as well. According to the WEF forecast, while 92 million existing jobs will disappear by 2030, 170 million new jobs will be created. In other words, there will be a net addition of 78 million jobs if the forecast is accurate.

New malware justifies Apple’s locked-down security strategy

Apple has told us Macs aren’t secure enough and it continues working to improve their security, as it does across all of its platforms. But a newly identified malware attack confirms that third-party developers can sometimes be a weak link in the perimeter.

In this case, Check Point Research has identified a malware-as-a-service attack it calls Banshee macOS Stealer.

This insidious attack, which has apparently now been closed down, was spread via seemingly legitimate browser downloads distributed outside of Apple’s Mac App Store. When installed, it was capable of exfiltrating all kinds of information, including account, banking, and crypto logins, and was resistant to Apple’s own built-in protections, including Gatekeeper. (The malware is also available on Windows, but I’m less sure of the degree of risk users on that platform face.)

If it’s too good to be true, it’s too good to be true

Here’s what we know:

  • The software was distributed in infected versions of popular software (such as Chrome or Telegram) via phishing websites and fake GitHub repositories.
  • Once in the field, it targets third-party browsers such as Chrome and their extensions, and abuses a 2FA extension to capture sensitive information.
  • It also tricks users into sharing their passwords with legitimate-seeming system prompts, sending stolen data back via command-and-control servers. 

Attack-as-a-service malware of this kind usually relies on a command server for the exfiltration process, and legitimate-seeming but infected software has been a method of attack ever since people shared applications via FTP, and probably before.

None of this is new. Nor is the main attack’s reliance on tricking users. Everyone knows by now that computer users are, and will forever be, the weakest link in platform security. Convincing people to download infected software is common, and recent attacks from NSO and other reprehensible companies showed that it is still possible to craft attacks that don’t even require user intervention. (Though those are very, very expensive.)

What is new is that those behind the attack used some of Apple’s own antivirus tools, stealing “a string encryption algorithm from Apple’s own XProtect antivirus engine, which replaced the plain text strings used in the original version,” according to Check Point.

This is what helped the attack evade detection for two months, though it was eventually identified, mitigated, and the operation shut down. Crisis over.

Prevention beats cure

Except the crisis is never really over. 

What this attack exposed is that platforms can be undermined, and while Macs (and Apple’s other products) are — unlike others — secure by design, that doesn’t mean they are infallible.

The introduction of Lockdown Mode demonstrates that Apple knows attacks happen. Within that context, it becomes super-important to ensure every user understands that if software they usually pay for is available free somewhere, they should absolutely avoid installing it. And they should always ensure that legitimate software (such as Chrome) is installed from the original source.

That’s not a problem if you stay within trusted app distribution ecosystems, of course — particularly Apple’s own heavily-policed app stores. But as the company is forced to open up to third-party distribution, that security will be eroded as, at least in some cases, some app developers insist on independent distribution of their software. 

That represents a golden opportunity for malware distributors to try to build legitimate-seeming download sites for these apps. Though it’s possible that Apple’s Notarization system (as it expands) might become an essential tool to protect against this.

While some developers continue to complain about the cost of distribution on Apple’s platforms, it must be stressed that the cost of cybercrime is expected to surpass $10 trillion this year. That means it is in the public interest for app developers — if they really want to play their part to combat cybercrime — to ensure they create and protect secure software distribution systems that do not confuse consumers. 

We all play a part

It’s actually in the national (international) interest. “I think some of the top people predict that the next big war is fought on cybersecurity,” Apple CEO Tim Cook told Time in 2016.

Software consumers need to play their part. “As cyber criminals continue to innovate, security solutions must evolve in tandem to provide comprehensive protection,” Check Point Research explains. “Businesses and users alike must take proactive steps to defend against threats, leveraging advanced tools and fostering a culture of caution and awareness.”

Despite this attack, the Mac remains the world’s most secure PC platform. One of the easiest ways for anyone to improve their own security posture is to move to Apple’s platforms. And one of the easiest ways to undermine that security is to install dodgy software, no matter how genuine it appears to be. If it seems too good to be true, it’s too good to be true.

So, don’t download it.


Meta puts the ‘Dead Internet Theory’ into practice

Meta’s mission statement is to “build the future of human connection and the technology that makes it possible.”

According to Meta, the future of human connection is basically humans connecting with AI. 

The company has already rolled out — and is working to radically expand — tools that enable real users to create fake users on the platform on a massive scale. Meta is hoping to convince its 3 billion users that chatting with, commenting on the posts of, and generally interacting with software that pretends to be human is a normal and desirable thing to do. 

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed. 

In the old days, when Meta was called Facebook, the company wrapped every new initiative in the warm metaphorical blanket of “human connection”—connecting people to each other. 

Now, it appears Meta wants users to engage with anyone or anything—real or fake doesn’t matter, as long as they’re “engaging,” which is to say spending time on the platforms and money on the advertised products and services.

In other words, Meta has so many users that the only way to continue its previous rapid growth is to build users out of AI. The good news is that Meta’s “Dead Internet” projects are not going well. 

Meta’s aim to get people talking and interacting with non-human AI has taken several forms. 

The Fake Celebrities Project

In September 2023, Meta launched AI chatbots featuring celebrity likenesses, including Kendall Jenner, MrBeast, Snoop Dogg, Charli D’Amelio, and Paris Hilton. 

Users largely rejected and ignored the chatbots, and Meta ended the program. 

The Fake Influencer Engagement Program

Meta is testing a program called “Creator AI,” which enables influencers to create AI-generated bot versions of themselves. These bots would be designed to look, act, sound, and write like the influencers who made them, and would be trained on the wording of their posts. 

The influencer bots would engage in interactive direct messages and respond to comments on posts, fueling the unhealthy parasocial relationships millions already have with celebrities and influencers on Meta platforms. The other “benefit” is that the influencers could “outsource” fan engagement to a bot. 

(“Here at Meta, we engage with your fans so you don’t have to!”)

And Meta has even started testing a new feature that automatically adds AI images of users (based on their profile pics) privately into their Instagram feeds, presumably to drive demand and acclimate the public to the idea of turning themselves into AI. 

The Fake Users Initiative

Meta launched its AI Studio in the United States in July 2024; it empowers users without AI skills to create accounts for invented fake users, complete with profile pics, voices, and “personalities.” 

The idea is that these computer-generated “users” have profiles that exist just like human users’ profiles and can interact with real people on Instagram, Messenger, WhatsApp, and the web. Meta plans to enable these personas to do the same on Meta’s “metaverse” virtual reality platforms.

A senior Meta executive recently defended the AI-powered fake user concept. “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, said in a Financial Times article. “They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform . . . . That’s where we see all of this going.”

Hayes added that while “hundreds of thousands” of such characters have already been created by users, most have been kept private (defeating their purpose of driving engagement).

The Fake Experiences Folly

Meta also plans to release its text-to-video generation software to content creators. This will essentially enable users to place themselves into AI-generated videos, where they can be depicted doing things they never did in places they’ve never been.

The Fake Facebook Folks Fiasco

About a year ago, Meta created and managed 28 fake-user accounts on Facebook and Instagram. The profiles contained bios and AI-generated profile pictures and posted AI-generated content (responsibly labeled as both AI and “managed by Meta”) on which any user could comment. Users could also chat with the bots. 

Recently, the public started noticing these accounts and didn’t like what they saw. Social media mobs shamed Meta into deleting the accounts. 

One strain of criticism was that the fake users simulated human stereotypes and did not represent the communities they pretended to be part of. 

Also, as with most AI-generated content, the output was often dull, generic, corporate-sounding, wrong, and/or offensive. It didn’t get much engagement, which, for Meta, was the entire point of the effort. (Another criticism was that users couldn’t block the accounts; Meta blamed a “bug” for the problem.)

AI slop is a problem; Meta sees an opportunity 

All this intentional AI fakery takes place on platforms whose biggest and arguably most harmful feature is bottomless pools of spammy AI slop generated by users without any content-creation help from Meta.

The genre uses crude, often bizarre AI-generated images to elicit a knee-jerk emotional reaction and drive engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example). 

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell. 

And the apparent reason is that Meta’s algorithm is rewarding them. 

Meta is not only failing to stop these posts, but is essentially paying the “content creators” to make them and using its algorithms to boost them. Spammy AI slop falls perfectly into line with Meta’s apparent conclusion that any garbage is good if it drives engagement. 

The AI content crisis

AI content, in general, is a crisis online for a very simple reason: Social media users, content creators, would-be influencers, advertisers, and marketers don’t quite seem to realize that AI-generated content, for lack of a better term, sucks.

AI-generated text, for example, uses repetitive, generic language that doesn’t flow and doesn’t have a “voice.” Word choices tend to be “off,” and the AI usually can’t tell the difference between what’s important and what’s irrelevant. 

AI-generated images are especially problematic. According to multiple studies, people feel more negatively about AI-generated images than real photos. 

Social networks are filled with AI-generated images. Billions have been created using text-to-image AI tools since 2022, many posted online. 

To quantify: A year ago, some 71% of images shared on social media in the US had been AI-generated. In Canada, that figure was 77%. In addition, 26% of marketers were using AI to create marketing images, and that percentage rose to 39% for marketers posting on social.

According to the 2024 Imperva Bad Bot Report by Thales, bots accounted for 49.6% of all global internet traffic in 2023. One-third (32%) of internet traffic was attributed to malicious bots. And 18% came from “good bots” (search engine crawlers, for example). 

In 2023, only 50.4% of internet traffic was human activity. Now, in the first month of 2025, human traffic is definitely a minority of all internet activity. 

The “Dead Internet Theory” people are not only conspiracy theorists, they’re also ahead of the curve. If the theory holds that a majority of online activity is by AI, bots, and agents, then the theory is now objectively true. 

(The theory offers a host of reasons for that outcome that have not been proven true. Proponents believe bots and AI are intentionally created to manipulate algorithms, boost search results, and control public perception.)

Meta cheerfully boasts about its intentional creation of AI bots, but mainly to drive engagement. 

Meta’s fake-user initiatives remind me of its failed “metaverse” programs. 

As with the “Dead Internet Theory,” the “metaverse” concept was a dystopian nightmare dreamed up by novelists as a warning to mankind. The “Dead Internet Theory” is a conspiracy theory that attempts to explain how the internet went horribly wrong. But to Meta, the “metaverse” and the “Dead Internet Theory” are product roadmaps.

Meta is proving itself to be an anti-human company that’s working hard to get people away from the real world and trapped for many hours each day, going nowhere, doing nothing, and interacting with no one. 

Meta will fail. The public will reject its dystopian goals.

But the rest of us should learn from their bad example. What the public really wants — something Meta used to understand — is human connection: people connecting to other people.  Advertising, articles, posts, comments, and chats made by people rather than bots are becoming harder to find and, as such, also more valuable.

Because a “connection” with nobody is no connection at all. 

More than 4% PC shipment growth predicted for 2025, but not for what you expect, says IDC

PC sales certainly weren’t going gangbusters in 2024: They only grew a paltry 1% over 2023.

According to new figures from IDC, vendors shipped 262.7 million PCs in 2024. But things did pick up a bit in Q4 2024: Shipments grew 1.8% over the prior year, reaching 68.9 million.

While all this may seem like a modest gain, it still represents progress in a time of economic instability, fear of inflation, geopolitical tensions, and the upcoming US regime change.

“1% growth is actually a pretty good thing in the PC industry right now,” Ryan Reith, group vice president, IDC’s Worldwide Device Trackers, told Computerworld. “That’s what we expected for the year, and actually, the market is shifting back to some recovery.”

The year of refresh

2025 will likely see bigger numbers. IDC expects 4.3% growth in total PC shipments in the coming year. This will largely be due to commercial refreshes, which occur even in the “toughest of macro-economic times,” Reith pointed out. Typically, medium- to large-sized companies update their PCs at least every three to four years.

“The commercial refresh usually is pretty resilient because, certainly in developed markets, a lot of medium to large enterprises want to stay ahead,” said Reith.

Indeed, Microsoft has declared 2025 the “year of the Windows 11 PC refresh,” as the tech giant is ending feature and security support for Windows 10 PCs beginning October 14.

However, many factors remain uncertain, including fears of inflation, ongoing geopolitical disputes, and big changes expected with the impending Trump administration. The Consumer Technology Association, for one, estimates that Trump’s proposed steep tariffs on imports — ranging from 10% to 20% for most countries and climbing as high as 100% for goods from China — could increase laptop and tablet prices by as much as 68%.

What about AI PCs?

There has been a ton of hype around AI PCs, as they are set to fundamentally change the way people interact with devices. For instance, built-in AI can perform certain tasks such as information retrieval, while more advanced AI agents can even take autonomous action, leading to significant productivity gains.

Gartner, for instance, has projected that AI PCs will account for 43% of all PCs in 2025. The firm’s analysts estimate that worldwide shipments of AI PCs will total 114 million units this year, representing an increase of more than 165% over 2024. Further, the firm predicts that by 2026, AI laptops will be the only choice of laptop available to large enterprises (compared to less than 5% availability in 2023).

Big tech is certainly betting on this trend. Microsoft, for its part, introduced Copilot+ PCs in May, and Nvidia introduced its AI PC Project Digits this week at CES. Qualcomm and Advanced Micro Devices (AMD) have unveiled their own AI processors and Dell is working on AI hardware, too.

“This is a huge leap of technology, from every aspect of software down to the hardware and everything in between,” said Reith. “This is going to be a fundamental change, in a positive way, in the industry.”

More advanced PCs that can do more than other PCs (and humans, too) might eventually translate to fewer devices shipped, he noted. However, it will be a net positive. “There’s going to be a lot of revenue gains from that, from the software side, cloud side, everything else.”

Not so fast…

Still, Reith noted, the industry has gotten a little ahead of itself when it comes to AI PCs. While they someday will become the norm — all modern laptops and desktops, after all, contain some sort of AI — that’s more of a long-term trend.

This is notably because “budgets are constrained across the board,” said Reith. “It doesn’t matter if you’re a tech company, healthcare, whatever. When AI comes up, it’s, ‘Look, how much extra is that going to cost?’ It’s all about the dollar.”

Also, while they’re innovating at an impressive clip, big tech companies haven’t really lived up to the hype, he pointed out. Industry watchers, for instance, thought Microsoft would deliver more around Copilot+, providing concrete use cases through its partnerships and illustrating how enterprises can get returns on their investments.

“Microsoft didn’t deliver, but it didn’t fall on its face,” said Reith. “Even if you under-deliver a little bit in a time when budgets are constrained, it puts a bigger spotlight on, ‘Hey, maybe we can wait a little bit.’”

There are still very, very good PCs out there

IT decision makers don’t need to feel rushed to purchase AI PCs, Reith noted. Don’t rule out PCs the next level down, he advised; there are still “really, really good” products from PC vendors that run Intel’s Meteor Lake processors (introduced in 2023) or AMD chips, among others.

“So don’t feel like you’re buying down,” said Reith. “We have a lot of very, very good PCs; they’re just not the ones that are the latest and greatest and cost 50% more.”

Also, he pointed out, while Microsoft is sunsetting Windows 10, enterprises still have access to an affordable service support extension. “It’s a very, very attractive option, especially right now, if you’ve got good hardware.”

The AI PC buzz is real

Recognizing the dampening of interest (at least for now) in AI PCs, suppliers like Lenovo, HP, Dell, and others are already adjusting and shifting their focus to PCs the next level down in their portfolio, said Reith.

“It’s going to pick up, they’ve kind of paused a little bit on the supply side,” he said. However, “they’re not going to slow down the innovation.” In fact, “they’re innovating like crazy.”

Ultimately, “the buzz is real,” he said. “I think everyone got a little over their heads on the immediate opportunity. It’s just going to be a little bit more prolonged.”

Now, you can create a digital copy of your personality in just two hours

Researchers at Google DeepMind and Stanford University have concluded that a two-hour interview is sufficient to create a realistic AI copy with the same personality as the interviewee.

In an experiment, 1,052 people were interviewed using a questionnaire that addressed everything from personal life events to opinions about society. A digital AI copy of each participant was then created; when a new round of questions was asked, the copy answered the same way as its human counterpart in 85% of cases.

According to the researchers, AI copies of real people could be useful in a wide range of contexts, but the technology also carries risks. For example, such copies could be used for scams.

Apple doubles down on privacy after Siri-snooping settlement

Apple has vehemently denied that it ever abused recordings of Siri requests by using those records for marketing, ad sales, or any of the other creepy nonsense we’re being forced to tolerate with other connected devices.

The company’s denial follows a recent $95 million settlement concerning a widely reported sequence of events when it became known that the company had human contractors grading people’s spoken Siri requests. Many of us were extremely shocked at the nature of what was being recorded and shared with those contractors. To be fair, Apple swiftly took steps to remedy the situation, though it said the grading had been necessary to improve Siri’s accuracy.

The plaintiffs claimed that Apple’s systems had been used to trigger ads targeted at them, which Apple denied despite settling the case. It’s thought the company chose to settle to head off further challenges to its privacy commitments.

An unforced error with big consequences

The company has always denied that it abused the Siri request records in any way and has consistently pointed out that the recordings were not directly connected to any individual user, which is very unlike the experience with other connected devices. That denial wasn’t enough in this case.

That’s because devices that lack Apple’s commitment to privacy are the ones responsible for ads you might encounter that spookily reflect private conversations you may have had. Apple says its systems don’t do that. 

Some companies deny that they do this, but the fact that others continue the practice leaves most of us deeply uncomfortable and erodes trust.

In a statement following the resolution of the lawsuit, an Apple spokesperson said: “Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone for any purpose. Privacy is a foundational part of the design process, driven by principles that include data minimization, on-device intelligence, transparency and control, and strong security protections that work together to provide users with incredible experiences and peace of mind.”

Apple’s track record is a good one

Apple has committed vast resources to building privacy protections across its systems. Everything from Lockdown Mode to tools that block aggressive ad targeting and device fingerprinting reflects the extent of those efforts, work that touches almost every part of the company’s ecosystem.

A looming future problem, of course, is that while Apple might be keeping its pro-privacy promise, not every third-party developer likely shares the same commitment, despite the privacy labeling scheme the company has in place in the App Store.

This might become an even bigger problem as Apple is forced to open up to third-party stores. It seems plausible to expect some popular apps sold via those stores might choose to gather user data for profit.

With that monster visible on the horizon, Apple has also confirmed that it has teams working to build new technologies that will enhance Siri’s privacy. It also said, “Apple does not retain audio recordings of Siri interactions unless users explicitly opt in to help improve Siri, and even then, the recordings are used solely for that purpose.”

How Apple already protects Siri privacy

Apple pointed to several protections it already has in place for Siri requests:

  • Siri is designed to do as much processing as possible right on a user’s device — though some requests require external help, many, such as search suggestions, do not.
  • Siri searches and requests are not associated with your Apple Account. 
  • Apple does not retain audio recordings of Siri interactions unless users explicitly opt in to help improve Siri.

Apple has another protection it is putting into place: Private Cloud Compute. This will mean that Apple Intelligence requests made through Siri are directed to Apple’s cloud servers, which offer industry-leading security. “When Siri uses Private Cloud Compute, a user’s data is not stored or made accessible to Apple, and Private Cloud Compute only uses their data to fulfil the request,” the company said.

To some degree, the need to make these statements is a problem Apple foolishly created for itself in the way it initially handled Siri request grading. The manner in which that was done tarnished its reputation for privacy, which is unfortunate given the company knows very well that in the current environment digital privacy is something that must be fought for.

There is a silver lining to the clouded sky. That Apple is now making these statements means it can once again raise privacy as a consideration as we move through the next chapters of AI-driven digital transformation.

All the same, raising the conversation does not in any way guarantee that privacy will win the debate, despite how utterly essential it is to business and personal users in this digital, connected era.

You can follow me on social media! You’ll find me on BlueSky, LinkedIn, Mastodon, and MeWe.

So you want to manage Apple devices without using MDM? Here’s how.

Recently, I was asked a question I haven’t heard in several years: Can you manage Apple devices without using MDM?

The technical answer is yes. You can use configuration profiles and Apple Configurator to do this.

But you really shouldn’t try that approach. With mobile device management (MDM) vendors licensing their software for as little as $1 per device or user per month, MDM should be the go-to option for all but those on the tiniest of shoestring budgets. (There’s also the possibility of using Apple Business Essentials, a stripped-down solution from Apple intended for small organizations.)

MDM and Apple Business Manager (or Apple Business Essentials) allow for zero-touch deployment. IT does not even have to see a device; it can be shipped new in the box to an employee and it will automatically configure and enroll in MDM when querying Apple’s activation servers during startup.

By contrast, managing devices manually can be extremely time consuming because you have to set up each device by hand when installing configuration profiles — and you must touch it every time you need to make changes. Security updates (or any software updates) cannot be forced to install, leaving it up to each user to install them or not. 

When a device is managed via MDM, there’s constant back-and-forth communication between the device and your company’s MDM service. This enables a whole host of features, particularly security features: querying device status, locking or unlocking the device, installing software updates, and adding applications and other content over the air.

You also gain the ability to securely separate work and personal use of a device and to make use of managed Apple Accounts rather than relying on a user’s personal Apple account. 

Managed Apple Accounts perform the same function as personal Apple IDs, but they’re owned by an organization rather than the end user, and they link to an employee’s work-related accounts. They can also be managed in a way that allows users to access Continuity features at work and provides a work-related iCloud account. One big advantage here is that work-related passwords and passkeys can sync across all of a user’s work devices (and they can be automatically removed from a device if a worker leaves the organization).

Another consideration to keep in mind if you’re a small shop looking to save a few dollars is that you might not always be small. You may not think you need the features that come with MDM solutions, but as your company grows, your needs will change — and you’ll likely have to go through the headache of migrating away from manual management anyway.

This is the part where I tell you to turn back from trying to manage Apple devices manually. 

But if you’re truly determined to go it without using MDM or you’re really that cash strapped and you have a small number of employees and devices, here’s what you need to know. (Just don’t say you weren’t warned if you go this route and run into problems or security breaches.)

The basic component for managing devices is the configuration profile; it’s an XML file that specifies the various options you want to set up. These profiles have been around since the iPhone 3G launched in 2008 (two years before MDM even existed). These files also underpin MDM configuration, but you get a much broader selection of configuration options and an easier interface via MDM.
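For a sense of what these XML files look like, here is a minimal, hypothetical configuration profile that enforces a simple passcode policy. The display name, identifiers, and UUIDs are placeholders invented for this sketch, not values from any real deployment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Top-level profile metadata -->
    <key>PayloadType</key>
    <string>Configuration</string>
    <key>PayloadDisplayName</key>
    <string>Example Passcode Policy</string>
    <key>PayloadIdentifier</key>
    <string>com.example.profile.passcode</string>
    <key>PayloadUUID</key>
    <string>00000000-1111-2222-3333-444444444444</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <!-- One or more payloads, each configuring one feature -->
    <key>PayloadContent</key>
    <array>
        <dict>
            <key>PayloadType</key>
            <string>com.apple.mobiledevice.passwordpolicy</string>
            <key>PayloadIdentifier</key>
            <string>com.example.profile.passcode.policy</string>
            <key>PayloadUUID</key>
            <string>55555555-6666-7777-8888-999999999999</string>
            <key>PayloadVersion</key>
            <integer>1</integer>
            <!-- Require a passcode of at least six characters -->
            <key>forcePIN</key>
            <true/>
            <key>minLength</key>
            <integer>6</integer>
        </dict>
    </array>
</dict>
</plist>
```

In practice you’d build a file like this in Apple Configurator’s GUI rather than by hand; the same structure is what an MDM server delivers over the air.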

Apple Configurator for Mac is a free tool available in the App Store. There is an iPhone version as well that’s used to enroll devices if they’re not eligible for zero-touch deployment — typically, devices bought outside of a business purchase from Apple or an authorized reseller. (The Mac version can also be used for this purpose.)

The latest version of Apple Configurator supports the management of iPhones, iPads and Apple TVs, but — cautionary alert — it does not support managing Macs. (This is another downside to manual device management.)

Apple Configurator allows you to create a blueprint for various device types and to create configuration profiles with a simple-to-use GUI. You can then assign your profiles to blueprints. Configurator also lets you prepare devices to receive configuration profiles; backup and restore devices; determine whether they will work using Apple’s Supervision functions, which provide some additional control over devices; and to install apps. 

Once you’ve set up blueprints and added configuration profiles and apps, you’ll need to connect each device via a USB-to-Lightning cable (for older devices) or a USB-C cable (for newer devices) and then assign the device to a blueprint. When preparing a device for Apple Configurator, you can choose to remove various steps in Setup Assistant (just as in MDM). You can also set the device name, wallpaper, and home screen layout.

Managing Macs works essentially the same way — by building configuration profiles. But you need to hand install them on each Mac. Depending on the payload of the profile and whether a user has local admin privileges, the Mac user might be able to delete installed configuration profiles. Keep that in mind.

Apple Configurator can also be used to revive or restore the firmware of Apple devices (including Macs).

Apple provides a user guide that offers additional details and a walk-through of tasks in Apple Configurator.

So, as I noted from the very start, you can see that it’s certainly possible to manage Apple devices manually. But hopefully, you can also now see that there are too many advantages to managing devices using MDM (or Apple Business Essentials) to do it the old-school way. 

From better security to a lighter IT workload and an improved user experience, MDM really can streamline everything needed to keep your fleet of Apple devices up and running.