The researchers highlighted security threats involving the manipulation of virtual objects when users collaborate via mixed-reality headsets. The work involved 20 participants from the school, most with little or no experience using mixed-reality headsets. In many cases, participants did not realize they were being attacked; instead, they blamed technical glitches or latency for the problems they encountered.
“Malicious entities could exploit vulnerabilities to disrupt critical collaborations, manipulating users’ perception of the environment, and impairing their ability to coordinate, potentially resulting in physical or psychological harm to users and bystanders,” the researchers said.
There has not been enough focus on potential vulnerabilities within the XR platforms, said Anshel Sag, principal analyst at Moor Insights & Strategy.
“The reality is that a lot of these platforms are pretty closed and it’s hard to evaluate the code,” Sag said.
The study was done using a HoloLens 2 headset, which Microsoft discontinued last year. The HoloLens 2 platform is out of date, Sag noted, something the researchers acknowledged.
“There are only a few collaboration platforms in use today for enterprise and defense, and a good chunk of the potentially vulnerable collaboration tools most likely don’t connect to the open internet,” Sag said. “That’s why I think a lot of the implementations that the government wants to use — or any kind of secure applications like enterprises [rely on] — need to have code evaluations and audits.”
The researchers said the attacks would be difficult for users to comprehend and identify. “An attack might alter the environment for one user without affecting the view of others or disrupt communication between users at a critical moment,” the researchers said.
They noted the possibility of a “click redirection attack,” which they likened to web-based clickjacking. In this case, a malicious party could target a 3D object in a collaborator’s field of view. When the person tries to move the object, the action affects another 3D object instead.
“The collaborative environment can make the unintended movement of virtual objects a potential cause of mistrust and confusion between the collaborators,” the researchers wrote.
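The mechanics of such a redirection can be illustrated with a toy model. This is a sketch, not the researchers’ code: the `SharedScene` class, object names, and the idea of an attacker-controlled remapping table are all hypothetical, standing in for whatever shared-state layer a real mixed-reality platform uses.

```python
# Illustrative sketch (hypothetical, not from the study): a compromised
# shared-state layer silently remaps the target of a user's manipulation,
# so the gesture moves a different 3D object than the one selected.

class SharedScene:
    def __init__(self):
        # object id -> (x, y, z) position in the shared space
        self.objects = {"valve": (0.0, 1.0, 2.0), "switch": (3.0, 1.0, 2.0)}
        self.redirect = {}  # attacker-controlled remapping of targets

    def move(self, target, delta):
        # The attack: look up an attacker-supplied substitute target,
        # so the user's input is applied to the wrong object.
        actual = self.redirect.get(target, target)
        x, y, z = self.objects[actual]
        dx, dy, dz = delta
        self.objects[actual] = (x + dx, y + dy, z + dz)
        return actual

scene = SharedScene()
scene.redirect["valve"] = "switch"      # clicks on "valve" now hit "switch"
moved = scene.move("valve", (0.5, 0.0, 0.0))
print(moved)                # "switch" moved instead of "valve"
print(scene.objects["valve"])  # unchanged, confusing the collaborator
```

Because the selected object never moves while a different one does, each collaborator sees behavior consistent with a glitch rather than an attack, which matches the researchers’ observation that participants blamed technical faults.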
Another attack — called an “object occlusion attack” — involved placing an invisible barrier on 3D objects to prevent interaction from a distance. And a “spatial occlusion attack” expanded that concept by placing an invisible boundary over a larger region, blocking interaction with multiple objects.
Occlusion attacks could hurt productivity on projects, as collaborators might no longer share similar fields of view. That kind of attack would also force headset users to move closer to virtual objects before they could interact with them.
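A minimal one-dimensional model shows why an invisible barrier blocks distant interaction: selection in mixed reality typically works by casting a ray and taking the first collider hit, and an invisible collider placed in front of an object absorbs that ray. Everything here is an assumption for illustration — the `first_hit` function and the object dictionaries are hypothetical, not the HoloLens implementation.

```python
# Illustrative sketch (hypothetical): ray selection picks the nearest
# collider in the ray's direction. An invisible occluder placed in front
# of an object absorbs the ray, so the object cannot be selected remotely.

def first_hit(ray_origin, ray_dir, colliders):
    """Return the nearest collider hit by a ray (1-D toy model)."""
    hits = [(c["pos"] - ray_origin, c) for c in colliders
            if (c["pos"] - ray_origin) * ray_dir > 0]  # in front of the ray
    return min(hits, key=lambda h: abs(h[0]))[1] if hits else None

visible_cube = {"name": "cube", "pos": 5.0, "visible": True}
barrier = {"name": "occluder", "pos": 4.0, "visible": False}  # the attack

# Without the barrier, the ray selects the cube...
print(first_hit(0.0, 1.0, [visible_cube])["name"])           # cube
# ...with it, the invisible occluder absorbs the ray first.
print(first_hit(0.0, 1.0, [visible_cube, barrier])["name"])  # occluder
```

Moving the ray origin past the barrier (closer than 4.0) restores selection of the cube, which mirrors the researchers’ finding that users had to get closer to objects before interacting with them.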
The researchers also launched a latency attack by slowing network speeds between participants’ headsets. The network attack significantly undermined the user experience.
To safeguard virtual systems, the researchers recommended educating users about potential security threats and building in security by design. Safety measures could include auditory cues to identify the location of objects and a warning system to identify security threats.
Additionally, headset developers could include UI changes with toggles and controls that “highlight all objects in the environment similar to basic 3D view management,” the researchers wrote.
The research study was written by Maha Sajid, Syed Ibrahim Mustafa Shah Bukhari, Bo Ji, and Brendan David-John. They could not be reached for comment.
A new report by UK analyst firm Say No to Disinfo and communications firm Fenimore Harper indicates a high risk that AI-generated disinformation could create bank runs that could bring down financial institutions, according to Reuters.
In an experiment, a number of UK customers were shown AI-generated rumors about their bank. Afterwards, a third said they were “very likely” to withdraw their money, with 27% saying they were “quite likely” to do so.
According to the report, spending as little as £10 (about $12.60) on a fake AI message would be enough to persuade customers to withdraw more than $1 million from the bank in question.
Attempts to challenge the power of Elon Musk and his DOGE team to close down government departments have hit an unexpected complication: according to the White House, the entrepreneur is not even in charge of the operation.
That surprising claim was made in court papers filed by the White House on Monday. Far from running DOGE, Musk is simply another “senior adviser to the president,” with no greater authority than any other adviser, according to an affidavit filed by the White House’s Director of the Office of Administration, Joshua Fisher.
“Like other senior White House advisors, Mr. Musk has no actual or formal authority to make government decisions himself. Mr. Musk can only advise the President and communicate the President’s objectives,” Fisher declared in the affidavit.
Musk is not an employee of DOGE, nor its administrator; his status is that of an employee of the White House, Fisher added.
His filing was in response to a complaint filed Feb. 13 by the attorneys general of 14 US states against “Elon Musk in his official capacity,” the US DOGE Service and its temporary organization, and President Trump himself, questioning the apparently unchecked power DOGE and Musk have been handed by Trump.
Their wording didn’t hold back, drawing an unflattering parallel between his behavior and the “despotic power” wielded by Britain’s King George III over the American colonies in the 18th century.
“Mr. Musk’s seemingly limitless and unchecked power to strip the government of its workforce and eliminate entire departments with the stroke of a pen or click of a mouse would have been shocking to those who won this country’s independence,” they said.
Musk did not occupy an office of state and had not been confirmed by the Senate, the states argued. This rendered his actions unconstitutional.
DOGE playbook
If Musk isn’t running DOGE, who is running it? And does this even matter? Unhelpfully, President Trump’s executive order bringing it into existence on day one of his administration never named a head. Nor, as critics have pointed out, did it explain how a department could have so much power or even be called a “department” without having to obtain approval from Congress first.
This is surely deliberate. If it’s not a department, it is therefore not bound by legislation governing freedom of information, privacy, and administration. However, the White House’s refusal to acknowledge Musk as the head of DOGE is probably simply a delaying tactic. They will know that successfully identifying Musk as the person directing DOGE is important for his opponents’ legal arguments.
If Musk is not running DOGE, then who should be held responsible for its actions? It’s likely that a judge will eventually point out that someone, somewhere must be accountable for what DOGE is doing.
Exploiting a loophole
The problem with trying to stop Musk and DOGE is that he has attacked the system on several fronts simultaneously, often using unsubstantiated claims of fraud as his motivation. This includes turning up unannounced at the Treasury Department on January 20 and demanding access to payment servers which store the tax returns, social security data and bank account numbers of every adult US citizen. That access was blocked by a judge.
The same modus operandi has been repeated in other departments, creating a moving target for anyone trying to stop him. In response, some officials have chosen to resign rather than give Musk’s team access to data in a way that might not comply with existing data security and privacy rules.
What remains unclear is how much access has been granted, and to whom within DOGE. This has left a feeling of strained uncertainty.
“An internal email sent to BFS [Bureau of the Fiscal Service] IT personnel by the BFS threat intelligence team has identified DOGE access as ‘the single greatest insider threat risk the Bureau of the Fiscal Service has ever faced,’” argued the state attorneys general as part of their recent legal challenge.
Furthermore, “The intelligence team recommended the DOGE members be monitored as an insider threat. Critically, they called for ‘suspending’ any access to payment systems and ‘conducting a comprehensive review of all actions they may have taken on these systems,’” it continued.
“Mr. Musk has gained sweeping and unprecedented access to sensitive data, information, systems, and technological and financial infrastructure across the federal government. This access is seemingly limitless and dependent upon Mr. Musk’s discretion.”
For now, there is nothing to stop Musk beyond a flurry of disconnected lawsuits by organizations and individuals. For its part, DOGE continues to hide in plain sight, exploiting the loophole that by avoiding being a formal department, it sits strangely beyond the usual rules.
Apple has apparently delayed what is arguably its most important Apple Intelligence feature, contextual intelligence, by at least another month. It’s the latest chapter in what history will remember as the company’s most painfully slow, yet strategically significant, introduction yet.
Bloomberg says Apple has hit a variety of obstacles in developing these tools, with the smart features the company wants to introduce not working consistently.
The company is attempting to build on-screen awareness so Siri can act on the content you are seeing — it might save a message address or even run a series of nested commands, such as pulling up a half-remembered article from those you read the day before and sending it to a friend.
Apple has one example in which the intelligence extends to person recognition, so Siri might be able to tell you when your mom’s flight is landing, based on an old email containing her flight number and recognition of your relationship.
These are all sophisticated features, but ensuring they work consistently is essential. You don’t want families waiting forlornly for the wrong flight, or mom waiting for a ride that never arrives. Unlike AI-generated news headlines, these tools really need to work before they ship.
And word is, they don’t, at least not yet…
“Hey Siri, what’s that paperclip in Windows called?”
The inevitability of WWDC
The update had been expected to show its face in April with iOS 18.4. Now it won’t appear until one month before WWDC 2025, in iOS 18.5 in May.
That’s almost one full year since those features were first discussed at WWDC, and it shows the extent to which Apple has been forced to play for time in this deployment. It has managed to buy that time, but the delay can’t be a good thing for the company, given that it should also be pouring resources into improvements across all its operating systems as it prepares for its annual developer conference in June.
It raises questions, such as just how much of the company’s resources are being spent on AI, and what, if any, additional Apple Intelligence tools it will be in a position to announce this year.
One thing we do know is that Apple must announce something at WWDC. Developers will want to know the company is moving forward on AI. That means that merely reprising the features the company managed to ship slowly across the last 12 months won’t do. Nor will pointing enthusiastically at the new support for additional languages Apple is expected to introduce.
To maintain relevance amid the clamor about DeepSeek or OpenAI, Apple needs to justify what CEO Tim Cook promised in late 2024, when he said: “We’re pouring all of ourselves in here, and we work on things that are years in the making.”
Betting the bank
Apple understands this. Despite shuttering its Apple Car project, the company spent more on research and development in its just-past quarter than it did a year ago ($8.2 billion versus $7.6 billion). R&D spending goes up almost every year at the company, and you can bet your bottom dollar (in comparison to Apple’s near-infinite ones) that AI is part of that spending plan.
Throwing money at problems doesn’t always yield results, however.
You need resource allocation and tight control to ensure all the different research teams are working effectively together. This has plainly been a challenge at Apple, given that the company recently put one of its best, Kim Vorrath, in charge of getting Apple Intelligence to ship on time. Vorrath is working with John Giannandrea, Apple’s senior vice president for machine learning and AI, whose team reportedly lacked access to developer resources until early 2023, according to an earlier Wall Street Journal report. That is no longer the case.
Facing the challenge
While Giannandrea’s team builds on the AI-driven tools Apple already has in place, the challenges faced by his group mean they must not only deliver AI in an Apple way, but do so in a way that visibly competes with the larger pure AI companies its rivals are already partnering with.
With so much at stake, it is perhaps better to delay than to ship anything that does not work. But people’s patience with such delays will not be infinite, and with OpenAI still threatening to introduce its own device designed by iPod designer Jony Ive, Apple’s execs surely feel a degree of performance anxiety as they struggle to be the real artists they are reputed to be.
With the second Trump administration, very different cultures are once again clashing in the transatlantic relationship.
The recently inaugurated US President Donald Trump has turned the trusting relationship between the United States and Europe on its head, according to the EU’s new Competition Commissioner, Teresa Ribera. Brussels must now ensure reliability and stability, factors that no longer exist in Washington. In an interview with the Reuters news agency, the politician called on Europe to continue negotiating with the White House and listen to the US government’s concerns on trade issues, but not to allow any changes to EU laws to be forced upon it.
“We need to stick to our strengths and principles,” Ribera told Reuters. “We need to be flexible but we cannot transact on human rights nor are we going to transact on the unity of Europe, and we are not going to transact on democracy and values.”
Trump and his followers in the US government have recently criticized the EU for its rules and regulations, characterizing the fines the EU imposes on US technology companies as a kind of punitive tax.
JD Vance: EU restricts freedom of speech
US Vice President J.D. Vance used his appearance at the Munich Security Conference in mid-February for a general reckoning with Europe. He said EU Commissioners were suppressing freedom of expression and restricting access to online platforms and search engines in certain situations with the help of the Digital Services Act.
Ribera reacted to the accusations with incomprehension. “If there is a problem, a point of concern, please explain that,” the EU Commissioner said. “That doesn’t make sense.”
Volker Wissing, Federal Minister for Digital Affairs and Transport, also made it clear that European values are not negotiable, neither through political pressure nor through market dominance. “Anyone who believes that European rules can be dictated from outside is very much mistaken,” emphasized the politician. “The EU Commission must consistently enforce the Digital Services Act (DSA) – without compromises and without deals. Anyone who confuses freedom of expression with the freedom to spread hate and disinformation is misjudging the foundations of our values.”
Ribera announced that the EU will issue decisions in March 2025 on whether Apple and Meta have complied with European rules. Both US companies have been under observation by antitrust watchdogs for around a year. They could face heavy fines if it turns out that they have violated the Digital Markets Act. The EU Commissioner rejected speculation that the decisions could be delayed in view of the massive criticism from the US administration.
The Spanish politician also announced that Trump buddy Elon Musk’s social media platform X would remain under observation. Musk’s role within the US government plays no role in this, she said.
Amazon faces billions in fines in Italy
Amazon is finding out that European authorities don’t take kindly to rules and laws being violated. Public prosecutors in Italy are investigating whether the world’s largest online retailer has cheated tax authorities there out of €1.2 billion in value-added tax (VAT). Since 2019, a law in Italy has obliged e-commerce platforms to pay the VAT incurred by third-party sellers outside the EU if they sell goods in Italy via the platform.
The investigations by the public prosecutor’s office cover the period from 2019 to 2021 and were concluded in December, according to various media reports. Amazon is facing a penalty of over €3 billion, according to the Guardia di Finanza. Amazon declined to comment on the investigations, according to a report by the French news portal France 24. However, the online retailer asserts that it is committed to complying with all applicable tax laws.
Amazon’s tax practices have been criticized for years. Despite billions in sales, the company shifts its profits to tax havens such as Luxembourg in order to avoid taxes, complained British Labour MP Margaret Hodge back in 2022. The EU Commission, on the one hand, and Amazon and Luxembourg, on the other, have been arguing for years about whether Amazon’s tax advantages in Luxembourg are illegal or compliant with EU state aid rules. Amazon itself asserts that it works in full compliance with local tax laws everywhere.
Generative AI (genAI) projects will move from pilot phase to production for many companies this year, which means the workforce will be affected in ways never before imagined. One of those ways will involve onboarding AI agents as new digital employees.
One of the focus areas for global staffing firm ManpowerGroup has been its proprietary platform, Sophie, which uses AI to tackle talent screening tasks. The staffing firm sees AI agents playing a central role in sifting through job applicant data for clients, identifying market trends, and offering hiring suggestions. When Sophie provides a recommendation — either for or against a candidate — it also explains the reasoning behind it.
“We view Sophie as a partner to help you focus on what truly matters: finding the right people and building a workplace grounded in honesty, respect, and mutual confidence,” said Carolyn Balkin, general manager for Global Client Solutions at ManpowerGroup.
The company borrowed a page from its HR best practices on team integration to ensure agents were introduced to the marketing team and understood their roles. It also created feedback loops that enabled simple, two-way feedback between human marketers and agents, which turned out to be key in establishing “collaboration and mutual learning.”
AI adoption also means that practically every employee, whether they’re part of an IT organization or a business group, must become familiar with chatbots and other large language model-related technology to better do their jobs.
What organizations are seeking in new talent has shifted as AI continues to take on more repetitive, predictable tasks, requiring workers to focus on creating new business value.
ManpowerGroup’s Balkin manages IT, technology and telecommunications industry vertical clients, and she has been advising organizations about what it means to manage AI employees. One of the biggest challenges: finding the kind of talent they’ll need that can work with AI and figuring out how to integrate agents across business groups.
Carolyn Balkin, general manager for Global Client Solutions at staffing firm ManpowerGroup
How has managing employees changed since the adoption of AI in the workplace? “I think you know that’s where the soft skills have really come into play, because it is not just a technology. I was at the Davos Conference recently, and a lot of the conversations were about AI, and a number of organizations talked about it: It’s not just a technology anymore. We are looking for individuals that have the industry experience. We can take somebody with industry experience and train them on the technical part of the job.
“It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration.
“So, I think it is more than just the technology play now. And therefore, when managing these people, it’s not just managing a technical group anymore. You’re managing people who are bringing a different perspective, a different experience, and different soft skills to play, and it’s about how do you pull all of that together.”
How do you go about managing that other type of AI employee — the digital agent, or the AI itself that is becoming another kind of employee? “We do have some early adopters that have put in place these agent workforces. I do know it has changed how they’re looking at workforce management. It’s about what can my agents do? You’re almost looking at agents as the new intern of the company. What can the agents do transactionally, and then what skills do I need to manage that on top of the agents? So, what technical skills do I need, and what soft skills do I need in my employees to manage those agents? And that becomes the workforce plan.
“Then it’s looking at location strategy. In the past, organizations led with location; now, it’s about getting the agent strategy right. First, figure out what you can take from your transactional workers and then focus on what skills you need.
“Then you have to consider employee upskilling or reskilling. I think organizations are going to have to become much more proactive on their upskilling and reskilling programs. We’ve heard so much about this for the last couple of years, and I think there’s a gap where organizations believe they have strong programs. But when you talk to employees within these companies, they don’t feel there’s been the opportunities to upskill and reskill. So, I think we’re going to have to see more structure around those programs.”
So, how are you managing the digital employee you call Sophie? “Behind Sophie is a cross-functional group that bridges technical expertise and real-world understanding. AI and machine learning experts collaborate with sales and operational professionals, along with individuals who study how people interact with technology. Together, they work toward maintaining our commitment to fairness and trust by:”
Running ongoing checks to spot hidden biases in how Sophie interprets data.
Protecting personal information through strong security protocols and compliance practices.
Offering transparent decision-making details so you always see why Sophie has chosen a particular path.
So would you say managing your digital workforce or managing your agentic workforce is kind of the next frontier? “I definitely think so. I mean, it’s just a collaboration across the agents. And look how fast AI came on board, and now it’s just getting smarter and harder. You know how it’s collaborating with each other. And you know, you have to teach it, too. It’s not going to be just like humans. It’s not going to be 100% accurate, so you need to monitor it; it’s going to create different jobs. You know, back to your question, will it create jobs or kill jobs? Don’t know yet.
“I think they’ll definitely be different, though, because now you have people looking at the quality of what’s coming out of the agents, testing to see if it’s accurate, training the agent. So it will create a whole new set of roles, and it’s going to affect every industry. In manufacturing, for example, organizations are using AI agents for quality control, and doing things significantly faster than they’ve been able to do in the past.”
What industries are being affected the most quickly? “I would say it’s your tech companies that are probably the early adopters, because for them to sell something, clients want a case study. They want to know where you’ve done this and what the impact has been. So, the tech companies see themselves as client zero in order to demo a lot of these new tools and technologies.”
What kinds of problems is AI introducing from an employee management standpoint? Do you believe every company is a technology company and every employee is a technologist to some extent? “Technically, yes, I do believe that. The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology.
“I was recently talking to a business manager, and he said while there’s always going to be an IT group, it’s no longer going to be the harbinger, or the only ones who own the technology.”
You have people in marketing, in advertising, in customer support, all the various branches of a business that need to be tech savvy. What’s needed to manage a workforce where everyone is using AI in one form or another? “I think you need a lot more collaboration across the workforce, because historically it has always operated in a very siloed way. You’d roll technology out from one place to the rest of the organization. As you adopt more AI, you can’t do that anymore.
“A big topic at the Davos conference was agentic AI, and that is really all about collaboration. A lot of the large language models — generative AI — have been historically working in a silo. You ask it a query; it shoots out an answer to you.
“The AI that’s under development at a lot of these organizations today is more the agentic AI, which is collaboration of various AI apps and a collaboration of your various data sets. So, that creates a lot more questions because you’ve got to have governance of all those agents. You’ve got to have platforms and the technology behind that.
“There has to be the governance model in play. You need to look at the business holistically in order to manage AI across all of those areas, so you don’t have department doing one thing that might conflict with what another may be doing. They all really need to be aligned so that they’re functioning with each other.”
Anecdotally, when I talk to folks who are out of work, even people who have years of experience in technology, they’re having a hard time finding jobs. What do you see happening? Is it harder to get a technology job now, and what skills are companies looking for? “I think it is harder to land a technology job right now. And I think part of that might just be a reflection of where the market is. I know there’s been a lot of stability in the IT tech sector, but organizations haven’t been hiring additional talent. And some of that is that 2024 seemed to be a settling period where there was a lot of adoption of AI. This year it’s about the impact of AI. And I think organizations, No. 1, are trying to figure out: What does their workforce look like? Where do they need to bring in additional talent?
“And then No. 2, what does that talent look like? And I don’t think they’re there yet. Then you throw in the whole agent workforce, and that adds to the problem.
“There are more mature companies when it comes to AI — the IBMs of the world, the Accentures, the Salesforces; they’re looking at how AI agents are becoming part of their workforce planning. When understanding what your needs are, you first have to consider which of those needs the agents will cover, and then figure out what employee skills are needed on top of them.
“And I think that’s the other piece — from a management perspective, it’s become more multifaceted in the approach that companies are taking. They’re not looking for job-centric people anymore. It’s more about the skills people have.”
When you say less job-centric, what does that mean? “In the past, you would post a job and it would list the tasks of the job. Now, managers are focused more on skills needed to perform in their business. So, these are the skills that we need to support the project.
“I actually had an interesting conversation with a client yesterday, and they were even talking about soft skills, with AI becoming more front and center when it comes to reasoning and problem solving. You assess that along with the technical skills an employee brings to the job. Businesses are looking at assessments that can help them evaluate the soft skills, some of the cognitive reasoning skills [potential hires have].”
Data shows that the number and types of jobs are growing with the advance of AI, but at the same time, there is evidence AI is reducing employee headcount — taking on tasks formerly done by employees. Which do you believe it is? Or is it both? “Is AI is going to reduce workforce sizes, lead to more people being laid off, or is it going to create more opportunities? It’s hard to say. I mean, it could go either way. But I think it’s going to impact more of the transactional roles. It will take a lot of the low-level transactional work away, but what it will also do is allow people to focus on those specialized skills.
“We’ve been talking about how software development is happening so much faster with AI. So, companies are looking for more specialized skill sets. I think there’ll be a shift from the generic skills that companies brought on in the past to more specialized skills that they’re going to need in the future.”
Can you give me some examples of the specialized skill sets? “For example, SAP engineers, SAP architect, AWS skills, and Salesforce skills. Those are some of the software areas that companies are looking for more specialized talent.”
So, you’re saying hiring will be based on skills that are specific to the applications and the AI that is becoming a part of that? “Even cybersecurity. While we’ve been talking about software, cybersecurity is another area that’s going to be very important because you’re opening up some doors with AI related to security and data privacy.”
Where do you even start with that cybersecurity and AI? It seems almost amorphous if AI is in every corner of a business. “There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us?
“Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution. There are a lot of companies that are developing AI boot camps for the C-suite executives and opening their eyes to what’s out there. Think about it. At universities like MIT, it used to take teams of scientists years to develop what can now be done in a matter of seconds.
“Right now, companies are taking a step back to discover what the business challenges are that need to be solved because of AI automation. They’re trying to discover the best way to do that. I don’t think there’s a lot of academia programs developed for that. I think a lot of it is pilot programs that involve peers talking about the issues.”
Generative AI (genAI) projects will move from pilot phase to production for many companies this year, which means the workforce will be affected in ways never before imagined. One of those ways will involve onboarding AI agents as new digital employees.
One of the focus areas for global staffing firm ManpowerGroup has been its proprietary platform, Sophie, which leverages AI to tackle talent screening tasks. The staffing firm sees AI agents as playing a central role in sifting through job applicant data for clients, identifying market trends, and offering hiring suggestions. When Sophie provides a recommendation — either for or against a candidate — it also explains the reasoning behind it.
“We view Sophie as a partner to help you focus on what truly matters: finding the right people and building a workplace grounded in honesty, respect, and mutual confidence,” said Carolyn Balkin, general manager for Global Client Solutions at ManpowerGroup.
The company borrowed a page from its HR best practices on team integration to ensure agents were introduced to the marketing team and understood their roles. It also created feedback loops that enabled simple, two-way feedback between human marketers and agents, which turned out to be key in establishing “collaboration and mutual learning.”
AI adoption also means that practically every employee, whether they’re part of an IT organization or a business group, must become familiar with chatbots and other large language model-related technology to better do their jobs.
What organizations are seeking in new talent has shifted as AI continues to take on the more repetitive, predictable tasks, requiring workers to focus more on creating new business value.
ManpowerGroup’s Balkin manages IT, technology and telecommunications industry vertical clients, and she has been advising organizations about what it means to manage AI employees. One of the biggest challenges: finding the kind of talent they’ll need that can work with AI and figuring out how to integrate agents across business groups.
Carolyn Balkin, general manager for Global Client Solutions at staffing firm ManpowerGroup
ManpowerGroup
How has managing employees changed since the adoption of AI in the workplace? “I think you know that’s where the soft skills have really come into play, because it is not just a technology. I was at the Davos Conference recently, and a lot of the conversations were about AI, and a number of organizations made the same point: it’s not just a technology anymore. We are looking for individuals that have the industry experience. We can take somebody with industry experience and train them on the technical part of the job.
“It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration.
“So, I think it is more than just the technology play now. And therefore, when managing these people, it’s not just managing a technical group anymore. You’re managing people who are bringing a different perspective, a different experience, and different soft skills to play, and it’s about how do you pull all of that together.”
How do you go about managing that other type of AI employee — the digital agent, or the AI itself that is becoming another kind of employee? “We do have some early adopters that have put in place these agent workforces. I do know it has changed how they’re looking at workforce management. It’s about what can my agents do? You’re almost looking at agents as the new intern of the company. What can the agents do transactionally, and then what skills do I need to manage that on top of the agents? So, what technical skills do I need, and what soft skills do I need in my employees to manage those agents? And that becomes the workforce plan.
“Then it’s looking at location strategy. In the past, organizations have led with location; now, it’s about getting the agent strategy right. First, figure out what you can take from your transactional workers and then focus on what skills you need.
“Then you have to consider employee upskilling or reskilling. I think organizations are going to have to become much more proactive on their upskilling and reskilling programs. We’ve heard so much about this for the last couple of years, and I think there’s a gap where organizations believe they have strong programs. But when you talk to employees within these companies, they don’t feel there’s been the opportunities to upskill and reskill. So, I think we’re going to have to see more structure around those programs.”
So, how are you managing the digital employee you call Sophie? “Behind Sophie is a cross-functional group that bridges technical expertise and real-world understanding. AI and machine learning experts collaborate with sales and operational professionals, along with individuals who study how people interact with technology. Together, they work toward maintaining our commitment to fairness and trust by:”
Running ongoing checks to spot hidden biases in how Sophie interprets data.
Protecting personal information through strong security protocols and compliance practices.
Offering transparent decision-making details so you always see why Sophie has chosen a particular path.
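The first of those commitments — running ongoing checks for hidden biases — can be made concrete with a simple disparity audit. The sketch below is purely illustrative and is not Sophie’s actual implementation: it computes per-group recommendation rates from a hypothetical audit sample and flags when the gap between groups exceeds a chosen tolerance (the group labels, sample data, and 0.2 threshold are all assumptions).

```python
from collections import defaultdict

def recommendation_rates(decisions):
    """Compute the share of positive recommendations per group.

    decisions: list of (group, recommended) tuples, where
    `recommended` is True/False. Group labels are hypothetical.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flag(rates, tolerance=0.2):
    """Flag when the gap between the highest and lowest group
    recommendation rates exceeds the tolerance (assumed value)."""
    gap = max(rates.values()) - min(rates.values())
    return gap > tolerance

# Hypothetical audit sample: (group, was_recommended)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = recommendation_rates(sample)
print(rates)                  # per-group recommendation rates
print(disparity_flag(rates))  # True: the A/B gap exceeds 0.2
```

A production audit would of course use far larger samples, statistical significance tests, and multiple fairness metrics, but the basic loop — measure rates per group, compare, and escalate when the gap is too wide — is the same.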
So would you say managing your digital workforce or managing your agentic workforce is kind of the next frontier? “I definitely think so. I mean, it’s just a collaboration across the agents. And look how fast AI came on board, and now it’s just getting smarter and harder. You know how it’s collaborating with each other. And you know, you have to teach it, too. It’s not going to be just like humans. It’s not going to be 100% accurate, so you need to monitor it; it’s going to create different jobs. You know, back to your question, will it create jobs or kill jobs? Don’t know yet.
“I think they’ll definitely be different, though, because now you have people looking at the quality of what’s coming out of the agents, testing to see if it’s accurate, training the agent. So it will create a whole new set of roles, and it’s going to affect every industry. In manufacturing, for example, organizations are using AI agents for quality control, and doing things significantly faster than they’ve been able to do in the past.”
What industries are being affected the most quickly? “I would say it’s your tech companies that are probably the early adopters, because for them to sell something, clients want a case study. They want to know where you’ve done this and what the impact has been. So, the tech companies see themselves as client zero in order to demo a lot of these new tools and technologies.”
What kinds of problems is AI introducing from an employee management standpoint? Do you believe every company is a technology company and every employee is a technologist to some extent? “Technically, yes, I do believe that. The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology.
“I was recently talking to a business manager, and he said while there’s always going to be an IT group, it’s no longer going to be the harbinger, or the only ones who own the technology.”
You have people in marketing, in advertising, in customer support, all the various branches of a business that need to be tech savvy. What’s needed to manage a workforce where everyone is using AI in one form or another? “I think you need a lot more collaboration across the workforce, because historically it has always operated in a very siloed way. You’d roll technology out from one place to the rest of the organization. As you adopt more AI, you can’t do that anymore.
“A big topic at the Davos conference was agentic AI, and that is really all about collaboration. A lot of the large language models — generative AI — have been historically working in a silo. You ask it a query; it shoots out an answer to you.
“The AI that’s under development at a lot of these organizations today is more the agentic AI, which is collaboration of various AI apps and a collaboration of your various data sets. So, that creates a lot more questions because you’ve got to have governance of all those agents. You’ve got to have platforms and the technology behind that.
“There has to be the governance model in play. You need to look at the business holistically in order to manage AI across all of those areas, so you don’t have department doing one thing that might conflict with what another may be doing. They all really need to be aligned so that they’re functioning with each other.”
Anecdotally, when I talk to folks who are out of work, even people who have years of experience in technology, they’re having a hard time finding jobs. What do you see happening? Is it harder to get a technology job now, and what skills are companies looking for? “I think it is harder to land a technology job right now. And I think part of that might just be a reflection on where the market is. I know there’s been a lot of stability in the IT tech sector, but organizations haven’t been hiring on additional talent. And some of that is 2024 seemed to be a settling period where there was a lot of adoption of AI. This year it’s about the impact of AI. And I think organizations, No. 1, are trying to figure out: What does their workforce look like? Where do they need to bring in additional talent?
“And then No. 2, what does that talent look like? And I don’t think they’re there yet. Then you throw in the whole agent workforce, and that adds to the problem.
“There are more mature companies when it comes to AI — the IBMs of the world, the Accentures, the Salesforces; they’re looking at how AI agents are becoming part of their workforce planning. When understanding what your needs are, you first have to consider which of those needs the agents will cover, and then figure out what employee skills are needed on top of them.
“And I think that’s the other piece — from a management perspective, it’s become more multifaceted in the approach that companies are taking. They’re not looking for job-centric people anymore. It’s more about the skills people have.”
When you say less job-centric, what does that mean? “In the past, you would post a job and it would list the tasks of the job. Now, managers are focused more on skills needed to perform in their business. So, these are the skills that we need to support the project.
“I actually had an interesting conversation with a client yesterday, and they were even talking about soft skills, with AI becoming more front and center when it comes to reasoning and problem solving. You assess that along with the technical skills an employee brings to the job. Businesses are looking at assessments that can help them evaluate the soft skills, some of the cognitive reasoning skills [potential hires have].”
Data shows that the number and types of jobs are growing with the advance of AI, but at the same time, there is evidence AI is reducing employee headcount — taking on tasks formerly done by employees. Which do you believe it is? Or is it both? “Is AI going to reduce workforce sizes, lead to more people being laid off, or is it going to create more opportunities? It’s hard to say. I mean, it could go either way. But I think it’s going to impact more of the transactional roles. It will take a lot of the low-level transactional work away, but what it will also do is allow people to focus on those specialized skills.
“We’ve been talking about how software development is happening so much faster with AI. So, companies are looking for more specialized skill sets. I think there’ll be a shift from the generic skills that companies brought on in the past to more specialized skills that they’re going to need in the future.”
Can you give me some examples of the specialized skill sets? “For example, SAP engineers, SAP architect, AWS skills, and Salesforce skills. Those are some of the software areas that companies are looking for more specialized talent.”
So, you’re saying hiring will be based on skills that are specific to the applications and the AI that is becoming a part of that? “Even cybersecurity. While we’ve been talking about software, cybersecurity is another area that’s going to be very important because you’re opening up some doors with AI related to security and data privacy.”
Where do you even start with that cybersecurity and AI? It seems almost amorphous if AI is in every corner of a business. “There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us?
“Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution. There are a lot of companies that are developing AI boot camps for the C-suite executives and opening their eyes to what’s out there. Think about it. At universities like MIT, it used to take teams of scientists years to develop what can now be done in a matter of seconds.
“Right now, companies are taking a step back to discover what the business challenges are that need to be solved because of AI automation. They’re trying to discover the best way to do that. I don’t think there’s a lot of academia programs developed for that. I think a lot of it is pilot programs that involve peers talking about the issues.”
Elon Musk’s AI startup xAI has introduced Grok 3, the latest version of its chatbot model, which Musk describes as the most advanced AI system yet.
xAI claims Grok 3 outperforms rival AI models from Alphabet’s Google Gemini, DeepSeek’s V3, Anthropic’s Claude, and OpenAI’s GPT-4o in benchmarks for math, science, and coding.
“About a month ago, Grok 3’s pre-training was completed, and since then, we’ve been working hard to integrate reasoning capabilities into the current Grok 3 model,” the company said during its launch event on Monday.
Musk, speaking alongside three xAI engineers in a live-streamed presentation, said Grok 3 has more than ten times the compute power of its predecessor.
DeepSearch and enterprise push
xAI also introduced DeepSearch, an AI-powered intelligent search engine that functions as a reasoning-based chatbot, explaining its thought process when interpreting queries and formulating responses.
Building on these capabilities, the company plans to roll out Grok 3’s API in the coming weeks, expanding its reasoning and deep search features.
The move signals xAI’s broader push into the enterprise and developer markets, where AI-driven automation and decision-making tools are in high demand.
“Grok 3 is stepping into a fiercely competitive arena alongside ChatGPT and Microsoft Copilot, and its success will depend on real-world performance in code generation, automation, and enterprise AI workflows,” said Abhivyakti Sengar, senior analyst at Everest Group. “If xAI delivers on its promises, it could disrupt the market. However, before enterprises integrate Grok 3, CIOs must rigorously evaluate its security and compliance measures.”
Grok 3 is now available to Premium Plus subscribers on X. xAI is also launching a new subscription tier, SuperGrok, which will provide access via the Grok mobile app and website.
The company is also developing a voice interaction feature aimed at enhancing conversational AI experiences, further expanding its capabilities beyond text-based interactions.
Intensifying AI competition
Grok 3’s launch comes as competition in the AI sector intensifies, with companies racing to develop more powerful and efficient models. Industry analysts see Grok 3’s release as a pivotal moment in this landscape.
“With Grok 3, we expect to see a significant acceleration in R&D innovation across the AI landscape, as it solidifies its leadership position against established players like OpenAI and Google,” said Prabhu Ram, VP of the industry research group at Cybermedia Research. “Grok 3’s advanced capabilities will drive heightened enterprise interest in AI solutions, enabling greater efficiency and smarter decision-making.”
As xAI expands into search and enterprise AI applications, the battle for dominance in generative AI is set to escalate further.
Musk founded xAI in 2023 as a rival to OpenAI, a company he has openly criticized for shifting toward a for-profit model.
With Grok 3’s introduction and upcoming API launch, xAI is positioning itself as a serious contender in the AI market. However, its long-term success will depend on adoption rates, performance benchmarks, and its ability to meet enterprise security and compliance demands. “Understanding how it processes and stores data, ensuring confidentiality, and aligning with regulations like GDPR will be critical,” Sengar said. “A thorough security audit and close collaboration with xAI’s team can help mitigate risks and ensure a smooth deployment.”
Artificial intelligence (AI) has gained significant traction among business leaders keen to explore ways it can drive operational efficiencies and cost savings.
But while top leadership is sold on its potential, it’s a different tale for IT teams working the ground. In Australia, the challenges of implementing AI are particularly pronounced, ranging from limited expertise and siloed operations to the rising tide of cybersecurity risks. It’s no surprise then that in the face of complexity, companies are not sure how to take the first step towards smooth and successful AI deployments.
Australia’s AI challenges
Access to skilled resources, funding issues and keeping ahead of AI’s rapid evolution are just some of the challenges that make it difficult to implement AI solutions uniformly in Australia. For mid-market companies in highly regulated industries, such as finance, energy, and utilities, addressing cybersecurity concerns and responsible AI implementation are also on the list.
“From an AI context, their challenges are similar to other sectors. This includes access to talent, quality of data, integration with legacy systems, change management, and ethical and regulatory concerns. However, they also face heightened cyber threats and fraud, driven by threat actors leveraging AI to become more sophisticated. The consequence of a breach can be significant from both a financial and consumer trust perspective,” explains John Hanna of Neudesic Australia.
Ultimately, the breadth of data mid-market companies in finance, energy, and utilities need to deal with is beyond the capabilities of existing systems that rely on the identification of known patterns or human analysis. “By adopting AI, these companies gain the capability to analyse information at scale and speed to identify and stop these threats before they significantly impact the business,” adds Hanna.
To overcome these challenges, Neudesic helps organisations through its expertise, cutting-edge technology, and strong partnerships with Microsoft, having won the Microsoft Partner of the Year award over 20 times. As a global professional services firm, Neudesic is now bringing decades of experience delivering capabilities spanning data and AI, cloud migration and modernisation, application development, and business strategy to Australia.
Hanna shares Neudesic’s approach, which comprises four pillars.
People: Its diverse array of internal experts spanning industries, skillsets, and Microsoft Azure and OpenAI solutions help clients address a wide spectrum of business challenges for any organisation
Approach: It achieves results not only by implementing Microsoft and OpenAI solutions, but also by addressing today’s challenges, identifying tomorrow’s opportunities, and designing the best path forward
Technology: It focuses on innovation to develop solutions that meet clients’ needs while accelerating time to value
Expertise: With 20 years of expertise in Microsoft’s stack, it offers clients expert knowledge to tackle critical IT challenges and unlock new opportunities
Neudesic’s process starts with understanding each client’s business needs, followed by collaborative workshops and rapid prototyping. The team will then develop a roadmap aligned with a client’s goals and ensure ongoing model refinement, data updates, and process improvements.
“We are also backed by IBM and Microsoft. What this means for customers is access to the expertise and experience of experts across both tech stacks dedicated to solving the most critical IT challenges of Australian businesses and capturing new growth opportunities,” says Hanna.
Simplifying critical industry processes with AI
A clear example of how Neudesic is driving AI is in simplifying the Know Your Customer (KYC) process in finance, also known as identity verification.
KYC is an area where good customer experience is critical, but traditional KYC processes can take days or even weeks. According to a report conducted by financial compliance software company Fenergo, eight out of ten survey respondents would lose clients to an inefficient onboarding process. More than ever, there is a need for streamlined and intelligent document processing solutions to stay competitive.
Neudesic’s Document Intelligence Platform helps automate the KYC process by capturing customer data from various formats, cross-referencing it with databases, and validating the information in real-time. It also streamlines compliance with customer identification programs.
What does this mean for financial organisations? They can now handle high volumes of KYC checks without additional staffing, while automation cuts operational costs. Real-time verification speeds up processes like account openings and loan approvals so that banks can acquire and manage customer assets sooner. What’s more, the platform integrates seamlessly with existing systems like Fenergo for a more robust and efficient workflow.
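The capture–cross-reference–validate flow described above can be sketched in miniature. To be clear, this is not Neudesic’s Document Intelligence Platform — it is a toy illustration of the general pattern, where the regex patterns, field names, registry contents, and ID format are all invented for the example; a real system would use OCR/ML extraction and query external sanctions lists, credit bureaus, and ID registries.

```python
import re
from datetime import datetime

# Hypothetical registry stand-in; a real system would query
# external databases (sanctions lists, ID registries, bureaus).
KNOWN_RECORDS = {
    "AU123456": {"name": "Jane Citizen", "dob": "1990-04-12"},
}

def extract_fields(raw_text):
    """Pull basic KYC fields out of unstructured document text.
    The patterns here are illustrative, not production-grade."""
    id_match = re.search(r"ID[:\s]+([A-Z]{2}\d{6})", raw_text)
    name_match = re.search(r"Name[:\s]+([A-Za-z ]+)", raw_text)
    dob_match = re.search(r"DOB[:\s]+(\d{4}-\d{2}-\d{2})", raw_text)
    return {
        "id": id_match.group(1) if id_match else None,
        "name": name_match.group(1).strip() if name_match else None,
        "dob": dob_match.group(1) if dob_match else None,
    }

def verify(fields):
    """Cross-reference extracted fields against the registry and
    run a simple validity check on the date of birth."""
    record = KNOWN_RECORDS.get(fields["id"])
    if record is None:
        return {"status": "fail", "reason": "unknown ID"}
    if record["name"] != fields["name"]:
        return {"status": "fail", "reason": "name mismatch"}
    dob = datetime.strptime(fields["dob"], "%Y-%m-%d")
    if dob > datetime.now():
        return {"status": "fail", "reason": "implausible DOB"}
    return {"status": "pass", "reason": "matched registry record"}

doc = "Name: Jane Citizen\nDOB: 1990-04-12\nID: AU123456"
result = verify(extract_fields(doc))
print(result["status"])  # pass
```

The value of automating this pattern is exactly what the article describes: each check runs in milliseconds rather than requiring a human to open the document, find the fields, and look them up manually.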
By partnering with integrators like Neudesic, Australian businesses can deploy AI through a proven, logical methodology and unlock the ability to invest and accelerate AI use based on business demand and available capital.
“Every business dreams big with AI but can stumble when turning ambition into action. Success demands strategy, tailored solutions, and expert guidance. With a trusted partner, businesses can avoid common pitfalls and mistakes that will result in less investment remorse and create business confidence in AI faster than would otherwise be possible,” concludes Hanna.
Anthropic has asked a US court for permission to intervene in the remedy phase of an antitrust case against Google, arguing that the US government’s call for a ban on Google investing in AI developers could hurt it.
Analysts suggest the AI startup’s fears are founded, and that it risks losing customers if the government’s proposal is adopted.
“Its enterprise clients might face uncertainties regarding the continuity of services and support, potentially affecting their operations,” said Charlie Dai, principal analyst at Forrester.
The government’s proposed remedies include the ban on AI investments, after the US District Court for the District of Columbia found the search giant guilty of maintaining a monopoly in online search and text advertising markets in August 2024.
The proposed investment ban is aimed at stopping Google from gaining control over products that deal with or control consumer search information, and in addition to preventing further investment in any AI startup would also force it to sell stakes it currently holds, including the $3 billion one in Anthropic.
On Friday, Anthropic filed a request to participate in the remedy phase of the trial as an amicus curiae, or friend of the court.
“A forced, expedited sale of Google’s stake in Anthropic could depress Anthropic’s market value and hinder Anthropic’s ability to raise the capital needed to fund its operations in the future, seriously impacting Anthropic’s ability to develop new products and remain competitive in the tight race at the AI frontier,” the AI startup said in a court filing justifying the request.
It said it had contacted representatives for the plaintiffs in the case — the US government and several US states — seeking to influence the proposal.
Remedy wouldn’t just affect Google
While Anthropic’s primary concern is that the proposed investment ban could hurt the value of the company, it is also worried that it could put it on the back foot against rivals.
“This would provide an unjustified windfall to Anthropic’s much larger competitors in the AI space —including OpenAI, Meta, and ironically Google itself, which (through its DeepMind subsidiary) markets an AI language model, Gemini, that directly competes with Anthropic’s Claude line of products,” the company said in the filing.
Abhivyakti Sengar, senior analyst at Everest Group, also shares Anthropic’s view on the effect of the proposed ban.
“Forcing Google to sell its stake in Anthropic throws a wrench into one of the AI industry’s most significant partnerships,” Sengar said, adding that while it might not cause an immediate loss of customers, any disruption to the performance or reliability of Anthropic’s models or its innovation speed could drive business towards its rivals.
Additionally, the AI startup tried to differentiate itself from rivals, such as OpenAI, by pointing out that unlike its competitors it is not owned or dominated by a single technology giant.
“While both Amazon and Google have invested in Anthropic, neither company exercises control over Anthropic. Google, in particular, owns a minority of the company and it has no voting rights, board seats, or even board observer rights,” it said in the filing.
Further, it said that Google doesn’t have any exclusive rights to any of its products despite investing nearly $3 billion since 2022 in two forms, direct equity purchase and purchases of debt instruments that can be converted into equity.
AI was “never part of the case”
Among the arguments that Anthropic makes against the proposed remedy, it notes that neither it nor Google’s other AI investments were ever a part of the case.
“Neither complaint alleged any anticompetitive conduct related to AI, and neither mentioned Anthropic. The only mention of AI in either complaint was a passing reference in the US Plaintiffs’ complaint to AI ‘voice assistants’ as one of several ‘access points’ through which mobile-device users could access Google’s search services,” it said in the filing.
In addition, it claimed that forcing Google to sell its stake could diminish Anthropic’s “ability to fund its operations and potentially depress its market value” as alternative investors deal in millions and not the billions Google invested.
“Forcing Google to sell its entire existing stake in Anthropic within a short period of time would flood the market, sating investors who would otherwise fund Anthropic in the future,” it said in the filing.
Analysts too warned that the future of Anthropic’s operations and its ability to retain customers will depend on the startup’s ability to secure investment if the proposal is adopted.
That, said Everest’s Sengar, “will determine whether it will be a setback or an opportunity for greater independence in the AI race.”
Forrester’s Dai agreed, adding that if Anthropic can quickly reassure its customers and demonstrate a clear plan for continuity and innovation, it may retain their trust and loyalty.