Month: May 2024

How to control employee access to iCloud services

As Apple device use spreads across the enterprise, Apple admins have grown accustomed to taking a tolerant stance on iCloud. But there are some controls they can apply to manage what employees can do with the online service.

Managed or personal Apple ID?

There is a difference between what restrictions can be applied on personal iCloud accounts and Managed Apple IDs. IT has far more control over the latter, but can apply some restrictions to personal devices as well, so long as they are managed by an MDM (Mobile Device Management) system of some kind.

If they are not protected by MDM, then no restrictions can be applied at all.

The big difference is that on personal devices assigned to an enterprise MDM account, IT can use a set of MDM restrictions to reduce access to some iCloud services. Managed Apple IDs have far more power, and can be used alongside personal Apple IDs on employee-owned devices, thanks to Apple’s User Enrollment tools. 

How to control iCloud access with managed devices

Managed Apple IDs cannot access certain iCloud services. Apple says this is due to “organizational focus and to protect user privacy.” The following services are not available, though in some cases the app might be visible:

  • Find My.
  • Health.
  • Home.
  • Journal.
  • Wallet (though employee badges in Wallet do function).
  • iCloud Mail, iCloud+ and iCloud Family Sharing.

You can also customize access to some other apps using Apple School Manager or Apple Business Manager, Apple Business Essentials, and/or your MDM tools. If your fleet runs the latest operating systems, you might also be able to add further refinements to help lock iCloud access down — for example, whether users can collaborate on Keynote files from within Business Manager. Most MDM services offer similar tools.

The idea is that by preventing people from using these services from within their work-related Managed Apple ID, the natural security of the devices is enhanced. It also means you can deploy your own digital employee experiences on the devices, including use of company email.

Of course, employees with devices that support both personal and managed Apple IDs also have access to all their own personal iCloud services, but not from within your deployed mobile work environment.

What about Personal Apple IDs?

Sensibly, Apple does not let IT restrict use of iCloud on personal devices; someone can access their own iCloud account from any Apple device. 

What Apple does allow is some control of iCloud access from devices enrolled in a company’s MDM system. Using Apple’s provided MDM restriction keys, companies that don’t use Managed Apple IDs can block access to specific iCloud services from a given device. This is a little like using a hammer to crack an egg, but you can block access to the following iCloud services: Address Book, Bookmarks, Calendar, Drive, Keychain, Mail, Notes, Reminders, Photo Library, and Private Relay.
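As a sketch of what those restriction keys look like in practice, the snippet below uses Python's `plistlib` to assemble a Restrictions (`com.apple.applicationaccess`) payload that switches off the services listed above. The `allowCloud*` key names come from Apple's MDM restrictions documentation; the payload identifier and UUID are placeholders, and in a real deployment your MDM server would deliver this inside a configuration profile rather than as a hand-built plist.

```python
import plistlib

# A sketch of an MDM Restrictions payload (com.apple.applicationaccess)
# blocking the iCloud services named in the article. The allowCloud*
# keys are Apple's documented restriction keys; the identifier and UUID
# below are placeholders for illustration only.
restrictions = {
    "PayloadType": "com.apple.applicationaccess",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.restrictions.icloud",  # hypothetical
    "PayloadUUID": "00000000-0000-0000-0000-000000000000",   # use a real UUID
    "allowCloudAddressBook": False,   # Contacts sync
    "allowCloudBookmarks": False,     # Safari bookmarks sync
    "allowCloudCalendar": False,
    "allowCloudDocumentSync": False,  # iCloud Drive
    "allowCloudKeychainSync": False,
    "allowCloudMail": False,
    "allowCloudNotes": False,
    "allowCloudReminders": False,
    "allowCloudPhotoLibrary": False,
    "allowCloudPrivateRelay": False,
}

# Serialize to the XML plist an MDM service would push to devices.
payload = plistlib.dumps(restrictions).decode()
print(payload)
```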

The downside is that by blocking access to these services you effectively limit what your staff can do with a device that is for all intents and purposes their own device, using their own Apple ID. Many workers would likely feel this to be an unwanted intrusion into their personal devices and see such moves as displaying a lack of trust. (IT admins could, of course, argue that they feel forced to deploy such restrictions to prevent exfiltration of valuable corporate or personal data.)

Which approach is best?

For me, if you do need to restrict access to iCloud services across your teams, it feels more appropriate to impose those restrictions via a Managed Apple ID. Doing so provides the maximum benefit — you can control and restrict device use that relates to your business, its services, and data, while also permitting personal use of that device.

The beauty of this approach is that work and personal data on a device is cryptographically separated and stored on different partitions, keeping work data secure and personal data private. While there is no such thing as a guarantee when it comes to device or data security, the combination delivers the best employee experience while enabling close control of any potential data/passcode exfiltration. Apple has also tied this experience up with Focus mode, making it as simple as a tap to switch between the work experience and personal use of the device.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Accenture chief software engineer: genAI is critical to the future of app development

Multinational professional services firm Accenture has already spent one-third of a planned $3 billion investment in generative AI (genAI) technology to drive internal productivity gains and more efficiently produce client products. The results have been nothing short of remarkable.

After using GitHub Copilot, 90% of developers said they felt more fulfilled in their jobs — and 95% said they enjoyed coding more with Copilot’s help. And by automating more routine tasks, it also made them 40% more productive.

The return on investment has been impressive in other ways. In the first six months of this fiscal year, Accenture has secured $1 billion in genAI bookings from clients, over half of which came in during Q2. The company, which employs about 150,000 engineers, is also aiming to double its AI workforce from 40,000 today to 80,000 employees in the future.

After recently completing a gig as Accenture’s head of technology for North America, Adam Burden has now taken on a number of other roles, including being the company’s chief software engineer and global lead for innovation, responsible for all lab-related R&D projects. In short, Burden is responsible for all of Accenture’s incubating business projects, its venture investments, and its innovation and advisory projects — and the workforce associated with them.

Burden spoke with Computerworld about the challenges of using genAI tools, along with some of its most unexpected benefits.


Adam Burden, Accenture’s chief software engineer

Accenture

How has genAI affected your job and the jobs of software engineers and others who work for you? “It’s already virally infecting us in many ways. It’s become part of people’s natural workflow processes where we have general AI systems that are built into teams. For example, [there’s] one called Amethyst that I use all the time to help me better locate Accenture knowledge sources and resources, ask questions about methodology — that one’s become pretty popular and is in the, I would say…, mainstream.

“And then, you know, when you look at the average individual’s job, [they’re] using generative AI now to help…write content. They’ve embraced a product in our marketing department called Writer, which also happens to be a ventures investment of ours, as well. And it’s really become kind of a de facto standard to help people write a first draft of content, and help them actually write better.

“When you dive in on software engineering, I would say that true software engineers, coders, and developers are among the most impacted or positively impacted by general AI and how they do their work every day.

“Depending upon how you count the numbers, we have somewhere in the 150,000-to-200,000 range of software engineers. That group uses [Copilot] in a lot of different ways today. Of course, we also have our own internal tools that we use, which are now fully embedded with generative AI. I have a client project team, for example, it has 1,600 people on it that are all using generative AI every day and…they actually deliver their projects as well.”

What are your software engineers using generative AI for primarily? “It’s interesting. It’s beyond the software development. There are a lot of things that make people nervous about the software development piece itself and the coding. [AI] is good for inspiration, but you’re going to want to check exactly what’s generated really, really well.

“On the other hand, there are the other pieces of software engineering, which, quite frankly, is very much a Pareto principle thing: like 80% of the work that you do has nothing to do with writing the code.

“I would tell you that our software engineers — and I’ve seen some of the data, too — say the code that they actually get out of Copilot and other [genAI tools] is 70% to 80% usable — in some cases, even higher. And, they check it very thoroughly before they use it. This is primarily, I would say, more on internal projects than it is for clients. But they tell us that they’re 40% to 50% more productive with generative AI in the things that they do.”

What’s been your own experience with using AI? “I’ve used it as the chief software engineer for various purposes. For example, I took what is basically an ecommerce application — this is an open source one called SimplCommerce (you can get it on GitHub). The AI basically takes and reads in all of the source code, and we’re talking a couple hundred thousand lines of C# code. And the goal was to discover if genAI could help us better maintain applications. What I found was its ability to help me more rapidly take over an application that I wasn’t familiar with was remarkable. I asked it to help me try and find a bug in the code, and it found it right away. But that wasn’t the really cool thing.

“The really cool thing was that I do a lot of pre-engineering work, among other things, and I wondered what it would be like if I asked [the genAI] to do something new. So, I was actually demoing [Copilot] to people in a conference room, and I said ‘OK, I’m going to do the thing that you should never do when you’re showing a demo: I’m going to ask you guys to tell me what you want the demo to do.’ I said, ‘Somebody tell me a feature enhancement that this ecommerce application doesn’t currently do.’ This one person raised their hand and said, ‘I want you to add a wish list feature.’ I actually didn’t know how it was going to do that, but I started thinking to myself, I know what’s in a wish list; you want to be able to add things to it. You want to be able to delete stuff from it, and that type of thing. You know what it did that really shocked me? It started putting stuff in the user stories I hadn’t thought about. And at the end of the day, it actually built a better product than I would have standalone…. Because it was making suggestions, like: ‘You need to add a feature to post your wish list on social media so that you can get more presence.’ I thought that was actually a really good idea.

“So, my point here is that I think that these tools will actually make us better. They give me a bit of superpowers to a degree to actually be a better software engineer. And this is a microcosm of the experience that our people are now having in the space.”

What are the main reasons you’re using genAI? To assist with code generation? Update software? Create new apps? Create user stories? “I’d say that the main purposes we’re using it for today are to help us with the user stories as well as the post code-generation piece. We don’t entirely turn it loose on the code generation piece because it’s not quite ready for that yet. But I’d say the pre-software development and post-software development parts of writing code are definitely a big piece of it. But look, in doing that, I’m tackling the 80% of the work that’s out there and I’m getting a ton of benefit as a result.”

Why don’t you fully trust genAI yet? “The primary concern that people have is around the security aspects of it: what was the model that you’re building from actually trained on? So, what we’ve done is we’ve started to build our own small language models that have a very narrow code base. So if you’re using a public model, you just don’t know what kind of provenance is in it. Where we are using some public code generation models, or even Microsoft Copilot and others, we have a very prescriptive process of security reviews and other guardrails for when we do actually generate that code from it. I think that’ll get better over time.

“As you get more enterprise-ready type software development engines, I think some tools like the one we’ve seen recently from folks like Devin, and there’s another one from Poolside and others, that they’re going to have more closed software engineering libraries that you’ll have more trust and faith in what they’re actually trained on.

“I can’t point to it [Copilot generated software] and say this particular code that it generated has a big security flaw in it and it was because it was taught against another library. We haven’t exactly seen that scenario yet, but we have seen some weaker code examples or even some bad algorithms that don’t work as well, which is why we continue to put the kind of scrutiny on it that we do today.

“We work in literally hundreds of programming languages because our clients’ legacy systems are written in those. And, if you want to use, you know, genAI for literally Pascal generation, Fortran, those types of things, it’s not quite as good at that as it is with more modern languages where there are more ample available software libraries, like Java for example.”

Do you trust genAI enough to allow it to be used to empower a citizen software workforce where they can create their own business applications? “I think that the no-code, low-code providers that are out there, like Mendix and others, have done a good job with that and they’re starting to combine genAI features into their product sets to actually help those citizen developers work faster.

“…I haven’t yet seen us take and hand genAI over to [the business side] and say, ‘Here’s a code generation engine and a prompt for someone that’s not trained as a software engineer to do that.’ Because, frankly, they would have trouble building software that meets your enterprise standards and that can follow the different architecture patterns and models that are important to your enterprise to fit into the business. Will those models get better and those tools get better? One hundred percent, completely. I see that future out there…, where the no-code, low-code toolset combines with what we’re seeing happen with generative AI and software engineering.”

What guardrails have you put in place to ensure AI doesn’t cause security, privacy or copyright infringement problems? “We definitely have checks in our software check-in process where the right attributions and other things are taking place, and we provide provenance for code that’s actually been written. So we can track and maintain where it comes from. And of course, we maintain all the security aspects of it. And like I said, we don’t really allow unfiltered code to be generated. We can allow [genAI] to be inspirational and to help us accelerate things, but in terms of just putting it directly into production systems or otherwise, that’s definitely not something that we’re fully engaged in at this point.

“We’re testing that, and I think we’ll eventually get there. I think it’s going to take some time for us to feel very comfortable about that, because you never know, for example, what open source software licenses you’re inheriting and what this is actually built on. So, you have to be very careful about it. So, until we get more private small language models, if you will, which we’re actually building now, I think that people are going to exercise a lot of caution — especially at the code development phase.

“But, it’s great for inspiration if you’re really struggling with solving a problem, like what’s the most efficient algorithm to do X, Y, and Z? It is a great way to actually get some of those things done. Recently, we were testing a quantum computer and we needed an algorithm for the traveling salesman problem — a classic quantum-type problem. But we wanted to be able to solve it on the classical architecture we used. We used [genAI] to generate that and it was awesome. It was perfect; it would generate something and we could see that it ran really efficiently. So those types of scenarios, I think, are fair game right now.”
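For a sense of what such a request yields, here is an illustrative sketch (not Accenture's actual generated code) of the kind of classical traveling-salesman solver a genAI assistant might draft: exhaustive search over all tours, fine for the small instances used in a demo, though real workloads would need a heuristic. The distance matrix is hypothetical.

```python
import itertools

# Brute-force traveling salesman: try every tour starting at city 0
# and return the shortest closed loop. O(n!) — demo-sized inputs only.
def shortest_tour(dist):
    """Return (length, order) of the shortest closed tour over all cities."""
    n = len(dist)
    best = (float("inf"), None)
    for perm in itertools.permutations(range(1, n)):
        order = (0,) + perm
        length = sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))
        best = min(best, (length, order))
    return best

# Hypothetical symmetric distance matrix for four cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
length, order = shortest_tour(dist)
print(length, order)  # → 18 (0, 1, 3, 2)
```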

What kinds of increases in productivity and efficiencies are you seeing? “Everybody’s mileage varies on this, but I would say for the demographics that are really embracing this for the pre-code development and post-code development, they’re seeing somewhere around 40% on average. But it depends on the legacy environment too and what it’s actually learned from. So, if there’s no documentation for the code or the application or something like that, of course your productivity is going to be a lot lower. But if it’s a relatively rich environment with a good track record and history, then it does increase productivity a great deal.

“I’ll give you one other thing though, that’s kind of surprising to us. We’ve used this [genAI] in SAP and for other packaged software too, and we’re actually finding some real benefit from using it with packaged systems. So, it’s not just the custom software engineering that is providing benefit to you, but also the packaged systems, too. Is that 40%? Generally not. It’s a little bit lower than that. But it’s definitely giving us a boost where we’re able to apply it.”

When you say you use genAI in “packaged software,” what does that mean? “Oftentimes, like with SAP, it’s not all that different than software engineering. Sometimes, you’re doing a lot of work besides the configuration of the SAP system. You’re doing things like KDDs, which are key decision documents. You still create test scripts and other things. They’re using it for that type of thing and seeing a lot of benefit.”

How have you gone about educating your workforce on AI to ensure it’s being used safely and responsibly? “Massive steps. There are two different groups that we’re trying to tackle here, right? There’s the ones that will use it, right, and the ones that will create AI. We’ve made some commitments to double our AI workforce from 40,000 to 80,000.

“We made a $3 billion investment in AI. That’s around people who will build these systems. And then for all the people that would use them, so my software engineers and others, this is a huge initiative that we have right now. We actually have a system internally to get people more conversational with genAI solutions. We call it TQ, or your technology quotient, and we’ve had hundreds of thousands of employees take the TQ training class on generative AI. We have many, many others now that are also fully engaged in deeper-dive classes around how to use generative AI and the different systems it’s running on. So, it’s a massive effort at Accenture to reskill our workforce. We say this a lot: we think that you need to make more investment in the people than in the technology.

“There is no AI workforce to go out and hire from. It doesn’t exist out there. So you have to create your own, and for enterprises we absolutely tell them that this is something that they’ve got to focus and place a huge amount of attention on.”

How do you get your message about training needs out and how do you get employees to engage in that training? “It’s a top down thing for us. So, our CEO has made it a huge priority for our business to be ready for the era of genAI. It’s a key pillar of the way that we’ll approach delivering services to clients in the future. And so it’s actually embedded in a lot of our training materials now. But you also hear it from top down — the messaging from virtually all of our leadership channels — that taking these courses is a priority for our employees. And, of course we have gamification and other things that help us sort of ensure that we’re getting the right penetration across the organization to do this.

“There’s lots of different ways to tackle that. But we like to believe, and we find, that our workforce is usually pretty eager to reinvent themselves regularly. And they’re embracing it pretty readily. I think for other clients or other circumstances, they need help in different types of solutions to incent their workforce to kind of go through this process of learning this. And it’s big. If your job is going to change by having an AI agent work with you as a customer care professional or other things, that is a significant adjustment to the way you currently perform your job.”

Do you feel it’s necessary to clean up your data repositories before rolling out an AI solution? Or can you work on that as you pilot these solutions? “So if you’re talking about using it as a tool, like how [some workers] use it for Writer and for other things, you can use those things as you go. Now, if you’re trying to create an enterprise knowledge base, and you gradually clean it up and eventually get it ready to load into there, and you’re going to use it for Q&A-type responses and other things, then I think you’ve got to have a clean data foundation first. It is definitely one of the principles that we’ve observed. If you haven’t invested in building that, it’s definitely a prerequisite for you to embrace using generative AI on more of an enterprise scale.

“You could do some proof of concepts and pilots for sure, but if you want to reinvent an entire value chain, for example inside of finance or even in your supply chain part of the business, you’ll find yourself really needing to go and make that investment.

“The truth is a lot of clients have actually gone through this. They’ve invested a lot in the last couple of years in cleaning their data and data lakes and having better data architecture and data foundations. So, their level of readiness is good. It doesn’t mean that there aren’t others that are behind. But I do tell people who are behind, the good news is you can actually use genAI to help you cleanse your data. And that wasn’t a tool that was available a few years ago for people that were doing it in a less efficient way. So maybe you’re actually going to end up cleansing your data and improving your environment in a faster, more efficient, and perhaps even better way than your predecessors did. There’s the glass-half-full way of looking at it.”

How were you using genAI to clean your data? “You can use generative AI to actually read it and take large volumes of data, for example, and help you identify duplicates and help you identify incorrectly formatted content, such as addresses and other things. And, it’ll actually provide you recommendations for what the cleaned data would look like.

“And if you use your prompt engineers in the right way to where they’re structuring it to let it know what good data looks like, you know, and this is what I expect the output to look like, they can actually output it in a comma-delimited format so you can upload it right into another data model with nice, cleansed data. We also find that it’s not bad for enhancing data enrichment as well. If you want to, for example, move to a nine-digit zip code for everybody, it can pretty easily go in and just apply that to all of your data as well without any fancy tools or other third-party products required.

“Make no mistake, genAI is definitely a great tool to help you with data cleansing and building a better data foundation.”
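The mechanical half of the workflow Burden describes — flagging duplicates and malformed entries before (or after) asking a model for suggested corrections — can be sketched in a few lines. The record fields and sample data below are hypothetical:

```python
import re

# Audit a list of address records for the two problems mentioned above:
# duplicates (after light normalization) and malformed US zip codes.
# Flagged records would then be handed to a genAI model for suggested fixes.
def audit_records(records):
    """Return (duplicates, bad_zips) found in a list of address dicts."""
    seen, duplicates, bad_zips = set(), [], []
    zip_ok = re.compile(r"^\d{5}(-\d{4})?$")  # five-digit or ZIP+4 format
    for rec in records:
        key = (rec["name"].strip().lower(), rec["zip"])
        if key in seen:
            duplicates.append(rec)
        seen.add(key)
        if not zip_ok.match(rec["zip"]):
            bad_zips.append(rec)
    return duplicates, bad_zips

records = [
    {"name": "Acme Corp", "zip": "10001"},
    {"name": "acme corp ", "zip": "10001"},  # duplicate after normalization
    {"name": "Globex", "zip": "9021"},       # malformed zip code
]
dups, bad = audit_records(records)
print(len(dups), len(bad))  # → 1 1
```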

Will AI be a job killer? “I think it’s going to create different jobs. I look at it this way: if you go back to the 1940s, the biggest occupation for women was as a switchboard operator in telephone exchanges. And there’s not a lot of switchboard operators left. But we have lots of employed people, and I think history is full of examples like this. We’re going to see different jobs.

“Of course, prompt engineer is one of the ones that’s most commonly cited, but there’s lots of other things that will be there. The way that we look at this is that it’s going to automate a lot of the ordinary and allow people to be more extraordinary. We believe that most people will benefit from having augmentation of their capabilities and they’ll get some superpowers out of it as well.

“So, does it kill jobs? I don’t think so. It’s going to make the jobs different and better and more fulfilling in the end. For me as a software engineer, I now get to work on much harder problems rather than the simpler, more boring and ordinary things I’d typically have to do. And we think that’s a great outcome for people, and we think that’s a great outcome for business as well.”

Accenture chief software engineer: genAI is critical to the future of app development

Multinational professional services firm Accenture has already spent one-third of a planned $3 billion investment in generative AI (genAI) technology to help it reach internal productivity gains and more efficiently produce client products. The results have been nothing short of remarkable.

After using GitHub Copilot, 90% of developers said they felt more fulfilled with their job — and 95% said they enjoyed coding more with Copilot’s help. And by reducing more routine tasks, it also made them 40% more productive.

The return on investment has been impressive in other ways. In the first six months of this fiscal year, Accenture has secured $1 billion in genAI bookings from clients, over half of which came in during Q2. The company, which employes about 150,000 engineers, is also aiming to double its AI workforce from 40,000 today to 80,000 employees in the future.

After recently completing a gig as Accenture’s head of technology for North America, Adam Burden has now taken on a number of other roles, including being the company’s chief software engineer and global lead for innovation, responsible for all lab-related R&D projects. In short, Burden is responsible for all of Accenture’s incubating business projects, its venture investments, as well as innovation, advisory projects — and the workforce associated with them.

Burden spoke with Computerworld about the challenges of using genAI tools, along with some of its most unexpected benefits.


Adam Burden, Accenture’s chief software engineer

Accenture

How has genAI affected your job and the jobs of software engineers and others who work for you? “It’s already virally infecting us in many ways. It’s become part of people’s natural workflow processes where we have genAI systems that are built into teams. For example, [there’s] one called Amethyst that I use all the time to help me better locate Accenture knowledge sources and resources, ask questions about methodology — that one’s become pretty popular and is in the, I would say…, mainstream.

“And then, you know, when you look at the average individual’s job, [they’re] using generative AI now to help…write content. They’ve embraced a product in our marketing department called Writer, which also happens to be a ventures investment of ours, as well. And it’s really become kind of a de facto standard to help people write a first draft of content, and help them actually write better.

“When you dive in on software engineering, I would say that true software engineers, coders, and developers are among the most impacted, or positively impacted, by genAI and how they do their work every day.

“Depending upon how you count the numbers, we have somewhere in the 150,000-to-200,000 range of software engineers. That group uses [Copilot] in a lot of different ways today. Of course, we also have our own internal tools that we use, which are now fully embedded with generative AI. I have a client project team, for example, it has 1,600 people on it that are all using generative AI every day and…they actually deliver their projects as well.”

What are your software engineers using generative AI for primarily? “It’s interesting. It’s beyond the software development. There’s a lot of things that make people nervous about the software development piece itself and the coding. [AI] is good for inspiration, but you’re going to want to check exactly what’s generated really, really well.

“On the other hand, there are the other pieces of software engineering, which quite frankly are very much a Pareto principle thing: like 80% of the work that you do has nothing to do with writing the code.

“I would tell you that our software engineers — and I’ve seen some of the data, too — say the code that they actually get out of Copilot and other [genAI tools] is 70% to 80% usable — in some cases, even higher. And, they check it very thoroughly before they use it. This is primarily, I would say, more on internal projects than it is for clients. But they tell us that they’re 40% to 50% more productive with generative AI in the things that they do.”

What’s been your own experience with using AI? “I’ve used it as the chief software engineer for various purposes. For example, I took what is basically an ecommerce application — this is an open source one called SimplCommerce (it’s one you can get on GitHub). The AI basically takes and reads in all of the source code, and we’re talking a couple hundred thousand lines of C# code. And the goal was to discover if genAI could help us better maintain applications. What I found was its ability to help me more rapidly take over an application that I wasn’t familiar with was remarkable. I asked it to help me try and find a bug in the code, and it found it right away. But that wasn’t the really cool thing.

“The really cool thing was that I do a lot of pre-engineering work, among other things, and I wondered what it would be like if I asked [the genAI] to do something new. So, I was actually demoing [Copilot] to people in a conference room, and I said ‘OK, I’m going to do the thing that you should never do when you’re showing a demo: I’m going to ask you guys to tell me what you want the demo to do.’ I said, ‘Somebody tell me a feature enhancement that this ecommerce application doesn’t currently do.’ This one person raised their hand and said, ‘I want you to add a wish list feature.’ I actually didn’t know how it was going to do that, but I started thinking to myself, I know what’s in a wish list; you want to be able to add things to it. You want to be able to delete stuff from it, and that type of thing. You know what it did that really shocked me? It started putting stuff in the user stories I hadn’t thought about. And at the end of the day, it actually built a better product than I would have standalone…. Because it was making suggestions, like: ‘You need to add a feature to post your wish list on social media so that you can get more presence.’ I thought that was actually a really good idea.

“So, my point here is that I think that these tools will actually make us better. They give me a bit of superpowers to a degree to actually be a better software engineer. And this is a microcosm of the experience that our people are now having in the space.”

What are the main reasons you’re using genAI? To assist with code generation? Update software? Create new apps? Create user stories? “I’d say that the main purposes we’re using it for today are to help us with the user stories as well as the post-code-generation piece. We don’t entirely turn it loose on the code generation piece because it’s not quite ready for that yet. But I’d say the pre-software-development and post-software-development parts of writing code are definitely a big piece of it. But look, in doing that, I’m tackling the 80% of the work that’s out there and I’m getting a ton of benefit as a result.”

Why don’t you fully trust genAI yet? “The primary concern that people have is around the security aspects of it: what was the model that you’re building from actually trained on? So, what we’ve done is we’ve started to build our own small language models that have a very narrow code base. If you’re using a public model, you just don’t know what kind of provenance is in it. Where we are using some public code generation models, or even Microsoft Copilot and others, we have a very prescriptive process of security reviews and other guardrails for when we do actually generate code from them. I think that’ll get better over time.

“As you get more enterprise-ready software development engines — I think some tools like the one we’ve seen recently from folks like Devin, and there’s another one from Poolside and others — they’re going to have more closed software engineering libraries, so you’ll have more trust and faith in what they’re actually trained on.

“I can’t point to it [Copilot generated software] and say this particular code that it generated has a big security flaw in it and it was because it was taught against another library. We haven’t exactly seen that scenario yet, but we have seen some weaker code examples or even some bad algorithms that don’t work as well, which is why we continue to put the kind of scrutiny on it that we do today.

“We work in literally hundreds of programming languages because our clients’ legacy systems are written in those. And, if you want to use, you know, genAI for literally Pascal generation, Fortran, those types of things, it’s not quite as good at that as it is with more modern languages where there are ample available software libraries, like Java for example.”

Do you trust genAI enough to allow it to be used to empower a citizen software workforce where they can create their own business applications? “I think that the no-code, low-code providers that are out there, like Mendix and others, have done a good job with that and they’re starting to combine genAI features into their product sets to actually help those citizen developers work faster.

“…I haven’t yet seen us take and hand genAI over to [the business side] and say, ‘Here’s a code generation engine and a prompt for someone that’s not trained as a software engineer.’ Because, frankly, they would have trouble building software that meets your enterprise standards and can follow the different architecture patterns and models that are important to your enterprise to fit into the business. Will those models get better and those tools get better? One hundred percent, completely. I see that future out there…, where the no-code, low-code kind of toolset converges with what we’re seeing happen with generative AI and software engineering.”

What guardrails have you put in place to ensure AI doesn’t cause security, privacy or copyright infringement problems? “We definitely have checks in our software check-in process where the right attributions and other things are taking place and we provide provenance for code that’s actually been written. So we can track and maintain where it comes from. And of course, we maintain all the security aspects of it. And like I said, we don’t really allow unfiltered code to be generated. We can allow [genAI] to be inspirational and to help us accelerate things, but in terms of just putting it directly into production systems or otherwise, that’s definitely not something that we’re fully engaged in at this point.
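A check-in gate along the lines Burden describes could be sketched as below. The `AI-Assisted:` provenance marker and the flagged-file convention are hypothetical illustrations of the idea, not Accenture’s actual tooling:

```python
# Hypothetical check-in gate: any file declared to contain generated
# code must carry an "AI-Assisted:" provenance comment naming the tool.
import re

PROVENANCE = re.compile(r"AI-Assisted:\s*(\S+)")

def check_provenance(path, text, flagged_paths):
    """Return (ok, message) for one file in the change set.

    flagged_paths is the set of files the author declared as
    containing AI-generated code.
    """
    if path not in flagged_paths:
        return True, "no generated code declared"
    match = PROVENANCE.search(text)
    if not match:
        return False, f"{path}: generated code lacks provenance tag"
    return True, f"{path}: generated by {match.group(1)}"
```

A CI step would run this over every changed file and block the merge on any `False` result, giving the kind of traceable attribution the interview mentions.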

“We’re testing that, and I think we’ll eventually get there. I think it’s going to take some time for us to feel very comfortable about that, because you never know, for example, what open source software licenses you’re inheriting and what this is actually built on. So, you have to be very careful about it. So, until we get more private small language models, if you will, which we’re actually building now, I think people are going to exercise a lot of caution, especially at the code development phase.

“But, it’s great for inspiration if you’re really struggling with solving a problem, like what’s the most efficient algorithm to do X, Y, and Z? It is a great way to actually get some of those things done. Recently, we were testing a quantum computer and we needed an algorithm for the traveling salesman problem — a very classic quantum-type problem. But we wanted to be able to solve it on a classical architecture. We used [genAI] to generate that and it was awesome. It was perfect; it generated something and we could see that it ran really efficiently. So those types of scenarios, I think, are fair game right now.”

What kinds of increases in productivity and efficiencies are you seeing? “Everybody’s mileage varies on this, but I would say for the demographics that are really embracing this for the pre-code development and post-code development, they’re seeing somewhere around 40% on average. But it depends on the legacy environment too and what it’s actually learned from. So, if there’s no documentation for the code or the application or something like that, of course your productivity is going to be a lot lower. But if it’s a relatively rich environment with a good track record and history, then it does increase productivity a great deal.

“I’ll give you one other thing though, that’s kind of surprising to us. We’ve used this [genAI] in SAP and for other packaged software too, and we’re actually finding some real benefit from using it with packaged systems. So, it’s not just the custom software engineering that is providing benefit to you, but also packaged systems, too. Is that 40%? Generally not. It’s a little bit lower than that. But it’s definitely giving us a boost where we’re able to apply it.”

When you say you use genAI in “packaged software,” what does that mean? “Oftentimes, like with SAP, it’s not all that different than software engineering. Sometimes, you’re doing a lot of work besides the configuration of the SAP system. You’re doing things like KDDs, which are key decision documents. You still create test scripts and other things. They’re using it for that type of thing and seeing a lot of benefit.”

How have you gone about educating your workforce on AI to ensure it’s being used safely and responsibly? “Massive steps. There’s two different groups that we’re trying to tackle here, right? There’s the ones that will use it and the ones that will create AI. We’ve made some commitments to double our AI workforce from 40,000 to 80,000.

“We made a $3 billion investment in AI. That’s around people who will build these systems. And then for all the people that will use them, so my software engineers and others, this is a huge initiative that we have right now. We actually have a system internally to get people more conversational with genAI solutions. We call it TQ, or your technology quotient, and we’ve had hundreds of thousands of employees take the TQ training class on generative AI. We have many, many others now that are also fully engaged in deeper-dive classes around how to use generative AI and the different systems it’s running on. So, it’s a massive effort at Accenture to reskill our workforce. We say this a lot: we think that you need to make more investment in the people than in the technology.

“There is no AI workforce to go out and hire from. It doesn’t exist out there. So you have to create your own, and for enterprises we absolutely tell them that this is something that they’ve got to focus and place a huge amount of attention on.”

How do you get your message about training needs out and how do you get employees to engage in that training? “It’s a top down thing for us. So, our CEO has made it a huge priority for our business to be ready for the era of genAI. It’s a key pillar of the way that we’ll approach delivering services to clients in the future. And so it’s actually embedded in a lot of our training materials now. But you also hear it from top down — the messaging from virtually all of our leadership channels — that taking these courses is a priority for our employees. And, of course we have gamification and other things that help us sort of ensure that we’re getting the right penetration across the organization to do this.

“There’s lots of different ways to tackle that. But we like to believe, and we find, that our workforce is usually pretty eager to reinvent themselves regularly. And they’re embracing it pretty readily. I think for other clients or other circumstances, they need help in different types of solutions to incent their workforce to kind of go through this process of learning this. And it’s big. If your job is going to change by having an AI agent work with you as a customer care professional or other things, that is a significant adjustment to the way you currently perform your job.”

Do you feel it’s necessary to clean up your data repositories before rolling out an AI solution? Or can you work on that as you pilot these solutions? “So if you’re talking about using it as a tool, like how [some workers] use it with Writer and for other things, you can use those things as you go. Now, if you’re trying to create an enterprise knowledge base that you gradually clean up and eventually load in there, and you’re going to use it for Q&A-type responses and other things, then I think you’ve got to have a clean data foundation first. It is definitely one of the principles that we’ve observed. If you haven’t invested in building that, it’s definitely a prerequisite for embracing generative AI on more of an enterprise scale.

“You could do some proof of concepts and pilots for sure, but if you want to reinvent an entire value chain, for example inside of finance or even in your supply chain part of the business, you’ll find yourself really needing to go and make that investment.

“The truth is a lot of clients have actually gone through this. They’ve invested a lot in the last couple of years in cleaning their data and data lakes and building better data architecture and data foundations. So, their level of readiness is good. It doesn’t mean that there aren’t others that are behind. But I do tell people who are behind, the good news is you can actually use genAI to help you cleanse your data. And that wasn’t a tool that was available a few years ago for people who were doing it in a less efficient way. So maybe you’re actually going to end up cleansing your data and improving your environment in a faster, more efficient, and perhaps even better way than your predecessors did. There’s the glass-half-full way of looking at it.”

How were you using genAI to clean your data? “You can use generative AI to actually read large volumes of data, for example, and help you identify duplicates and incorrectly formatted content, such as addresses and other things. And, it’ll actually provide you recommendations for what the cleaned data would look like.

“And if you use your prompt engineers in the right way, where they’re structuring it to let it know what good data looks like and what you expect the output to look like, they can actually output it in a comma-delimited format so you can upload it right into another data model with nice, cleansed data. We also find that it’s not bad for data enrichment as well. If you want to, for example, move to a nine-digit zip code for everybody, it can pretty easily go in and just apply that to all of your data without any fancy tools or other third-party products required.
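The workflow described above — showing the model what clean records look like and asking for comma-delimited output — can be sketched roughly as follows. The `call_llm` parameter and the prompt wording are hypothetical placeholders, not any specific vendor API:

```python
# Sketch of LLM-assisted data cleansing via a few-shot prompt.
# call_llm is a stand-in for any chat-completion function that
# takes a prompt string and returns the model's text reply.

def build_cleansing_prompt(rows):
    """Few-shot prompt: show the model one clean example, then ask it
    to normalize raw rows into comma-delimited form."""
    example = (
        "Raw:   jane DOE , 12 main st., springfield  il 62704\n"
        "Clean: Jane Doe,12 Main St,Springfield,IL,62704-0000\n"
    )
    raw_block = "\n".join(rows)
    return (
        "You normalize customer records. Follow the example exactly.\n"
        f"{example}\n"
        "Output one comma-delimited line per record, flag duplicates "
        "with DUPLICATE, and extend ZIP codes to nine digits.\n"
        f"Records:\n{raw_block}"
    )

def cleanse(rows, call_llm):
    """Return one cleaned line per input record."""
    return call_llm(build_cleansing_prompt(rows)).splitlines()
```

In practice the model’s output would still go through validation (schema checks, duplicate reconciliation) before being loaded into the target data model, in line with the scrutiny the interview emphasizes.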

“Make no mistake, genAI is definitely a great tool to help you with data cleansing and building a better data foundation.”

Will AI be a job killer? “I think it’s going to create different jobs. I look at it this way: if you go back to the 1940s, the biggest occupation for women was switchboard operator in telephone exchanges. And there’s not a lot of switchboard operators left. But we have lots of employed people, and I think history is full of examples like that. We’re going to see different jobs.

“Of course, prompt engineer is one of the ones that’s most commonly cited, but there’s lots of other things that will be there. The way that we look at this is it’s going to automate a lot of ordinary and allow people to be more extraordinary. We believe that most people will benefit from having augmentation of their capabilities and they’ll get some superpowers out of it as well.

“So, does it kill jobs? I don’t think so. It’s going to make the jobs different and better and more fulfilling in the end. For me as a software engineer, I now get to work on much harder problems rather than the simpler, more boring and ordinary things I’d typically have to do. And we think that’s a great outcome for people, and we think that’s a great outcome for business as well.”

Huawei’s industry-leading ransomware solution first to be Tolly Group-certified

In an increasingly digital economy, ransomware might seem unstoppable without security measures in place to proactively detect and prevent it—especially with the relentless cycle of new phishing scams, malware attacks, and cybersecurity threats.

By 2025, Gartner predicts that at least 75% of IT organizations will have experienced at least one ransomware attack; Cybersecurity Ventures echoes this by expecting an average of one ransomware attack on a business every 11 seconds. The dangers of the digital world are relentless; without protection, there is no such thing as secure data. The reality is that ransomware is hard to detect, has long incubation periods, and can have a costly impact on services and business for long periods.

Breaking down anti-ransomware requirements

But what makes a good solution? It should have the right defense systems on the network side, the host side, and the storage side to best optimize data resilience. Protections for these different levels have different priorities, not unlike the relationship between door access controls and safes.

A good anti-ransomware solution needs to be multilayered, seamlessly integrated, and have clean backup data for optimal recovery. And most of all, given ransomware’s adaptable, ever-shifting nature, good protection must be tailored to the situation at hand, and be able to utilize a range of flexible portfolio solutions. 

Meet Huawei’s world-class ransomware solution

Leading the industry is Huawei’s Multilayer Ransomware Protection (MRP) Solution. It offers two lines of defense with six layers of protection and is more accurate, more comprehensive, and more lightweight than its predecessor.

The MRP Solution complies with the National Institute of Standards and Technology (NIST) cybersecurity framework for enterprise data resilience, having passed all 21 test cases including detection, blocking, protection, and recovery. In 2024, Huawei’s MRP Solution was the first protection solution of its kind certified by Tolly Group at MWC Barcelona for detecting 100% of ransomware through network-storage collaboration. 


Huawei’s MRP Solution is a six-layer system comprising two layers dedicated to networking and four to storage: detection and analysis, secure snapshots, backup recovery, and isolation zone protection. Its backup protection uses in-depth parsing to ensure clean data for a strong recovery, with up to 172 TB/h recovery bandwidth. Industry-leading implementation of flash storage and multi-stream backup architecture means significantly faster service recovery and minimal interruptions after detecting malicious encryption. The entire solution has a plug-and-play data card for added flexibility and versatility.

Strong storage protection ensures a strong recovery

One of the key components of a well-integrated ransomware protection solution is a robust storage protection system with multiple fail-safes. Where the network is the first layer of security against ransomware, storage is the last line of defense to protect data.

Here, Huawei employs a 3-2-1-1 strategy: three copies of important data, at least two types of storage media, one offsite copy, and one extra copy in the air-gapped isolation zone. The last clean data copy in isolation is used for quick recovery from attacks. Better network-storage collaboration means better proactive defense, including using honeyfiles to attract attackers, and improved recovery speed.
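The 3-2-1-1 rule described above is easy to check mechanically against a backup inventory. The sketch below is a generic illustration of the rule itself — the data model is hypothetical and not part of Huawei’s MRP product:

```python
# Illustrative check of the 3-2-1-1 rule: at least 3 copies of the
# data, on at least 2 media types, with at least 1 offsite copy and
# at least 1 copy in an air-gapped isolation zone.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "flash", "tape", "object-store"
    offsite: bool     # stored at a different site?
    air_gapped: bool  # kept in an isolation zone?

def satisfies_3211(copies):
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.air_gapped for c in copies)
    )
```

For example, a plan with a local flash copy, an offsite tape copy, and an offsite air-gapped object-store copy passes; dropping any one of those copies fails the check.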

With the growing sophistication of ransomware, businesses can’t afford to underestimate the immediate impact it has on their services—from business outage to data loss—as well as long-term ones such as eroded customer trust and regulatory fines. They must prioritize data resilience and security in an age where data is the backbone of the economy. Huawei offers a world-class ransomware solution that paves the way in proactively defending against changing threats, deploys comprehensive storage protection strategies, and spares no effort in the war against malicious actors.

Learn more about how Huawei’s MRP Solution can work for you here.


Zoom brings ‘post-quantum’ end-to-end encryption to video meetings

Zoom is adding “post-quantum” end-to-end encryption to its video and voice meeting software. The aim is to protect communication data sent between its apps once quantum computers are sufficiently powerful to compromise existing encryption methods. 

Right now, it’s difficult for current or “classical” computers to break the modern encryption algorithms that protect internet communications — that means anything from text messages to online banking or shopping. But security experts are concerned cybercriminals can collect encrypted data now and decrypt it once quantum computers become sufficiently capable, a strategy referred to as “harvest now, decrypt later.”

To secure communications on its meetings apps in the long term, Zoom on Tuesday said it will enhance existing end-to-end encryption (E2EE) capabilities available in its Zoom Workplace apps with “post-quantum cryptography.” It’s the first unified communication software vendor to do so, Zoom claimed in a blog post.

For Zoom, this means the use of Kyber 768, a key encapsulation mechanism (KEM) algorithm that’s being standardized by the National Institute of Standards and Technology (NIST). NIST has been working to identify a set of “post-quantum” algorithms that can withstand attacks from future quantum computers. 
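Whatever the underlying math, a KEM such as Kyber768 exposes the same three-operation interface: key generation, encapsulation (the sender derives a shared secret plus a ciphertext from the public key alone), and decapsulation (the receiver recovers that same secret). The toy sketch below, built only on hashing and randomness, illustrates that data flow; it is not Kyber, offers no security whatsoever, and real code would call a vetted cryptographic library:

```python
# Toy illustration of the KEM interface (keygen/encaps/decaps) that
# algorithms like Kyber768 provide. NOT cryptography: the ciphertext
# here reveals everything; it only demonstrates the data flow.
import hashlib
import os

def keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk" + sk).digest()  # toy public-key derivation
    return pk, sk

def encaps(pk):
    """Sender: derive a shared secret and a ciphertext from pk alone."""
    eph = os.urandom(32)
    shared = hashlib.sha256(pk + eph).digest()
    ciphertext = eph  # toy: the ephemeral value is sent as-is
    return ciphertext, shared

def decaps(sk, ciphertext):
    """Receiver: recover the same shared secret using sk."""
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ciphertext).digest()
```

After one round trip, both sides hold the same 32-byte secret, which would then seed the symmetric keys protecting the meeting traffic.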

Although quantum computers are adept at solving complex mathematical equations, meaning they could decrypt classical algorithms, existing systems are small scale and plagued with high error rates, said Heather West, research manager for quantum computing at IDC’s Infrastructure Systems, Platforms, and Technology Group.

As a result, modern classical algorithms are not yet at risk; that could change as quantum computing advances, enabling systems that can run Shor’s algorithm — a quantum algorithm that, according to one definition, is able to “efficiently factorize large composite numbers” and therefore reduce the time taken to break classical encryption.
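The reduction behind Shor’s algorithm can be shown on a toy number: the quantum speedup applies only to finding the period r of a^x mod N, after which turning r into factors is ordinary arithmetic. In the sketch below, `find_period` is deliberately brute force, standing in for the step a quantum computer would accelerate:

```python
# Shor's factoring reduction: quantum hardware finds the period r of
# a^x mod N quickly; the classical post-processing turns r into factors.
# find_period here is brute force -- the part a quantum computer replaces.
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (requires gcd(a, n) == 1)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Try to find a nontrivial factor of n using base a."""
    if gcd(a, n) != 1:
        return gcd(a, n)   # lucky guess already shares a factor
    r = find_period(a, n)
    if r % 2:
        return None        # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None        # trivial square root: retry with another a
    return gcd(y - 1, n)

# Example: for n = 15, a = 7 the period is 4, which yields the factor 3.
```

Because `find_period` takes exponential time classically, this only works for small n; Shor’s contribution is doing that one step in polynomial time on a quantum computer.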

“Due to this advantage, there is concern that some entities — specifically state-sponsored actors — are breaching and stealing data with a long-shelf life value now (think financial, government, DOD, etc.) with the intent of using future quantum systems to decrypt it and use it later,” said West.

Several initiatives are now under way to identify and develop post-quantum cryptographic algorithms organizations can deploy to become quantum-resilient. For example, NIST launched a global initiative in 2016 and is expected to release its final recommendations later this year. In 2022, US President Joseph R. Biden Jr. issued two security memorandums (NSM-8 and NSM-10) to provide government agencies with the guidance and timeframes to begin implementing post-quantum cryptography.  

As for Zoom’s post-quantum E2EE feature, West said the amount of information transferred via text messages and in virtual meetings “is a rather unexplored territory for post-quantum cryptography [PQC],” but is an important area of focus. “Compromised information using these technologies could lead to national security breaches, the accidental exposure of company trade secrets, and more,” she said. “Zoom has taken this opportunity to identify a current area of data security weakness and develop an industry disruptive PQC solution.”

Even so, West points to “severe limitations” in Zoom’s approach. For example, to be secure, all meeting participants are required to use the Zoom desktop or mobile app version 6.0.10 or higher. “So there is no guarantee that everyone will be using the most up-to-date version…,” she said.

In addition, using Zoom’s post-quantum encryption means participants lose access to some key features, such as cloud recording. “For PQC to be effective, not only must it be secure against potential quantum cyber security breaches, but it should also allow for the same performance and utility of the applications and infrastructure as if it weren’t being used. This doesn’t seem to be the case with Zoom’s implementation,” West said. 

In general, West said all businesses should be considering how to keep encrypted data safe in the future.

“Organizations should be taking this risk seriously,” she said. “There seems to be a misconception that if an organization is not investing in quantum computing there isn’t a need to invest in post-quantum cryptography.” 

Cyberattacks using quantum algorithms have the potential to affect all businesses and organizations, she said. Some understand the importance of post-quantum cryptography and are waiting for final standards from NIST to be released, but updating to post-quantum cryptography can be a “laborious process,” so organizations should get started now by inventorying and identifying at-risk data and infrastructure. 

“Partnering with a PQC vendor or consultant can help guide the transition. PQC vendors and consultants can also help to determine what solution is most suitable for the organization,” said West.

Zoom brings ‘post-quantum’ end-to-end encryption to video meetings

Zoom is adding “post-quantum” end-to-end encryption to its video and voice meeting software. The aim is to protect communication data sent between its apps once quantum computers are sufficiently powerful to compromise existing encryption methods.

Right now, it’s difficult for current or “classical” computers to break the modern encryption algorithms that protect internet communications — that means anything from text messages to online banking or shopping. But security experts are concerned cybercriminals can collect encrypted data now and decrypt it once quantum computers become sufficiently capable, a strategy referred to as “harvest now, decrypt later.”

To secure communications on its meetings apps in the long term, Zoom on Tuesday said it will enhance existing E2EE capabilities available in its Zoom Workplace apps with “post-quantum cryptography.” It’s the first unified communications software vendor to do so, Zoom claimed in a blog post.

For Zoom, this means the use of Kyber 768, a key encapsulation mechanism (KEM) algorithm that’s being standardized by the National Institute of Standards and Technology (NIST). NIST has been working to identify a set of “post-quantum” algorithms that can withstand attacks from future quantum computers. 
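For context, a KEM such as Kyber exposes three operations: key generation, encapsulation (the sender derives a shared secret plus a ciphertext from the recipient's public key), and decapsulation (the recipient recovers that same secret from the ciphertext). The Python sketch below mimics only that interface shape, using ordinary hashing and XOR; it is a toy with none of Kyber's lattice mathematics and no security whatsoever.

```python
import os
import hashlib

# Toy sketch of a KEM's three-operation interface (keygen, encapsulate,
# decapsulate). Illustrative only: NOT Kyber, NOT secure.

def keygen():
    sk = os.urandom(32)                      # secret key
    pk = hashlib.sha256(sk).digest()         # stand-in "public key"
    return pk, sk

def encapsulate(pk):
    seed = os.urandom(32)                    # fresh randomness per session
    ct = bytes(a ^ b for a, b in zip(seed, pk))   # "ciphertext" carrying the seed
    ss = hashlib.sha256(seed).digest()       # shared secret derived from the seed
    return ct, ss

def decapsulate(sk, ct):
    pk = hashlib.sha256(sk).digest()         # recompute the "public key"
    seed = bytes(a ^ b for a, b in zip(ct, pk))   # recover the seed
    return hashlib.sha256(seed).digest()     # same shared secret as the sender

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender      # both sides hold the same secret
```

In a real deployment, the shared secret both sides derive would then key a symmetric cipher protecting the meeting's media streams.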

Although quantum computers are adept at solving complex mathematical equations, meaning they could decrypt classical algorithms, existing systems are small scale and plagued with high error rates, said Heather West, research manager for quantum computing at IDC’s Infrastructure Systems, Platforms, and Technology Group.

As a result, modern classical algorithms are not yet at risk; that could change as quantum computing advances, enabling systems that can run Shor’s algorithm — a quantum algorithm that, according to one definition, is able to “efficiently factorize large composite numbers” and therefore reduce the time taken to break classical encryption.
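To see why factoring is the crux, here is a minimal classical skeleton of Shor's approach: pick a random base, find its multiplicative order by brute force (the step a quantum computer performs exponentially faster via period finding), and derive a factor from it. It only works for tiny numbers and is purely illustrative.

```python
from math import gcd
import random

def order(a, n):
    # Brute-force multiplicative order of a mod n; this is the step
    # Shor's algorithm accelerates with quantum period finding.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    # Classical skeleton of Shor's factoring for a small composite n.
    while True:
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g                      # lucky draw: a shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            f = gcd(pow(a, r // 2, n) - 1, n)
            if 1 < f < n:
                return f                  # nontrivial factor found; else retry

f = shor_classical(15)
print(f, 15 // f)  # e.g. 3 and 5
```

Breaking RSA this way requires factoring numbers thousands of bits long, which is exactly where the brute-force order search becomes hopeless classically and where a large quantum computer would change the picture.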

“Due to this advantage, there is concern that some entities — specifically state-sponsored actors — are breaching and stealing data with a long-shelf life value now (think financial, government, DOD, etc.) with the intent of using future quantum systems to decrypt it and use it later,” said West.

Several initiatives are now under way to identify and develop post-quantum cryptographic algorithms organizations can deploy to become quantum-resilient. For example, NIST launched a global initiative in 2016 and is expected to release its final recommendations later this year. In 2022, US President Joseph R. Biden Jr. issued two security memorandums (NSM-8 and NSM-10) to provide government agencies with the guidance and timeframes to begin implementing post-quantum cryptography.

As for Zoom’s post-quantum E2EE feature, West said the amount of information transferred via text messages and in virtual meetings “is a rather unexplored territory for post-quantum cryptography [PQC],” but is an important area of focus. “Compromised information using these technologies could lead to national security breaches, the accidental exposure of company trade secrets, and more,” she said. “Zoom has taken this opportunity to identify a current area of data security weakness and develop an industry disruptive PQC solution.”

Even so, West pointed to “severe limitations” in Zoom’s approach. For example, to be secure, all meeting participants are required to use the Zoom desktop or mobile app version 6.0.10 or higher. “So there is no guarantee that everyone will be using the most up-to-date version…,” she said.

In addition, using Zoom’s post-quantum encryption means participants lose access to some key features, such as cloud recording. “For PQC to be effective, not only must it be secure against potential quantum cyber security breaches, but it should also allow for the same performance and utility of the applications and infrastructure than if it weren’t being used. This doesn’t seem to be the case with Zoom’s implementation,” West said.

In general, West said all businesses should be considering how to keep encrypted data safe in the future.

“Organizations should be taking this risk seriously,” she said. “There seems to be a misconception that if an organization is not investing in quantum computing there isn’t a need to invest in post-quantum cryptography.” 

Cyberattacks using quantum algorithms have the potential to affect all businesses and organizations, she said. Some understand the importance of post-quantum cryptography and are waiting for final standards from NIST to be released, but updating to post-quantum cryptography can be a “laborious process,” so organizations should get started now by inventorying and identifying at-risk data and infrastructure. 

“Partnering with a PQC vendor or consultant can help guide the transition. PQC vendors and consultants can also help to determine what solution is most suitable for the organization,” said West.

Microsoft declares (PC) war all over again

With AI tools and Qualcomm Snapdragon X Elite chips inside its new Surface Pro laptops (called Copilot+ PCs), Microsoft is making no secret that it wants to compete head-on with the world’s most popular laptop, Apple’s MacBook Air.

It looks like the PC wars have begun again.

Despite this declaration of war, it feels like Microsoft owes a lot to Apple. For example, its all-new Recall feature reminds me of something Apple already had in its systems called Time Machine. Like Recall, Time Machine saves versions of everything on your device in an encrypted form and lets you “recall” them later on. The feature has always been tied to the user ID and heavily secured.

We’ll soon find out if Recall is as well protected.

But it’s not the only nod to Apple’s work Microsoft has made in its latest fan-fueled attack on the Mac: even the processors are based on the Arm chips Apple has used for years now in iPhones, iPads, and Macs. And, just like Apple’s Rosetta on M-series chips, Microsoft has an on-board emulator to run older apps that aren’t yet optimized for Windows on Arm. Microsoft claims 87% of the apps people use most will already be Arm-optimized. Helpfully, Apple’s adoption of Arm in Apple Silicon means most of the world’s biggest developers have already ported applications to Arm.

“We have completely reimagined the entirety of the PC — from silicon to the operating system, the application layer to the cloud — with AI at the center,” wrote Microsoft’s Chief Marketing Officer Yusuf Mehdi. (Arguably, that’s something Apple also already did.) 

Comparisons, comparisons, comparisons

Microsoft shared a range of test results it claims show not only that the new devices compete with Apple’s, but in some cases exceed what the Mac can do. However, as we see each time a tech product gets released, some of the claims seem a little uncertain.

Take performance, for example: Microsoft claims its product can run 58% faster than the MacBook Air M3. The company even ran a side-by-side photo editing test between the two computers to prove its advantage.

It’s worth noting, however, that the Surface device contains a fan, which the MacBook Air does not; active cooling lets Microsoft’s system sustain demanding workloads for longer before throttling.

Once the inevitable comparative reviews appear, it will be interesting to learn how long you can run such intensive tasks on a Surface in terms of energy consumption and battery life, and how this compares to the same tasks on a Mac. Microsoft says that when it comes to simulated web browsing, you’ll get over an hour more battery life on its device than Apple’s. However, Ars Technica calls Microsoft’s battery life claims “muddy”, saying they need further independent verification.

To some degree, the comparisons might become moot, given Apple is already striding toward equipping Macs with M4 chips; they’re already available in what I see as Apple’s more direct Surface competitor, the iPad Pro.

Making Windows…

Microsoft doesn’t see it that way. It believes its Surface Pro devices should be seen as MacBook Air competitors, is buoyed by no-doubt excellent test results, and hopes that by pimping out its systems with AI it has a compelling market proposition with which to tempt enterprise users to stay inside the Windows flock.

(Though even that bid for regained relevance still needs to get past the data sovereignty/privacy problems that beset all the big genAI solutions at the moment. Enterprise users will need to be certain of the cloud-based components of these systems before using them to handle regulated data, I expect.)

All the same, on paper and in keynote at least, Microsoft is making what seems to be one of its sassiest bids yet, once again raising the temperature as the industry prepares for what’s shaping up to be among Apple’s most existentially important WWDC events ever.

There’s a lot to get through.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

How Apple is playing catch-up on integrating genAI into its products

Apple has been tardy in developing and announcing its plans for using generative AI (genAI), but so what? Apple has been late before in jumping on important tech trends and it has caught up. The arrival of genAI tools and platforms over the past 18 months has been the biggest shift in tech since the advent of cloud computing, and Apple’s Siri is a pathetic example of a virtual assistant in desperate need of a new brain.

Those two realities have led to a lot of criticism by analysts and the media in recent months about Apple’s non-existent AI strategy, and the company’s need to scramble to catch up. (And that was before this week’s big announcements from Microsoft.)

Recent leaks about what the company is up to should ease the Apple genAI angst in the marketplace. The New York Times, for example, reported that Apple will revamp Siri to catch up to its chatbot competitors. And according to Bloomberg, the iPhone maker will reportedly ink a licensing deal with OpenAI for ChatGPT to revitalize its voice assistant.

Apple has hinted at AI-related announcements coming at its annual Worldwide Developers Conference (WWDC) on June 10. Much depends on the specifics. According to the Times, Apple has discussed “licensing complementary AI models that power chatbots from several companies, including Google, Cohere, and OpenAI.” This may be the most significant point for the success of Apple’s AI offerings. Developing large language models (LLMs) that power chatbots is specific to each product, compute intensive, and requires considerable time and effort.

The reality is that Apple is its own ecosystem, and the danger for the company in being late with AI is more about Wall Street’s fleeting perceptions and temporary company valuation than what its customers think. Apple’s stock fell in late April and early May as the company was hit with negative publicity about being out to lunch on AI. But after a recent Apple stock buy-back and the expectation that Apple will announce its plans soon, Apple’s stock is back up.

Aspects of Apple’s probable AI plan

Apple’s AI strategy is beginning to emerge. Bloomberg’s Mark Gurman confirmed the recent rumors:  the company plans to significantly upgrade Siri with AI. (Using OpenAI’s ChatGPT is almost certainly a stopgap measure; Apple will eventually need to fully develop its own LLM and chatbot and fully integrate it with its products.)

Apple also intends to add what Gurman described as “proactive intelligence” features, such as automatic summaries of iPhone notifications, quick synopses of news articles, transcribed voice memos, and improved existing features such as those that automatically populate your calendar. Gurman also reported that Apple might launch genAI-powered editing tools similar to those found on Google’s Pixel and Samsung’s Galaxy S smartphones. And the company is reportedly eyeing high-end AI-enabled chips for its data centers to enable cloud-based genAI features.

Licensing ChatGPT isn’t going to wow anybody with the latest genAI features from OpenAI or Google. Apple will pay that price, at least temporarily. It’s unclear at the moment what else could emerge from any partnership between Apple and OpenAI — there are rumors OpenAI might integrate ChatGPT natively on the iPhone — but the details should become clear at WWDC. 

Finally, according to VentureBeat, Apple researchers have developed a new artificial intelligence system that can “understand ambiguous references to on-screen entities as well as conversational and background context, enabling more natural interactions with voice assistants.”

Called ReALM (Reference Resolution As Language Modeling), it leverages LLMs to handle the complex task of converting reference resolution — including references to visual elements on a screen — into a pure language-modeling problem, VentureBeat reported. This lets ReALM achieve substantial performance gains over existing methods.

Brass tacks

There can be no doubt that Apple allowed itself to fall behind in the breakneck race for AI supremacy. But the company appears to be moving to rectify that misstep and has taken the only recourse left: licensing a well-regarded chatbot and LLM from another company while it continues to develop an internal approach. This will let Apple dramatically improve Siri, which hasn’t had a significant upgrade since its introduction in 2011. That’s the burning need.

We’ll have to wait another three weeks or so to get the details of Apple’s AI plans. But it would be a mistake to score Apple as an AI loser at this point. We’re still just getting started with AI. So is Apple.