The World Wide Web Foundation, the organization whose mission has been to make the web safer and more accessible, has shut down, according to The Register. The foundation, which closed its virtual doors Sept. 27, says its mission has largely been fulfilled and other organizations can take over the work.
When the organization was founded in 2009, just over 20% of the world’s population had access to the internet, with few groups working to change that reality. Today, that number has climbed to around 70%, and many organizations are working to raise it higher.
The foundation’s co-founders, World Wide Web inventor Sir Tim Berners-Lee and Rosemary Leith, said in a statement posted on the Foundation’s site that there are other challenges they want to focus on.
In particular, they write, social media companies’ model of commoditizing user data and concentrating power on their platforms runs counter to Berners-Lee’s original vision for the web. The foundation was wound down so that he can focus on decentralized technologies such as the Solid Protocol, a specification that allows users to securely store data in decentralized data storage units known as Pods.
That technology has been under development since at least 2015.
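For the technically curious, here is a rough idea of what writing data to a Pod looks like today. The sketch below uses Inrupt’s open-source @inrupt/solid-client JavaScript library; the Pod URL and resource path are placeholders, and it assumes the user has already signed in through @inrupt/solid-client-authn-browser.

```typescript
// Illustrative sketch only: saving a small note into a Solid Pod.
// The Pod URL and container path below are placeholders, not real endpoints.
import {
  buildThing,
  createSolidDataset,
  createThing,
  saveSolidDatasetAt,
  setThing,
} from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser"; // session-bound fetch after login

async function saveNoteToPod(podRoot: string, text: string): Promise<void> {
  // Describe the note as an RDF "Thing" using a schema.org property.
  const note = buildThing(createThing({ name: "note1" }))
    .addStringNoLocale("https://schema.org/text", text)
    .build();

  // Add it to a dataset and write the dataset into the user's Pod.
  const dataset = setThing(createSolidDataset(), note);
  await saveSolidDatasetAt(`${podRoot}/notes/demo`, dataset, { fetch });
}

// Example (placeholder Pod URL):
// await saveNoteToPod("https://alice.pod-provider.example", "Stored in my own Pod");
```

The point of the model is visible even in this toy: the application chooses where the data lives, but the user’s Pod, not the app vendor, holds it.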
Microsoft on Tuesday announced several updates to its free Microsoft Copilot and paid-for Copilot Pro services aimed at making the personal AI assistant more powerful and easier to converse with.
Among the updates is Copilot Vision. Built natively into Microsoft’s Edge browser, the Vision feature lets Copilot see what a user sees when surfing the web. It can then respond to queries about the contents of a web page in natural language — highlighting reviews to help choose a film on Rotten Tomatoes, to give one of Microsoft’s examples, or assisting with research.
“We believe Copilot can go beyond answering basic questions and generating content, to offering more complete support for you and your tasks,” Yusuf Mehdi, Microsoft’s corporate vice president and consumer chief marketing officer, said in a pre-recorded press briefing.
The service will be limited to a list of pre-approved websites initially, said Microsoft. The company’s AI models won’t be trained on the content Copilot views.
“Increasingly, generative AI assistants are becoming multi-modal (language, vision and voice) and have personalities that can be configured by the consumers,” said Jason Wong, distinguished vice president analyst at Gartner. “We will see even more anthropomorphism of AI in the coming year.”
Mehdi said Microsoft has taken steps to “respect and protect” user privacy when accessing Copilot Vision, which is turned off by default. “You must actively choose to enable the Copilot feature,” he said. “You have clear notification it is on, no conversations or content are stored beyond the active session, and none of the Copilot Vision interactions will be used for training.”
The feature will initially roll out in the United States via Copilot Labs, a new service where paid Copilot Pro subscribers can test upcoming AI capabilities. Copilot Pro costs $20 per month.
Another experimental feature available in Copilot Labs is Think Deeper, which enables Copilot to “reason” and answer more complex questions.
“Think Deeper takes more time before responding, allowing Copilot to deliver detailed, step-by-step answers to challenging questions,” Microsoft’s Copilot Team said in a blog post. “We’ve designed it to be helpful for all kinds of practical, everyday challenges like comparing two complex options side by side. Should I move to this city or that? What type of car best suits my needs? And so on.”
Think Deeper is available now to a limited number of Copilot Pro customers in Australia, Canada, New Zealand, the United Kingdom and the United States.
Microsoft has also announced a refresh of the Microsoft Copilot mobile app, with a UI that is “leaner, simpler, warmer, and all around more approachable,” said Mehdi. The new Copilot app rolls out today.
Conversations with the AI assistant will be more realistic and natural with the introduction of Copilot Voice, Microsoft said. The revamped voice interface promises faster responses and the ability to interrupt while Copilot is speaking; users can also now choose from four different voices when talking with the Copilot assistant.
“With the new Copilot Voice, you’ll have a smoother and more engaging conversation, because responses are faster and you can easily interrupt and direct your experience,” said Mehdi.
One of the Copilot voices can also be chosen to read out a Copilot Daily news digest — a summary of news from authorized content sources. (Microsoft has partnered with Reuters, Financial Times, German publisher Axel Springer and others.) It will also provide weather forecasts, with a reminder function also in the works.
Copilot Voice is initially available in English in Australia, Canada, New Zealand, the United Kingdom, and the United States. It will expand to more regions and languages soon, Microsoft said. Copilot Daily is rolling out now starting in the United States and the United Kingdom with more countries coming soon.
To help new users get started, Microsoft has released Copilot Discover, which provides guidance on the AI assistant’s features and “conversation starter” suggestions.
The introduction of realistic AI assistants is part of a wider trend, said Wong. Gartner predicts that, by 2026, 80% of the top 100 consumer brands will offer anthropomorphized generative AI agents to drive consumer loyalty.
It’s not just Microsoft Copilot; Google Gemini, OpenAI’s ChatGPT, and X.ai Grok are all developing multi-modal agents that will “entertain, inform and connect the consumer to relevant services and products,” said Wong. “This is the next frontier — and battleground — of customer experience.”
Apple management and security services provider Jamf unveiled a series of tools and features to support Apple admins at its well-attended JNUC event Oct. 1, with a focus on AI, security, and all-new solutions to make it easier to manage large Apple fleets.
“In today’s environment, support of hybrid and remote work is the norm and the need for protecting one’s environment is a must. Jamf recognizes the responsibility of an Apple admin has only grown more complex,” said Jamf CEO John Strosahl.
This year’s announcements certainly seem to reflect the increasing scale of challenges in distributed environments. “We’re streamlining the user experience, making security compliance easier than ever to achieve, and even providing each customer with a seasoned Jamf expert,” Strosahl added.
What did Jamf announce at JNUC 2024?
Improvements discussed at JNUC included:
AI Assistant.
Declarative Device Management.
Compliance Benchmarks.
Self Service+.
Apple’s enterprise-focused director of product marketing, Jeremy Butcher, joined Jamf leadership on stage to share insights into what Apple is itself doing to support enterprise deployments of its products.
These span the gamut of the company’s products, from new APIs and automated enrollment for Vision Pro to new features in Apple Business/School Manager that make it possible for admins to disable Activation Lock on managed devices. They also include management tools to enable or disable Apple Intelligence. “There’s obviously so much more, but I think most of you have probably been living it for a while,” he said as he left the stage.
With that, let’s take a look at what’s new:
What is AI Assistant?
Jamf CTO Beth Tschida arguably shared the most significant news. Jamf last year began working with generative AI as it attempted to build tools to optimize Apple admin workflows. Tschida explained the motivation: “All of you are in the middle of a rapidly evolving ecosystem,” she said. “Information comes at you at an overwhelming rate.”
The Jamf AI Assistant aims to help navigate all that data. “Think of it like having a Jamf expert at your side to help you make better decisions,” she said during the keynote speech.
AI Assistant is a powerful natural language interface that will soon be available with retrieval-augmented-generation (RAG) functionality in its initial beta offering. Full functionality and availability in admin portals is set for early 2025.
AI Assistant combines a vast knowledge base powered by RAG and direct integration with Jamf’s product APIs. What that means is that the AI can gather, analyze, and present the most relevant information — and can also actually do things for admins, such as creating subgroups of users for policies. What’s noteworthy about the implementation is that when it does perform a task, it explains what it has done and why in a side window; it’s a big deal, given the black-box decision-making practiced by some AI tools. Enterprise users, in particular, need to know why a certain decision was made.
“AI Assistant will have the ability to decode and contextualize security alerts, providing context about their severity and potential impact, suggest next steps, and take direct action, while keeping a human informed and in control,” said Tschida. “Within seconds, AI Assistant provides a thoughtful response, drawing from its vast knowledge base.”
The bottom line? What in the past would have taken IT admins many steps can now be handled in a few clicks using a natural language request.
AI Assistant has another set of tools: it can help admins identify patterns in breaches and integrate with Jamf’s security features. That’s a big deal, as it makes for faster response times, more accurate threat assessments, and tougher security.
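Jamf has not published how AI Assistant is built, but the pattern described here — retrieval-augmented answers combined with callable product APIs, plus a plain-language explanation of each action — is a familiar one. The TypeScript sketch below is a generic, hypothetical illustration of that loop; every function, type, and response in it is invented and is not Jamf’s API.

```typescript
// Generic, invented sketch of a RAG-plus-actions assistant loop (not Jamf's implementation).

interface RetrievedDoc { title: string; snippet: string; }
interface ProposedAction { name: string; args: Record<string, unknown>; rationale: string; }

// Toy stand-ins: a real system would query a vector index, the vendor's REST API, and an LLM.
async function searchKnowledgeBase(query: string): Promise<RetrievedDoc[]> {
  return [{ title: "Smart Groups", snippet: `Scoping policies to device subsets (matched "${query}")` }];
}
async function callProductApi(action: ProposedAction): Promise<string> {
  return `simulated ${action.name} with ${JSON.stringify(action.args)}`;
}
async function askModel(prompt: string): Promise<{ answer: string; action?: ProposedAction }> {
  return {
    answer: "Outdated devices can be targeted with a smart group scoped to the update policy.",
    action: {
      name: "createSmartGroup",
      args: { criteria: "osVersion < latest" },
      rationale: "A subgroup is needed so the policy only hits affected devices.",
    },
  };
}

export async function handleAdminRequest(request: string): Promise<string> {
  // 1. Retrieve relevant documentation (the RAG step).
  const docs = await searchKnowledgeBase(request);
  const context = docs.map((d) => `${d.title}: ${d.snippet}`).join("\n");

  // 2. Ask the model for an answer, optionally with a proposed API action.
  const { answer, action } = await askModel(`Context:\n${context}\n\nAdmin request: ${request}`);

  // 3. Execute the action and report what was done and why, keeping the human informed.
  if (!action) return answer;
  const result = await callProductApi(action);
  return `${answer}\n\nAction taken: ${result}\nWhy: ${action.rationale}`;
}
```

The final step is what the side-window explanation maps to: the assistant reports the action it took and its rationale rather than acting silently.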
Declarative Device Management with a twist
Jamf has long been an advocate for Declarative Device Management (DDM); like Apple, it believes DDM to be the future of device management. At JNUC, it announced improvements to its Blueprints features, touting a future-ready approach to managing device settings, commands, app installations, and restrictions — all through DDM.
Molly Mosely, Jamf vice president for product strategy, explained what this is capable of: “You can choose common task templates or build your own. You can drag-and-drop items to build your own custom blueprint, managing everything from Safari extensions to passcodes to use of external storage, and more.”
If the demo from JNUC is to be believed, Blueprints should empower Apple admins to easily and swiftly apply policies across their entire fleets (or parts of those fleets). The company can also make new APIs from Apple accessible across all Jamf users just by adding the appropriate API within the blueprint interface.
Compliance Benchmarks
Another useful set of tools to manage an uncertain world helps admins ensure macOS systems are security compliant. Built on the macOS Security Compliance Project and Jamf Compliance Editor, Compliance Benchmarks in Jamf Pro “simplifies building, managing, auditing, and reporting on CIS benchmark compliance,” the company said.
Available first for Macs (and coming later to iPads and iPhones), Compliance Benchmarks bakes in key compliance measures and gives admins an immediate view of where they and each device stand in maintaining that compliance.
The idea seems solid. It enables admins to ensure industry standards are met when it comes to such compliance. “You shouldn’t have to be a security expert to feel confident your data is protected,” Jamf said.
This takes away the complexity of managing multiple products, and there is more to come. Jamf confirmed it is working with partners to give admins much deeper insight into activity on their fleets using telemetry data, enabling much more in-depth compliance management and control. By the end of the year, you’ll be able to audit your Macs like never before.
Self Service+
Jamf has offered a Self Service solution for a while. Self Service+ builds on those features, delivering them through a user-focused application that lets those using managed devices find their own way. This means a company-branded, curated overview of apps and content can be made available. What’s new is that end users can also monitor all their notifications and security alerts within the portal. That makes it easier to ensure employees can find what they need and stay informed without needing to work through multiple apps.
It also makes deployment and provisioning of apps and services a great deal easier for IT.
A scientific experiment conducted by researchers at the University of Cambridge and British AI start-up Strategize.inc confirms these fears. Bad CEOs and management consultants, in particular, have to fear being replaced by AI.
‘Maximized without regard to losses’
Here’s how the experiment was done:
The study involved students and experienced banking executives — a total of 344 people.
They went through a simulation based on gamification elements in which they had to make CEO decisions. The quality of these decisions was recorded using various metrics. The participants had to complete several rounds that built on each other in the form of business years. More than 500,000 decision combinations were possible per round.
A digital twin of the US automotive industry was used as the data basis; it included information on car sales and pricing strategies as well as overarching factors such as economic trends and the effects of the COVID-19 pandemic.
The goal: to maximize market capitalization, which results from a combination of sustainable growth rates and free cash flow, while avoiding being fired from the virtual board by meeting various KPIs. “This goal served as a realistic benchmark to measure the actual performance of CEOs,” the scientists wrote.
GPT-4o from OpenAI was then confronted with the same tasks and the results were compared with those of the best two human participants from both groups.
“The results were both surprising and provocative and challenged many of our assumptions about leadership, strategy and the potential of AI when it comes to high-level decision making,” the researchers reported. GPT-4o consistently outperformed the best human participants on almost all recorded metrics, designed products with surgical precision, and kept costs tightly under control. However, the researchers complain: “GPT-4o was dismissed from the virtual board faster than the students.”
They attribute this primarily to so-called “black swan” events: “We integrated these unpredictable shocks to simulate sudden price fluctuations, changes in consumer behavior and supply chain problems,” the scientists said. The top performers among the students approached these risks with caution. They focused primarily on remaining adaptable in uncertain times rather than pursuing short-term profits.
GPT-4o, on the other hand — like the best bank managers — took a different path, as the researchers note: “The AI adopted an optimization mindset and maximized growth and profitability regardless of losses — until it was thrown off course by a market shock.”
AI may be able to learn and iterate quickly in a controlled environment, but it does not cope well with unforeseen, disruptive events that require human intuition and foresight. This does not reflect well on the banking executives, who were also fired from the virtual board more quickly than the students: “Both GPT-4o and the executives succumbed to the same flaw: excessive trust in a system that rewards aggressive ambition but also demands flexibility and long-term thinking.”
As for the genAI tools from OpenAI, the researchers nevertheless drew a positive conclusion: “Despite its limitations, GPT-4o delivered an impressive performance. Although the AI was dismissed more often than the best human players, it was still able to hold its own against the best and smartest participants.”
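The researchers have not released their simulation, so the sketch below is purely an invented toy that mirrors the setup they describe: round-based decisions, a market-cap score driven by growth and free cash flow, random “black swan” shocks, and a virtual board that dismisses the CEO when results collapse. Every number and formula is a placeholder.

```typescript
// Invented toy version of a round-based CEO simulation with black-swan shocks.
// Nothing here reflects the researchers' actual model, data, or thresholds.

interface Decision { aggressiveness: number; } // 0 = very cautious, 1 = maximally aggressive

function runCareer(decide: (round: number) => Decision, rounds = 10, startingCash = 100): number {
  let freeCashFlow = startingCash;
  let growthRate = 0.05;

  for (let round = 1; round <= rounds; round++) {
    const { aggressiveness } = decide(round);

    // Aggressive strategies pay off in normal business years...
    growthRate += 0.03 * aggressiveness;
    freeCashFlow *= 1 + growthRate;

    // ...but an occasional unpredictable shock punishes thin buffers hardest.
    if (Math.random() < 0.15) {
      freeCashFlow *= 1 - 0.6 * aggressiveness;
    }

    // The virtual board "fires" the CEO if results collapse (a stand-in for missed KPIs).
    if (freeCashFlow < startingCash * 0.3) return 0;
  }

  // Market-capitalization proxy: a blend of sustained growth and cash generation.
  return freeCashFlow * (1 + growthRate);
}

// A cautious strategy survives shocks more often than a pure optimizer:
console.log({
  cautious: runCareer(() => ({ aggressiveness: 0.3 })),
  optimizer: runCareer(() => ({ aggressiveness: 1.0 })),
});
```

Run enough times, a toy like this reproduces the qualitative result the study reports: the optimizer posts higher scores in calm runs and gets dismissed far more often once shocks land.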
What does this mean for companies?
The researchers draw various conclusions from their experiment. Here is a brief summary:
Generative AI can no longer be ignored as a strategic resource. The experiment shows that even uncoordinated models can provide creative, strategic input, given the right prompts. The bottom line is that AI produces strong results, especially when it comes to generating value for shareholders.
Data quality is crucial. For AI to exceed expectations when it comes to corporate strategy, it needs high-quality data, similar to that used in the experiment. The initiators of the experiment are convinced that a robust data infrastructure is a prerequisite for AI in the boardroom.
AI efficiency is not risk-free. Aggressive maximization strategies could lead to disastrous results without sufficient foresight. Therefore, genAI tools should not work unsupervised nor should people use the tools without foresight, said the researchers.
Accountability and genAI do not mix. The fact that AI systems are difficult or impossible to hold accountable makes transparent guardrails all the more important. According to the scientists, this is the only way to ensure that genAI-based decisions align with corporate values.
Digital twins play a central, strategic role. According to the researchers, digital twins of a corporate ecosystem, “populated” with several LLM agents, could represent a valuable sandbox for AI leadership. This not only ensures a safety buffer in the event of missteps, but also provides CEOs with important insights for better decisions.
Management consultants could be facing disruptive times. With the emergence of “artificial CEOs,” scientists predict hard times for consultants: “Companies like McKinsey could be faced with their services being supplemented or replaced by AI systems.”
Bosses who are open to modern leadership methods have less to worry about in terms of being replaced by an AI system, the researchers said: “AI cannot fully assume the responsibilities of a CEO. But it can significantly improve strategic planning processes and help prevent costly mistakes. The real power of generative AI lies in enriching CEOs’ decision-making and enabling them, through its analysis and simulation work, to focus on their human skills: making strategic, empathetic and ethical decisions.”
The researchers see one main risk for CEOs: “Clinging to the illusion that we will continue to hold the reins alone in the future. The future of leadership is hybrid. The CEOs who will be successful are those who see artificial intelligence as a partner, rather than as competition.”
When California Gov. Gavin Newsom vetoed a key piece of AI oversight legislation Sunday, he said he did so because the measure “falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks.”
He then said he “has asked the world’s leading experts on genAI to help California develop workable guardrails for deploying genAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”
Those would be laudable sentiments if any of them had any chance of actually delivering a more secure and trustworthy environment for Californians. But Newsom, one of the nation’s smarter politicians, well knows that such an effort is a fool’s mission. I could add cynically that the governor merely wants to be seen trying to do something, but why state the obvious?
Problem One: GenAI deployments are already happening and the technology is being deeply embedded into an untold number of business operations. It’s all-but-ubiquitous on the major cloud environments, so even an enterprise that has wisely opted to hold off its genAI efforts for now would still be deeply exposed. (Fear not: There are no such wise enterprises.)
The calendar simply doesn’t make sense. First, Newsom’s experts get together and come up with a proposal, which in California will take a long time. Then that proposal goes to the legislature, which means lobbyists will take turns watering it down. What are the chances the final result will be worthy of signature? Even if it is, it will arrive far too late to do any good.
Candidly, given how far genAI has progressed in the last two years, there’s a fine chance that had Newsom signed the bill into law on Sunday, it would have still been too late.
Part of the reason is that the enforcement focus is on AI vendors, and it is highly unlikely that state regulators will be able to effectively perform oversight on something as complex as genAI development is today.
In his veto message, Newsom pointed to the flaw of vendor oversight, but zeroed in on the wrong reason.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” he said. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
In short, the governor is arguing that regulators shouldn’t only look at the biggest players, but focus on many of the smaller specialty shops as well. That argument makes sense in a vacuum. But in the real world, regulators are understaffed and under-resourced to effectively manage a handful of major players, much less the many niche offerings that exist. It sounds great spoken from a podium, but it’s not realistic.
Here’s the real problem: No one in the industry — big players included — truly knows what genAI can and can’t do. No one can accurately predict its future. (I’m not even talking about five years from now; experts struggle to predict capabilities and problems five months from now.)
We’ve all seen the dire predictions of what might happen with genAI. Some are overblown — remember the extinction reports from February? And some are frighteningly plausible, such as this Cornell University report on how AI training AI could lead to a self-destructive loop. (By the way, kudos to Cornell’s people for comparing it to Mad Cow disease. But to make the analogy work, they created the term Model Autophagy Disorder so they could use the acronym MAD. Sigh.)
There is a better way. Regulators — state, federal and industry-specific — should focus on rules for enterprises and hyperscalers deploying genAI tools rather than the vendors creating and selling the technology. (Granted, the big hyperscalers are also selling their own flavors of genAI, but they are different business units with different bosses.)
Why? First of all, enterprises are more likely to cooperate, making compliance more likely to succeed. Secondly, if regulators want vendors to take cybersecurity and privacy issues seriously, take the fight to their largest customers. If the customers start insisting on the government’s rules, vendors are more likely to fall in line.
In other words, the paltry fines and penalties regulators can threaten are nothing compared to the revenue their customers provide. Influence the customers and the vendors will get the message.
What kind of requirements? Let’s consider California. Should the CIO for every healthcare concern insist on extensive testing before any hospital uses genAI code? Shouldn’t those institutions face major penalties if private healthcare data leaks because someone trusted Google’s or OpenAI’s code without doing meaningful due diligence? What about a system that hurts patients by malfunctioning? That CIO had better be prepared to detail every level of pre-launch testing.
How about utilities? Financial firms? If the state wants to force businesses to be cautious, there are ways of doing so.
Far too many enterprises today are feeling pressured by hype and being forced by their boards to jump into the deep end of the genAI pool. CIOs — and certainly CISOs — are not comfortable with this, but they have nothing to fight back with. Why not give CIOs a tool with which to push back: state law.
Give every CEO an out for not risking their businesses and customers by accepting magical-sounding predictions of eventual ROI and other benefits. Regulators could become CIOs’ new best friends by giving them cover to do what they want to do anyway: take everything slowly and carefully.
Trying to regulate vendors won’t work. But giving political cover to their customers? That, at least, has a real chance of succeeding.
Whatever Apple’s long-term plans for satellite connectivity, one facet that cannot be ignored is that its Messages via Satellite system is already saving lives — including among iPhone users affected last week by the horror of Hurricane Helene.
The aftermath of Helene
While there seems no end to all the bad news playing out worldwide at this time, a sizable chunk of the United States was particularly impacted by Helene. When that storm hit, it wiped out power grids and cell service, wrecked infrastructure and took scores of lives across several US states. The hurricane wiped out communications far inland, leaving victims stranded with no way of getting help (particularly in light of a massive Verizon outage at the same time).
That is, unless people had iPhones. Reports have appeared on social media explaining how compatible iPhones running iOS 18 enabled those impacted by the storm to send and receive messages via satellite to seek help or let family know they were safe. Apple is well aware of the damage wrought by this disaster; company CEO Tim Cook has promised the company will donate to support relief efforts on the ground.
What is Messages via Satellite?
Available in the US with iOS 18 on iPhone 14s or later models, Messages via Satellite allows users to send and receive texts and other communications using iMessage and SMS when a cellular or Wi-Fi connection is not available.
“Messages via Satellite automatically prompts users to connect to their nearest satellite right from the Messages app to send and receive texts, emoji, and Tap backs over iMessage and SMS,” Apple explained. “Because iMessage was built to protect user privacy, iMessages sent via satellite are end-to-end encrypted.”
Messages via Satellite is essentially an extension to the SOS via satellite service Apple introduced in 2022. It’s available at present only in the US and Canada.
How it works
To receive messages, you or your contact must be running iOS 18, iPadOS 18, macOS Sequoia, watchOS 11, visionOS 2, or later.
To use Messages via Satellite, follow these steps:
First, you must be outside with a clear view of the sky and horizon.
Open Messages, and if you have no cellular or Wi-Fi coverage, a prompt appears.
Tap Use Messages via Satellite.
Follow the instructions to connect to a satellite.
You will then need to select Messages from the selection of services that appear.
Enter your message and tap send.
The message is likely to take longer than usual to send.
Contacts receiving your message will see a status message to show you’re using satellite.
You can also use SMS via satellite — just open Settings > Apps > Messages, turn on Send as Text Message, and then connect to a satellite to send. Replying to SMS messages via satellite requires iOS 17.6 or later.
When will the service be international?
Apple’s partner in satellite connectivity, Globalstar, continues to launch new satellites to support the expanding service. Regulatory filings from that company suggest it hopes to launch an additional 26 satellites by next year, with at least one report claiming it will have 3,000 in place eventually. At least one space expert thinks Apple will eventually choose to widen the network to become a full satellite-based communication service.
It is likely Apple will follow a cadence similar to the manner in which it made Emergency SOS via satellite available once that service was initially launched in the US and Canada. It opened up in France, Germany, Ireland, the UK, Australia, Austria, Belgium, Italy, Luxembourg, New Zealand, Portugal, Switzerland, Spain, and the Netherlands across the following year and in Japan a year later.
A lesson for everyone
All of this is important in terms of saving lives and providing reassurance for families and friends of those in the disaster-hit areas, but the fact that these devices have helped maintain community resilience amid disaster might also be a salutary lesson in business resilience. After all, other than avoiding platforms characterized by frequent ransomware attacks and spiralling ancillary security support costs, it just might be that smartphones equipped with satellite connectivity could become a vital business asset as we navigate an increasingly uncertain world.
After all, why should SpaceX dominate such an economically and socially essential asset as satellite communications? It makes sense for every business to ensure there are multiple providers of such a strategic essential — particularly to maintain business and community resilience.
The European Commission has appointed a group of AI specialists to outline how businesses should comply with forthcoming AI regulations.
The group includes prominent figures like AI pioneer Yoshua Bengio, former UK government adviser Nitarshan Rajkumar, and Stanford University fellow Marietje Schaake.
Microsoft’s ambitious collaboration app, Microsoft Loop, includes shared workspaces as well as portable content snippets called Loop components. These components can be shared and embedded in multiple Microsoft 365 apps.
What makes Loop so useful is that those shared components can be updated by multiple collaborators, and the contents of these components stay in sync no matter where they’re embedded. One person could edit a component in an Outlook email, while another edits it in a Teams chat, and the latest changes appear in both places.
We have a separate guide that covers Microsoft Loop more broadly and details how to use the Loop app itself. But you don’t actually need the app to use Loop components. That’s because Loop components can be integrated into several Microsoft 365 apps, so you can create, share, and work on them in an app you’re already familiar with. That’s what we’ll cover in this guide.
In this article
What is a Microsoft Loop component?
What apps can I use Loop components in?
Who can use Loop components in Microsoft 365 apps?
Creating a Loop component
Sharing your Loop component
Interacting with a Loop component
Managing your Loop components
What is a Microsoft Loop component?
A Loop component is a portable text card or content snippet — in list, paragraph, table, or another format — that you and your co-workers can collaborate on synchronously or asynchronously.
For example, if you create a Loop component that contains a table, you and your collaborators can add, change, or remove numbers or text, or adjust the table’s formatting. When someone makes a change to the table, you and your co-workers can see it happen, and see who’s making the change, in real time.
Loop components can be embedded in (and are cross-compatible among) a subset of Microsoft 365 apps including Outlook, Teams, and Word. When you create a Loop component in one of these M365 apps, you can copy and paste the link to it into another M365 app — and will then be able to continue working on the component in that app.
Imagine that you create a Loop component with a task list on it in a Teams chat. After doing this, you copy and paste a link to it into an Outlook email. Any changes that you or others make to the task list in the Teams chat will automatically appear in the email — and the recipient of your email can also make changes to the task list that will appear in the Teams chat.
What apps can I use Loop components in?
The five main Microsoft 365 apps that Loop components can be used in today are OneNote, Outlook, Teams, Whiteboard, and Word, with some limitations:
OneNote: Loop components are gradually rolling out to the OneNote Windows and web apps but are not yet available in the macOS or mobile apps.
Outlook: Loop components are available in the Windows and web apps but not in the macOS or mobile apps.
Teams: Loop components are available in the Teams Windows, macOS, Android, iOS, and web apps.
Whiteboard: Loop components are available in the Whiteboard Windows, web, Android, and iOS apps. In the mobile apps, you can currently only view and edit Loop components; component creation and copy/paste functions will be added in the future. People collaborating with you on a whiteboard who are not your team members are unable to view, edit, create, or copy and paste Loop components.
Word: Loop components are available in the web version of Word, but not in the desktop or mobile apps.
Who can use Loop components in Microsoft 365 apps?
Only users who have Microsoft 365 business, enterprise, or education accounts can embed Loop components in Microsoft 365 apps, and only other users within your organization can use them within M365 apps.
That said, anyone with a free or paid Microsoft 365 account can create and use Loop workspaces, pages, and components in the Loop app, as covered in our Microsoft Loop cheat sheet. Because Loop components are the same whether they’re embedded in a Loop page or a Microsoft 365 app, it’s worth your while to keep reading this story to learn about the various types of Loop components and the elements you can include in them.
Creating a Loop component
Here’s how to start a Loop component in each of the five apps.
In Outlook: You can insert a Loop component inside an email message. If the recipient is in your Microsoft 365 organization or has a Microsoft user account, they can interact with the Loop component when they open your email.
In the toolbar above the email that you’re composing, click the Loop icon. A panel will open listing the Loop components that you can select to insert. (We’ll go over the Loop component types in the next section of the story.)
In Teams: You can insert a Loop component inside a Teams chat or in a post in a Teams channel. You and others in the chat or channel will be able to collaborate on the component.
On the toolbar for your message, click the Loop icon: it’s at the bottom right for a chat conversation and at the bottom left for a channel post. A Loop component composition window will open in the channel or chat thread. Buttons to insert specific components will appear along the bottom of this window — click the three-dot icon at the bottom right corner to see more selections.
In OneNote or Word: Set the cursor where you want to embed a Loop component in your document or page. On the toolbar across the top, select Insert and then Loop Component (or Loop Components). A panel will open listing the Loop components that you can select to insert.
In Whiteboard: Open a whiteboard. Click the three-dot icon on the bottom toolbar. Select Loop components from the small panel that opens.
After you’ve selected a Loop component (see below for the main types available), a draft of the component appears in the Microsoft 365 app you’re using. Click Add a title and type in a title for your new Loop component.
The Loop component types
Below are the main Loop components that you can insert into the Microsoft 365 apps. Over time, Microsoft may add more components.
Lists: You can insert a list component in bulleted, numbered, or checklist format. To the right of a new bullet point or number in those list types, type in text for the first item on your list and press the Enter key. A second bullet/number will appear below the first, and you can type in the words for your second item. Repeat until you’ve entered all items for your list.
You set up a checklist the same way, but each item has a circle by it. Clicking the circle will insert a checkmark and cross off its corresponding item to mark it as complete. Clicking the circle again will remove the checkmark and strikethrough.
Paragraph: This inserts a standard text block where you can type words, sentences or multiple paragraphs.
Table: The basic table template has two rows and two columns by default. To insert a new column, move the pointer over a vertical line in the table, then click the plus sign that appears at the top of the line. To insert a new row, move the pointer over the left side of a horizontal line in the table, then click the plus sign that appears.
To fill in a table, click inside each empty cell, then type to fill it in. To change a column header, move your pointer over it, click the down arrow that appears at its right, select Rename, and type in a new name.
Task list: This is technically a table template with preset headers. Fill in the task names, the names of co-workers you want to assign each item to, and the due dates. When a task is complete, click the circle next to it.
Q&A: This is a list on which you and your co-workers can post questions and answer each other’s questions. Click Ask a new question and type in your question. To reply to a question, click Answer below it and type in your answer.
Voting table: This is another table template. It helps you present ideas that your co-workers can vote on.
Progress tracker: This table template helps you track projects that you and your co-workers are collaborating on.
Kanban board and Team retrospective: These are similar templates that help you set up your projects as a series of color-designated cards. They feature the same easy-to-use, robust interface.
Code: In OneNote and Teams you may also see the option to insert a code block, useful for developer collaboration.
Tip: Inside many areas of a Loop component, you can tag a co-worker who’s in your Microsoft 365 organization by typing @ followed by their name. You do this to bring their attention to your Loop component if you want them to view it or collaborate with you on it. They will get a notification through email or Teams.
Adding other elements to your Loop component
If you click the space toward the bottom of your Loop component, the words “Just start typing…” appear. You can type text inside this space if you want to provide more information to append to your Loop component.
Or, if you press the / key, a menu will open that lists several elements that you can add below your Loop component. For example, you can append an additional table or list. But there are other, unique elements that you may find useful:
Date: When you select this, a mini-calendar will open. Click a date on it, and it’ll be inserted as a line of text in your Loop component.
Callout: Select this and type in text that you want to be set off with a lightly shaded background. The callout will also be denoted with a pushpin icon; you can change this icon by clicking it, and on the panel that opens, selecting another icon.
Table of contents: This is a really useful element when you’re working on a Word document. Select this and a table of contents will be generated based on the paragraphs and section headings of the document.
Divider: If you add several elements, insert divider lines between them to make your Loop component look better organized and less confusing.
Headings: You can insert a bold text heading, choosing from three sizes. Or you can insert a collapsible heading: the first line is the heading, and the second and any subsequent ones are regular formatted text. When you click the arrow to the left of the heading, this will “collapse” the lines of regular text, folding them up into the heading. Clicking this arrow again will reveal them again.
Quote: This is simply text that you want to have set off within your Loop component, bringing more attention to it.
Person: This is another way to tag a co-worker. You can select this instead of typing @.
Emoji picker: Obviously, this is for inserting an emoji somewhere in your component. Selecting this will pull up a panel filled with lots of emoji that you can scroll through.
Label: You can select from preset labels (such as Not started, In progress, Completed, etc.) to insert and optionally type in a few words of explanation. To create a set of custom labels, select Add label group, then type in a name for the new group along with the individual label options.
The label in a component can be changed later (e.g., from In progress to Completed) by clicking it and selecting another option from the Label panel.
Image: You can insert an image file that’s stored on your PC’s drive or in OneDrive.
As you become more familiar with these elements, you can skip scrolling through the list of elements by typing / followed by the first letter or two of the element you want. To insert an image, for example, type /i and select Image.
Note that many of these elements can be combined. For example, you can insert a date, emoji, image, or person element inside a table cell. And some elements can be inserted alongside one another, sharing the same line. Go ahead and play around to see which combinations work.
As you add several elements, you can move any of them to a higher or lower spot within the component. Click to select the element, then click-and-hold the six-dot icon to the left of the element. Drag this icon up or down, and then let go where you want the element to be moved.
Sharing your Loop component
Once you’ve assembled your Loop component, you’re almost ready to send it to your co-workers for collaboration. But first, think about who you want to share it with.
Changing your Loop component’s share settings
By default, Loop components are accessible (and editable) by anyone in your organization, but you can change that.
In Outlook: Along the upper left of your Loop component, click your Loop component’s name. (It’ll either be derived from the subject line of your new email or named “Loop component [number].”) On the small panel that opens, select People in [your organization] with this link can edit.
In Teams: Along the top of your Loop component, click People in your organization with the link can edit.
The “Link settings” panel opens.
Below “Share the link with,” you can select:
Anyone
People in [your organization]
Recipients of this message (if the component is in an Outlook email)
People currently in this chat (if the component is in a Teams chat)
Only [channel name] (if the component is in a Teams channel)
People with existing access
Note: Your organization may have disabled one or more of these options and/or set up different default sharing permissions.
If you’d prefer that other people you share with not be able to make changes to your Loop component, below “More settings,” click Can edit and change it to Can view.
Additionally, you can set an expiration date. On this date, the component will no longer be viewable by the people you’ve shared it with. (This feature is currently available only in Teams.)
In OneNote, Whiteboard, and Word: Components embedded in these three apps use the same share settings that you set up for the entire notebook, whiteboard, or document.
To share a OneNote notebook or Word doc, click the Share button at the upper right of the page. Select Manage Access and on the panel that opens, select Start sharing. In Whiteboard, simply click the Share button at the upper right of the page.
On the panel that opens for any of these apps, type in the names, groups, or emails for people that you want to share the notebook, whiteboard, or document with. To change access permissions, click the Can edit (pencil) icon and change it to Can view.
Sending your Loop component
After you’ve finished setting up your Loop component and its access permissions, you’re ready to share it with your co-workers.
In Outlook: Fill out any other areas in the email body before or after your Loop component. When you’re finished composing your email, click the Send button.
In Teams: Click the arrow button at the lower right. Your Loop component will be inserted into your Teams conversation.
In OneNote, Whiteboard, and Word: Once you’ve shared the notebook, whiteboard, or document as described above, your co-workers will get a notification through email.
Resharing your Loop component in other M365 apps
You can copy your Loop component and embed it into other Microsoft 365 apps. Click the Copy component icon (two overlapping rectangles) at the component’s upper right. This will copy a link to it to your PC clipboard.
Here’s what happens when you paste this link in another app or location:
When you paste this link inside a Microsoft 365 app that supports Loop, your Loop component will appear inside that app. So if you create a Loop component in a Teams chat, you can paste it inside a different Teams chat or channel, into a new Outlook email, or into a page in OneNote, Whiteboard, or Word. Your co-workers will be able to contribute to your Loop component in the other app or location.
When you paste the link into an app that doesn’t support Loop, a link to open the component in a browser will appear. Your co-workers will still be able to collaborate on the component, but not directly in the app where you pasted the link.
Interacting with a Loop component
The entire point of a Loop component is for you and your co-workers to collaborate on it. If multiple collaborators are looking at the component at the same time, everyone can see changes happen in real time and who’s making the changes. If someone looks at the component later, they’ll see all changes made earlier.
To change items in a Loop component: Click on the text or other element (date, image, table, etc.) you want to change and make your change.
To add an element to a Loop component: Click the space toward the bottom of the Loop component. The words “Just start typing…” appear. You can type in text or press the / key to see the same list of options covered under “Adding other elements to your Loop component” above.
To add a comment to an element: You and your co-workers can add comments to most elements. Click the element to select it, then click the icon of two speech balloons at the lower left of it. On the panel that opens, click Comment and on the card that opens, type a brief comment and optionally select an emoji.
You can access these additional functions along the top of the Loop component:
To view a Loop component inside a browser: At the upper left of your Loop component, click its name. Your Loop component will open in the Loop app in a new tab in your browser. You can make changes to the Loop component in this browser tab.
To rename the component, click its name in the title bar at the top of the page. This opens a bar that lists the file location of this component — click the name of the component at the end of this bar to rename the component.
To see where a Loop component is being shared: Click the cloverleaf (“Shared locations”) icon at the upper right of the component to see the apps that your Loop component is being shared in.
To add a Loop component to a Loop workspace: Click the cloverleaf (“Shared locations”) icon at the upper right of the component. On the panel that appears, select Add to Loop workspace and select a workspace to add it to.
To copy (a link to) a Loop component: As noted above, you can embed a Loop component you’ve created in various Microsoft 365 apps. You can also embed a component created by someone else who granted you permission to edit it. Click the dual-rectangle icon to copy a link to it to your PC clipboard, then paste it into another app. (See “Resharing your Loop component in other Microsoft 365 apps” above for details about how this works.)
To see who has access to a Loop component: Click the dual silhouette icon.
To change the sharing status of a Loop component (in Teams): Move the pointer over your Loop component and click the pencil icon that appears at the upper right of it. Then along the top of your Loop component, click People in your organization with the link…, then follow the instructions above under “Changing your Loop component’s share settings.”
To delete a Loop component: Move the pointer over the Loop component until a toolbar with emojis appears at the upper right of it. Click the three-dot icon, and on the menu that opens, select Delete.
Note: If you created the Loop component, you can delete it. If you reshare a Loop component that someone else created, you can only delete it from the app that you reshared it on.
To pin a Loop component in Teams: Move the pointer over the Loop component until the toolbar with emojis appears at the upper right. Click the three-dot icon, and on the menu that opens, select Pin.
If you’re in a Teams chat, this will place a horizontal bar with the name of your Loop component along the top of the chat window. Now, no matter how far down the stream of messages or chats has progressed, clicking this bar will jump your view back up to your Loop component.
Pinning a component in a Teams channel is less useful. Instead of pinning a shortcut to the component at the top of the page, it simply places a pushpin icon on the component. It is easier to see that way, but you still have to scroll through the list of posts.
Managing your Loop components
Most of the Loop components you’ve created from within a Microsoft 365 app are stored in your OneDrive and count toward whatever storage limit comes with your Microsoft 365 plan. You’ll find them under “My files” in different folders depending on the app you created them in:
Components created in OneNote are in the OneNote Loop Files folder.
Components created in Outlook are in the Attachments folder.
Components created in Teams chats are in the Microsoft Teams Chat Files folder.
Components created in Whiteboard are stored in the Whiteboards > Components subfolder.
Components created in Word are stored in the Word Loop Files folder.
(Loop components you create in a Teams channel are not stored in your own OneDrive, but in the SharePoint site for the team that houses the channel, under Documents > [channel name].)
In OneDrive, you can manage your Loop components as you would any other file: right-click a component’s file name to see a menu that lets you copy, delete, or rename it; manage its access settings; and more.
This story was originally published in April 2023 and updated in September 2024.