It will take a while to process all of Apple’s many big WWDC announcements on Monday, but one set of improvements I can’t wait to use is the set packed inside the Apple Intelligence-augmented Mail app.
Based on what we’ve been told, these enhancements will really help any enterprise professional, knowledge worker, or frankly anybody who uses Mail.
I ask you, who doesn’t struggle with an over-full inbox and important replies that never get sent because “life” gets in the way? It is also interesting that these improvements mean Apple devices will now offer tools we once had to pay Grammarly through the nose for.
Ease the email pain points
As had been rumored, Apple took the wraps off its plans for artificial intelligence (“Apple Intelligence”) at its big developers conference. We’ll be writing about them for weeks and months to come. I’m focused now on Apple’s improvements to Mail, which should (I hope) help push the tech out of the way and let us get on with what we need to do. They begin to answer the biggest question about email: why isn’t an industry-standard tool that’s packed with our data, and that we use every day, a more actively useful space?
That’s because, in tandem with the company’s on-device contextual intelligence, the information coming into your email box is made more easily actionable with the changes and additions Apple plans.
Together, these should go far toward turning Mail into a central focus space from which users can complete most communication and task-related projects. And by the time the changes ship this fall, I expect Apple will deliver a powerfully integrated mail experience that helps you get work out of the way — so you can focus on making better genmojis.
Just take a look at the powerful features Apple Intelligence promises us.
Powerful and useful writing tools
Apple’s new Writing Tools (available in Mail, other Apple apps and to third-party developers via an API) offer a range of functions. They can rewrite and/or proofread what you have already written; summarize your message; bullet point the key points in your message; create tables and lists; and even change the tone of your mail.
The latter feature lets you take what you’ve written and, at the tap of a button, generate (for example) a professional tone that still says what you want to say, while not being overly informal. Better yet, these tools are available system-wide, and none of what you write leaves your system unless you make and approve a request to use tools provided by OpenAI’s ChatGPT. (Yes, that pre-WWDC rumored tie-up turned out to be correct, as well.)
Smart message prioritization
Apple Intelligence will learn which of your incoming messages are most likely to be important and file them into the Priority Messages view in your Inbox. This will make it much easier to find them.
Apple explains: “On-device categorization organizes and sorts incoming email into Primary for personal and time-sensitive emails, Transactions for confirmations and receipts, Updates for news and social notifications, and Promotions for marketing emails and coupons. Mail also features a new digest view that pulls together all of the relevant emails from a business, allowing users to quickly scan for what’s important in the moment.”
It also handles important notifications the same way.
Summaries in Messages and Mail
Sometimes we have little time, yet whoever we’re interacting with has a great deal of complex information to share; the result is a lengthy email. Sure, the whole message should probably be read, but if you’re short of time, you can use the new summaries in Mail and Messages to get the gist of the entire diatribe. Yes, if you rely completely on summaries you’ll probably miss something, but if you’re in a hurry and just need the basics, Apple Intelligence has your back.
Built-in transcription tools
Apple Intelligence lets you generate summaries and transcripts of audio recordings captured with the Notes app or during a phone call. This is going to be popular with a lot of people — particularly researchers, students, and journalists. (If you’re concerned about privacy, all parties in a phone call will be told this activity is taking place.)
The new audio transcription and summarization features in Notes enable a device to take notes for the user, Apple says. This lets them “stay present in a situation where they need to capture details about what’s happening,” which means you can stay focused in that meeting and still have a useful and usable aide-mémoire.
Natural Language search
Have you ever spent time searching Mail for a message from a specific person that contains something you vaguely recall, but which you just can’t find using standard search in the app? Apple Intelligence brings natural language search, which should make it easier to find messages, documents, and other items when you can’t quite recall what they were called or where they are.
Contextual Siri
Siri now (finally?) gains semantic search. That means it will be able to understand information and relationships it couldn’t decipher before. Apple seems to promise this extends to understanding and creating language and images, acting across apps, and simplifying and accelerating everyday tasks. You can switch between text and voice to communicate with Siri in whatever way is most appropriate.
What about Grammarly?
Many users rely on Grammarly to improve their writing. That seems like a less essential investment now that Apple’s writing tools exist, and not only do they exist, but they do so at no charge. One more thing? Apple’s writing tools do not collect the writing work you’ve done. To be fair, Grammarly says it protects your data; but as every hacker knows, the best data you can have if you want to stay secure is data that does not exist. Apple delivers.
Whatever next?
Apple execs, over the course of their presentations, introduced a host of improvements, many of which will, I think, improve the user experience across the iOS, macOS, and iPadOS platforms. Many of the richer experiences the company highlighted mean its entire platform ecosystem (with the weird and probably temporary exception of visionOS) is now an AI platform. It also means that whatever Apple can’t yet do on the device can be outsourced easily to partners like OpenAI.
The deal here seems to be that Apple maintains its hold on the intention and the experience, while also delivering access to genAI tools, preserving user privacy, and massively enhancing the email experience to the benefit of all knowledge workers. The integrated AI on your device can deliver highly personalized responses based on your specific data, without anyone other than you and the device knowing anything about you.
“It’s personal, powerful, and private — and integrated into the apps you use every day,” wrote Apple CEO Tim Cook.
That privacy promise is powerful. There is, though, a catch.
As we had heard, you’ll need a Mac or iPad with an M1 or newer chip, or an iPhone 15 Pro or newer, to use Apple’s AI features, though anyone with a device running iOS 18 will probably enjoy the little blip they might see appear on screen when pressing a side button. Apple’s WWDC announcements show its continued attention to detail, as does the way it has chosen to bring genAI to the rest of us while working to protect our privacy.
It looks like artificial intelligence (AI) will be at the heart of Apple’s announcements at WWDC this week. Now, we think we also know which devices will be needed to support the new beta features, if you’re a developer entitled to install the new operating systems.
What is expected
Apple is preparing to shake a little of its reality distortion dust. That magic powder will be used to show how the company has already placed AI inside its devices, and to show how with the addition of generative AI (its own and from partners) it now offers the world’s first multi-platform (PC, smartphone, tablet, watch, spatial) AI computing ecosystem.
Apple will call it “Apple Intelligence,” according to Bloomberg.
The company might not fully realize its ambitions, but this is the direction in which it is going. It is also possible that not every feature Apple wants to talk about will be fully active yet — and some might not reach our devices until 2025.
But they’re certainly coming. Apple has, after all, been informed by cutting-edge AI research since the company’s inception, though there’s no doubt that the rate of innovation has accelerated incredibly rapidly in the last two to three years.
The current thinking: Apple now sees this AI work as transformational and thinks it will form the foundations for another decade of product innovation — the USP of which will likely be Apple’s hardware and software excellence all supported by AI and trusted cloud services.
AI improvements will include…
Along with news of partnerships with third-party genAI developers, these are some of the improvements coming:
Mail becomes far smarter. Not only will it be able to recommend genuine-seeming pre-written replies to help you get through email faster, it will also be capable of automatically filtering messages into specific categories and more. All of these features will be incredibly welcome, and some extend to Messages.
Automatic on-device transcription tools that should make most enterprise professionals happy (most of the time). You’ll never need to argue over what was discussed during a meeting again.
AI-powered tools in Keynote and Pages to help you swiftly put together presentations and documents.
Safari will summarize web pages, Spotlight improves, new health and AI features will appear, notifications and news reports will be summarized, and Siri will get smarter and gain more capabilities.
AI-boosted image editing in Photos and other additions to most of Apple’s own apps.
These tools and features effectively represent a quantum leap forward for what Apple’s hardware can achieve. Those AI-powered gadgets that raised a little attention earlier this year were interesting, but these tools get even better when you can access some of them via the Apple Watch you already wear on your wrist.
What you’ll need
Not every Apple Watch, iPhone, or other device is expected to be able to run some or all of the new AI features; the current speculation is that only relatively recent Apple Silicon processors will make the grade.
If that is the case, it’s probably because handling the computational load created by genAI requests is extremely intensive. It takes a lot of computing power to ask ChatGPT for a recipe suggestion, and even some relatively recent devices might not be up to the task.
What this means is that to access these services, you will need to be running an iPad or Mac with an M1 chip, or later. These services will also require at least an iPhone 15 series device. I imagine this will spark an upgrade surge as Apple customers seek to try these new services.
In use, Apple is thought to have developed an on-device computational mechanism that will choose whether to process specific tasks on the device or via M2-based servers it is now thought to be putting into place. You’ll be able to opt in to using these services, and the company will focus on privacy.
That’s the speculation so far; we’ll learn the real deal soon at WWDC 2024.
Microsoft will make its Windows Recall feature opt-in rather than on by default, following criticism from security and privacy experts.
With Recall, currently in preview, Microsoft wants to let Copilot+ PC users find and retrieve information across any app they’ve accessed. It does so by taking screenshots every few seconds, creating a searchable timeline of everything a user has interacted with on their computer.
Microsoft Executive Vice President Yusuf Mehdi likened Recall to “photographic memory” when the feature was unveiled in May.
However, the ability to record and store all user data that appears on screen — anything from passwords to confidential messages — drew widespread criticism.
Experts claimed Recall would create a treasure trove of data for hackers, with some comparing it to keylogger malware. The UK’s Information Commissioner’s Office, a privacy watchdog, said it had written to Microsoft to “understand the safeguards in place to protect user privacy.”
On Friday, Microsoft announced it would make changes to Recall in response to feedback. Along with turning off the feature by default, Microsoft said users will have to use Windows Hello biometric authentication to enable Recall.
A “proof of presence” is also needed to search in Recall or view a timeline.
In addition, Microsoft will add “just in time” decryption protected by Windows Hello Enhanced Sign-in Security (ESS). This means Recall snapshots will “only be decrypted and accessible when the user authenticates,” Pavan Davuluri, corporate vice president for Windows and Devices, said in a blog post, providing “an additional layer of protection to Recall data.”
The changes will be made before the Recall feature becomes available on Copilot+ PCs beginning June 18.
“The introduction of an opt-in option has now given users more control, allowing them to choose to activate the Recall feature,” said Pareekh Consulting CEO Pareekh Jain, who expects early adopters will turn on the Recall feature when it launches. “As more people start trusting this feature and their concerns about misuse are addressed, then the majority may choose to opt-in and benefit from it,” he said.
Nvidia last week became the third company ever to exceed a $3 trillion market capitalization. The company’s valuation increased by a trillion dollars in just the last three months.
Nvidia is the top chipmaker fueling the generative AI (genAI) boom, and its over-the-top success feels like a fluke, a blip, a surge based on a bubble.
But I think that’s wrong. In fact, I think the company might be wildly undervalued.
Let’s compare. At the time I wrote this, Apple’s market capitalization was around $3.003 trillion and Nvidia’s was even higher: $3.012 trillion.
The valuation of a company is based on its share price, which itself is based in large measure on the perception of its earning potential in the future.
Apple’s revenue in 2023 was $383 billion (a 3% year-over-year decline). Here’s a question: How does Apple double that revenue in the coming years? Sell more iPhones? Add AI to iPhones? Sell more expensive iPhones? Push Apple Vision Pro sales? Deliver more iPads? Add new financial services?
I just don’t see a path for Apple to continue its last decade of growth into the future.
Nvidia, on the other hand, has massive future growth potential. The future of AI processing, the future of self-driving cars, the future of robotics, the future of industrial automation — if you think these realms will expand in years ahead (and they almost certainly will) — then Nvidia’s sales will expand accordingly.
One Nvidia business initiative alone represents the transformation of a $50 trillion industry, according to Jensen Huang, Nvidia’s cofounder and CEO: industrial robots and robotic systems.
Huang, a tech rock star/Steve Jobs-like figure in Taiwan, the country of his birth (here he is signing a groupie’s chest last week), gave a surprisingly Jobsian keynote at 2024 Computex.
He even echoed Jobs’ “iPod, phone, internet communicator. Are you getting it?” (Huang’s version was: “computer, acceleration libraries, pre-trained models.”)
During the keynote, he laid out a breathtaking vision.
The Physical AI concept
Huang unveiled a groundbreaking concept called “Physical AI,” which he described as “AI that understands the laws of physics.”
His vision for “Physical AI” involves a complex virtual environment that simulates real-world physics (gravity, inertia, friction, temperature, and other factors) and is populated with virtual people, objects, and surroundings, say, a factory floor. Within it, exact digital replicas of physical robots and robotic systems “learn” by testing thousands of options and scenarios, retaining the solutions in software that will then control actual robots and robotic systems.
One output of a “Physical AI” system is generalist embodied intelligent AI agents that can operate in both virtual and physical environments.
This is all based on the “digital twin” idea I told you about more than a year ago. In a nutshell, digital twin environments enhance factories in nine major ways:
1. Real-time monitoring and analysis
2. Predictive maintenance
3. Production optimization
4. Quality control and defect detection
5. Enhanced decision-making
6. Training and safety
7. Space and workflow planning
8. Lifecycle management
9. Simulation and testing
The idea could revolutionize the field of robotics and autonomous systems. At the heart of this innovation lies Nvidia’s Omniverse, a powerful platform that combines real-time physically based rendering, physics simulation, and genAI technologies. Nvidia calls Omniverse “the operating system for Physical AI.”
The “Physical AI” concept represents a big change in the way robots and autonomous machines learn. In the past, humanoid, factory, and other kinds of robots have been tested in physical labs. Balancing robots start out tethered, to protect them when they fall as they learn to navigate. This painstaking process involves countless hours of trial and error. (Here’s what that looks like at Boston Dynamics.)
In a “Physical AI” scenario, a similar process takes place with a robot’s digital clone or twin in virtual space. The trial and error is radically accelerated without risk to people or equipment, and a vastly larger number of attempts can be made during the training. Once the robot learns or is programmed to navigate the virtual world flawlessly, that software is applied to a real robot, which can be fully updated with all that experience and “knowledge.”
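Stripped to its essentials, that sim-to-real loop can be sketched in a few lines of code. This is a schematic illustration only: the Simulator, Policy, and Robot classes below are invented stand-ins, not Nvidia's Omniverse or Isaac APIs, and the "training" is a toy random search rather than real reinforcement learning.

```python
import random
from dataclasses import dataclass

# Schematic sim-to-real training loop. Simulator, Policy, and Robot are
# illustrative stand-ins, not Nvidia's Omniverse or Isaac APIs.

@dataclass
class Policy:
    """The control software being trained (here, a single gait parameter)."""
    gait: float = 0.0

    def perturbed(self) -> "Policy":
        # Propose a small random variation to try in simulation.
        return Policy(self.gait + random.gauss(0, 0.05))

@dataclass
class Simulator:
    """A physics-accurate digital twin of the robot and its environment."""
    optimal_gait: float = 0.73  # unknown to the policy; stands in for "walks well"

    def run_episode(self, policy: Policy) -> float:
        # Reward grows as the policy approaches the optimum, plus sensor noise.
        return -abs(policy.gait - self.optimal_gait) + random.gauss(0, 0.01)

class Robot:
    """The physical machine that receives the finished policy."""
    def deploy(self, policy: Policy) -> None:
        print(f"Flashing gait parameter {policy.gait:.2f} to the real robot")

sim, best = Simulator(), Policy()
best_reward = sim.run_episode(best)

# Thousands of virtual trials: no broken hardware, no risk to people.
for _ in range(5_000):
    candidate = best.perturbed()
    reward = sim.run_episode(candidate)
    if reward > best_reward:
        best, best_reward = candidate, reward

# Only the learned behavior crosses into the physical world.
Robot().deploy(best)
```

The point of the sketch is the shape of the loop: the expensive, risky trial and error happens entirely against the digital twin, and only the finished control software is ever flashed to hardware.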
In the “Physical AI” version of Omniverse, digital twin factories can train the robots themselves, and model the development of robots that work together with human workers for greater efficiency and safety, according to Nvidia.
The Omniverse platform integrates several Nvidia technologies, including Metropolis vision AI and Isaac AI for robot development, along with Isaac Manipulator and Project GR00T for simulation and testing.
Look at the stars aligning for Nvidia. The company leads the industry in AI hardware and software, and solutions for data centers, cloud computing, and edge computing. It sells a lot of this stuff, and wants to sell a lot more.
For the foreseeable future, Nvidia will continue to dominate the market for AI chips used for training and processing AI chatbots and a thousand other AI applications.
In the realm of automation and robotics, Huang claims that just about everything will become an AI-controlled robot in the future, from cars to restaurants to tractors, and that those robots will be built by robots in robotic factories. That claim is likely to be realized.
Huang’s further claim that AI-automated robotic systems will be most cost-effectively trained and optimized in digital twin “Physical AI” environments also checks out.
But here’s the mind-blowing part. Nvidia doesn’t have any competition in the “Physical AI” space. And the barriers to entry are gigantic.
The only conclusion: all roads lead to massive future upside growth for Nvidia.
Nvidia will make the chips that power the robots, the chips that power the “Physical AI” environments, and the chips that power self-driving cars, along with the software, platforms, and AI that will enable companies to buy and use all those processors.
It’s Nvidia’s world now. Figuratively, and also digitally. (Huang also unveiled the concept of “Earth 2” — a “digital twin of the Earth” that would enable humanity to “predict the future of our planet.”)
Nvidia’s “Physical AI” idea is the killer concept of our generation, the foundational model upon which our robotic, automated future will be built.
Trust. It’s the critical word when talking about artificial intelligence in just about all of its forms. Do end-users or executives trust what generative AI (genAI) says? Presumably they do or they would never bother using it.
But is AI indeed trustworthy? (I’ll give you the CliffsNotes version now: No, it’s not.)
But before we even need to consider how trustworthy genAI results are, let’s start with how trustworthy the executives running the companies that create AI algorithms are. This is a serious issue because if enterprise IT executives can’t trust the people who make AI products, how can they trust the products? How can you trust what AI tells your people, trust what it will do with their queries, or be OK with whatever AI wants to do with all of your sensitive data?
Like it or not, when you use their software, you’re saying you’re comfortable with delivering all of that trust.
Let’s start with OpenAI. These days, maybe the quotation marks belong around “Open.” For a company that doesn’t share the sources it uses to train its models, calling itself Open stretches the definition of that word. But I digress.
A study published in the journal Artificial Intelligence and Law, reported on by LiveScience, offered a detailed look into OpenAI’s claims that GPT-4 did amazingly well on the bar exam. Stunner: It was a lie. A statistical lie, but a lie nonetheless.
“Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam,” the publication reported. “Although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.
“When examining only those who passed the exam — i.e. licensed or license-pending attorneys — GPT-4’s performance is estimated to drop to 48th percentile overall, and 15th percentile on essays.”
It also noted that GPT-4, by its very nature, cheated. “Although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE open-book, indicating that UBE may not only be an accurate proxy for lawyerly competence but is also likely to provide an overly favorable estimate of GPT-4’s lawyerly capabilities relative to humans,” the publication said.
It also concluded that GPT-4 did quite poorly in the written sections, which should come as no surprise to anyone who has asked ChatGPT almost anything.
“Half of the Uniform Bar Exam consists of writing essays and GPT-4 seems to have scored much lower on other exams involving writing, such as AP English Language and Composition (14th–44th percentile), AP English Literature and Composition (8th–22nd percentile) and GRE Writing (54th percentile). In each of these three exams, GPT-4 failed to achieve a higher percentile performance over GPT-3.5 and failed to achieve a percentile score anywhere near the 90th percentile,” the publication noted.
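The underlying statistical point is easy to demonstrate with toy numbers. The sketch below uses entirely made-up score distributions (not the actual exam data) to show how the same fixed score can land at a very different percentile depending on which population it is compared against:

```python
import numpy as np

rng = np.random.default_rng(1)

def percentile_of(score: float, population: np.ndarray) -> float:
    """Share of the population scoring at or below `score`, as a percentile."""
    return 100 * np.mean(population <= score)

# Made-up score distributions: all test-takers (including many low-scoring
# repeat takers) versus only those who cleared a hypothetical passing cutoff.
all_takers = rng.normal(loc=265, scale=25, size=10_000)
passers = all_takers[all_takers >= 290]

model_score = 298  # a fixed, hypothetical score for the model

print(f"vs. all takers:   {percentile_of(model_score, all_takers):.0f}th percentile")
print(f"vs. passers only: {percentile_of(model_score, passers):.0f}th percentile")
```

With these invented numbers, the same score sits near the 90th percentile against all test-takers but only around the 40th percentile against those who cleared the cutoff, which is the shape of the discrepancy the researchers describe.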
Then there’s the matter of OpenAI CEO Sam Altman, who last year was briefly the company’s former CEO. One of the board members who fired Altman has finally gone public and explained her rationale: Altman lied to the board — a lot.
“The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary, was coming first — over profits, investor interests, and other things,” OpenAI former board member Helen Toner said on “The TED AI Show” podcast, according to a CNBC story. “But for years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.”
Toner said Altman gave the board “inaccurate information about the small number of formal safety processes that the company did have in place” on multiple occasions. “For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or misinterpreted, or whatever. But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us, and that’s just a completely unworkable place to be in as a board — especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money.”
Let’s put this into context. Since the first company hired its first CIO, IT execs and managers have struggled to trust vendors. It’s in their nature. So, a lack of trust regarding technology is nothing new. But AI, and specifically genAI in all of its forms, is being given capabilities and data access orders of magnitude more extensive than any software before it.
And we are being asked to grant this all-but-unlimited access to software that’s been trained on an extensive and secret list of sources — and what it does with the data it captures is also vague and/or secret.
What protects the enterprise from all of this? Vendor-delivered guardrails or, sometimes, in-house-crafted IT-written guardrails. And in the least surprising development ever, companies are now creating applications that are explicitly designed to circumvent guardrails. (They work quite effectively at that task.)
Abliteration is a technique “that can uncensor any LLM without retraining. This technique effectively removes the model’s built-in refusal mechanism, allowing it to respond to all types of prompts. This refusal behavior is mediated by a specific direction in the model’s residual stream. If we prevent the model from representing this direction, it loses its ability to refuse requests. Conversely, adding this direction artificially can cause the model to refuse even harmless requests.”
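To make that quoted description concrete, here is a minimal numerical sketch of the idea using made-up activation data rather than a real model. The array shapes, the single layer, and the difference-of-means estimate are illustrative assumptions, not a faithful reimplementation of any particular abliteration tool.

```python
import numpy as np

# Toy stand-ins for hidden-state activations collected at one layer of an LLM.
# In a real abliteration pass these would come from running the model on prompt
# sets; here they are random data purely to illustrate the math.
rng = np.random.default_rng(0)
d_model = 512
refused_acts = rng.normal(size=(200, d_model))   # activations on refused prompts
harmless_acts = rng.normal(size=(200, d_model))  # activations on benign prompts

# 1. Estimate the "refusal direction" as the difference of mean activations.
refusal_dir = refused_acts.mean(axis=0) - harmless_acts.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

# 2. Ablate: project that direction out of any hidden state, so the model can
#    no longer "represent" refusal along it.
def ablate(hidden_state: np.ndarray, direction: np.ndarray) -> np.ndarray:
    return hidden_state - np.outer(hidden_state @ direction, direction)

# 3. Conversely, adding the direction back in pushes activations toward refusal,
#    which is why the same trick can make a model refuse harmless requests.
def amplify(hidden_state: np.ndarray, direction: np.ndarray, strength: float = 4.0) -> np.ndarray:
    return hidden_state + strength * direction

batch = rng.normal(size=(8, d_model))            # a batch of hidden states
cleaned = ablate(batch, refusal_dir)
print(np.allclose(cleaned @ refusal_dir, 0.0))   # True: the direction is gone
```

In practice, the equivalent projection is applied to a model's real hidden states or folded into its weights, which is why no retraining is needed.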
That means that guardrails as protection mechanisms are relatively easy to overrule or circumvent. And that brings us back to the issue of trust in these companies and their leaders.
As more reports come out, that trust is getting harder and harder to deliver.
DuckDuckGo has released an AI-powered portal to some of the most popular chatbots and said it will not disclose or otherwise use what users type into the window to train the large language models that underpin generative AI (genAI) tech.
“Chats are private, anonymized by us,” DuckDuckGo lead designer Nirzar Pangarkar wrote in a blog post. “Our mission is to show the world that protecting your privacy online can be easy.”
DuckDuckGo AI Chat currently allows users to access four popular AI chatbots: OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, Meta’s Llama 3, and Mistral’s Mixtral 8x7B (the latter two are open-source models). The optional AI Chat feature is free to use within a daily limit and can easily be switched off.
The feature can be accessed through duck.ai or duckduckgo.com/chat on a user’s search results page under the Chat tab, or via the !ai and !chat bang shortcuts. “They all take you to the same place,” Pangarkar said.
The company is also working to add access to more chat models and browser entry points. “We’re also exploring a paid plan for access to higher daily usage limits and more advanced models,” Pangarkar said.
In its blog post, the company cited a recent report from the Pew Research Center showing adults in the US have a negative view of AI’s impact on privacy, even as they feel more positive about the technology’s potential impact in other areas. About 80% of those familiar with AI said its use by companies will lead to personal information being used in ways people won’t be comfortable with, or in ways it wasn’t originally intended to be used.
At the same time, Pew research also shows a steady uptick in the share of US adults using chatbots for work, education, and entertainment.
Last year, the search engine released an AI-powered instant answer service called DuckAssist as part of a larger plan to integrate AI across its lineup. DuckAssist uses technology from ChatGPT maker OpenAI as well as Anthropic to generate its own answers to certain types of question.
DuckAssist, however, isn’t a chatbot; the company insists it was an upgrade to its Instant Answers feature, which allows users to submit a query and get comprehensive answers without the need to click on a result.
“We believe people should be able to use the Internet and other digital tools without feeling like they need to sacrifice their privacy in the process,” Pangarkar said. “So, we meet people where they are, developing products that add a layer of privacy to the everyday things they do online. That’s been our approach across the board — first with search, then browsing, email, and now with generative AI via AI Chat.”
Reports that Chinese companies are exploiting a loophole in export controls that limit the sale of high-performance computer chips to China could prompt the Biden administration to crack down on the sale of AI-related cloud computing services to Chinese companies.
The US maintains a significant lead in AI hardware development, which is crucial for training AI models.
Semiconductor export controls already limit sales of high-end GPUs that can be used in the development of artificial intelligence technologies. These US export controls for AI technology, first introduced in October 2022 and updated in October 2023, are designed to balance national security interests with enabling wider research and development work.
General-purpose AI software, untrained algorithms, and datasets without military applications are out of scope of the controls.
The rules — administered by the US Department of Commerce’s Bureau of Industry and Security (BIS) — also block the export of semiconductor manufacturing equipment to prohibited countries including Russia, North Korea and China.
Cloud computing loophole
However, the existing rules omit the provision of chips-as-a-service, or cloud computing, creating a potential loophole for Chinese companies to benefit from the chips as long as they remain on US soil.
ByteDance, the Chinese firm behind TikTok, is reportedly renting US-based servers containing Nvidia’s H100 chips from Oracle’s cloud service. The cloud-based platform is being used to train AI models.
Once developed, it would be difficult to block the export of AI models from the US to China, US-based cloud providers and a former Nvidia employee told The Information.
Two smaller American cloud providers reportedly declined offers to rent servers with Nvidia’s H100 chips to ByteDance and China Telecom because the proposed deals went against the spirit of export control rules.
The Biden administration is reportedly preparing to close this loophole by restricting Chinese companies’ access to US cloud computing services.
As far back as July 2023, it was reported that the US Department of Commerce was considering requiring cloud service providers such as AWS, Microsoft Azure, and Google Cloud to seek permission before providing cloud computing services to companies linked to China or other restricted countries.
A related bill was referred to the House Foreign Affairs Committee, where it remains, although the revelations of The Information’s investigation may prompt renewed action.
However, some caution against a blanket ban.
Michael Robert, a cybersecurity specialist, AI expert, and senior technical contributor at GTA Boom, told Computerworld that while national security concerns are valid, the wider impact should be weighed: complete bans could backfire by damaging goodwill, so more tailored approaches to regulation are worth considering.
“Mandating disclosure of customers and workloads involving controlled technologies ensures oversight, for example, while leaving room for cooperation,” Robert said.
Even if the US tightens up export regulations to restrict Chinese companies’ access to AI development platforms in the cloud, it will also need to grapple with companies such as Alibaba and Tencent that run US-based data centres, both of which are reportedly in the market for high-end GPU rigs.
WWDC 2024 may yet define Apple’s next decade, so most in tech are tuning in to see what the company says. Here is a brief rundown of what we are hearing now:
Apple Intelligence
The channel making the most noise this year is the one broadcasting that Apple will unleash its take on artificial intelligence (AI) and generative AI (genAI) across its products. We’ve written a lot about these claims, but in brief expect:
Apple’s own on-device genAI tools, likely including on-device transcription, powerful Xcode code-completion tools, automated replies and categorization in Messages and Mail, and more.
Bloomberg’s Mark Gurman predicts an add-on AI service called “Apple Intelligence,” which will likely beef up service revenues. (Morgan Stanley analyst Erik Woodring earlier predicted a fee-based service of this kind could bring around $8 billion to Apple’s service revenues.)
Spatial computing
Apple will also apply AI to visionOS. To do so, it will lean into its existing technologies in machine vision intelligence, augmenting these with LLM-based contextual understanding and new AI-based user interface developments.
Apple gave a glimpse of some of these before the event, when it explained how it will become possible to control its devices using just a glance. A new Reader Mode on the iPhone will make it possible to have documents read to you, which also makes sense as a visionOS feature — think Geordi La Forge. (A new Live Captions feature that puts real-time captions in your view will also feature here.) Developers also expect new visionOS APIs and development tools to extend the versatility of their apps.
And Apple will also introduce more of its own fully native apps, perhaps with AI-generated fully immersive environments, which would be a game changer down the line.
Vision Pro goes international
Apple apparently doesn’t plan to introduce any hardware at the show, but I’m not confident of that; I distinctly remember hearing that before, only to be surprised by a new Mac. What I do think is coming is an expansion into international availability for Vision Pro, which is something developers hoping to work on the platform need. At present, developers cannot get hold of these devices outside the US.
Software updates
Apple will introduce updated versions of all its operating systems: iOS, macOS, iPadOS, and the rest. These will include simplified Settings/System Preferences apps and more customization options on the Home screen, while most of Apple’s existing apps will be augmented with AI features. Photos could get AI-based editing tools, similar to Magic Eraser on the Pixel. This could also extend to the Health app, which may get better at making active recommendations to Fitness+ subscribers.
And maybe, just maybe, Siri on the HomePod will get a little better at telling the difference between Neubauten and Parton.
A Passwords app
One interesting claim sees Apple break out its own Passwords app for Macs, iPads, and iPhones. Based on iCloud Keychain, this would let users import passwords from rival services such as 1Password and LastPass. It might be best to see this as an improved UI for a service Apple already provides via its Passwords function in Settings. It is expected the Passwords app will also work on Windows and visionOS — but will Apple also introduce the app for Android?
A new Calculator?
My favorite Calculator tip may soon be history, as there are reportedly plans to overhaul Apple’s Calculator app. Improvements could include Notes app integration, improved unit conversions, and a sidebar where you can track or fix errors or go back to earlier in the sum.
Under the wire
One snippet of news that hasn’t picked up too much attention is the discovery of Thread radios inside Apple’s most recent iPads and Macs. We don’t yet know why this support is there, but as Thread is the primary wireless protocol for the unified smart home standard Matter, I’ll hazard a guess it forms the foundation for improved smart home functionality.
What about partnerships?
A partnership with genAI company OpenAI is one thing, but as Apple battles regulators everywhere, might the company intend to open up a little more? If it does, it could benefit from offering some products and services across multiple platforms. The expected announcement of RCS support in Messages could be the thin end of a wedge that also includes Apple TV+ for Android. Is it significant that this will be the first year Apple makes all its WWDC videos available on YouTube?
WWDC 2024 takes place June 10, beginning with a keynote speech at 10 a.m. PT/1 p.m. ET. The keynote address will be available to stream on apple.com, the Apple Developer app, the Apple TV app, and the Apple YouTube channel.
Microsoft’s cloud storage, OneDrive, works both as a web app that you use through a browser and as a storage drive integrated into File Explorer in Windows 10 and 11. When you upload a file or folder to the OneDrive web app, it becomes available on your Windows PC through File Explorer, and vice versa. You can also access it on your smartphone or tablet (via the OneDrive app for Android, iPhone, or iPad) and even on a Mac (via the OneDrive Mac app) if any of these devices are signed in with the same Microsoft account.
OneDrive is handy when you’re collaborating with others, too. You can share files or folders in your OneDrive with anyone by sending them a web link to it. If it’s a Microsoft Office file, then you and others can collaborate on it in real time in the Excel, PowerPoint and Word web apps. Users with certain Microsoft 365 subscriptions can also use the desktop versions of these Office applications to work together on the file.
Microsoft recently introduced a new interface to the OneDrive web app that includes several features not available in OneDrive in Windows. We’ve covered how to use OneDrive in Windows in a separate guide. This story explains how to work with OneDrive in a web browser and make the most of the new interface.
We’ll focus on using OneDrive with a Microsoft 365 subscription for business; the version for personal use is similar but with fewer features. Also note that while you can use OneDrive for Web in any modern browser, some features seem to work better in Chrome or Edge.
Get started with OneDrive
To use OneDrive, you need a Microsoft account. If your company uses Microsoft 365 or you have an Outlook.com account, then you have a Microsoft account. If not, you can sign up for one for free.
With a free Microsoft account, you get 5GB of OneDrive storage. You can upgrade to 100GB storage or more by subscribing to a Microsoft 365 plan, starting at $2 per month. Business customers can subscribe to a 1TB OneDrive for Business plan for $5 per user per month or opt for a Microsoft 365 plan. (See all the Microsoft 365 plans for home, small business, and enterprise use.)
Get to know the new OneDrive for Web interface
The first step is to sign in to OneDrive with your Microsoft user account. If you’re already signed in to your account, you can go directly to the OneDrive for Web app in your browser.
After you sign in, the OneDrive for Web home screen is shown. Along the top of the main pane are cards that highlight files that may be important to you, as determined by Microsoft 365’s AI. This may include documents you’ve been working on with co-workers, items that you open frequently, or projects that someone has tagged your name to. You can click the action button on a card (e.g., Open or Go to task) to open that item inside the corresponding app in a new browser tab.
In the left column, right below your name or username, you’ll see that Home has been selected. This view lists the files you’ve recently opened in the main pane, whether they’re your own files or they’ve been shared with you. You can see at a glance who owns each file and recent actions taken on each. Files with the most recent activity appear first.
Above the main pane is a row of buttons that let you filter items in the files list by file type (Word, Excel, PowerPoint, or PDF). You can also type a word into the “Filter by name or person” box at the right to search for filenames containing that word.
Below Home in the left column are several options that let you display your files in the main pane in the following ways:
My files: This view lists all your files and folders. When you click a folder name to open it, the files in it are shown in the main pane. To navigate out of the folder, click My files again in the left column, or click a folder name in the “breadcrumb” hierarchy path that’s shown above the main pane. For example, “My files > Pictures > Research” indicates that you’re viewing files in the Research folder. You can click either Pictures or My files to go back up the folder hierarchy.
Shared: These are files that you are sharing with others and that other people are sharing with you. As in Home view, you can use the buttons above the main pane to filter by document type, and/or you can type into the “Filter by name or person” box to search for filenames containing a particular word or files shared by a particular person.
Favorites: This shows all the files or folders that you’ve marked as favorites in the main pane. You can favorite your own OneDrive files and folders as well as those shared with you in OneDrive.
To favorite a file or folder, move the pointer over the file or folder and click the star icon that appears to the right of it. Click the star icon again to un-favorite the file or folder.
Recycle bin: Here you’ll see files you’ve deleted from your OneDrive.
People: In this view, you see a list of people who have shared files with you, with these files listed to the right. This is handy when you remember who shared a file with you but not when or what the filename is. To quickly zero in on a person, type their name in the “Filter by person” box at the upper right.
Meetings: These are files that were shared in Microsoft Teams meetings that you started or took part in. Files attached to scheduled meetings that haven’t happened yet will also be listed.
Media: A new view that’s rolling out gradually (so you may not have it yet), Media lets you browse the images and videos you’ve stored in OneDrive.
Quick access: When you open files and folders stored in SharePoint document libraries, those libraries are added to the Quick access list; click the name of any library to open it and browse its files and folders.
Note that OneDrive for Web is integrated into the new Outlook for Windows app, the Outlook web app, and the new Microsoft Teams app (for Windows, macOS, and web). The OneDrive icon is on the vertical toolbar at the left edge of each application. Clicking it opens OneDrive in the main pane of Outlook or Teams, with the same layout as described above. The new look is also present when you open OneDrive in the microsoft365.com portal (formerly office.com).
Store or create files and folders in OneDrive for Web
To upload a file from your computer to OneDrive, click the large + Add new button at the upper left. From the menu that opens, click either Files upload or Folder upload. The web browser will open a file manager for you to select the files or folders on your PC that you want to upload to OneDrive.
From the + Add new menu, you can also select Folder. OneDrive prompts you to type in a name for the new folder. You can optionally choose a color for your folder as well, then click Create. The folder will appear in the main pane.
The + Add new menu also has options to create a Microsoft 365 file, such as an Excel spreadsheet, PowerPoint presentation, or Word document. When you select one of these, the web version of that Microsoft 365 app opens in a new browser tab with a blank spreadsheet, presentation, document, etc. inside it, so you can get right to work creating content in it. This new file will immediately appear in your OneDrive.
Tip: OneDrive will upload or create files and folders wherever you happen to be in your folder hierarchy when you click + Add new. If you’re on the home screen in OneDrive, for example, it will place the file or folder at the top level of your OneDrive storage. If you want the file or folder to be within another folder, click My files and navigate to that location first, then click + Add new. (You can also drag-and-drop files and folders to move them to another location, just as you would in Windows File Explorer or Mac Finder.)
When you add a file or folder to OneDrive for Web, it is stored in the cloud and you can also access it through File Explorer in Windows 10 or 11 or through Finder in macOS (if you have the OneDrive app installed). If you rename, move, or delete a file or folder in OneDrive via either the web app or File Explorer/Finder, you’ll see those same changes in the other interface; they’re just two different ways to access the same content stored in the cloud.
Access your OneDrive files offline
Until now, if you wanted to make files stored in OneDrive available on your computer without an internet connection, you couldn’t do it from the web interface. Instead, you needed to go to File Explorer or Finder on your computer, right-click the file, and select Always keep on this device; that action downloads the file to your computer and stores it locally so it’s available when you’re offline.
However, Microsoft recently announced a new offline mode in OneDrive for Web that lets you store files locally directly from the web interface — and even lets you use the Home, My files, Shared, Favorites, People, and Meeting views offline. This feature is just beginning to roll out to OneDrive business accounts.
Open OneDrive files on the web or in a desktop app
To open a Microsoft 365 file, such as a document, presentation, or spreadsheet, from OneDrive, just click it. By default, it will open in a new browser tab in the corresponding web app. Click an Excel file, for example, and the spreadsheet will open in the Excel web app.
If you’re subscribed to a Microsoft 365 plan that lets you use its desktop applications, you can open the spreadsheet in the Excel desktop app installed on your PC. To do this, right-click the Excel file in OneDrive for Web. From the menu that opens, select Open and then Open in app. (Note that ad-blockers and some browser privacy settings may interfere with this feature.)
This also works for many other file types. For example, you can right-click a PDF and select the desktop app that you use to edit PDFs (such as Adobe Acrobat).
Create file shortcuts in OneDrive for Web
You can create a shortcut to any file you have access to in OneDrive. For example, you might want to create a shortcut to a file shared with you by another person. Or say you want to have easy access to files located in several different folders, but you don’t want to move the files or make copies of them. You can create several shortcuts organized in a single folder.
A shortcut is treated as its own file in OneDrive. You can delete it or rename it, but these actions won’t rename or delete the file that it’s linked to. Think of a shortcut as a file that works like a web link. When you click it, it opens the file that it’s linked to.
To create a shortcut to a file, move the mouse pointer over the file and click the three-dot icon that appears next to its name or over its thumbnail. On the menu that opens, select Add shortcut and select the folder where you want the shortcut to be saved.
Share files or folders in OneDrive for Web
Move the mouse pointer over the file or folder you want to share and click the icon of the arrow over the square.
The Share panel opens. From this point on, sharing works exactly as it does in OneDrive in Windows.
Note that if you’re using a Microsoft 365 account that’s owned by your company, the options for sharing a web link to a file or folder in your OneDrive may be restricted by your IT administrator. Users with individual Microsoft accounts may see slightly different screens and options than those shown here, but the sharing process is similar.
Share a file or folder with specific people
In the Share panel, you can invite specific people to access the file or folder in your OneDrive. Enter their email addresses in the first field. If they’re in your Outlook contacts, you can start typing their name and select from the suggestions that pop up.
Click the pencil icon to the right to change the access level to your file or folder. Depending on your Microsoft account or Microsoft 365 account, you may see some or all of these options:
Can edit: the people you’ve invited can view your file or folder (and its contents), download it, forward its link to others, and make changes to it (including contents in a folder). For example, if it’s a Word document, then a person viewing it can edit it with Word. This also means that when they edit your file or folder, their changes overwrite the original copy in your OneDrive.
Can view: invitees can view your file or folder, download it, and forward its link to others — but they can’t make changes to the original file or folder (or its contents) in your OneDrive.
Can’t download: invitees can view the file or folder but can’t download it.
You can also enter a brief message for the recipients to read, then click the Send button. An email will be sent to the recipients that contains a link to your file or folder that only they can open.
Share a file or folder with all your co-workers
If, instead of inviting specific people, you want to share the file or folder with everyone in your organization, click the gear icon just to the right of the “Copy link” button at the bottom of the panel. A “Link settings” panel appears.
Under “Share the link with,” select People in [your organization name] to share the file or folder with all your co-workers.
In the “More settings” area below, you’ll see the same access permission options as on the main Share panel — so you can, for instance, change Can edit to Can view. After you’ve made your selections, click the Apply button. This returns you to the Share panel, where you can click Send to send the invitation email.
Share a file or folder via public link
Another way to share a OneDrive file or folder is with a public link. We strongly recommend not using this method with files or folders that contain sensitive data. (Some organizations turn off this capability.)
On the Share panel, you can click the Copy link button, and a link to your file or folder is copied to your PC clipboard. You can then share this link with other people — but before you do, it’s wise to think about sharing permissions. By default, anyone who clicks the public link can view your file or folder (and its contents), download it, forward the link to others, and make changes to the file or folder (including contents in a folder).
To change this access setting, click the gear icon just to the right of the “Copy link” button. This calls up the “Link settings” panel. In the “More settings” area, you can change the access permissions, set an expiration date after which the public link will no longer work, and/or password-protect the file or folder.
(Or, if you change your mind about sharing the link publicly, you can choose a different recipient group: people in your organization, people who already have access to the file or folder, or people you specifically invite.)
After you’ve made your selections, click the Apply button, which returns you to the Share panel.
Click the Copy link button. You can now share this link with other people by pasting it into a document, email, message, etc.
Note: For the quickest way to create a link to publicly share a file or folder in your OneDrive, right-click the file or folder and select Copy link from the menu that appears.
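For anyone who needs to generate these links in bulk rather than one at a time in the browser, the same operation is also exposed through the Microsoft Graph API. Here is a minimal sketch: the token and item ID are placeholders, it assumes an app that has been granted Files.ReadWrite permission, and tenants that disable public links will reject the anonymous scope.

```python
import requests

# Placeholders: supply a real OAuth access token and OneDrive item ID.
ACCESS_TOKEN = "<oauth-access-token>"
ITEM_ID = "<onedrive-item-id>"

# Ask Microsoft Graph to create a view-only, anonymously accessible sharing
# link: the API equivalent of "Copy link" with view-only access in the web UI.
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/me/drive/items/{ITEM_ID}/createLink",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"type": "view", "scope": "anonymous"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["link"]["webUrl"])  # the shareable URL
```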
Stop or manage sharing for a file or folder
Select My files in the left column. Move the pointer over the shared file or folder, click the three-dot icon, and select Manage access. On the Manage Access panel that opens, you can click Stop sharing to stop sharing the file or folder completely.
You can also manage the access permissions for any individual or group with access to the file. On the People tab, click the permission next to any person’s name to change it — for instance, from Can edit to Can view. You can do the same for groups by going to the Groups tab.
To manage shared links, click the Links tab. To stop sharing a public link, for example, simply click the trash can icon by the link. Or click the gear icon if you want to change the access settings for the file or folder.
OneDrive for Web: More features on the way
As Microsoft continues its inexorable push to the cloud, it has poured considerable resources into beefing up the web versions of its Microsoft 365 apps, and OneDrive is no exception. The new interface turns OneDrive for Web into a robust tool for productivity and collaboration — and Microsoft isn’t done yet.
In addition to the aforementioned offline mode, Microsoft has announced several powerful new features coming to OneDrive for Web later this year, including enhanced search with additional filters; the ability to use templates when creating new Word, Excel, and PowerPoint files; and a slew of generative AI capabilities with Copilot for OneDrive (for users with a Copilot subscription).
Using OneDrive from within Windows is convenient and covers all the basics, but if you haven’t used the web app in a while, it’s worth another look. You just might find that the new web interface boosts your productivity — and may do so even more as additional features roll out.
Hey. You. Yes, you there — the one with your overly moist eyeballs pointed at the screen. What if I were to tell you that the browser you rely on for all of your web-based exploring on Android had oodles of extra features — top-secret settings that’d add awesome powers into your mobile browsing adventures and make wiggling your way around this wacky ol’ web of ours meaningfully faster, more enjoyable, and more productive?
Well, provided you’re using Google’s Chrome browser for Android, that’s as true as true can be. And best of all, it doesn’t take much to uncover all of Chrome’s carefully concealed treasures — if you know where to look.
The six settings on this page will make your Android-based web browsing more powerful, more efficient, and generally just more pleasant. They’re all just sitting there waiting to be found, too — so really, why not take advantage of what they have to offer?
Before we spelunk any further, though, one quick word of warning: All of these settings are connected to Chrome’s flags system, which is a home for under-development options that are still actively being worked on and aren’t technically intended for mainstream use. The flags system is meant for expert users and other similarly informed (and/or insane) folk who want to get an early look at advanced items. It also evolves pretty regularly, so it’s entirely possible some of the settings mentioned here may look different from what I’ve described or even be gone entirely at some point in the not-so-distant future.
What’s more, Chrome’s flags system has loads of advanced options within it, some of which could potentially cause websites to look weird, Chrome itself to become unstable, or even your ears to start spewing a delightfully minty steam. (Hey, you never know.) So in other words: Proceed with caution, follow my instructions carefully, and don’t mess with anything else you encounter in this area of the browser unless you actually understand it and know what you’re doing.
Got all that? Good. Now, let’s give your browser some spiffy new superpowers, shall we?
Chrome Android setting #1: Your custom web step-saver
One of my favorite tucked-away Chrome features is the relatively recent addition of a custom button for the browser’s toolbar. Have you found that yet?
The feature adds an extra shortcut into your browser’s top bar — and what makes it especially cool is that the shortcut can be for whatever function you use the most, with a list of possibilities that keeps growing.
At this point, you can set the button to serve as a one-tap command for sharing a page, starting a new tab, starting a new voice search, firing up an instant page translation, or adding the currently opened page into your browser bookmarks.
You can also opt to have Chrome decide for you and dynamically change that button based on which of those actions it thinks you’re most likely to use at any given moment.
The customizable button actually seems to be available without any under-the-hood tinkering for many Android-appreciating animals at this point. If you already have it, you can dig up the option and take total control of it by tapping the three-dot menu icon in Chrome’s upper-right corner, selecting “Settings” followed by “Toolbar shortcut,” and then activating the toggle at the top of the screen that comes up next.
If you aren’t seeing that section in your Chrome Android settings, don’t fret! With the flick of a few quick switches, you can force that shiny new system to show up for you:
First, type chrome:flags into your Chrome Android app’s address bar.
Then type adaptive button into the search box on the screen that comes up.
See the line labeled “Adaptive button in top toolbar customization”? Tap the box beneath that and change its setting from “Disabled” to “Enabled.”
Tap the blue Relaunch button at the bottom of the screen.
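One developer-flavored aside while we're here: Chrome doesn't let outside apps touch that adaptive toolbar logic, but Android's Custom Tabs library does let an app park a single action button of its own in Chrome's top bar when it opens a page that way. Purely as an illustration of the concept (the icon, label, and PendingIntent below are placeholders of my own, not anything Chrome requires), here's a minimal Kotlin sketch:

```kotlin
import android.app.PendingIntent
import android.graphics.Bitmap
import androidx.browser.customtabs.CustomTabsIntent

// Minimal sketch: give a Chrome Custom Tab a single toolbar action button.
// The icon, label, and PendingIntent are placeholders; wire up whatever
// action your own app actually wants that button to trigger.
fun buildTabWithToolbarButton(icon: Bitmap, onTap: PendingIntent): CustomTabsIntent {
    return CustomTabsIntent.Builder()
        .setActionButton(icon, "Share this page", onTap, /* shouldTint = */ true)
        .build()
}
```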
Now, while we’re thinking about improvements that’ll affect your entire Android web browsing experience…
Chrome Android setting #2: A dark mode upgrade
Android’s Dark Theme is a delightful way to make your virtual world a little easier on the eyes, especially in the evening hours or anytime you’re in a dim setting — say, a server room deep in the bowels of your company workplace or maybe a gigantic vat of Velveeta after you’ve been shrunken down to a tiny Lego-character size.
But oddly enough, using the system-wide Dark Theme toggle doesn’t actually affect the web. Most sites still show up bright as day and as a harsh contrast to the dim, mellow vibes the rest of Android offers up in that context.
Well, here’s the fix:
Once again, type chrome:flags into your browser’s address bar.
Now type dark into the search box on the screen that comes up.
See the line that says “Darken websites in themes settings”? Change its setting from “Default” to “Enabled.”
Smack that splendid Relaunch button at the bottom of the screen. (Don’t worry. It likes it.)
Once your browser restarts, head back into its settings (via that three-dot menu icon in the upper-right corner). Tap “Theme,” and you should see a new checkbox beneath the “System default” option.
Make sure that box is checked and active. Then, all that’s left is to activate your device-wide Dark Theme — either via the Quick Settings toggles connected to your notification panel or within your system settings — and go pull up any ol’ website you want. You should see the site magically transformed into a dark motif for your peeper-pleasing pleasure.
Ooh, ahh, etc.
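And a quick note for anyone who builds Android apps on top of browsing with 'em: the closest public cousin to this flag is the androidx.webkit algorithmic-darkening switch, which asks an embedded WebView to auto-darken pages that don't ship their own dark styles. A minimal sketch, assuming your app already has a WebView on screen:

```kotlin
import android.webkit.WebView
import androidx.webkit.WebSettingsCompat
import androidx.webkit.WebViewFeature

// Minimal sketch: let a WebView algorithmically darken pages that don't
// provide their own dark styles whenever the device's dark theme is active.
fun allowWebDarkening(webView: WebView) {
    if (WebViewFeature.isFeatureSupported(WebViewFeature.ALGORITHMIC_DARKENING)) {
        WebSettingsCompat.setAlgorithmicDarkeningAllowed(webView.settings, true)
    }
}
```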
Chrome Android setting #3: Zippier zooming
Let’s face it: Most of us aren’t getting any younger. (I say “most of us” ’cause there’s always that one dude who somehow seems to age backwards and look better with every passing year. We’re on to you, Josh from accounting.)
And sometimes, certain websites have a tendency to make their text Too Damn Small™ for our aging eyes. (Not that I’ve ever had that problem or anything…)
Android’s got ample options for increasing text size on a browser-wide basis or even across your entire device, but the reality is that text size isn’t always the same from one virtual stomping ground to the next. And on the web in particular, one site’s squintily sized text standard might live alongside another site’s perfectly fine word size choices.
Fear not, though, for I’ve got your back. Google’s Chrome Android app has a still-hidden option that makes it exceptionally easy to adjust text size on a site-by-site basis as you’re working your way around the World Wide Webbitudes:
Start by typing chrome:flags into your browser’s address bar (feeling familiar yet?).
Next, type zoom into the search box on the screen that comes up.
Find the line labeled “Accessibility Page Zoom” and change its setting to “Enabled.”
Sing a jaunty little sea shanty, for good measure.
And finally, tap the Relaunch button at the bottom of the screen.
Now, get this: Once your browser comes back, you can open up any site, anywhere on the web, and look in Chrome’s main three-dot menu to find a new “Zoom” option.
Tap that son of a gibbon and tap it good, and you’ll get a floating zoom bar right atop whatever page you’re viewing.
Best of all? Whatever changes you make to that site’s zoom settings should stick and then continue to apply for that specific site and that site only moving forward.
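(If you also happen to make Android apps, the embedded WebView exposes a public knob that scales page text in much the same spirit. A tiny Kotlin sketch, with the 150% value being nothing more than my own example number:)

```kotlin
import android.webkit.WebView

// Minimal sketch: scale a WebView's text size. 100 is the default;
// 150 renders text at 150% of its normal size for that view.
fun bumpTextSize(webView: WebView, percent: Int = 150) {
    webView.settings.textZoom = percent
}
```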
While we’re thinking about easy browsing experience enhancements, let’s take a sec to treat ourselves to some effortless aural pleasure.
Chrome Android setting #4: Your on-demand page narrator
You’d never know it, but the Chrome Android app now has an integrated option for transforming any page you’re viewing into an on-demand personal podcast — so you can hear the text read aloud as you go about your morning commute, your post-lunch yoga and/or yogurt session, or your post-Zoom-meeting dancing break when you’re sure no one can see you. (I’m not the only one who does that, right?)
Once activated, you’ll find the option within the main Chrome menu to listen to any compatible page. All you’ve gotta do is tap it, aaaand…
How ’bout that? Chrome will read all the text out loud to you, just like your own adult version of storytime (with up to 97% more Important Business Information™!).
Here’s how to make it happen:
Type — yup, you guessed it — chrome:flags into your browser’s address bar.
Type read aloud into the search box on the screen that comes up.
Tap the line labeled “Read Aloud” and change its setting from “Disabled” to “Enabled.”
Tap the Relaunch button at the bottom of the screen.
After your browser restarts, just open up any ol’ article and tap that three-dot menu icon in Chrome’s upper-right corner. You should see the new “Listen to this page” option there and waiting.
As an added bonus, you can also now find the read-aloud command as an option within the custom address bar button we went over at the start of this saga — if you want even easier on-demand access to it.
And don’t forget, too, about Android’s system-wide Reading Mode, which lets you load an entire reading-optimized interface for practically anything on your device — on the web or beyond — and then optionally have it read aloud to you from there, too.
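For the tinkerers wondering what sort of plumbing a feature like this sits on: Android has shipped a public text-to-speech API for ages, and any read-aloud experience is, at its core, that kind of engine pointed at a page's text. Here's a minimal Kotlin sketch using the standard TextToSpeech class; the PageNarrator name and the utterance ID are just my own illustrative labels, not anything Chrome exposes:

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Minimal sketch: read arbitrary text aloud with Android's built-in
// text-to-speech engine. "PageNarrator" and "page-read" are placeholder names.
class PageNarrator(context: Context) : TextToSpeech.OnInitListener {
    private val tts = TextToSpeech(context, this)
    private var ready = false

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US)
            ready = true
        }
    }

    fun speak(text: String) {
        if (ready) {
            tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "page-read")
        }
    }

    // Call when you're done so the engine's resources get released.
    fun shutdown() = tts.shutdown()
}
```

Chrome's own "Listen to this page" voices are presumably fancier than the stock engine, but the basic flow is the same idea: feed text in, get audio out.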
Chrome Android setting #5: One-time site permissions
Over the years, Android’s gotten much smarter about sensitive device permissions by introducing more nuance into the equation. When it comes to things like location, you can opt to allow an app only limited access — for a single session — rather than having to make an all-or-nothing, never-or-forever-style decision.
Amazingly, this same concept is only just now starting to make its way to the web. And as of this moment, the onus is still on you to manually activate it if you want the advantage of that added flexibility on the Chrome Android front.
Luckily, it’s easy to do:
Crack those phalanges and type chrome:flags into your browser’s address bar once more.
This time, type one time permission into the search box at the top of the screen.
Find the “One time permission” line and change its setting from “Default” to “Enabled.”
Press your purty little pinky into the Relaunch button at the bottom of the screen.
Now, the next time a site asks to access your location, mic, or camera, you can opt to allow it to do so only that one time — without granting it eternal permission to that level of access.
Much more sensible, wouldn’t ya say?!
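A developer-flavored footnote, for the curious: on the app side of Android, this is simply the standard runtime-permission flow. Since Android 11, the system prompt automatically offers an "Only this time" choice whenever an app asks for location, camera, or mic. A minimal Kotlin sketch of such a request, with the class name and callback body being placeholders of my own:

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class LocationAwareActivity : AppCompatActivity() {
    // Standard runtime-permission request. On Android 11 and up, the system
    // sheet includes an "Only this time" option, so the grant can expire
    // after the session instead of lasting forever.
    private val requestLocation =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                // Safe to read location for this session (or permanently,
                // depending on which option the user picked).
            }
        }

    fun askForLocation() {
        requestLocation.launch(Manifest.permission.ACCESS_FINE_LOCATION)
    }
}
```

In practice, you'd call askForLocation() from a button tap or similar; the system handles the rest of the prompt.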
Chrome Android setting #6: Limitless screenshots
Last but not least in our list of secret Chrome Android treasures is a subtle but significant improvement that’s made a definite difference in my day-to-day web doings.
It’s a 10-second tweak that lifts a long-standing limitation around when you can capture screenshots on the web — specifically allowing you to snag a screenshot whilst viewing a site in Chrome’s incognito mode, if you’re ever so inspired.
I tend to use incognito mode anytime I want to see a page without being signed into the associated site, whether that’s Google or a company-connected web page. And I’ve found myself frustrated more than a few times when I try to capture a screenshot in that scenario and end up with a blank, useless image as the result.
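(If you've ever wondered why those incognito captures come up empty, the short version is that Android lets any app mark a sensitive window as "secure," which tells the system to keep it out of screenshots and screen recordings. A minimal Kotlin sketch of that mechanism, offered purely as an illustration and not as Chrome's actual code:)

```kotlin
import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

class SensitiveScreenActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // FLAG_SECURE keeps this window out of screenshots, screen
        // recordings, and the recent-apps preview.
        window.addFlags(WindowManager.LayoutParams.FLAG_SECURE)
    }
}
```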
So here’s the fix:
One last time, type chrome:flags into your browser’s address bar.
Type incognito into the search box at the top of the next screen.
Find “Incognito Screenshot” and change its setting from “Default” to “Enabled.”
And hammer down your ham-scented hand onto that Relaunch button at the bottom.
Told ya it was easy, right? Now you can capture away all around the web, incognito or not — and you can smugly savor the knowledge that you’re a step ahead, using under-development features before anyone else even knows about ’em.
And yes, that includes Josh from accounting — that fresh-faced but stale-browsered showoff.
Hey, small victories, right? We’ll take what we can.