Combined with its existing solutions, Apple’s strategic approach to artificial intelligence (AI) deployment could make a radical difference to public health. Here is how it could achieve that.
Apple has already told us that achieving better health through better choices is fundamental to its approach. “Our goal is to empower people to take charge of their own health journey,” said Dr. Sumbul Desai, Apple’s vice president of health, in 2023.
While knowledge is power, anyone who’s ever sent themselves into a tidal wave of panic when searching for information on their own symptoms online should know that applying it effectively isn’t always easy. Everyone is different, with varying health needs and priorities. What works for some might work more effectively if optimized and personalized for others, reflecting unique characteristics such as age, weight, or gender.
Multiple studies show how making better choices can help keep you healthy. It has already been shown that the iPhone and Apple Watch can help identify early onset of dementia, Parkinson’s, respiratory diseases, and sleep apnea. More recently, Apple’s Vitals app seems to be providing people with early warnings that they’re about to get sick; the company has also created tools to empower Apple’s customers with better insights into their own mental health. Apple’s vision for health straddles all its devices, including AirPods Pro, which now act as bona fide hearing aids and hearing test systems.
What problems might these technologies alleviate?
The World Health Organization predicts diabetes will impact 1.3 billion people by 2050, up from 830 million in 2022. Cardiovascular disease kills 17.9 million people each year. The third biggest killer, chronic respiratory disease, affects around seven in every 100 people on earth. The estimated cost of chronic disease is expected to reach $47 trillion globally by 2030. What all three conditions have in common is that they can be mitigated, in part, by early intervention, lifestyle changes, and better self-care decisions.
Better health, one step at a time
Sure, it’s not a panacea — people will still suffer from health problems, and positive lifestyle changes can only mitigate, prevent, and manage these conditions some of the time. But ultimately, it’s not just the lives saved when using Emergency SOS via Satellite from a remote location that matter; it’s also the many people who may never encounter problems as a direct result of taking 10,000 steps a day and closing all the Activity rings on their Apple Watch.
The Health app is a major component of all of this. Think of it as a digital hub. Not only does it gather information from all your devices, but it also pulls in data from some third-party services and can share information with — and ingest it from — health professionals. All those insights are private and personal to you, and Apple wants to keep them that way.
All of its systems aim to gather as little data as possible about you. When it comes to health, the intention is to ensure your data doesn’t enter the surveillance economy (though Apple’s privacy commitment could yet be torn apart by clumsy regulation).
But is it safe?
In taking this approach, Apple is grappling with the biggest challenge to wider deployment of AI. In response to the ever-corroding experience of intrusive surveillance advertising and the challenge of privacy protection in a digital age, people are reluctant to share health data. By crafting systems that don’t require direct access to your data, Apple has an opportunity to unlock the potential benefits of personal health AI without also creating another attack surface against digital privacy.
The risk is that if the company is forced to open up its systems, it might also be forced to open up your personal health data to third-party firms with which you don’t have the same depth of trust. With that in mind, it’s understandable the company might not introduce these systems if regulators insist on exposing personal information to outside companies less committed to privacy.
To avoid this, Apple must convince governments that the benefits of digital privacy far outweigh the costs of removing it. It needs to be able to build a health OS that can support third-party developers while also protecting user data. The prize? The opportunity to build a powerful, personalized, preventative, AI-augmented health care service anyone can hook themselves into for the price of an Apple One subscription. The risk? An incredibly intrusive exfiltration of personal information.
Apple launched its highly anticipated Vision Pro “spatial computing” headset last Feb. 2 amid significant hype — and hopes it could finally push virtual or mixed reality into mainstream use.
But with a price tag of $3,499 and a staggered rollout to countries outside the US, the idea of widespread adoption was always optimistic; fewer than 500,000 of the devices have been sold to date, according to Bloomberg. (Others have provided similar estimates for first-year sales, though Apple itself declined to comment.)
“When you get right down to it, the numbers aren’t terrifically huge — I wouldn’t consider that something to jump up and down about,” said Ramon Llamas, research director for IDC’s Devices and Displays team. He described the Vision Pro as a first iteration that, like other first-gen Apple devices, will take some time to find a broader audience.
“You’ve got to start somewhere, and Apple swung for the fences,” Llamas said. “They did a very good job in terms of UI and also in terms of display. So, do I consider it a flop? Absolutely not.”
JLStock / Shutterstock
Despite shortcomings — the high price and a paucity of use cases, chief among them — it’s too early to write off the device or Apple’s broader strategy around augmented and virtual reality (AR/VR), according to Llamas. “Apple is very quick to iterate and improve, such that a Gen. 2 or Gen. 3 Vision Pro is going to make the current device look quaint,” he said.
Vision Pro sales represent a small fraction of the number of iPhones sold each year (passing 200 million annually in recent years). The same goes for other devices in Apple’s line-up: iPads, MacBooks, and Apple Watches sell in the tens of millions a year. But those are well-established products, whereas the market for AR/VR devices is in its infancy: around 6.8 million devices were sold globally in 2023, according to the latest IDC data available, compared to more than 1 billion smartphones. (IDC expected about 9.7 million AR/VR devices to be sold in 2024.)
IDC expects continued growth for AR/VR devices during the next few years.
IDC
It could be more than a decade before widespread adoption occurs, said Tuong Nguyen, director analyst and part of Gartner’s Emerging Technology and Trends team. “So, a few hundred thousand [Vision Pro devices sold], I think that’s plenty good,” he said. “It’s a great start to a long journey.”
Vision Pro’s growing pains
Reviews of the Vision Pro at launch pegged it as an impressive feat of engineering with significant drawbacks that preclude regular use. Those issues included a lack of content, short battery life, the neck-straining weight, and — perhaps the biggest drawback — the price. Most saw it more as a glimpse at the future of computing rather than a mainstream device.
Nearly a year after launch, the Vision Pro remains a device still in search of an identity, with a key use case that has so far remained elusive.
“There was a lot of hype when the Apple Vision Pro was first announced,” said Avi Greengart, president and founder of Techsponential. “And that hype hasn’t quite been matched with a killer app, or a set of killer apps, that have made people say: ‘Forget the cost or the comfort, I must have this device.’ But that’s how platforms evolve — VisiCalc didn’t show up on day one, or Excel. So, it will take time.”
Apple has cast a broad net in offering up potential uses for the headset.
Entertainment has been a prominent one, and the Vision Pro has been praised for its immersive — albeit solitary — entertainment experience. But Apple has struggled to cultivate an ecosystem of apps and content that can keep users returning to the headset. A Wall Street Journal report in October highlighted the difficulty in attracting developers to the platform, with native apps from big names such as YouTube and Netflix missing, though HBO Max and Disney+ are available.
“Some of the most remarkable entertainment experiences are already available for the Apple Vision Pro, but they tend to be relatively short,” said Greengart. “We don’t yet see regular sports or music content that you can experience, like a subscription to your favorite NBA or NFL team where you get 50-yard line seats.”
The slow uptake has put some consumer-facing companies off creating Vision Pro apps, said Jan Solecki, head of product strategy and growth at Nomtek, a software development agency that focuses on mobile and AR/VR apps. “They’re just waiting to see how the platform will perform: they’re not rushing to launch,” he said. Nomtek talked to a number of meditation app providers that considered the Vision Pro “and decided, ‘We don’t really have that user base there,’” he said.
Aside from video and virtual environments, one notable feature in the VisionOS 2.0 release is the ability to turn old photos into “spatial” 3D images; it’s a relatively simple addition, yet has proven popular with users.
Google Trends shows interest in the Vision Pro peaked around the time it launched, then flatlined after.
Google
Though gaming is often a primary use for virtual or mixed reality devices, it’s not one Apple seems particularly interested in pursuing. Apple’s eye detection and hand gesture inputs are well suited to some tasks, but most games require a controller, and the Vision Pro is hampered by a lack of hardware support. To perhaps remedy that, Apple is reportedly partnering with Sony to enable the use of its PlayStation VR hand controllers.
Personal productivity is another potential use, and one Apple has leaned into in its marketing materials. Work apps available on the Vision Pro include Microsoft Word, Excel, and PowerPoint alongside Apple’s own productivity tools.
VisionOS updates in recent months have made it easier to use the headset in conjunction with a MacBook laptop, including the ability to connect a Bluetooth mouse and view a MacBook keyboard while in a virtual setting. There’s also a wide-screen mode (and, with the VisionOS 2.2 release, an ultrawide screen option). “You can array windows all around your space, physical space, and create the equivalent of a six-monitor setup, and that’s pretty exciting,” said Greengart.
Apple
The weight of the device — about 1.3 pounds — makes it harder for people to use it for long periods. But replacement head straps developed by third-party vendors do promise some relief, said Greengart.
“One of the things that has made the biggest improvement is not a software update, but a hardware one,” he said, pointing to options from the likes of Annapro and medical device manufacturer ResMed. Still, neither of them “can completely negate the fact that this is still a rather heavy computer you’re strapping to your face,” he said. “That is one of the biggest constraints of the device today. It is not the most comfortable thing to wear over long periods of time.”
ANNAPRO says its AVP Strap for Vision Pro can reduce facial pressure by about 60%-90%.
ANNAPRO
Of course, those complaints can be leveled at all mixed-reality headsets, and Apple has been lauded for the Vision Pro’s interface. “There’s no question that the price and the weight are inhibitors, but the user experience you get on an Apple Vision Pro is as of yet unmatched by anything else on the market,” said Greengart.
Apple targets the enterprise
Apple also sees the Vision Pro as an enterprise tool, and has pitched it as useful for collaboration, employee training, and remote assistance for frontline workers.
“I would say they focus more on enterprise than they usually do with other products they’ve launched in the past, at least early on,” said Nguyen.
With less-than-expected consumer interest in the Vision Pro, Nomtek shifted its attention to developing apps for business customers. “Very early on, we noticed a switch in Apple’s strategy to move more towards the enterprise customer.… With that in mind, we’ve been also following this strategy and targeting enterprises,” said Solecki.
The company has worked on a variety of business-focused projects, including developing a Vision Pro app that provides step-by-step guidance and training for maintenance technicians at a jet manufacturer. A building materials manufacturer is also exploring the development of a Vision Pro app to aid machinery maintenance for hundreds of factories around the world, Solecki said.
One API opens access to the main camera feed, allowing developers to create an “anomaly detection” app for a production worker to detect faulty components, according to Apple. Another enables QR code scanning and detection; that could be useful for a warehouse worker scanning bar codes to verify packages have the correct item without the need for a hand-held scanner. It’s also now possible for developers to exceed the default limits on the Vision Pro’s processors to handle more demanding scenarios, such as rendering a high-fidelity, mixed-reality display of a race car. (That kind of use, however, can reduce battery life and increase fan noise.)
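At its core, the warehouse scenario described above is a lookup of each scanned code against an expected manifest. Here is a minimal sketch of that verification logic in Python — the manifest and IDs are hypothetical, and a real Vision Pro app would use Apple’s scanning APIs rather than this stand-in:

```python
# Hypothetical manifest mapping each package ID to the item it should contain.
manifest = {
    "PKG-001": "ITEM-AAA",
    "PKG-002": "ITEM-BBB",
    "PKG-003": "ITEM-CCC",
}

def verify_scan(package_id: str, scanned_item: str) -> bool:
    """Return True if the scanned item matches what the manifest expects."""
    return manifest.get(package_id) == scanned_item

# A worker scans a package label, then the item's bar code.
assert verify_scan("PKG-001", "ITEM-AAA")      # correct item
assert not verify_scan("PKG-002", "ITEM-CCC")  # wrong item in the box
assert not verify_scan("PKG-999", "ITEM-AAA")  # unknown package
```

The headset’s role is simply to capture the codes hands-free; the matching step itself is this trivial, which is why the hardware ergonomics, rather than the software, dominate the enterprise calculus.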
Teams users can join meetings using their Persona via the Vision Pro.
Microsoft
The addition of device management capabilities earlier this year also made it easier for enterprises to deploy multiple Vision Pro headsets, said Solecki, with more parameters to manage and restrict usage. “We are looking to implement this for an airline as an in-flight entertainment solution — we can really narrow down what they can do and they cannot do, to only access the apps that we’ve approved,” he said.
More generally, the Vision Pro launch served to reignite business interest in mixed reality.
“Regardless of the actual sales of Apple Vision Pro, it has been very good for the XR scene as a whole industry,” said Solecki. “One company that came to us and wanted to build something on Vision Pro said: ‘We tried HoloLens years ago, and we didn’t like it and we just dropped it, but now Apple has released [the Vision Pro] and we want to try it.’ …People got excited again by an XR headset.”
“I think Apple has a lot to do on both a software and hardware front before the Vision Pro will become a ‘must have’ device, even at a pilot level, at the typical US business,” Lewis Ward, senior research analyst at IDC and the report author, told Computerworld last year.
Some industry sectors, such as finance and healthcare, remain more bullish, however. “The interest is there [from businesses], but there are still many hurdles that need to be overcome to make it viable for an enterprise,” said Nguyen. “One [is] price; two, we’ll just call it content: what can I do with this to make it worth this investment?”
If Vision Pro is just the start, what’s next?
Almost as soon as the Vision Pro launched, rumors of follow-up devices began to spread, including talk of a lower-priced version that strips out some premium features, such as the front-facing EyeSight display.
A cheaper version could help attract a wider audience of users (and developers), at least until a “killer app” arrives that convinces people to invest serious money in a headset. “I don’t know if they will release something cheaper, but I will say they need to in order to get any meaningful adoption beyond what they’ve gotten already,” said Nguyen.
Image: CCS Insight slide
There’s also talk of Apple developing lightweight augmented reality glasses — the Holy Grail for Apple and others. But, similar to Meta’s Project Orion prototype, any augmented reality glasses from Apple are likely years away from release.
In the meantime, the existence of the Vision Pro and Apple’s presence in the market could spur wider innovation. It “reinvigorated the competition,” said Greengart, pointing to Meta CEO Mark Zuckerberg, who publicly talked up his company’s devices as a more affordable rival to the Vision Pro.
In addition, Google recently announced its Android XR operating system, which will be used in a new Samsung headset due to launch this year. The device and OS bear a resemblance to Apple’s own hardware and software, with mixed reality pass-through and a multi-screen interface.
Android XR places greater emphasis on the use of artificial intelligence — a strategy Apple’s likely to pursue, too, with potential to integrate Apple Intelligence into the mixed reality headset.
“The competitive environment is what fosters the innovation,” said Nguyen. “That’s an opportunity for you to improve your product, your solution, your offering, whatever it happens to be.
“One thing that all can agree on is that the Vision Pro has given the market a much-needed boost. The announcement and launch advanced us a few steps closer to creating the conditions necessary for meaningful adoption growth.”
Today, on New Year’s Day, we have a brief moment to pause and prepare — and set ourselves up for success.
From a tech perspective, that means taking the time to clean up and optimize your smartphone setup. That way, when the inevitable craziness hits, you’ll be ready to tackle whatever comes your way with smart, sensible systems and all the best apps already in place and ready to serve you.
We’ve already thought through the top Android tips and Google Android app tricks from 2024 — and even the most noteworthy Pixel-specific advice from the past year. Today, it’s time to shift our focus and look at some of the most exceptional (and often off-the-beaten-path) third-party Android apps that can really expand your experience and grant you some exceptionally effective new productivity powers.
Take a peek through the following standout suggestions — 44 awesome apps to explore, spread out over a dozen different articles! — and for even more Android Intelligence, make sure you’re set to receive my free Android Intelligence newsletter, too. You’ll get three new things to try in your inbox every Friday, and you’ll get my game-changing Android Notification Power Pack as a special welcome bonus.
With about 60 seconds of simple setup, you can have Google’s Gemini AI genie sum up your incoming notifications this instant — no matter what Android device you’re using.
Why stop with the home screen? This wow-worthy widget wonder will make whatever Android device you’re using infinitely more efficient — in a way that only Android could provide.
Whether you’re dealing with mumblings from meetings, noises from notifications, or music from commute-time streaming, you’ve never experienced sound on your phone like this.
Most AI apps are buzzword-chasing hype-mongers. These eight off-the-beaten-path supertools — while not entirely Android-specific — are rare exceptions.
A very happy New Year to you. Here’s to many new geeky, Googley adventures ahead!
Give yourself the gift of endless Android Intelligence in 2025 with my free weekly newsletter — three new things to try in your inbox every Friday and six powerful new notification enhancements the second you sign up!
Need ideas or motivation to help you build a spreadsheet in Google Sheets? You can browse through the templates that are included in this office app and select one to customize. But a more intriguing option is to use the tool in Sheets called Help Me Organize. Powered by Google’s generative AI technology, Gemini, you can use it to generate a template that’s more tailored for you.
Based on a brief description that you write (referred to as a “prompt”), Help Me Organize generates a table with headings, placeholder text, and possible formulas in its cells that you can then adjust to your needs. It’s mainly designed to create templates for project management, but you can coax it into making templates that include formulas and tables you can use to create charts.
This guide explains how to use Help Me Organize and provides tips for getting the best results.
Who can use Gemini AI in Google Sheets
If you have a Google Workspace account, the Gemini AI tools that include Help Me Organize are available as an add-on — called Gemini for Google Workspace — for an extra subscription charge. If you have a regular Google personal account, you can pay for a Google One AI Premium subscription to have access to these tools. Or, for no cost, you can sign up for access to Workspace Labs with your Google account to try out Help Me Organize.
How to access Help Me Organize in Google Sheets
You access the Help Me Organize tool from a right side panel that you open while in a spreadsheet in Google Sheets. The spreadsheet can have existing data on it. But for generating templates, it’s best to use Help Me Organize on a new, blank spreadsheet or on a new sheet in an existing spreadsheet. You can add a new sheet to a spreadsheet by clicking the + sign that’s toward the lower-left corner of the opened spreadsheet.
To launch the “Help me organize” panel, click Insert and select Help me organize at the very bottom of the menu that opens.
In the “Help me organize” panel that opens to the right of the page, a large text entry box invites you to write a prompt. Example prompts, meant to show you how you can write your own, cycle through this box.
When you open the “Help me organize” panel, its entry box shows example prompts.
Howard Wen / IDG
How to use Help Me Organize
Click inside the entry box on the “Help me organize” panel, type a description of the kind of template you want Gemini to generate, and click Create.
Type a prompt into the box on the “Help me organize” panel and click Create.
Howard Wen / IDG
Depending on the complexity of your prompt, it may take several seconds for the AI to generate a template — but it may not be able to generate anything. If it’s unable to, try entering your description again but use fewer words.
Gemini may “think” for a while as it generates a template.
Howard Wen / IDG
How to insert a template generated by Gemini
If Gemini produces a result, the template will appear over your spreadsheet, starting from the upper-leftmost cell, with the template’s columns and rows spreading out from there.
Review the template Gemini generated, then insert it in your spreadsheet or start over.
Howard Wen / IDG
You can scroll through the template to see what you think of it. Keep in mind that you should always consider what Gemini generates as a rough draft that you’ll need to modify to make it more suitable for your use (such as replacing placeholder text and scrutinizing and modifying any formulas). It is a template, after all.
Scroll to the bottom of the template — you’ll find a small toolbar attached to it. If you like this template, click Insert. It’ll then be inserted into your spreadsheet.
If you don’t, click the X. The template will be removed from your spreadsheet. You can try writing another prompt in the “Help me organize” panel. Note that if you create a new template, you can’t go back to the previous version.
Optionally, you can rate whether you like the template by clicking the thumbs up or thumbs down icon. Your feedback is used to help train Gemini to produce better results in the future.
Once you’ve inserted a template in your spreadsheet, you can tweak it however you like: change heading names, add rows or columns, adjust formulas, enter real data, and so on. See “How to use Google Sheets for project management” for details on working with templates in Sheets.
How to write a prompt in Help Me Organize
Unsure about how to write a prompt? Need inspiration? Here are some general tips that can elicit useful templates from Gemini:
1. First, describe a specific project that you want to track.
Examples:
budget breakdown
business travel itinerary
payroll schedule
2. Describe or specifically name headings that you’d like to see in the template.
budget breakdown that includes in the following order: revenue, rent, utilities, internet, expenses
business travel itinerary with sections for travel to airport, airline, flight number, hotel, and so on
payroll schedule for employees named Mike, Pedro, Shawna, and Tasha
Including specifics in a prompt will help Gemini generate a better template.
Howard Wen / IDG
3. Use numbers and math formulas.
a table depicting 12 months with 3 categories per month
payroll schedule that’s monthly across one year
a table that calculates compound interest at 3.5% over 3 years
Gemini can create a template that includes formulas.
Howard Wen / IDG
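It’s worth double-checking any formula a generated template inserts. For the compound interest prompt above, annual compounding at 3.5% over three years should follow the standard formula P(1 + r)^n; a quick Python check with a hypothetical principal of $1,000:

```python
principal = 1000.00  # hypothetical starting amount
rate = 0.035         # 3.5% annual interest
years = 3

# Standard compound interest: final = P * (1 + r)^n
final = principal * (1 + rate) ** years
print(round(final, 2))  # → 1108.72
```

If the template’s cell formula produces a different figure for the same inputs, that’s your cue to scrutinize and correct it before relying on the sheet.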
4. Describe dropdowns, lists, task lists, or to-dos.
a dropdown with selections that include Greek, Japanese, Italian for a business luncheon
a project tracker with task lists assigned to people
5. Describe a chart you’d like to create. Gemini can’t generate charts directly, but you can prompt it to create a template (table) that you can then derive a chart from. Examples:
a bar chart with 9 labels
a line chart with 4 categories
a pie chart depicting 3 categories
Describe a chart you’d like to create.
Howard Wen / IDG
Insert the generated template in your spreadsheet.
Next, select the template by clicking its top-leftmost cell.
Then, on the menu bar over your spreadsheet, click Insert > Chart. By default, a pie chart will be generated. The “Chart editor” panel will also open along the right of the page, so you can change the pie chart to another type or make other adjustments to it.
A pie chart based on a template that Gemini generated from a prompt.
Howard Wen / IDG
It’s worth noting that in this pie chart example, Gemini went beyond what was asked for, breaking each of the three categories into three sections with different colors. Thus, the resulting pie chart has nine sections instead of three. That’s unlikely to be what most people would be looking for from the original prompt — a good illustration of why you always need to check and adjust Gemini’s output, or simply discard it and start over.
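One quick sanity check before inserting such a chart is to confirm what share of the pie each category should actually get: each slice is just that category’s value over the total. A short Python sketch with hypothetical category totals:

```python
# Hypothetical totals for the three categories in the generated table.
categories = {"Category A": 50, "Category B": 30, "Category C": 20}

# Each pie slice's percentage is its value divided by the grand total.
total = sum(categories.values())
shares = {name: round(100 * value / total, 1) for name, value in categories.items()}
print(shares)  # → {'Category A': 50.0, 'Category B': 30.0, 'Category C': 20.0}
```

If the rendered chart shows more slices than your table has categories — as in the nine-section example above — comparing the slices against these expected shares makes the discrepancy obvious.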
6. Don’t be afraid to describe something complicated.
a budget for at least 12 departments in my office for one year and assign supervision to employees
a project manager for 8 salespersons who have to sell seashells to the 10 biggest cities in the US Midwest with an April deadline
a weekly restaurant employee work schedule for 10 back-of-house kitchen employees and 6 front-of-house employees over 4 weeks
Gemini can often handle complicated prompts.
Howard Wen / IDG
Remember that the best way to use Help Me Organize (or any generative AI tool) is to experiment and play around with the wording of your prompts. You never know — Gemini may surprise you with a result that’s better or more useful than what you originally envisioned.
In summary
Keep these tips in mind to write prompts that will trigger Gemini to give you the best (or at least most interesting) results in Help Me Organize:
Define exactly what you want to use the template for. How would you describe it in three words?
Use headings or numbers (such as dates or math formulas). These can imply columns and rows in the template.
If you want the template to have a dropdown or other list type, describe it.
Use Gemini to generate a table that you can then derive a chart from.
Don’t be afraid to experiment — even if your request sounds complicated.
As with all AI-generated content, the templates created with Help Me Organize should never be seen as final — but they can give you a big head start for all sorts of spreadsheet-related tasks, from setting up complex schedules to creating charts to performing time-oriented calculations.
This article was originally published in February 2024 and updated in December 2024.
Need ideas or motivation to help you build a spreadsheet in Google Sheets? You can browse through the templates that are included in this office app and select one to customize. But a more intriguing option is to use the tool in Sheets called Help Me Organize. Powered by Google’s generative AI technology, Gemini, you can use it to generate a template that’s more tailored for you.
Based on a brief description that you write (referred to as a “prompt”), Help Me Organize generates a table with headings, placeholder text, and possible formulas in its cells that you can then adjust to your needs. It’s mainly designed to create templates for project management. But you can tease it to make templates that include some formulas and tables that can be used to create charts.
This guide explains how to use Help Me Organize and provides tips for getting best results.
Who can use Gemini AI in Google Sheets
If you have a Google Workspace account, the Gemini AI tools that include Help Me Organize are available as an add-on — called Gemini for Google Workspace — for an extra subscription charge. If you have a regular Google personal account, you can pay for a Google One AI Premium subscription to have access to these tools. Or, for no cost, you can sign up for access to Workspace Labs with your Google account to be permitted to try out Help Me Organize.
How to access Help Me Organize in Google Sheets
You access the Help Me Organize tool from a right side panel that you open while in a spreadsheet in Google Sheets. The spreadsheet can have existing data on it. But for generating templates, it’s best to use Help Me Organize on a new, blank spreadsheet or on a new sheet in an existing spreadsheet. You can add a new sheet to a spreadsheet by clicking the + sign that’s toward the lower-left corner of the opened spreadsheet.
To launch the “Help me organize” panel, click Insert and select Help me organize at the very bottom of the menu that opens.
In the “Help me organize” panel that opens to the right of the page, a large text entry box invites you to write a prompt inside it. Some example prompts that are meant to show you how you can write your own cycle through this box.
When you open the “Help me organize” panel, its entry box shows example prompts.
Howard Wen / IDG
How to use Help Me Organize
Click inside the entry box on the “Help me organize” panel, type a description of the kind of template you want Gemini to generate, and click Create.
Type a prompt into the box on the “Help me organize” panel and click Create.
Howard Wen / IDG
Depending on the complexity of your prompt, it may take several seconds for the AI to generate a template — but it may not be able to generate anything. If it’s unable to, try entering your description again but use fewer words.
Gemini may “think” for a while as it generates a template.
Howard Wen / IDG
How to insert a template generated by Gemini
If Gemini produces a result, the template will appear over your spreadsheet. It’ll start from the upper-leftmost cell, with the template’s columns and rows spreading out from here.
Review the template Gemini generated, then insert it in your spreadsheet or start over.
Howard Wen / IDG
You can scroll through the template to see what you think of it. Keep in mind that you should always consider what Gemini generates as a rough draft that you’ll need to modify to make it more suitable for your use (such as replacing placeholder text and scrutinizing and modifying any formulas). It is a template, after all.
Scroll to the bottom of the template — you’ll find a small toolbar attached to it. If you like this template, click Insert. It’ll then be inserted into your spreadsheet.
If you don’t, click the X. The template will be removed from your spreadsheet. You can try writing another prompt in the “Help me organize” panel. Note that if you create a new template, you can’t go back to the previous version.
Optionally, you can rate if you like this template or not by clicking the thumbs up or thumbs down icon. Your feedback is used to help train Gemini to produce results in the future that may be more preferable.
Once you’ve inserted a template in your spreadsheet, you can tweak it however you like: change heading names, add rows or columns, adjust formulas, enter real data, and so on. See “How to use Google Sheets for project management” for details on working with templates in Sheets.
How to write a prompt in Help Me Organize
Unsure about how to write a prompt? Need inspiration? Here are some general tips that can elicit useful templates from Gemini:
1. First, describe a specific project that you want to track.
Examples:
budget breakdown
business travel itinerary
payroll schedule
2. Describe or specifically name headings that you’d like to see in the template.
budget breakdown that includes in the following order: revenue, rent, utilities, internet, expenses
business travel itinerary with sections for travel to airport, airline, flight number, hotel, and so on
payroll schedule for employees named Mike, Pedro, Shawna, and Tasha
Including specifics in a prompt will help Gemini generate a better template.
Howard Wen / IDG
3. Use numbers and math formulas.
a table depicting 12 months with 3 categories per month
payroll schedule that’s monthly across one year
a table that calculates compound interest at 3.5% over 3 years
Gemini can create a template that includes formulas.
Howard Wen / IDG
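When a prompt asks for math, like the compound-interest example above, it’s worth knowing the expected result so you can check the formulas Gemini puts in the template. Here’s a minimal Python sketch of the same calculation; the $1,000 principal is a hypothetical value for illustration, since the prompt doesn’t specify one:

```python
# Compound interest: A = P * (1 + r) ** t
# The principal is a made-up $1,000; the rate and term match the prompt.
principal = 1_000.00
rate = 0.035   # 3.5% per year
years = 3

amount = principal * (1 + rate) ** years
print(round(amount, 2))  # 1108.72
```

The equivalent Sheets formula would be something like `=1000*(1+0.035)^3`, so if the generated template’s formula produces a very different number, that’s a sign it needs fixing.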
4. Describe dropdowns, lists, task lists, or to-dos.
a dropdown with selections that include Greek, Japanese, Italian for a business luncheon
a project tracker with task lists assigned to people
5. Describe a chart you’d like to create.
Gemini can’t generate charts directly, but you can prompt it to create a template (a table) that you can then derive a chart from. Examples:
a bar chart with 9 labels
a line chart with 4 categories
a pie chart depicting 3 categories
Describe a chart you’d like to create.
Howard Wen / IDG
Insert the generated template in your spreadsheet.
Next, select the template by clicking its top-leftmost cell.
Then, on the menu bar over your spreadsheet, click Insert > Chart. By default, a pie chart will be generated. The “Chart editor” panel will also open along the right of the page, so you can change the pie chart to another type or make other adjustments to it.
A pie chart based on a template that Gemini generated from a prompt.
Howard Wen / IDG
It’s worth noting that in this pie chart example, Gemini went beyond what was asked for, breaking each of the three categories into three sections with different colors. Thus, the resulting pie chart has nine sections instead of three. That’s unlikely to be what most people would be looking for from the original prompt — a good illustration of why you always need to check and adjust Gemini’s output, or simply discard it and start over.
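Under the hood, deriving a pie chart from a table comes down to simple proportions: each row’s value divided by the column total gives that slice’s share. This Python sketch, using made-up values for the three-category luncheon example, shows the arithmetic Sheets performs for you:

```python
# Hypothetical 3-category table, similar to what Gemini might generate.
table = {"Greek": 40, "Japanese": 35, "Italian": 25}

# Each slice's share of the pie is its value as a percentage of the total.
total = sum(table.values())
shares = {name: round(100 * value / total, 1) for name, value in table.items()}
print(shares)  # {'Greek': 40.0, 'Japanese': 35.0, 'Italian': 25.0}
```

Checking that the shares add up to 100% is also a quick way to catch a template where Gemini has split categories into extra sections, as in the nine-slice example above.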
6. Don’t be afraid to describe something complicated.
a budget for at least 12 departments in my office for one year and assign supervision to employees
a project manager for 8 salespersons who have to sell seashells to the 10 biggest cities in the US Midwest with an April deadline
a weekly restaurant employee work schedule for 10 back-of-house kitchen employees and 6 front-of-house employees over 4 weeks
Gemini can often handle complicated prompts.
Howard Wen / IDG
Remember that the best way to use Help Me Organize (or any generative AI tool) is to experiment and play around with the wording of your prompts. You never know — Gemini may surprise you with a result that’s better or more useful than what you originally envisioned.
In summary
Keep these tips in mind to write prompts that will trigger Gemini to give you the best (or at least most interesting) results in Help Me Organize:
Define exactly what you want to use the template for. How would you describe it in three words?
Use headings or numbers (such as dates or math formulas). These can imply columns and rows in the template.
If you want the template to have a dropdown or other list type, describe it.
Use Gemini to generate a table that you can then derive a chart from.
Don’t be afraid to experiment — even if your request sounds complicated.
As with all AI-generated content, the templates created with Help Me Organize should never be seen as final — but they can give you a big head start for all sorts of spreadsheet-related tasks, from setting up complex schedules to creating charts to performing time-oriented calculations.
This article was originally published in February 2024 and updated in December 2024.
In 2024, the surge in generative AI (genAI) pilot projects sparked concerns over high experimentation costs and uncertain benefits. That prompted companies to then shift their focus to delivering business outcomes, enhancing data quality, and developing talent.
In 2025, enterprises are expected to prioritize strategy, add business-IT partnerships to assist with genAI projects and move from large language model (LLM) pilots to production instances. And small language models will also likely come into their own, addressing specific tasks without overburdening data center processing and power.
Organizations will also adopt new technologies and architectures to better govern data and AI, with a return to predictive AI, according to Forrester Research.
Predictive AI uses historical data and techniques such as machine learning and statistics to forecast future events or behaviors, said Forrester analyst Jayesh Chaurasia. GenAI, on the other hand, creates new content — such as images, text, videos, or synthetic data — leveraging deep learning methods such as generative adversarial networks (GANs). Chaurasia predicts the AI pendulum will swing back to predictive AI for over 50% of use cases.
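Predictive AI in the sense Chaurasia describes can be as simple as fitting a trend line to historical data and extrapolating it. Here’s a minimal, self-contained Python sketch using invented monthly sales figures; real forecasting pipelines are far more elaborate, but the principle is the same:

```python
# Hypothetical historical data: six months of sales, trending upward.
sales = [100, 110, 120, 130, 140, 150]

# Ordinary least-squares fit of a straight line, written out by hand.
n = len(sales)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(sales) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sales)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

# Forecast the next (seventh) month by extending the trend.
forecast = slope * n + intercept
print(round(forecast, 1))  # 160.0
```

Generative AI, by contrast, would produce new content (an image, a paragraph, a synthetic dataset) rather than a single forecast value like this.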
LLMs are, of course, central to genAI, helping enterprises tackle complex tasks and improve operations. Forrester reported that 55% of US genAI decision-makers with a strategy use LLMs embedded in applications, while 33% purchase domain-specific genAI apps. Meanwhile, SLMs are quickly gaining attention.
The rise of small and mid-sized language models should help customers better balance the trade-offs among accuracy, speed, and cost, said Arun Chandrasekaran, a distinguished vice president analyst with Gartner Research, noting that “Most organizations are still struggling to realize business value from their genAI investment.”
Gartner
In the coming year, SLM integration could surge by as much as 60%, according to a Forrester report.
With nearly eight in 10 IT decision-makers reporting rising software costs over the past year, many are looking to SLMs, which are more cost-effective and offer better accuracy, relevance, and trustworthiness by training on specific domains. They’re also easier to integrate and excel in specialized industries such as finance, healthcare, and legal services.
By 2025, 750 million apps are expected to use LLMs, underscoring the genAI market’s rapid growth. Forrester predicts the market will grow in value from $1.59 billion in 2023 to $259.8 billion by 2030.
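Forrester’s figures imply an extraordinary growth rate. A quick back-of-the-envelope check, treating 2023 to 2030 as seven compounding years:

```python
# Implied compound annual growth rate (CAGR) from Forrester's figures.
start, end, years = 1.59, 259.8, 7  # market value in $B, 2023 and 2030

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")  # roughly doubling every year
```

A CAGR north of 100% per year is a dramatic forecast by any standard, which is worth keeping in mind when weighing it against the more cautious predictions elsewhere in the report.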
Even with that growth, many AI experts argue that LLMs may be excessive for automating workflows and repetitive tasks, both in terms of performance and environmental impact. A Cornell University study found that training OpenAI’s GPT-3 LLM produced 500 metric tons of carbon emissions, the equivalent of 1.1 million pounds.
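The two emissions figures are the same quantity in different units; one metric ton is about 2,204.6 pounds, so the conversion checks out:

```python
METRIC_TON_IN_POUNDS = 2_204.62  # standard conversion factor

tons = 500
pounds = tons * METRIC_TON_IN_POUNDS
print(f"{pounds / 1e6:.1f} million pounds")  # 1.1 million pounds
```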
As enterprises face challenges meeting expectations, genAI investments in 2025 will likely shift toward proven predictive AI applications such as maintenance, personalization, supply chain optimization, and demand forecasting. Forward-thinking organizations will also recognize the synergy between predictive and generative AI, using predictions to enhance generative outputs. That approach is expected to boost the share of combined use cases from 28% today to 35%, according to Forrester.
SLMs use fewer computational resources, enabling on-premises or private cloud deployment, which natively enhances privacy and security.
While some SLM implementations can require substantial compute and memory resources, several models with more than 5 billion parameters can run on a single GPU, Thomas said.
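The single-GPU claim holds up to simple arithmetic. At 16-bit precision, each parameter takes two bytes, so a 5-billion-parameter model needs roughly 10 GB just for its weights; this rough sketch ignores activation and inference overhead, which add to the total:

```python
params = 5e9          # 5 billion parameters
bytes_per_param = 2   # 16-bit (fp16/bf16) weights

weight_gb = params * bytes_per_param / 1e9
print(weight_gb)  # 10.0
```

That footprint fits comfortably on a single modern GPU with 16 GB or more of memory, whereas an LLM with hundreds of billions of parameters would need many such GPUs working together.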
Gartner Research defines SLMs differently, as language models with 10 billion parameters or less. Compared to LLMs, they are two to three orders of magnitude (around 100-1,000x) smaller, making them significantly more cost-efficient to use or customize.
SLMs include Google Gemini Nano, Microsoft’s Orca-2–7b and Orca-2–13b, Meta’s Llama-2–13b, and others, Thomas noted in a recent post, arguing that SLM growth is being driven by the need for more efficient models and the speed at which they can be trained and set up.
Gartner
“SLMs have gained popularity due to practical considerations such as computational resources, training time, and specific application requirements,” Thomas said. “Over the past couple of years, SLMs have become increasingly relevant, especially in scenarios where sustainability and efficiency are crucial.”
SLMs enable most organizations to achieve task specialization, improving the accuracy, robustness, and reliability of genAI solutions, according to Gartner. And because deployment costs, data privacy, and risk mitigation are key challenges when using genAI, SLMs offer a cost-effective and energy-efficient alternative to LLMs for most organizations, Gartner said.
Three out of four (75%) IT decision-makers believe SLMs outperform LLMs in speed, cost, accuracy, and ROI, according to a Harris Poll of more than 500 users commissioned by the startup Hyperscience.
“Data is the lifeblood of any AI initiative, and the success of these projects hinges on the quality of the data that feeds the models,” said Andrew Joiner, CEO of Hyperscience, which develops AI-based office work automation tools. “Alarmingly, three out of five decision makers report their lack of understanding of their own data inhibits their ability to utilize genAI to its maximum potential. The true potential…lies in adopting tailored SLMs, which can transform document processing and enhance operational efficiency.”
Gartner recommends that organizations customize SLMs to specific needs for better accuracy, robustness, and efficiency. “Task specialization improves alignment, while embedding static organizational knowledge reduces costs. Dynamic information can still be provided as needed, making this hybrid approach both effective and efficient,” the research firm said.
In highly regulated industries, such as financial services, healthcare and pharmaceuticals, the future of LLMs is definitely small, according to Emmanuel Walckenaer, CEO of Yseop, a vendor that offers pre-trained genAI models for the BioPharma industry.
Smaller, more specialized models will reduce wasted time and energy spent on building large models that aren’t needed for current tasks, according to Yseop.
Agentic AI holds promise, but it’s not yet mature
In the year ahead, there is likely to be a rise in domain-specific AI agents, “although it is unclear how many of these agents can live up to the lofty expectations,” according to Gartner’s Chandrasekaran.
While agentic AI architectures are a top emerging technology, they’re still two years away from delivering the automation expected of them, according to Forrester.
While companies are eager to push genAI into complex tasks through AI agents, the technology remains challenging to develop because it mostly relies on synergies between multiple models, customization through retrieval augmented generation (RAG), and specialized expertise. “Aligning these components for specific outcomes is an unresolved hurdle, leaving developers frustrated,” Forrester said in its report.
A recent Capital One survey of 4,000 business leaders and technical practitioners across industries found that while 87% believe their data ecosystem is ready for AI at scale, 70% of technologists spend hours daily fixing data issues.
Still, Capital One’s survey revealed strong optimism among business leaders about their companies’ AI readiness. Notably, 87% believe they have a modern data ecosystem for scaling AI solutions, 84% report having centralized tools and processes for data management, 82% are confident in their data strategy for AI adoption, and 78% feel prepared to manage the increasing volume and complexity of AI-driven data.
And yet, 75% of enterprises attempting to build AI agents in-house next year are expected to fail, opting instead for consulting services or pre-integrated agents from existing software vendors. To address the mismatch between AI data preparedness and real-world complexities in 2025, 30% of enterprise CIOs will integrate Chief Data Officers (CDOs) into their IT teams as they lead AI initiatives, according to Forrester Research. CEOs will rely on CIOs to bridge the gap between technical and business expertise, recognizing that successful AI requires both solid data foundations and effective stakeholder collaboration.
Forrester’s 2024 survey also showed that 39% of senior data leaders report to CIOs, with a similar 37% reporting to CEOs — and that trend is growing. To drive AI success, CIOs and CEOs must elevate CDOs beyond being mere liaisons, positioning them as key leaders in AI strategy, change management, and delivering ROI.
A growing interest in multi-modality — and upskilling
Emerging use cases for multi-modality, particularly image and speech as modalities in both genAI inputs and outputs, will also see more adoption in 2025.
Multimodal learning, a subfield of AI, enhances machine learning by training models on diverse data types, including text, images, videos, and audio. The approach enables models to identify patterns and correlations between text and associated sensory data.
By integrating multiple data types, multimodal AI expands the capabilities of intelligent systems. These models can process various input types and generate diverse outputs. For example, GPT-4, the foundation of ChatGPT, accepts both text and image inputs to produce text outputs, while OpenAI’s Sora model generates videos from text.
Other examples include medical imaging, patient history, and lab results, which can be integrated to enhance patient diagnosis and treatment. In financial services, multimodal AI can analyze customer phone queries to assist contact center employees in resolving issues. And in the automotive industry, inputs from cameras, GPS, and LiDAR can be integrated by AI to enhance autonomous driving, emergency response, and navigation for companies such as Tesla, Waymo, and Li Auto.
“In the year ahead, you’ll need to put your nose to the grindstone to develop an effective AI strategy and implementation plan,” Forrester said in its report. “In 2025, organizational success will depend on strong leadership, strategic refinement, and recalibration of enterprise data and AI initiatives commensurate with AI aspirations.”
Microsoft is a little sneaky. Sure, there’s just one “big” update for Windows 11 each year. But Microsoft’s Windows team is always working on something, and new Windows 11 features are arriving on PCs every month — even outside of those high-profile updates.
So as we arrive at the end of 2024, let’s review the most interesting and useful new features that have shown up on Windows 11 in the past year. I bet you’ll find at least a few features you haven’t yet discovered!
And be sure to sign up for my free Windows Intelligence newsletter for even more useful knowledge. I’ll keep you up to date on all the interesting new features you can find and explore as we get into the new year.
New Windows 11 features #1-3: Phone integration powers
Windows now lets you use your Android phone as a webcam. It all happens entirely wirelessly, if you like — no cables! The setup process is quick, and it’s particularly useful if you’re not a big fan of your PC’s on-board webcam quality.
Windows can now pop up a notification whenever you take a new photo or screenshot on your Android phone. Then you can click that notification to immediately transfer the photo and open it for viewing, editing, or sharing on your PC.
Once you’ve connected your Android phone to your PC for the above wireless features, you’ll also see your phone pop up in File Explorer. You can transfer files to and from your Android phone right from File Explorer — wirelessly!
New Windows 11 features #4-5: Windows taming tricks
Windows 11’s built-in Widgets menu got some big updates in 2024. It’s worth giving it a second chance now that you can tweak various options to hide the viral article feed and customize it further.
Windows now shows stock prices and sports updates alongside the weather on your lock screen — it’s a recent update, too! (That said, many people won’t be fans of this.)
Microsoft’s PowerToys package isn’t included with Windows, but it’s an honorary part of the operating system as far as I’m concerned. In 2024, Microsoft released an especially powerful application-launcher-and-arranger named PowerToys Workspaces. It’s a big productivity upgrade for many people.
Speaking of PowerToys, Microsoft also launched an even more powerful and flexible “New+” menu that lets you quickly create new files and folders from templates right in File Explorer. I’m already putting this one to good use myself.
Microsoft’s Photos app includes a new Generative Erase feature that works on all Windows 11 PCs — and on Windows 10 PCs, too! You can select objects in photos and use this AI-powered erase to get rid of them.
Some of the newest PCs are branded “Copilot+ PCs,” and they have access to new AI-based Windows features. Again, these don’t work on most Windows 11 PCs — these need a new Copilot+ PC. Here are the AI features you can use on those PCs.
Recall is the most controversial Windows feature of the year. It’s still available only in testing form right now — but I’m experimenting with it, and you can see how it works.
Okay — technically this one is from December 2023, but who’s counting? All Windows 11 PCs include AI features, even if they aren’t Copilot+ PCs. This guide reveals what you can use today on any Windows 11 PC.
Stay tuned: 2025 promises to be even more of a busy year when it comes to Windows development. We’ll explore it all together, every step of the way.