Google parent Alphabet’s $2 billion investment in AI firm Anthropic has caught the eye of the UK’s antitrust regulator.
On Tuesday, the UK’s Competition and Markets Authority (CMA) opened an inquiry into whether Alphabet’s partnership with Anthropic created a “relevant merger situation” that threatened competition within the fast-growing market for cloud-delivered AI products and services.
Invitation to comment
The CMA is inviting industry comments before a deadline of Tuesday, August 13, in advance of the launch of its formal investigation. The outcome of the inquiry will determine whether the regulator orders remedial action or otherwise intervenes in the market.
Google told Computerworld that Anthropic is free to partner with other cloud technology providers and hyperscalers, effectively arguing that competitive concerns were misplaced.
“Google is committed to building the most open and innovative AI ecosystem in the world,” the tech giant said in a statement. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”
In a statement, Anthropic told Computerworld it intends to “cooperate with the CMA and provide them with the complete picture about Google’s investment and our commercial collaboration.
“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” the company said. “Anthropic’s independence is a core attribute, integral both to our public benefit mission and to serving our customers wherever and however they prefer to access Claude.”
Smaller players in the cloud computing market argue that powerful partnerships threaten the development of competition in the AI marketplace.
‘Virtual monopoly’
Josh Mesout, chief innovation officer of UK-based cloud computing firm Civo, told Computerworld, “As an industry we should be cautious over powerful partnerships as they pose a threat to the entire ecosystem by suffocating competition and innovation.”
He added, “We cannot surrender AI to a virtual monopoly before it has really started.”
Maintaining a diverse and competitive landscape in artificial intelligence is important, not least because of the far-reaching applications of AI technologies across multiple industry sectors.
“Over-dependence on a handful of major firms could stifle innovation, limit consumer choice, and potentially lead to a monopoly that favors Big Tech,” Mesout warned.
“To keep the market fair and open, regulators should be eyeing these types of partnerships warily,” he said. “Otherwise, we risk AI following the path of cloud, where hyperscalers run unchecked and leave a broken, locked-in, and stifled market in their wake.”
To get a sense of what a smarter Siri in iOS 18.1 might look like once it appears, consider the new voice mode OpenAI just introduced in its app, albeit in limited alpha, meaning not every user will get ahold of the new tech yet.
Delayed by a month over quality concerns, the test of the company’s Advanced Voice Mode for ChatGPT, built on its GPT-4o model, is available to iPhone users who subscribe to the $20-per-month ChatGPT Plus service.
The company warns that it might make mistakes and says access and rate limits are subject to change. The feature isn’t expected to be available to all users until the end of the year, and should work for Mac, iPhone, and iPad users once it appears. Subscribers accepted into the alpha group will get an alert in the app and an email inviting them to take part in the test.
“We’ll continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall,” OpenAI said.
What does Advanced Voice Mode do?
Effectively, it’s a more powerful chatbot that delivers more natural, real-time conversations with a degree of contextual awareness, which means it can understand and respond to emotion and non-verbal cues. It is also capable of processing prompts more swiftly, which significantly reduces the latency within conversations, and lets you interrupt it to get it to change what it says at any time.
OpenAI first demonstrated the new mode in May, when it showed how the tool can recognize different languages simultaneously and translate them in real time. During that demo, employees were able to interrupt ChatGPT, get it to tell stories in different ways, and more. One thing the bot can no longer do is sound like Scarlett Johansson — it now supports only four preset voices in order to prevent it being used for impersonation. OpenAI has also put filters in place to block requests to generate music or other copyrighted audio, reflecting legal challenges raised against song-generating AI firms such as Suno.
Video and screen sharing capabilities are not yet available.
How it works
If you are a ChatGPT Plus subscriber running the latest version of the app, and are accepted to the test, you can access the bot from within the app by tapping the Voice icon at the bottom of the screen. You can then switch between the new Advanced mode and the existing Standard mode (better for longer sessions) using an interface at the top of the screen. Privacy concerns mean many Apple users might prefer to access these features via Apple Intelligence.
What about privacy?
Apple Intelligence puts additional safeguards in place to protect people’s privacy. As Wired points out, ChatGPT’s user agreement at present appears to allow the company to use your voice and images for training purposes. In a remarkably quotable line, AI consultant Angus Allan called it a “data hoover on steroids.” He added: “Their privacy policy explicitly states they collect all user input and reserve the right to train their models on this.”
This is less of a problem when ChatGPT is used with Apple Intelligence, as requests are anonymized and data from those sessions is not used to train ChatGPT models, according to Apple. If that proves true, many Apple users will eventually gravitate toward accessing ChatGPT via Apple’s own AI as the safest way to use it.
All eyes now will turn to Google, which is expected to introduce similar features within Google Gemini AI soon — features that might also end up being integrated inside Apple Intelligence. The battle of the bots is heating up.
OpenAI on Tuesday began rolling out a preview version of Advanced Voice Mode to paying ChatGPT customers. The new function gives users access to a “hyper-realistic voice” that sounds strikingly like a human.
After a demonstration in May, OpenAI was accused of copying the voice of actress Scarlett Johansson without permission, prompting the company to remove the voice in question. Ever since, users have had to make do with four voices, called Juniper, Breeze, Cove, and Ember.
The idea now is that all Plus users will get access to Advanced Voice Mode in the fall, TechCrunch reports.
To everyone’s surprise, Apple is turning weakness into strength with its evolving approach to artificial intelligence (AI), as it becomes one of the biggest open-source research contributors in the field.
Apple last week released its DCLM (DataComp for Language Models) models on Hugging Face. The project aims to provide a solution for training-data curation, enabling new models to be trained and tested with relatively few “training tokens.”
‘Best performing truly open-source models’
“Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%) and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B,” the team wrote. “Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.”
Apple’s researchers worked with peers from the University of Washington, the Toyota Research Institute, Stanford, and others on the project.
“To our knowledge these are by far the best performing truly open-source models (open data, open weight models, open training code),” Apple machine learning researcher Vaishaal Shankar wrote when announcing the news.
Reaction seems positive. “The data curation process is a must-study for anyone looking to train a model from scratch or fine-tune an existing one,” said applied AI scientist Akash Shetty.
Opening up, strategically
The model seems to compete strongly with other models of its type, including Mistral-7B, and approaches the performance of models from Meta and Google — even though it is trained on smaller quantities of content.
The idea is that research teams can use the tech to create their own small AI models, which can themselves be embedded in (say) apps and deployed at low cost.
While it is unwise to read too much into things, Apple’s AI teams do seem to have embraced a more open approach to research in the field. That makes sense for a company allegedly racing to catch up to competitors, of course. It also makes sense in another way: a company that contributes code to the open-source community puts itself in a strong position for future research, both through contact with peers and by fostering future goodwill.
Collaboration counts
That alone is a small but remarkable step for Apple, which has a reputation for prizing secrecy above all else. That secrecy has, we’ve been intermittently told in recent years, been a big problem for Apple’s research teams, who wanted to work more collaboratively with others in this cutting-edge industry.
Apple seems to have listened, which is why I think it is now turning what was once a disadvantage into an advantage. In the short term, the company wants to promote effective innovation in AI while it develops its own solutions under the Apple Intelligence brand.
It might also hope to position itself as a source technology provider powering many open-source projects. Ensuring good technologies are widely available to the open-source community could help prevent other entities owning too much of the core technology.
The Apple release is just the latest in a string of such releases to have emerged since the company intensified its focus on AI research. The company has now published dozens of models, most recently including its OpenELM and Core ML models; the latter are optimized to run generative AI (genAI) and machine learning applications on device.
While nothing has been stated to this effect, the cards Apple is showing indicate it is working more closely with researchers outside the company. And it’s investing in the development of edge AI — ironically, the direction the industry will inevitably head as the real-life problems of power and water consumption, copyright, and privacy present existential challenges to the future evolution of the server-led AI space.
Windows laptops have one big advantage over Macs: touchscreens. I’m not saying you should switch to a touch-first PC experience and throw away your mouse. But too many people discount the usefulness of a touchscreen PC. Those touchscreens can be a big productivity boost.
While I won’t be trading my mouse and keyboard for an all-touchscreen Windows experience any time soon, I always appreciate a touchscreen on a laptop. Whether you’re working with documents, browsing the web, or just watching videos, you might be impressed at just how useful your computer’s touchscreen can be.
Of course, not every modern Windows PC has a touchscreen. But many do — and I know many people aren’t using them to their full potential. Let’s change that, shall we?
Want more Windows PC tips and tricks? Check out my free Windows Intelligence newsletter to get three new things to try every Friday and a free in-depth Windows Field Guide.
Windows touchscreen tip #1: Sign documents
Let’s start with one of my favorite uses for a touchscreen: quickly signing a document. Yes, there are other ways to do so — you could painstakingly try to draw your signature on your laptop’s touchpad, use an app on your phone, paste in a scanned image of your signature, or even print the document and scan it back in.
But in my experience, the most convenient way to sign a document on the average modern laptop is with your finger — directly on the touchscreen. You can sign a PDF document in Microsoft Edge or use an application like Adobe Reader. Or, you might be asked to sign a document using a web-based signing service like DocuSign. Either way, providing a signature is much easier if your PC has a touchscreen.
Chris Hoffman, IDG
Windows touchscreen tip #2: Scroll through documents and web pages
A touchscreen is seriously underrated for simply scrolling around in a document or web page. This is especially true when you’re away from your desk — maybe you’re using your laptop on your lap, perhaps you’re crammed into tight quarters in an airline seat, or maybe you’re catching up on some late-night work emails in bed.
Rather than attempting to scroll with the touchpad, it’s often extremely convenient to hold the laptop in a way that lets you scroll around with a finger. This is especially true if you have a flexible laptop that can rotate its hinge 360 degrees, adapting better to close quarters.
I’ll admit it: I find myself sometimes scrolling around on web pages using my finger, even when I’m sitting at my desk with a touchscreen laptop. Give it a try if you haven’t already.
Windows touchscreen tip #3: Zoom in and out
You can also quickly and easily zoom with a touchscreen. While viewing something — Google Maps in a browser, an image, a web page, a document, or whatever else — you can use pinch-to-zoom just as you would on your phone to zoom in and out.
I find this more useful than clicking little zoom buttons or attempting to pinch-to-zoom on my laptop’s trackpad. (You should be able to pinch-to-zoom with your trackpad, but it’s easier to do on a nice big screen than on a small trackpad.)
Windows touchscreen tip #4: Use gestures to navigate the desktop
Windows has a whole collection of its own custom touchscreen gestures waiting to be used. I wanted to start with simple and easy-to-remember tips, but these gestures are also supremely useful.
(You can use similar gestures on your laptop’s touchpad, too.)
Windows 11 PCs have access to more built-in touchscreen gestures than Windows 10 PCs. For example, on a Windows 11 PC with a touchscreen, you can:
Swipe up with one finger from the bottom of your screen to see the Start menu.
Swipe with one finger from the left edge of your screen to see the Widgets pane.
Swipe with one finger from the right edge of your screen to see the notification center.
Swipe up with three fingers to show all open windows with Task View.
Swipe down with three fingers to show the desktop.
Swipe left or right with three fingers to switch to the last app you were using.
Swipe with four fingers from the left or right to switch desktops, if you’re using multiple virtual desktops.
If these don’t work on your Windows 11 PC, head to Settings > Bluetooth & devices > Touch. Ensure the “Three- and four-finger touch gestures” setting is activated here.
Windows touchscreen tip #5: Take advantage of other basic taps and presses
There are many other useful ways to take advantage of a touchscreen. Obviously, you can use it just like a mouse: Tap something to “click” it, or press and drag to move it around.
Whether you’re using a settings screen or filling out a form, you might go faster if you tap each checkbox with your finger rather than moving your cursor around with a touchpad.
You can combine the keyboard and the touchscreen, too. For example, when selecting multiple files in File Explorer, you can press and hold the Ctrl key while you tap each file in turn. That keyboard-plus-touchscreen method is faster to me than holding the Ctrl key while I use a laptop’s touchpad to click each file.
A touchscreen can also be useful if you’re streaming videos on your laptop. Rather than reach down and use the laptop’s trackpad to find playback controls, you can simply tap the on-screen playback controls.
“Scrubbing” through a video or audio file is another great use for a touchscreen: You can touch and hold the seek/back slider and move your finger back and forth to find the spot you want in your video. It’s much less awkward than using a laptop’s trackpad to do the same thing with your finger.
If you work with any sort of 3D modeling or CAD application, you might also find a touchscreen supremely useful for rotating models. There are many other uses, depending on the apps you rely on.
Laptop touchscreens aren’t going anywhere
Have I sold you on the productivity-boosting value of a touchscreen laptop? You often don’t have to go out of your way to have one in front of you, and you might well wind up getting a touchscreen in the next laptop you buy.
Of course, if your current laptop supports pen input, its touchscreen is even more useful. A laptop with a digitizer for pens that supports a variety of pressure levels is a great tool when taking notes, marking up documents, drawing, and more.
There’s one last objection I hear often: People don’t want to smudge their screens with their fingers. But we’re smudging our phones with our fingers all day, anyway! Whether you’re cleaning your smartphone or your laptop’s screen, all you need is a simple microfiber cloth to keep it spotless.
Want more great tips? Sign up for my free Windows Intelligence newsletter — I’ll send you three things to try every Friday. Plus, get free copies of Paul Thurrott’s Windows 11 and Windows 10 Field Guides (a $10 value) as a special welcome bonus.
CVS, the pharmacy company, recently settled a class-action lawsuit in which the company faced a complaint it used AI lie detector tests during job interviews — and didn’t inform prospective employees about what it was doing. (The terms of the settlement were not disclosed.)
HireVue, the AI HR product at the center of the complaint, is designed to provide benefits both to prospective hires and to the companies doing the hiring.
The two benefits for would-be employees, according to the company, are that AI can reduce bias in hiring (although the use of facial analysis was removed in 2020 over concerns about potential bias), and interview questions can be established in advance, then the candidate can do the interview with AI at any time. (The software also facilitates real-time interviews with real people.)
The platform integrates with HR systems and tools, including Microsoft Teams, LinkedIn, and Salesforce.
HireVue claims that its product enables teams to make better hires much faster. Videos are recorded, and AI analyzes them to rate verbal and non-verbal cues such as word choice, tone of voice, and facial expressions.
The AI features, developed by a team of data scientists and industrial-organizational psychologists, are a trade secret, but it’s some kind of machine learning trained on interviews and follow-ups to find out which employees worked out.
Affectiva’s Emotion AI technology, integrated with HireVue’s video interview platform to enhance HireVue’s AI, was designed to track and interpret various facial expressions such as smiles, surprise, contempt, disgust, and smirks. This analysis contributed to generating an “employability score” for each candidate, which included assessments of traits like “conscientiousness and responsibility” and an “innate sense of integrity and honor,” according to the lawsuit, which claimed the software amounted to a “lie detector.”
Europe is planning a lie detector test for entry
The European Union is planning to use an AI lie detector system called iBorderCtrl for travelers entering EU countries.
iBorderCtrl is an AI software tool designed to analyze facial movements and body language and flag suspicious behavior to immigration officers. Critics argue that the system will discriminate against people with disabilities or anxious personalities; even so, it could be implemented as early as Oct. 6, 2024.
Can AI really tell whether someone is lying?
It’s not clear whether the allegation against CVS involved actual lie detection, and even less clear whether the software can actually judge integrity and other qualities in a person.
But AI can actually do a pretty good job with lie detection, according to a recent study by Alicia von Schenk and her colleagues at the University of Würzburg in Germany.
The researchers developed an AI tool trained using Google’s BERT language model on a dataset of 1,536 statements about weekend plans, with half being “incentivized lies.” The AI system successfully identified true and false statements 67% of the time (which is better than humans, who typically achieve only about 50% accuracy).
Researchers claim such a tool could be used for identifying fake news and disinformation on social media, detecting exaggerations or lies in job applications and interviews, and other uses.
Another research group from the IMT School of Advanced Studies Lucca and the University of Padua developed generative AI (genAI) that can identify lies in written texts with an accuracy level of 80%, according to the researchers.
With the rise in publicly available genAI tools, we can expect research in the area of AI lie detection to grow.
The trouble with AI lie detection
You can see the problem with AI lie detection very clearly in these numbers. Assuming the University of Würzburg AI is roughly as accurate as the AI used in hiring or border patrol, you can see how it might improve hiring — and ruin border patrol.
With hiring, the majority of candidates are rejected and one is selected, normally based on human judgment. If AI judges better than humans do, a company might make better hires, on average.
But at the border, travelers are generally admitted unless agents find some reason to reject them. If AI is throwing up red flags and false positives at scale based on a 67% success rate, many otherwise acceptable travelers might get turned away.
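To see why, it helps to run the numbers. Here’s a rough back-of-the-envelope sketch; the 1% liar rate and the symmetric 67% accuracy are illustrative assumptions, not figures from the study:

```python
# Hypothetical illustration: a detector that classifies both lies and
# truths correctly 67% of the time, applied to 100,000 travelers,
# of whom 1% are assumed to actually be lying.
travelers = 100_000
liar_rate = 0.01
accuracy = 0.67  # assumed the same for both classes

liars = travelers * liar_rate
honest = travelers - liars

true_flags = liars * accuracy          # actual liars correctly flagged
false_flags = honest * (1 - accuracy)  # honest travelers wrongly flagged

print(f"Correctly flagged liars: {true_flags:.0f}")
print(f"Wrongly flagged honest travelers: {false_flags:.0f}")
```

Under these assumptions, roughly 670 liars get caught while nearly 33,000 honest travelers get flagged — false positives outnumber true positives by about 49 to 1, which is exactly the base-rate problem that makes border screening so much riskier than hiring.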
The European plan is already subject to lawsuits, and I would be surprised if it’s rolled out anytime soon.
It’s also safe to assume that AI lie detection software might get much, much better. And then what? Such a tool might prove invaluable in war for interrogating prisoners, and also for espionage purposes. It could be great for finding moles and double agents in spy organizations, and for finding and identifying terrorists.
Someday AI lie detection might become standard in hiring, and even applied to everyday office communications, including emails, texts and other communications tools.
Personally, I would love to see much better AI applied to social media posts, so that disinformation could be flagged by default. Given sufficiently good lie detection software applied at scale on social networks, social networks could go from the least reliable to the most reliable sources of information.
And there are a host of applications for tomorrow’s high-quality AI lie detection — including as a feature of AI glasses. (Imagine seeing a literal red flag in your glasses when the person in front of you is telling a lie.)
But I doubt lie detection software will ever be accepted in an office environment for ordinary everyday employees. It’s far too intrusive, creepy, and problematic. And it greases the slippery slope toward AI taking on the role of validating humans. If AI is determining our productivity, our integrity, and other qualities, then people are essentially working at the pleasure of the machines — the stuff of dystopian science fiction.
In the meantime, we’re going to see lots of headlines over the next few years about AI lie detection — and more of what we’ve seen this month: lawsuits, research and huge plans for implementation in security settings.
I’m not going to lie — the whole trend is both fascinating and horrible.
I don’t want to embarrass you, but the top of your phone’s screen is, like, totally a waste of space.
For years now, y’see, Android device-makers have been designing devices with a circular cutout along the display’s upper edge. It’s an awkward workaround that allows ’em to keep the bezels around the screen smaller, since they don’t have to squeeze the front-facing camera into that area.
But it also puts a glaringly blacked-out dead zone right smack dab in the center of your screen, in the middle of the Android status bar — thus creating a pointless void in an area that’s otherwise packed with power.
Here’s an eye-opening revelation for ya, though: It doesn’t have to be that way. You can’t do much about the existence of that camera cutout, of course, but with a teensy bit of barely-there effort, you can transform your phone’s status bar black hole into a genuinely useful advantage — an extra source of efficiency that actually makes your phone even more well-suited for on-the-go productivity.
Let me show you three different ways you can inject a hearty helping of extra intelligence into your Android status bar and turn that space-wasting liability into a brain-aiding asset.
Android status bar secret #1: The notification station
Our first Android status bar power-up takes inspiration from the other side of the mobile-tech reservoir — with a feature that may be making its way to at least some Android devices natively before long.
It’s a clever little setup that lets you implement something similar to Apple’s Dynamic Island on any Android device this instant. And with Samsung supposedly lookin’ at cookin’ up something similar for its Galaxy gadgets, it couldn’t be a more timely twist to consider.
The secret lies within a thoughtfully crafted app called, fittingly enough, dynamicSpot. And — well, here’s what it looks like in action:
JR Raphael, IDG
In short, dynamicSpot shows a pill-like shape around your phone’s camera cutout whenever a new notification arrives. You can then tap that shape to expand and interact with the notification or long-press it to jump directly into the associated app.
And the specifics of exactly how it works are entirely up to you. You can set how long the pill remains in place after a notification arrives, with a default of 20 seconds but a possible value as high as 24 hours — if you really want to make sure something important catches your eye. And you can select exactly which apps cause the dynamicSpot alert to appear, so you could conceivably use the tool only for important, high-priority notifications, if you were so inclined.
The app can also show pop-ups for important system events, like when your battery is low or your internet connection is unavailable — a handy little touch that makes those types of alerts even more prominent and likely to be noticed.
JR Raphael, IDG
Last but not least, in an extra-nifty touch, you can enable an on-demand pop-up menu of shortcuts to specific apps that shows up anytime you tap the camera cutout in your Android device’s status bar (when a notification isn’t actively present).
JR Raphael, IDG
The dynamicSpot app is free with an optional pro upgrade that unlocks some of its more advanced features. It does require a fair amount of system-level access in order to operate — including an accessibility control setting that looks more than a little daunting when you first enable it — but that access is legitimately required for what the app needs to do. The tool comes from a known and trusted Android developer, too, and it’s extremely clear about the fact that it doesn’t collect or share any sort of personal info.
Android status bar secret #2: The shortcut summoner
If you aren’t into the whole Dynamic Island concept but do appreciate the idea of being able to turn your Android device’s status bar camera cutout into a shortcut-summoning step-saver, this next power-up is precisely the path for you.
It’s an app I’ve described before as an Android shortcut genie — and while it hasn’t come up in conversation in quite a while now, it’s every bit as impressive as when I first mentioned it a year ago.
The app is called, rather tantalizingly, Touch the Notch. (Oh, yes.)
Unlike our first selection, Touch the Notch is completely invisible most of the time. You won’t even know it’s there, visually speaking.
But when you touch your finger to that status bar area of yours, you’ll activate an app or function of your choosing — a whole bunch of ’em, in fact, depending on how exactly you caress that comely camera cutout.
You can set separate actions for:
A single touch
A long-press
A double-tap
A right-to-left swipe
And a left-to-right swipe
And the available actions include some seriously powerful possibilities. You can set any manner of notch fondling to accomplish feats like:
Opening a specific app
Opening a custom menu of apps
Toggling your phone’s flashlight
Capturing a screenshot
Toggling do-not-disturb mode
Controlling audio playback
Adjusting your screen’s brightness
Turning your screen off
JR Raphael, IDG
And again, any of those things happen simply as a result of your tapping your favorite fingie to that otherwise useless camera cutout at the top of your device’s display. Not bad, right?
Just like with our last Android status bar enhancer, this app does require a handful of pertinent permissions — including the ability to act as a system accessibility service, which may sound scary but, again, is actually needed for the app to be able to operate. It’s also worth noting that the app doesn’t request internet access, meaning it couldn’t send your data anywhere even if it wanted to. And beyond that, its privacy policy is clear about the fact that it doesn’t store or share any manner of personal data.
Touch the Notch is free to use, with an optional donation if you want to support its developer.
Android status bar secret #3: The at-a-glance alerter
Last but not least in our collection of Android status bar superchargers is a splendidly subtle addition for your top-of-screen area — a zesty little nugget that introduces a major upgrade to the way your notifications, erm, notify you.
It’s called AodNotify, and it turns your Android camera cutout into an intelligent alert board that lets you learn about pending notifications without having to activate your display or allow yourself to be distracted by an info-dense always-on-display interface.
The specific version of AodNotify you need depends on what type of phone you’re using.
AodNotify can light up your Android status bar to indicate new notifications in all sorts of interesting ways. You can have incoming notifications from certain apps create a ring of light around the camera cutout at the top of your screen so you’ll always see ’em, for instance, even if you don’t hear the initial ding — or you can create a small LED-style dot at the top of your screen for specific types of incoming alerts. You can even set up an unmissable full-screen outline light in any color and style to associate with certain notifications, if you really want to get wild and make sure something catches your attention.
JR Raphael, IDG
What makes the setup especially useful is the fact that you can specify which exact apps will cause a notification light to appear — so, for instance, you could have important work-related apps like Slack or Gmail trigger an eye-catching light-up effect but not have that happen with less pressing alerts from LinkedIn or Google Photos.
AodNotify can light up your Android camera cutout for significant system events, too, like a low battery — and it can use specific colors mapped to each app or event so it’s easy to know what any effect means even with the most cursory of glances.
AodNotify is free with an optional $5 upgrade for some of its more advanced features. The app doesn’t require any permissions beyond what’s needed for its operation, and its developer (a well-known and widely trusted Android mainstay) says the software doesn’t collect, store, or share any manner of user data.
So there you have it: three supremely useful Android status bar enhancements. Pick the one that feels most helpful for you and your own style of getting stuff done — and watch that awkward camera cutout compromise morph into an indispensable efficiency advantage.
Hey — don’t stop here: Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks!
The demand for AI platform software is expected to grow roughly 40% a year over the next five years, rising from $27.9 billion in sales last year to $153 billion in 2028, according to a new report from research firm IDC.
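As a quick sanity check on those figures (a sketch assuming the growth rate is a compound annual growth rate of about 40.6%, which is what the $27.9 billion and $153 billion endpoints imply):

```python
# Compound $27.9B at ~40.6% per year and count the years to reach ~$153B
base = 27.9    # 2023 AI platform software revenue, in $ billions (per IDC)
rate = 0.406   # assumed CAGR implied by the forecast endpoints

value = base
years = 0
while value < 153.0:
    value *= 1 + rate
    years += 1

print(years, round(value, 1))  # 5 years, ~153.3
```

Compounding for five years lands almost exactly on IDC's 2028 figure, which is why a flat "40% a year" shorthand matches the forecast.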
The report focused on the rapid pace at which AI platforms, such as Microsoft Azure AI, Amazon AI services, Google Cloud AI, and OpenAI, grew last year, and how that growth is projected to maintain a “remarkable momentum,” driven by the increasing adoption of AI technologies across many industries.
IDC expects that level of growth to push revenue for AI software to $307 billion worldwide in 2027. That forecast includes platforms and AI applications, AI System Infrastructure Software (SIS), and AI Application Development and Deployment (AD&D) software.
In 2023, the global AI platforms market grew by 44.4% year-on-year compared to 2022. Microsoft led the market, increasing by 77.9% last year to capture 13.8% of the market. Palantir, a major AI player, had 7.5% of the market, representing an 18.2% year-over-year increase, according to IDC.
“OpenAI’s meteoric rise in 2023 marked nothing short of an enormous transformation in the AI landscape,” IDC said in its report, “Worldwide Artificial Intelligence Platforms Software Market Shares.” OpenAI had a staggering 690% year-over-year increase in revenue last year; the company’s market share soared to 5.8%, “a remarkable achievement for a relative newcomer in this highly competitive field,” IDC said.
Ritu Jyoti, group vice president of IDC’s AI, Automation and Analytics research, said the current market shows “no signs of slowing down. Rapid innovations in generative AI is changing how companies think about their products, how they develop and deploy AI applications, and how they leverage technology themselves for reinventing their business models and competitive positioning.”
AI platform adoption will continue to accelerate with the emergence of unified platforms for predictive and generative AI that support interoperating APIs, ecosystem extensibility, and responsible AI adoption at scale, according to Jyoti.
IDC expects cloud-based deployments of AI software to grow at a faster rate than on-premises deployments, with revenue from AI platforms in the public cloud forecast to have a five-year CAGR of 50.9%.
“This trend is attributed to the advanced security measures, data and regulatory compliance, and the scalability capabilities that cloud vendors offer,” IDC said. “With the rapid advancement of technology and the growing demand for AI solutions from businesses across industries, cloud-based deployment of AI platforms software is expected to continue expanding at a rapid rate.”
Google’s announcement last week that it still isn’t dropping support for third-party cookies came “out of the blue” and “undermines a lot of the work we’ve done together to make the Web work without third-party cookies,” Hadley Beeman of the Worldwide Web Consortium (W3C) wrote Monday in a blog post titled “Third-party cookies have got to go.”
The W3C agrees with the updated RFC definition of cookies, which acknowledges they have “inherent privacy issues.” Moreover, the RFC strongly recommends that “user agents adopt a policy for third-party cookies that is as restrictive as compatibility constraints permit.”
While third-party cookies — which are set by a website other than the one a user is visiting, through embedded content such as ads, social media widgets, or tracking pixels — can be helpful when used for authentication across multiple sites, they also enable hidden data collection about users’ internet activity, Beeman said.
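For illustration (a hypothetical sketch with made-up domain names, not drawn from the article): when a page on shop.example embeds a tracking pixel served from ads.example, the tracker’s HTTP response can set a cookie scoped to its own domain. Because the cookie is set from a cross-site context, Chrome will only accept it if it carries the `SameSite=None` and `Secure` attributes:

```http
Set-Cookie: uid=abc123; Domain=ads.example; SameSite=None; Secure
```

That same `uid` cookie is then sent back to ads.example from every site embedding the pixel, which is what allows a user’s activity to be linked across otherwise unrelated websites.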
There also are other hazards lurking in “the tracking and subsequent data collection and brokerage” that third-party cookies support, including “micro-targeting of political messages” that harm society at large, she wrote.
Google’s ‘user choice’ approach to cookies
Rather than end support for third-party cookies, Google instead decided to update Chrome’s cross-site tracking protection policy, unveiled last December, with an option in the settings of Chrome’s Privacy Sandbox, a set of privacy-preserving APIs. The option allows users to choose whether they want to experience web browsing within the Privacy Sandbox setting or continue to have traditional cross-site cookies activated.
Chrome users can also use the “Enhanced Ad Privacy” feature Google rolled out last year as part of Chrome version 115; it allows for interest-based advertising without tracking individual users across websites, the company said.
The W3C has been working with Google’s Privacy Sandbox team for several years on third-party cookie policies with “substantial progress,” Beeman noted. The recent change in direction by Google represents a major step back in that effort, she said.
“The unfortunate climb-down will also have secondary effects, as it is likely to delay cross-browser work on effective alternatives to third-party cookies,” Beeman wrote. “We fear it will have an overall detrimental impact on the cause of improving privacy on the web.”
That said, the W3C hopes Google “reverses this decision and re-commits to a path towards removal of third-party cookies,” she added.
Google did not immediately respond to requests for comment Tuesday.
Google’s lack of privacy leadership
Privacy experts acknowledged that while third-party cookies do present privacy concerns, there are numerous stakeholders to consider.
“Google has repeatedly attempted to replace cookies…aiming to balance user privacy with the needs of advertisers,” said Jason Soroko, senior vice president of product at Sectigo, a provider of certificate lifecycle management. “However, these efforts have struggled due to resistance from privacy advocates, regulatory hurdles, and technical challenges.”
That likely contributed to Google’s decision to delay pulling its support for cookies, he said, citing the “complex interplay between innovation, privacy concerns, and regulatory frameworks.”
More disappointing is that the company “still seemingly has no clear plan to implement greater privacy and safety controls against tracking,” said one privacy expert, who doesn’t believe Google is doing enough.
Google “has long boasted about the innovation happening in its Privacy Sandbox initiative, but that has yet to publicly bear fruit,” said Gal Ringel, co-founder and CEO of Mine, a global data privacy-management firm.
Moreover, given Google’s role as “the single most influential organization on the internet today,” the company’s failure “to take a true stand on privacy sets a bad precedent on the issue at a critical time when the US is trying to pass more legislation to address the problem,” he added.