Microsoft will start selling a thin client device that lets workers boot directly to Windows 365 “in seconds,” the company announced on Tuesday.
Windows 365 Link will cost $349 when it launches next April, but businesses can contact their Microsoft account team now to request a private preview. The preview program is open to customers in a handful of countries: the United States, Canada, the United Kingdom, Germany, Australia, and New Zealand.
“As cloud adoption has been growing, we’re starting to receive asks from customers for a Windows 365 endpoint that is secure, simple to manage, and gets them directly to Windows 365,” said Jalleen Ringer, product leader for Windows Cloud Endpoints, “and it gets them there fast.”
Measuring 4.72 inches square and just over an inch thick, the device can easily be mounted behind a desktop monitor or under a desk, Microsoft said.
The Link will particularly suit organizations that have hybrid work arrangements in place, according to Microsoft, with employees sharing the same desks and monitors. With Link devices, workers can turn up to work without a laptop and access their own Windows 365 desktop via the cloud.
“This would also be a great fit for call centers … and front-line workers who need to be able to log into their desktop from different areas around the factory, hospital, warehouse, etc,” said Tom Mainelli, IDC group vice president for device and consumer research. “What’s potentially very appealing about this relatively low-cost hardware is that it should drive a very good Windows 365 experience while helping to accentuate many of the manageability and security benefits of Windows 365.”
The device may not be a good fit for organizations that need more flexibility, however. “Those managing a mix of virtual desktop technologies, including Microsoft Azure Virtual Desktop, will need to consider alternative endpoints, as Link exclusively supports Windows 365,” said Stuart Downes, vice president analyst at Gartner.
The Link comes with 8GB of RAM — more than enough to handle the 4GB minimum requirement for Teams video calls. Microsoft also plans to support other video meeting apps, such as Cisco’s Webex.
There’s also support for dual 4K monitors; four USB ports (three USB-A 3.2, one USB-C 3.2); one HDMI port; one DisplayPort; a 3.5mm headphone jack; an ethernet port; and a Kensington lock port. The device supports Bluetooth 5.3 and Wi-Fi 6E.
The Link uses an Intel chip, but Microsoft declined to provide further detail on the processor and other hardware specs.
Although the Link does not have a neural processing unit (NPU), by connecting to Windows 365, users can access the latest Windows 11 AI features coming to Copilot+ PCs — such as Recall and Click to Do — via the cloud.
The Link runs a lightweight version of Windows — Windows CPC — to authenticate and connect users to their PC running in the cloud, with minimal features such as settings. There are no local apps, no sensitive data stored on the device, and no local admin users. “With the small OS, we’re able to really dial up the security at the endpoint, reducing its attack surface and enabling a high security posture, all without impacting the experience,” said Ringer.
Microsoft has been testing the device with a small number of customers; Ringer claimed they’ve seen a lower total cost of ownership with the Link and Windows 365 (presumably compared to fleets of Windows-based laptops and PCs). That’s due, in part, to less time spent by IT on device setup, maintenance and user issue resolution.
Link customers must still pay a monthly subscription fee for Windows 365, but the overall math can make business sense.
“A Windows 365 subscription will typically cost more than buying a PC outright, but then you have to factor in the cost of managing that device and keeping it secure over its lifetime,” said Mainelli. “Many firms struggle to find enough IT professionals to manage their fleets. W365 can simplify this, and Microsoft’s new hardware may eliminate the roadblock of deciding what it should run on.”
Microsoft said the new device is the first iteration, with other form factors in development. The company also plans to work with original equipment manufacturer partners to develop similar products.
“The launch of Microsoft’s Surface devices previously spurred a wave of innovation among other device manufacturers,” said Downes. “Similarly, Windows 365 Link is expected to ignite advancements in the thin client market, which has seen limited hardware innovation in recent years.”
Gartner predicts that annual spending on Desktop as a Service (DaaS) will grow from $3.5 billion today to more than $5 billion in 2028, he said.
Unemployment in IT fields has been dropping in recent months. Generative AI (genAI) is opening up new career opportunities. Inflation is easing, and the US economy appears strong.
So why are so many people still finding it hard to land a job in technology?
This year, large enterprises, including tech giants such as Alphabet (Google), Dell, Intel, Microsoft and Cisco, have announced significant layoffs. So far in 2024, 168 tech companies have laid off more than 42,000 employees. That’s still a vastly smaller number than the 262,682 staffers laid off by tech firms in 2023.
One reason for the trend: small to midsize companies that had been starving for workers were scooping up talent left in the wake of enterprise layoffs. According to management consultancy Janco Associates, the IT unemployment rate dropped from 6% in August to 3.8% in September, while some other industry sources estimated it as low as 2.4%.
Last month, according to Janco, the number of unemployed IT professionals in the US dropped from 148,000 to 98,000. (Janco derived its findings from a US Bureau of Labor Statistics (BLS) report released at the beginning of November.)
More recently, however, hiring has slowed, in part because of a lack of qualified candidates and because the number of job openings shrank as IT positions were quickly filled earlier this year, according to Janco Associates CEO Victor Janulaitis.
“In the last three months the IT Job market shrank by 21,900 jobs,” Janulaitis said. “Overall, that is a flattening of the long-term growth rate pattern of the IT job market. Based on our data and forecast models, there will be no growth in the IT job market in [the remainder of] calendar year 2024.”
A crisis of confidence for job-seekers?
ZipRecruiter just released its latest Job Seeker Confidence Index; it found that confidence has dropped to its lowest level since the index began in Q1 2022. Forty-one percent of job seekers reported it is now much harder to find a job, and 43% said their job search is going poorly. Only 13% of job seekers described their hunt as going well — the widest gap in nearly three years. More than half of job seekers (53%) said there are fewer opportunities compared to six months ago, and 34% said they had to look outside their usual field.
That “flattening” in job growth has led to a dour mood among IT workers. A new survey of more than 1,100 individuals in high-demand tech roles by online hiring platform Indeed found more than a third of tech talent is concerned about layoffs in the next year. Four in 10 believe if layoffs occur, they’ll be impacted, 70% said that they are likely to start looking for roles at other companies if their current company does layoffs, and 79% feel pressure to upskill because of the rise of genAI.
The dynamic of smaller companies hiring more employees also added to the shift in needed skills.
“The economy is slowing,” Janulaitis said. “However, there are a number of jobs unfilled by IT pros. The primary culprit is the lack of qualified individuals to fill the open positions.”
Savi Thethi, who leads tech transformation consulting for the Americas at Ernst & Young, agreed that despite low unemployment, many IT job seekers struggle due to a shortage of skills in crucial areas such as data analytics, artificial intelligence, and cloud computing. The rapid evolution of those technologies has outpaced the availability of qualified professionals, creating a gap between demand and supply, he said.
“In addition, companies are increasingly investing in reskilling and upskilling their current workforce, not only within their IT departments but across the entire organization, to increase digital fluency and better prepare their workforce for the future,” Thethi said.
One of the biggest changes in the IT job market is that companies are less interested in college degrees and more likely to be seeking specific hard and soft skills such as problem solving, critical thinking, communication, and change management. They also want employees who can simply get along with others.
Companies also want candidates who have a mix of business and tech skills, according to Thethi. So what can IT workers in the hunt for a new job do to increase their odds of landing one? Obtaining certifications in key areas such as AI, data science, and cloud computing is a relatively quick win in many cases, he said. Additionally, leveraging social platforms to build and expand professional networks can lead to new opportunities and valuable connections.
It’s also crucial for job seekers to highlight their business acumen, showcasing how their skills and contributions have driven value creation and business outcomes in previous roles, he said. “By combining technical expertise with strong business skills and a proactive networking approach, IT professionals can significantly enhance their job prospects,” Thethi said.
Uneven demand for IT pros, depending on skills
Allison Shrivastava, an economic research associate at the Indeed Hiring Lab, said that while unemployment is low, some sectors are doing much better than others. For example, many in-person and service sectors have job postings well above their pre-pandemic baselines, while other sectors, including software development and IT, are below theirs.
There are several reasons tech-related sectors aren’t doing as well as others. In particular, the sectors expanded during the post-pandemic boom, with job postings in software development reaching well above pre-pandemic levels, Shrivastava said. The declines in hiring for those jobs could be related to a market correction after several years of rapid growth.
“These sectors are also pretty costly to hire in, both in terms of time and money, so employers could be more cautious in expanding their employee base, favoring a wait-and-see approach while the labor market settles,” Shrivastava said.
Linsey Fagan, a senior talent strategy advisor at Indeed, called the tech job market “unique right now.” With tech job volume down and more talent looking, job seekers can take steps to improve their success.
“First, tech is advancing rapidly, underscoring the importance of upskilling to remain competitive,” Fagan said, adding that the future of work will increasingly be shaped by AI, forcing tech pros to continually adapt to stay relevant.
Employers are currently struggling with the question of how to future-proof their job descriptions, since they are not 100% sure on what skills will be essential, according to Amy Loomis, a vice president analyst with research firm IDC.
“Job seekers need to show experience with current IT required skills as well as those that will be valuable for the future to drive AI-enabled business. Increasingly, enterprises require that job candidates verify their skills in real-world scenarios, like labs,” Loomis said. “Employers take significant stock in badging as a marker of proficiency, but some certifications can only be achieved by being employed somewhere that offers the training to get the badge, so it’s a Catch 22.”
A key difference between last year and this year is the speed with which skills are becoming outdated and the need for employees to undertake continuous learning to stay current, Loomis added.
James Stanger, chief technology evangelist with IT industry group CompTIA, said hiring managers are looking for more specialized knowledge in potential hires in areas such as automation, cloud computing, data security, and incident response. Hands-on knowledge is essential for demonstrating true skill capabilities.
Remote work is less of an option worldwide than it was last year, according to Stanger. Meanwhile, security and privacy regulations have proliferated, such as NIS2 in the European Union, SEC regulations in the United States, and the Cybersecurity Act in Malaysia.
“Hundreds of regulations have appeared, mandating the use of things such as Software Bill of Materials (SBOM) and imposing secure by design requirements,” Stanger said. “They are having an effect on hiring, because they drive hiring managers to look for people with an understanding of these regulations and best practices.”
AI skills are beginning to “creep slowly into serious job role descriptions,” he said.
“Automation is also increasingly important. That’s a significant change. Data analytics knowledge and the ability to manage data have also increased in demand,” Stanger added.
According to Indeed’s AI at Work report, AI is expected to impact jobs that require highly technical skills. For IT professionals, staying on top of the evolving needs of the market, especially in areas like AI, machine learning (AI/ML), and cloud computing, will be critical. “At Indeed, we like to say AI won’t replace jobs, but people who can use AI well will,” Fagan said.
Stanger added that job seekers need to learn how to evaluate data as it comes in from AI programs. In other words, AI doesn’t yet create “information”; it just creates data, and it takes a human to interpret that data so it can be applied to a business use case. If you can demonstrate you know how to do that, Stanger said, “that’ll get you some great interview opportunities.”
“Leaders of organizations in literally every sector have realized that wise use of technology is critical for any organization to stay on mission, or serve its constituencies, or remain profitable,” Stanger said. “As a result, hiring managers are reacting to significant pressure from the challenge to make sure their workers can map technology to business needs.”
In addition to AI skills, expertise in programming languages like Rust and Go, knowledge of Google Cloud Platform and AWS, and experience with cloud management platforms such as Terraform are all in surging demand, with relatively few job seekers to fill those open roles. One place to acquire those skills: IT certifications.
The top IT certifications listed in tech job postings for Q3 2023 vs. Q3 2024, measured as a percentage of total tech job listings. Despite some certifications growing in demand and others shrinking, there hasn’t been a significant change in overall demand for certifications. (Source: Indeed)
According to Indeed, the top 10 certifications ranked by highest salary in job listings are:
CISSP
PMP
IAT Level II
DoD 8570
IAT
Certified Information Systems Auditor
CompTIA Security+
CCNA
CompTIA Network+
CompTIA A+
Certifications directly related to the role someone is being hired for are essential and should be called out in the job description, according to Tamara Larsen, Indeed’s director of IT Infrastructure & Platforms. These typically include certifications from recognized third-party providers such as AWS Cloud, Azure Cloud, Azure Active Directory, PMI Project Management, or CSM Scrum Master, among others.
In addition, certifications that help develop complementary skills, such as Leadership Development, Professional Writing, Toastmasters, or other technical certifications not explicitly required, can be helpful, too. “However, too many certifications that are not relevant can be considered a negative,” said Larsen.
“Learning those skills, and others related to AI, can give candidates a significant advantage in securing roles in what can only be described as a ‘dynamic landscape,’” said Indeed’s Fagan.
The good news for those currently working in IT: training is nearly always free.
“Our research found that 89% of tech professionals use company-provided training opportunities to keep their skills fresh. And with gen AI gaining momentum, 79% of tech professionals feel pressure to upskill,” Fagan said. “Most employers offer tuition reimbursement or upskilling opportunities, so it would be a missed opportunity not to take advantage.
“Additionally, adapting and integrating AI into workflows is becoming essential,” Fagan added.
Flexibility is a priority for many job seekers, with tech professionals favoring remote roles over in-office ones. However, staying open to hybrid or on-site work can help job seekers find jobs faster.
“Our research found that professionals who work on-site about four days per week tend to want to stay with their employers, likely due to the collaboration and sense of community fostered by in-person interactions,” Fagan said. “By staying open to upskilling, particularly in high-demand areas and in AI integration, and considering flexibility in work location, tech job seekers can better navigate today’s tech job market.”
If hiring managers are looking for more evidence of your experience, then find clever ways to get experienced people to vouch for you. And, find ways to lead hiring managers into feeling confident in you.
“That’s more than just tech skill; you need to be a business tech problem solver. The way to prove that is to have a trusted third party do that for you,” CompTIA’s Stanger said.
The US Department of Justice (DOJ) is intensifying its antitrust actions against Google, proposing a historic move that could reshape the tech landscape. The DOJ has asked federal judge Amit Mehta to force Alphabet to sell its Chrome browser, which is a cornerstone of Google’s dominance in the search market, Bloomberg reported.
This proposal follows an August 2024 ruling that found Google illegally monopolized the search market.
The DOJ’s latest recommendation also includes measures related to artificial intelligence (AI) and the Android operating system, with the potential to impact both Google’s core advertising business and its burgeoning AI ventures. The case, which spans two presidential administrations, aims to address Google’s practices that critics argue suppress competition.
In addition to the sale of Chrome, the DOJ is pushing for data licensing requirements and for Google to uncouple its Android smartphone operating system from its other products, such as Google Search and Google Play, the report said, citing sources who asked not to be named.
These moves are designed to increase competition by giving rival companies more access to essential data and technologies currently controlled by Google, the report added.
“The DOJ’s attempt to force Google to sell Chrome is unprecedented and faces significant legal and practical challenges,” said Xiaofeng Wang, principal analyst at Forrester. “Google’s potential appeals could delay or overturn the decision. In addition, finding a suitable buyer without similar antitrust issues is also difficult.”
In October, the DOJ proposed splitting off Google’s Chrome browser and Android operating system as part of sweeping remedies aimed at curbing the tech giant’s “illegal monopoly” in online search and advertising.
“The DOJ is considering behavioral and structural remedies that would prevent Google from using products such as Chrome, Play, and Android to advantage Google search and Google search-related products and features — including emerging search access points and features, such as artificial intelligence — over rivals or new entrants,” the DOJ said in a court filing then.
Google seems to be deeply disturbed by this development.
“The DOJ continues to push a radical agenda that goes far beyond the legal issues in this case,” Lee-Anne Mulholland, vice president of Google’s regulatory affairs, said in a statement. “The government putting its thumb on the scale in these ways would harm consumers, developers, and American technological leadership at precisely the moment it is most needed.”
A query to the DOJ remains unanswered.
Chrome’s dominance and the push for a sale
The proposed sale of Chrome stems from its critical role in Google’s search business. Chrome, which controls roughly 65% of the global browser market, serves as the primary gateway for users accessing Google’s search engine. By owning Chrome, Google can track signed-in users and better target ads, which form the bulk of its revenue. Additionally, Chrome has been used to funnel users toward Google’s AI-driven products, such as its Gemini AI system.
In an effort to protect consumers and developers, the DOJ’s proposed measures aim to reduce Google’s power to favor its own products. If the sale of Chrome proceeds, it could unlock new opportunities for competitors, potentially creating a more balanced online search market and encouraging innovation in AI.
The DOJ is also seeking to reshape how Google uses data, particularly in relation to its AI products. Google’s AI-driven search results, branded as “AI Overviews,” have drawn criticism from website publishers who argue that these summaries hurt their web traffic and ad revenue by providing answers directly on the search results page. To address this, the DOJ is proposing that Google be required to license its search data and allow websites more control over how their content is used in Google’s AI models.
Another key aspect of the DOJ’s recommendations includes pushing Google to make its search results more widely available to competitors. This could allow rival search engines and AI startups to improve their services using Google’s syndicated search data, which is currently restricted.
Implications for Google’s future
These developments are poised to alter Google’s business operations significantly. While the company has expressed its opposition to these proposals, with Google’s VP calling the DOJ’s actions “radical,” the potential reforms could lead to a more competitive digital ecosystem.
“If the DOJ succeeds in forcing Google to sell Chrome, it would likely impact Google’s ad targeting and measurement capabilities due to reduced data availability,” Wang noted. “This could decrease ad effectiveness and revenue, pushing Google to develop new data collection methods or innovate its ad strategies.”
Despite the far-reaching nature of these measures, the DOJ has stopped short of requiring Google to sell Android, a move that had been considered but ultimately deemed less essential than the changes proposed for Chrome and AI data.
The case, which will see further developments in 2025, is expected to have lasting effects on the tech industry. If the DOJ’s proposals are implemented, they could set a precedent for regulating the power of large tech companies in both the online search and AI markets.
“This action could set a precedent, leading to increased scrutiny of other tech giants like Amazon and Apple,” Wang added.
According to him, successful measures against Google might “encourage regulators to target other dominant players, reshaping the tech landscape.”
OpenAI launched its new AI-powered online search engine — SearchGPT — with the aim of supplanting Google, Microsoft Bing, and startup Perplexity “for specific search tasks.”
But the move is also raising concerns that it could open the door to plagiarism; AI-powered search engines have been accused of intentionally or unintentionally plagiarizing web-based content because the platforms scrape material and data from all over the web in real-time.
They can also generate content that closely mimics pre-existing content, according to Alon Yamin, CEO of AI-enabled plagiarism detection platform Copyleaks. That’s because the large language model engines behind generative AI (genAI) are trained using existing content.
“The trouble with ‘unintentional plagiarism’ is that it creates a gray area that’s challenging for both content creators and search engines to navigate,” Yamin said.
SearchGPT is a front-facing interface built atop OpenAI’s genAI-based ChatGPT chatbot; it will enable real-time web access for up-to-date sports scores, stock information and news. The search engine will also allow follow-up questions in the same search window, and its answers will consider the full context of the previous chat to offer an applicable answer.
The AI-based web crawler is also being touted for its ability to allow questions in “a more natural,” conversational way, according to OpenAI.
OpenAI announced on Oct. 31 that it had launched the SearchGPT prototype after beta testing it since July. For now, access to SearchGPT is limited, and hopeful free users must join a waitlist.
An example of a search result from SearchGPT. (Source: OpenAI)
The pilot version of the search engine will be available at chatgpt.com/search, as well as through desktop and mobile apps. All ChatGPT Plus and Team users, as well as SearchGPT waitlist users, have access immediately. Enterprise and education users will get access in the next few weeks, OpenAI said, with a “rollout to all free users over the coming months.”
One standout feature is the search engine’s ability to allow follow-up questions that build on the context of the original query.
For example, a user could ask what the best tomato plants are for their region, then follow up by asking about the best time to plant them.
SearchGPT is also designed to offer links to publishers of information by citing and linking to them in searches. “Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links,” OpenAI said in its announcement.
Search rivals beat OpenAI to the punch
Last year, Google added its own AI-based capabilities to its search tool; so did Microsoft, which integrated OpenAI’s GPT-4 into Bing. “Big hitters like Google are already developing AI detection tools to help identify AI-generated content. But the challenge lies in distinguishing between high-quality AI-assisted content and low-quality, plagiarized material,” Yamin said. “It’s undoubtedly an ongoing process that will require constant refinement of algorithms and policies.”
For its part, Perplexity said in an updated FAQ that its web crawler, PerplexityBot, will not index the full or partial text content of any site that disallows it via robots.txt. Robots.txt files are simple text files stored on a web server that tell web crawlers which pages or sections of a website they are allowed to crawl and index.
“PerplexityBot only crawls content in compliance with robots.txt,” the FAQ explained. Perplexity also said it does not build “foundation models,” (also known as large language models), “so your content will not be used for AI model pre-training.”
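For readers unfamiliar with the mechanism, here is a minimal sketch of how a well-behaved crawler consults robots.txt before fetching a page, using Python’s standard-library urllib.robotparser. The site URL is illustrative, and the user-agent string simply mirrors the crawler name Perplexity documents; this is not Perplexity’s actual code.

```python
# Minimal sketch: honoring robots.txt before crawling a page.
# The example.com URLs are illustrative; "PerplexityBot" is the crawler name
# Perplexity documents, but this is not Perplexity's implementation.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

page = "https://example.com/articles/some-post"
if rp.can_fetch("PerplexityBot", page):
    print("robots.txt allows crawling this page")
else:
    print("robots.txt disallows this page; a compliant crawler skips it")
```

A site that wants to opt out entirely would publish a robots.txt containing a `User-agent: PerplexityBot` line followed by `Disallow: /`.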
The bottom line, Yamin said, is that search engines are in a “tricky position” as genAI evolves. “They want to provide the best results to users, which increasingly involves AI-generated or AI-enhanced content. At the same time, they need to protect original creators and maintain the integrity of search results. We’re seeing efforts to strike this balance, but it’s a complex issue that will take time to fully address.”
ChatGPT (i.e., SearchGPT) is probably best positioned among all competitors to upset Google’s dominance in online search, according to Damian Rollison, director of market insights at marketing software company SOCi. Of all the areas where ChatGPT competes with Google, search is where the latter’s 26-year advantage is the strongest.
“The early results of Bing search integrated into ChatGPT have been shaky, and the incredibly complex requirements of maintaining a world-class search platform tap into areas of expertise where OpenAI has yet to demonstrate its capabilities,” Rollison said.
Andy Thurai, a vice president analyst at Constellation Research, noted that Google still owns about 90% of the search engine market, meaning it won’t be easy for anyone to encroach on that dominance.
An example of a follow-on question in SearchGPT that began with asking: “What are the best tomatoes for my region?” (Source: OpenAI)
But Thurai said SearchGPT’s ease of use and conversational interface, which provides synthesized and more prose-like answers instead of traditional search results like Google, could attract more users in the future.
While Google can provide personalized search results based on location and previous searches, it still has limitations in terms of offering concise, conversational-style answers that remain on point, according to Thurai. “The concise nature of the answers, whether accurate or not, might be appealing to some users versus combing through the many pages of results that search engines like Google return.”
Ironically, when ChatGPT was asked the question: Is SearchGPT as good as Google search? ChatGPT’s reply was nuanced.
“Google is great for quickly finding specific, current resources and ChatGPT is better for having interactive conversations, asking detailed questions, or seeking explanations on a wide range of topics,” SearchGPT responded. “The two can actually complement each other depending on what you need!”
When asked whether it’s as good or better than Bing, ChatGPT replied: “In short, if you’re looking for real-time information or need to browse the web, Bing is likely better. If you need detailed, conversational, or creative assistance, ChatGPT tends to be more helpful. Each tool excels in different areas!”
The murky issue of plagiarism
Thurai said he’s unsure whether AI-based search engines or “answer engines” will invite plagiarism on their own.
“They are not all that different from Google search, in which you get many answers instead of the most relevant answer that AI thinks is relevant to your question,” he said. “However, AI for content creation is a big concern for plagiarism. What is more concerning is that the current plagiarism tools don’t catch AI-produced content correctly. They are mostly useless.”
There are, however, tools that can create digital watermark/credentials such as C2PA, which can provide some content provenance and/or authenticity mechanisms, Thurai noted.
“As AI tools become more sophisticated and part of our day-to-day lives, distinguishing between AI-generated and human-created content, properly attributing original sources or authors, and empowering overall originality becomes even more critical,” Copyleaks’ Yamin said. “This is precisely where the focus needs to remain — providing robust content integrity solutions that are evolving alongside the demands of the AI landscape.”
Microsoft’s November Patch Tuesday release addresses 89 vulnerabilities in Windows, SQL Server, .NET and Microsoft Office — including three zero-day vulnerabilities (CVE-2024-43451, CVE-2024-49019 and CVE-2024-49039) that warrant a “Patch Now” recommendation for Windows platforms. Unusually, there are a significant number of patch “re-releases” that might also require administrator attention.
There were a few reported issues for the September update that have been addressed now, including:
Enterprise customers are reporting issues with the SSH service failing to start on updated Windows 11 24H2 machines. Microsoft recommended updating the file/directory-level permissions on the SSH program directories (remember to include the log files). You can read more about this official workaround here.
It looks like we are entering a new age of ARM compatibility challenges for Microsoft. However, before we get ahead of ourselves, we really need to sort out the (three-month-old) Roblox issue.
Major revisions
This Patch Tuesday includes the following major revisions:
CVE-2013-3900: WinVerifyTrust Signature Validation Vulnerability. This update was originally published in 2013 via TechNet. It is now made available and applicable to Windows 10 and 11 users due to a recent change in the EnableCertPaddingCheck Windows API call. We highly recommend a review of this CVE and its associated Q&A documentation. Remember: if you must set your values in the registry, ensure that they are of type REG_DWORD, not REG_SZ (a short sketch of setting this value follows this list of revisions).
CVE-2024-49040: Microsoft Exchange Server Spoofing Vulnerability. When Microsoft updates a CVE (twice) in the same week, and the vulnerability has been publicly disclosed, it’s time to pay attention. Before you apply this Exchange Server update, we highly recommend a review of the reported header detection issues and mitigating factors.
And unusually, we have three kernel-mode updates (CVE-2024-43511, CVE-2024-43516 and CVE-2024-43528) that were re-released in October and updated this month. These security vulnerabilities involve a race condition in Microsoft’s Virtualization Based Security (VBS). It’s worth reviewing the mitigating strategies while you thoroughly test these low-level kernel patches.
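To make the WinVerifyTrust reminder above concrete, here is a hedged sketch of setting EnableCertPaddingCheck as a REG_DWORD with Python’s built-in winreg module. The registry paths are the ones commonly cited in Microsoft’s guidance for this CVE; verify them against the current advisory, run elevated, and test before deploying broadly, since the stricter signature check can affect files that store data in the signature padding.

```python
# Hedged sketch: set EnableCertPaddingCheck as a REG_DWORD (not REG_SZ).
# The paths below are those commonly cited in Microsoft's guidance for this
# CVE; confirm against the current advisory before touching production machines.
import winreg

KEYS = [
    r"SOFTWARE\Microsoft\Cryptography\Wintrust\Config",
    # 64-bit systems are generally told to set the WOW64 view as well.
    r"SOFTWARE\Wow6432Node\Microsoft\Cryptography\Wintrust\Config",
]

for path in KEYS:
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE
    )
    # The column's point: the value must be a DWORD, not a REG_SZ string.
    winreg.SetValueEx(key, "EnableCertPaddingCheck", 0, winreg.REG_DWORD, 1)
    winreg.CloseKey(key)
    print(rf"Set EnableCertPaddingCheck=1 (REG_DWORD) under HKLM\{path}")
```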
Testing guidance
Each month, the Readiness team analyzes the latest Patch Tuesday updates and provides detailed, actionable testing guidance based on a large application portfolio and a detailed analysis of the patches and their potential impact on Windows platforms and application installations.
For this release cycle, we have grouped the critical updates and required testing efforts into separate product and functional areas including:
Networking:
Test end-to-end VPN, Wi-Fi, sharing and Bluetooth scenarios.
Ensure internet shortcut files (ICS) display correctly
Security/crypto:
After installing the November update on your Certificate Authority (CA) servers, ensure that enrollment and renewal of certificates perform as expected.
Test Windows Defender Application Control (WDAC) and ensure that line-of-business apps are not blocked. Ensure that WDAC functions as expected on your Virtual Machines (VM).
Filesystem and logging:
The NTFileCopyChunk API was updated and will require internal application testing if directly employed. Test the validity of your parameters and any issues relating to directory notification.
I cannot claim to have any nostalgia for dial-up internet access (though I do have a certain Pavlovian response to the dial-up handshake sound). For those who are still using this approach to access the internet, the November update to the TAPI API has you in mind. A “quick” (haha) test is required to ensure you can still connect to the internet via dial-up once you update your system.
Windows lifecycle and enforcement updates
There were no product or security enforcements this cycle. However, we do have the following Microsoft products reaching their respective end of servicing terms:
Oct. 8, 2024: Windows 11 Enterprise and Education, Version 21H2, Windows 11 Home and Pro, Version 22H2, Windows 11 IoT Enterprise, Version 21H2.
Oct. 9, 2024: Microsoft Project 2024 (LTSC)
Mitigations and workarounds
Microsoft published the following mitigations applicable to this Patch Tuesday.
CVE-2024-49019: Active Directory Certificate Services Elevation of Privilege Vulnerability. As this vulnerability has been publicly disclosed, we need to take it seriously. Microsoft has offered mitigation strategies that most enterprises can apply while updating, testing and deploying, including:
Remove overly broad enroll or auto-enroll permissions.
Remove unused templates from certification authorities.
Secure templates that allow you to specify the subject in the request.
As most enterprises employ Microsoft Active Directory, we highly recommend a review of this knowledge note from Microsoft.
Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings:
Browsers (Microsoft IE and Edge);
Microsoft Windows (both desktop and server);
Microsoft Office;
Microsoft Exchange Server;
Microsoft Development platforms (ASP.NET Core, .NET Core and Chakra Core);
Adobe (if you get this far).
Browsers
Microsoft released a single update specific to Microsoft Edge (CVE-2024-49025), and two updates for the Chromium engine that underpins the browser (CVE-2024-10826 and CVE-2024-10827). There’s a brief note on the browser update here. We recommend adding these low-profile browser updates to your standard release schedule.
Windows
Microsoft released two patches with a critical rating (CVE-2024-43625 and CVE-2024-43639) and another 35 patches rated important. This month, the following key Windows features have been updated:
Windows Update Stack (note: installer rollbacks may be an issue);
NT OS, Secure Kernel and GDI;
Microsoft Hyper-V;
Networking, SMB and DNS;
Windows Kerberos.
Unfortunately, the following Windows updates address vulnerabilities that have been publicly disclosed or reported as exploited in the wild, making them zero-day problems:
CVE-2024-43451: NTLM Hash Disclosure Spoofing Vulnerability.
CVE-2024-49019: Active Directory Certificate Services Elevation of Privilege Vulnerability.
CVE-2024-49039: Windows Task Scheduler Elevation of Privilege Vulnerability.
Add these Windows updates to your Patch Now release cadence.
Microsoft Office
Microsoft pushed out six Microsoft Office updates (all rated important) that affect SharePoint, Word and Excel. None of these reported vulnerabilities involve remote access or preview-pane issues, and none have been publicly disclosed or exploited in the wild. Add these updates to your standard release schedule.
Microsoft SQL (nee Exchange) Server
You want updates to Microsoft SQL Server? We got ’em: 31 patches to the SQL Server Native Client this month. That’s a lot of patches, even for a complex product like Microsoft SQL Server. These updates appear to be the result of a major clean-up effort from Microsoft addressing a batch of reported security vulnerabilities.
The vast majority of these SQL Server Native Client updates address CWE-122-related buffer overflow issues. Note: these patches update the SQL Native Client, so this is a desktop update, not a server one. Crafting a testing profile for this one is a tough call. No new features have been added, and no high-risk areas have been patched. However, many internal line-of-business applications rely on these SQL client features. We recommend testing your core business applications before deploying this SQL update; otherwise, add it to your standard release schedule.
Boot note: Remember that there is a major revision to CVE-2024-49040 — this could affect the SQL Server “server” side of things.
Microsoft development platforms
Microsoft released one critical-rated update (CVE-2024-43498) and three updates rated as important for Microsoft .NET 9 and Visual Studio 2022. These are pretty low-risk security vulnerabilities and very specific to these versions of the development platforms. They should present a reduced testing profile. Add these updates to your standard developer schedule this month.
Adobe Reader (and other third-party updates)
Microsoft did not publish any Adobe Reader-related updates this month. The company released three non-Microsoft CVEs covering Google Chrome and SSH (CVE-2024-5535). Given the update to Windows Defender (as a result of the SSH issue), Microsoft also published a list of Defender vulnerabilities and weaknesses that might assist with your deployments.
The EU, which is developing guidelines for compliance with the region’s new AI law, has started collecting opinions in two areas via an online survey.
The first area involves how the law should define AI systems (compared to traditional software). Here, the EU wants to hear from people in the AI industry, companies, academics and civil society. The second area concerns when the use of AI should be prohibited. The EU wants detailed feedback on each prohibited use and is particularly interested in practical examples.
Responses will be collected via the survey until Dec. 11, and the European Commission expects to publish guidelines on the definition of AI systems and on prohibited uses in early 2025.
Google has entered a new and more intense phase of the AI wars, introducing its own Google Gemini app for iPhones; now you can use Apple Intelligence, ChatGPT, Microsoft Copilot and Google Gemini on one device.
Like most Google services, Google Gemini seems free, in that you don’t need to part with any cash credits to use it. Open it up, and you’ll find a chat window that also lets you get to a list of your previous chats. Speaking to Gemini is simple — text, voice, or even use a camera to point at something and you’ll get some answers. In other words, the app integrates the same features as you’ll find on the Gemini website, but it’s an app so that makes it cool.
Probably.
There is one more thing — access to the more conversational Gemini Live bot, which works a little like ChatGPT in voice mode. You can even assign access to Gemini as a shortcut on your iPhone’s Action button for fast access to the bot, which can also access and control any Google apps you’re brave enough to install on your iPhone.
All about Google
And that’s the thing, really. Like so much coming out of Silicon Valley now, Google Gemini is self-referencing.
You use Google on your iPhone to speak to a Google AI and access Google services, which gives you a more Android-like experience if you happen to have migrated to iOS from Android. You can use Gemini on your iPhone to control YouTube Music, for example, and you’ll get Google Maps if you ask for directions.
You even get supplementary privacy agreements for all those apps, some of which deliver exactly what you expect from Google the ads sales company, which is probably a little different than the privacy-first Apple experience you thought you were using. Gemini does put some protection in place, but your location data, feedback, and usage information can be reviewed by humans.
Most people won’t know this. Most people don’t read privacy agreements before accepting them. They should – but they are long, boring, and archaically written for a reason.
AI tribalism
If art reflects life and tech is indeed the new creativity, then the emergence of these equal but different digital tribes reflects the deeper tribalism that seems to be impacting every other part of life. Is that a good thing? Perhaps that depends on which state you live in.
At the end of days, Gemini on iPhone is your gateway to Google world, just as Windows takes you to Microsoft planet and Apple takes you to its own distorted reality, (subject to the EU). There are other tech worlds too, but this isn’t intended to be a definitive list of differing digital existences, especially now that these altered states have become both cloud- and service-based. It’s a battle playing out on every platform and on every device.
After all, if your primary computing experience becomes text- and voice-based, and the processors handling your requests are in the cloud, then it matters less which platform you use, as long as you get something you need. (It’s only later we’ll find that we get slightly less than what we need, with the difference between the two being the profit margin.)
Apple’s approach is to support those external services while building up its own AI suite with its own unique — and, if you ask me, vitally necessary — selling point around privacy. Others follow a different path, but it’s hard to ignore that control of your computational experience is the root of all these ambitions.
King of the hill
With its early mover advantage, OpenAI is not blind to the battle. Just this week it introduced support for different applications across Windows and Mac desktops. In a Nov. 14 message on X (for whoever remains genuinely active there), OpenAI announced: “ChatGPT for macOS can now work with apps on your desktop. In this early beta for Plus and Team users, you can let ChatGPT look at coding apps to provide better answers.”
That means it will try to help when working in applications such as VS Code, Xcode, and Terminal. While you work, you can speak with the bot, get screenshots, share files and more. There is, of course, also a ChatGPT app for iPhones, and the first comparative reviews of the experience of using both Gemini and ChatGPT on an Apple device show pros and cons to both. Downstream vendors, most recently including Jamf, are relying on tools provided by the larger vendors to add useful tools to their own.
Google and OpenAI are not alone. Just last month, Microsoft introduced Copilot Vision, which it describes as autonomous agents capable of handling tasks and business functions, so you don’t need to. Apple, of course, remains high on its recent introduction of Apple Intelligence.
Things will get better before becoming worse
It’s a clash of the tech titans. And like every clash of the tech titans so far this century, you — or your business — are the product the titans are fighting for. That raises other questions, such as how they will monetize your experience of AI.
How high will energy prices climb as a direct result of the spiraling electricity demands of these services? At what point will AI eat itself, creating emails from spoken summaries that are then in turn summarized by AI? When it comes to security and privacy, is even sovereign AI truly secure enough for use in regulated enterprise? Just how secure are Apple’s own AI servers?
And once the dominant players in the New AI Empire finally emerge, how, just how, will they do what Big Tech always does and follow Doctorow’s orders?
Research by British telecommunications provider O2 has found that seven in ten Britons (71%) would like to take revenge on scammers who have tried to trick them or their loved ones. At the same time, however, half don’t want to waste their own time doing it.
AI grandma against telephone scammers
O2 now wants to remedy this with an artificial intelligence called Daisy. As the company’s “head of fraud prevention,” this state-of-the-art AI granny’s job is to keep scammers away from real people for as long as possible with human-like chatter. To activate Daisy, O2 customers simply have to forward a suspicious call to the number 7726.
Daisy combines different AI models that work together to first listen to the caller and convert their voice to text. It then generates responses appropriate to the character’s “personality” via a custom single-layer large language model. These are then fed back via a custom text-to-speech model to generate a natural language response. This happens in real-time, allowing the tool to have a human-like conversation with a caller.
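O2 has not published the models or code behind Daisy, but the loop it describes (listen, transcribe, generate a persona-appropriate reply, speak it back) can be sketched in a few lines. The three model calls below are hypothetical stand-in stubs, not any vendor’s actual API; the point is the shape of the real-time pipeline, not the implementation.

```python
# Rough sketch of the loop O2 describes: speech-to-text, a persona-driven
# language model, then text-to-speech, repeated in near real time.
# transcribe(), generate_reply() and synthesize() are hypothetical stubs.

DAISY_PERSONA = (
    "You are Daisy, a chatty, easily sidetracked grandmother. "
    "Never reveal real personal or financial details; keep the caller talking."
)

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text model."""
    return "<caller speech as text>"

def generate_reply(persona: str, history: list[tuple[str, str]]) -> str:
    """Placeholder for the persona-tuned language model."""
    return "Oh, hold on dear, let me just find my glasses..."

def synthesize(text: str) -> bytes:
    """Placeholder for a text-to-speech model with Daisy's voice."""
    return text.encode()

def handle_scam_call(listen, play, max_turns: int = 50) -> None:
    """Keep the caller occupied for up to max_turns exchanges."""
    history: list[tuple[str, str]] = []
    for _ in range(max_turns):
        caller_text = transcribe(listen())     # speech -> text
        history.append(("caller", caller_text))
        reply = generate_reply(DAISY_PERSONA, history)
        history.append(("daisy", reply))
        play(synthesize(reply))                # text -> speech, back down the line
```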
Though “human-like” is an understatement: Daisy was trained with the help of Jim Browning, one of the most famous “scambaiters” on YouTube. With the persona of a lonely and seemingly somewhat bewildered older lady, she tricks the fraudsters into believing they have found a perfect target, while in reality she beats them at their own game.
OpenAI recently introduced SimpleQA, a new benchmark for evaluating the factual accuracy of large language models (LLMs) that underpin generative AI (genAI).
Think of it as a kind of SAT for genAI chatbots consisting of 4,326 questions across diverse domains such as science, politics, pop culture, and art. Each question is designed to have one correct answer, which is verified by independent reviewers.
The same question is asked 100 times, and the frequency of each answer is tracked. The idea is that a more confident model will consistently give the same answer.
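As a rough illustration of that repeated-sampling idea (not OpenAI’s actual evaluation code), the consistency check can be sketched like this; `ask_model()` is a hypothetical stand-in for any chatbot call.

```python
# Toy illustration: ask the same question many times and treat the frequency
# of the most common answer as a rough confidence signal.
from collections import Counter

def ask_model(question: str) -> str:
    """Placeholder for a real model call."""
    return "1912"  # e.g. an answer to "What year did the Titanic sink?"

def answer_consistency(question: str, trials: int = 100) -> tuple[str, float]:
    counts = Counter(ask_model(question) for _ in range(trials))
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / trials  # 1.0 = perfectly consistent

answer, consistency = answer_consistency("What year did the Titanic sink?")
print(f"Most frequent answer: {answer!r} (given {consistency:.0%} of the time)")
```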
The questions were selected precisely because they have previously posed challenges for AI models, particularly those based on OpenAI’s GPT-4. This selective approach means that the low accuracy scores reflect performance on particularly difficult questions rather than the overall capabilities of the models.
The idea is also similar to the SAT, which emphasizes not information that anybody and everybody knows, but harder questions that high school students would have struggled with and had to work to master. The benchmark results show that OpenAI’s models aren’t particularly accurate on the questions they were asked. In short, they hallucinate.
OpenAI’s o1-preview model achieved a 42.7% success rate. GPT-4o followed with a 38.2% accuracy. And the smaller GPT-4o-mini scored only 8.6%. Anthropic did worse than OpenAI’s top model; the Claude-3.5-sonnet model managed to get just 28.9% of the answers correct.
All these models got an F, grade-wise, providing far more incorrect answers than correct ones. And the answers are super easy for a human.
Here are the kinds of questions that are asked by SimpleQA:
What year did the Titanic sink?
Who was the first President of the United States?
What is the chemical symbol for gold?
How many planets are in our solar system?
What is the capital city of France?
Which river is the longest in the world?
Who painted the Mona Lisa?
What is the title of the first Harry Potter book?
What does CPU stand for?
Who is known as the father of the computer?
These are pretty simple questions for most people to answer, but they can present a problem for chatbots. One reason these tools struggled is that SimpleQA questions demand precise, single, indisputable answers. Even minor variations or hedging can result in a failing grade. Chatbots do better with open-ended overviews of even very complex topics but struggle to give a single, concise, precise answer.
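To see why hedged answers fail this kind of benchmark, consider a toy grader that demands a single normalized answer. SimpleQA itself grades responses with a prompted model rather than string matching, so this is purely illustrative.

```python
# Toy grader illustrating why one-correct-answer benchmarks punish hedging.
# SimpleQA's real grading uses a prompted model, not string matching.
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and surrounding whitespace."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def is_correct(candidate: str, gold: str) -> bool:
    return normalize(candidate) == normalize(gold)

print(is_correct("Paris", "Paris"))                                      # True
print(is_correct("The capital of France is probably Paris.", "Paris"))   # False: hedging and extra words fail
```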
Also, the SimpleQA questions are short and self-contained and don’t provide a lot of context. This is why providing as much context as possible in the prompts that you write improves the quality of responses.
Compounding the problem, LLMs often overestimate their own accuracy. SimpleQA queried chatbots on what they think is the accuracy of their answers; the models consistently reported inflated success rates. They feign confidence, but their internal certainty may be low.
In one set of test examples, researchers found that LLMs can generate accurate driving directions in complex environments like New York City. But when the researchers introduced detours, the models’ performance dropped because they lack an internal representation of the environment (as people have). Closing just 1% of streets in New York City led to a drop in the AI’s directional accuracy from nearly 100% to 67%.
Researchers found that even when a model performs well in a controlled setting, it might not possess coherent knowledge structures necessary for random or diverse scenarios.
The trouble with AI hallucinations
The fundamental problem we all face is this: Industries and individuals are already relying on LLM-based chatbots and generative AI tools for real work in the real world. The public, and even professionals, believe this technology to be more reliable than it actually is.
As one recent example, OpenAI offers an AI transcription tool called Whisper, which hospitals and doctors are already using for medical transcriptions. The Associated Press reported that a version of Whisper was downloaded more than 4.2 million times from the open-source AI platform HuggingFace.
More than 30,000 clinicians and 40 health systems, including the Children’s Hospital Los Angeles, are using a tool called Nabla, which is based on Whisper but optimized for medical lingo. The company estimates that Nabla has been used for roughly seven million medical visits in the United States and France.
One engineer who looked for Whisper hallucinations in transcriptions found them in every document he examined. Another found hallucinations in half of the 100 hours of Whisper transcriptions he analyzed.
Professors from the University of Virginia looked at thousands of short snippets from a research repository hosted at Carnegie Mellon University. They found that nearly 40% of the hallucinations were “harmful or concerning.”
In one transcription, Whisper even invented a non-existent medication called “hyperactivated antibiotics.”
Experts fear the use of Whisper-based transcription will result in misdiagnoses and other problems.
What to do about AI hallucinations
When you get a diagnosis from your doctor, you might want to get a second opinion. Likewise, whenever you get a result from ChatGPT, Perplexity AI, or some other LLM-based chatbot, you should also get a second opinion.
You can use one tool to check another. For example, if the subject of your query has original documentation — say, a scientific research paper, a presentation, or a PDF of any kind — you can upload those original documents into Google’s NotebookLM tool. Then, you can copy results from the other tool, paste them into NotebookLM, and ask if it’s factually accurate.
You should also check original sources. Fact-check everything.
Chatbots can be great for learning, for exploring topics, for summarizing documents and many other uses. But they are not reliable sources of factual information, in general.
What you should never, ever do is copy results from AI chatbots and paste them into something else to represent your own voice and your own facts. The language is often a bit “off.” The emphasis of points can be strange. And it’s a misleading practice.
Worst of all, the chatbot you’re using could be hallucinating, lying or straight up making stuff up. They’re simply not as smart as people think.