It is strategically essential for the US to bring home the manufacture of key components for the technology used across government, consumer, and enterprise markets. It’s an imperative that affects IT leadership and will drive change across the coming decade.
That’s why it matters that Apple’s chip manufacturing partner, TSMC, has inked a deal with Amkor to “collaborate and bring advanced packaging and test capabilities to Arizona, further expanding the region’s semiconductor ecosystem.” Both TSMC and Amkor are investing in major projects in Arizona. Apple is the biggest customer of both firms.
Apple, Amkor, TSMC, ‘Neath the Arizona Skies
If this news sounds familiar, it’s because Apple confirmed its own deal with Amkor last November to package Apple Silicon chips fabbed by TSMC. This made Apple the “first and largest” customer at Amkor’s new manufacturing plant, which at that time was billed as the “largest advanced packaging facility in the US.”
Discussing that arrangement at the time, Apple Chief Operating Officer Jeff Williams said: “Apple is deeply committed to the future of American manufacturing, and we’ll continue to expand our investment here in the United States.”
He characterized Apple as “thrilled that Apple Silicon will soon be produced and packaged in Arizona.” Since then, TSMC has begun small-scale production of the A16 chip used in the iPhone 15 and 15 Plus.
Arizona becomes a silicon development powerhouse
The most recent announcement from Amkor and TSMC suggests the wind under this plan is blowing a little more strongly. The memorandum of understanding between the two companies means they will work together to bring “advanced packaging and test capabilities” to Arizona.
There is quite a lot more to the agreement:
First, the pact confirms that Amkor and TSMC have been closely collaborating to deliver high volume, leading-edge technologies for advanced packaging and testing of semiconductors to support critical markets such as high-performance computing and communications.
Second, it tells us that TSMC will now contract turnkey advanced packaging and test services from Amkor at its planned facility in Peoria, AZ.
These services will see particular use in advanced wafer fabrication.
The partners believe that the geographical proximity of the two firms will accelerate product cycle times, which should translate into faster processor design iterations.
But what may perhaps be most important is that the companies intend to jointly define some packaging technologies, such as TSMC’s Integrated Fan-Out (InFO) and Chip on Wafer on Substrate (CoWoS).
Chips in play
Apple watchers, take note: InFO packaging features have been in Apple’s chips since the A10, as well as in the R1 chip inside Vision Pro. It is also notable that Google is expected to begin using chips with InFO packaging in 2025. With much of the industry coalescing around Arm-based processors, it’s hard not to see the strategic importance of bringing manufacturing into the US, particularly around AI.
CoWoS could also hold interesting opportunities for Apple, as it’s an advanced chip packaging tech that can efficiently link graphics processors, memory, and CPU together. There may be some implications as Apple is expected to move to 2nm chips (made by TSMC, designed by Apple’s silicon teams, and based on Arm reference designs) in 2025.
TSMC Chairman and CEO C.C. Wei referenced this earlier in the year, telling Nikkei: “AI is so hot that all my customers want to put AI into their devices.” Apple has since done precisely that, and Nvidia uses CoWoS packaging in its own high-performance graphics processors.
What the partners say
Speculation aside, this is what Amkor and TSMC had to say in a statement announcing the agreement: “Amkor is proud to collaborate with TSMC to provide seamless integration of silicon manufacturing and packaging processes through an efficient turnkey advanced packaging and test business model in the United States,” said Giel Rutten, Amkor’s president and CEO. “This expanded partnership underscores our commitment to driving innovation and advancing semiconductor technology while ensuring resilient supply chains.”
“Our customers are increasingly depending on advanced packaging technologies for their breakthroughs in advanced mobile applications, artificial intelligence and high-performance computing, and TSMC is pleased to work side by side with a trusted longtime strategic partner in Amkor to support them with a more diverse manufacturing footprint,” said Kevin Zhang, TSMC’s senior vice president of business development and global sales and deputy Co-COO.
“We look forward to close collaboration with Amkor at their Peoria facility to maximize the value of our fabs in Phoenix and provide more comprehensive services to our customers in the United States.”
Designed in Arizona
It is almost certainly no coincidence these deals are all falling into place just two years after the US passed the CHIPS and Science Act to fund corporations such as TSMC and Amkor to increase investment in US semiconductor industries.
Apple last year confirmed that Amkor will invest approximately $2 billion in its Arizona project, even as Cupertino confirmed it remains on target to invest $430 billion in the US economy by 2026. Of course, with the company also quietly building iPhones in Brazil, one question lingers behind all of this: to what extent will future iPhones be American made?
OpenAI has unveiled Canvas, a new ChatGPT interface specifically crafted for developers that is now available in beta.
Canvas has been developed using GPT-4o and makes it possible, among other things, to use a separate window for code. It also provides a number of shortcuts that can be used to review code, track down bugs, add comments, and translate code to JavaScript, TypeScript, Python, Java, C++, and PHP. The new interface can also help make texts longer or shorter, fix grammatical errors, or add emojis in appropriate places.
The first to get access to Canvas are users of ChatGPT Plus and Team, followed next week by Enterprise and Edu users. Non-paying users have to wait until the beta testing is finished before they’ll get access.
Ah, gestures. Whether we’re waltzin’ around the world or working on a touch-enabled tech toy, don’t you just love how much you can convey with a simple swish of a single finger?
While our single-fingered movements in the physical world may be more, let’s say, communicative in nature, here in the land o’ Android, a gesture is a powerful action initiator. Deploying the right finger motion at the right moment can save you time and help you accomplish all sorts of interesting things on whatever device you’re using.
The only problem is that by their very nature, gestures are invisible. You don’t see ’em or have any real signs of their existence — which means it’s up to you to remember they exist and then get yourself in the habit of using ’em. And no matter how long you’ve used Android or how intelligent of a mammal you may (allegedly) be, you’re bound to forget about some gestures over time or never even notice that they’re there in the first place.
With that in mind, I’ve been racking my brain to remind myself of all the awesome Android gesture tricks that are out of sight, out of mind for most of us.
Android gesture action #1: The Quick Settings shortcut
We’ll start with one of the simplest but most effective Android gesture actions around. While it may be relatively basic, though, you’d better believe it’s all too easy to lose sight of over time.
So, for context: Android’s Quick Settings — y’know, those one-tap tiles that show up when you swipe down twice from the top of your screen — are all about saving time and making it easier to access common adjustments.
And here, for ye, is a quick time-saving gesture for getting to those Quick Settings even more quickly:
Swipe down from the very top of your screen with two fingers together, side by side — and hey, how ’bout that?!
You got exactly where you wanted to go, in precisely half the steps it’d typically take ya.
And speaking of Quick Settings…
Android gesture action #2: Hidden holds
When you see a tile or a button, like the ones in Android’s Quick Settings area, your first instinct is to tap it — right?
Well, here’s a little secret: With certain Android Quick Settings options, you can also press and hold the buttons to accomplish an extra invisible action.
The tricky thing is that there’s no real way to know when that maneuver’s possible. But, for instance, in the standard Google Android interface that’s present on Pixels and certain other devices, pressing and holding the Quick Settings tiles for Internet, Hotspot, Bluetooth, Quick Share, Dark Theme, Do Not Disturb, and even Auto-Rotate zaps you directly to the associated section of your full system settings.
Samsung handles this a bit differently and less consistently (because — well, Samsung), but you’ll find some long-press surprises within its Quick Settings setup, too, if you press and hold to see what happens.
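For the developer-curious among us, that long-press jump isn’t magic: Android’s tile API lets any app publish its own Quick Settings tile, and long-pressing a tile launches whatever activity in that app declares the ACTION_QS_TILE_PREFERENCES intent filter. Here’s a minimal Kotlin sketch of a custom tile, with the “night light” name and toggle behavior as hypothetical stand-ins:

```kotlin
// A bare-bones custom Quick Settings tile. The service must be declared
// in the manifest with the BIND_QUICK_SETTINGS_TILE permission; an activity
// declaring the ACTION_QS_TILE_PREFERENCES intent filter is what the system
// opens when the user long-presses the tile.
import android.service.quicksettings.Tile
import android.service.quicksettings.TileService

class NightLightTileService : TileService() {

    // Runs when the user taps the tile: flip between active and inactive.
    override fun onClick() {
        val tile = qsTile ?: return
        tile.state =
            if (tile.state == Tile.STATE_ACTIVE) Tile.STATE_INACTIVE
            else Tile.STATE_ACTIVE
        tile.updateTile()
    }
}
```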
Android gesture action #3: On-demand shortcuts
While we’re thinkin’ about that good old-fashioned long-press Android gesture, take a sec to remind yourself of this brilliantly invisible little benefit:
Pressing and holding any icon on your home screen or in your app drawer will surface a series of simple shortcuts for jumping directly to specific areas within the associated app.
So, for instance, with Google Docs, you can go straight into working on a new document without having to first open up the app and find the right options. With Google Calendar, you can create a new event with a single tap. With Slack, you can make your way immediately into any recently accessed workspace or conversation. And with Google Maps, you can fire up instant navigations to any of your favorite places right from your home screen.
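Those long-press menus come from Android’s app shortcuts system, which developers feed either statically via a shortcuts.xml file or dynamically in code. A minimal Kotlin sketch, assuming a hypothetical note-taking app with a NewNoteActivity and an ic_new_note icon resource:

```kotlin
// Publishing a launcher long-press shortcut with androidx.core.
// The "new_note" ID, NewNoteActivity, and R.drawable.ic_new_note are
// hypothetical stand-ins for a real app's identifiers.
import android.content.Context
import android.content.Intent
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat
import androidx.core.graphics.drawable.IconCompat

fun publishNewNoteShortcut(context: Context) {
    val shortcut = ShortcutInfoCompat.Builder(context, "new_note")
        .setShortLabel("New note")
        .setIcon(IconCompat.createWithResource(context, R.drawable.ic_new_note))
        // Shortcut intents must carry an explicit action.
        .setIntent(Intent(context, NewNoteActivity::class.java).setAction(Intent.ACTION_VIEW))
        .build()
    // pushDynamicShortcut adds the shortcut and trims the list if it's full.
    ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)
}
```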
Android gesture action #4: The Overview swift swipe
First things first, with our next nifty trick: You know about Android’s Overview interface, right?
That’s the list of recently opened apps you can access by swiping upward about an inch from the bottom of your screen and then stopping, if you’re using the current Android gesture navigation system — or by tapping one of the icons along the bottom edge of your screen, if you’re still stickin’ with the old legacy three-button nav setup. (It’s a square-shaped icon at the right in the standard Google version of Android and a three-vertical-line icon at the left with Samsung — again, ’cause Samsung.)
Once you’re in that area, take advantage of two easy-to-miss extra gesture options:
You can swipe up on any app’s card you see to close it and dismiss it from the list.
And you can swipe down on any app to open it quickly.
Whee!
Android gesture action #5: The fast app flip
When you want to zip back to the app you had opened most recently, remember this:
With the current Android gesture nav setup, you can flick your finger horizontally to the right along the bottom edge of your screen to move backwards one step in your app continuum — and then you can swipe to the left in that same area to flip back from there.
It’s basically like Alt-Tab in Windows, only on Android:
If you’re still rollin’ with the old three-button nav approach, double-tapping the Overview icon will accomplish something similar.
Android gesture action #6: History in a hurry
Android’s notification history is one of the platform’s most useful and underused elements. Once you activate it, you can access a list of alerts that’ve popped up on your device — even after you’ve dismissed ’em. Handy, wouldn’t ya say?
And here’s a hidden gesture few Android-appreciating animals are even aware of: In addition to the History button at the bottom of the Android notification panel — in the standard Google version of Android, at least, when you have one or more notifications present — you can press your favorite fingie onto the words “No notifications” when no notifications are showing to get to that same place in a flash.
This is one even Samsung hasn’t stripped out of the software. (Hallelujah!)
Android gesture action #7: The Clock quick-open
Pixel pals, time to teach yourself a faster way to access your Pixel Clock app:
Swipe down once from the top of your screen to open your notifications panel, then tap the time in the upper-left corner of the screen.
Good to know, no?!
Android gesture action #8: The split-screen slide
If you’re using a reasonably recent large-screen Android device, be it a tablet or a foldable, this next one’s for you:
Google’s brilliantly useful taskbar is an awesome way to switch between apps and slide into Android’s typically out-of-the-way split-screen mode especially easily.
First, to summon the taskbar, swipe up gently from the bottom of your screen — just barely, then stop. (And note that this’ll work only in a large-screen Android environment — meaning only in the fully unfolded, tablet-like state of a phone like the Pixel Fold or on a traditional tablet’s spacious display.)
Then, once you’ve got the taskbar in front of you, press and hold your finger onto any icon either in the favorites area or within the app drawer at the left of the taskbar, then drag it up into either side of the screen to start a split between that and whatever other app you already had open.
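If you’re wondering what apps have to do to play along here, the answer is: not much. An activity just needs to be resizable, and it can react when you drop it into a split. A tiny Kotlin sketch, with the activity name as a hypothetical stand-in:

```kotlin
// An activity that notices when it enters or leaves split-screen.
// It must also be declared resizable in the manifest
// (android:resizeableActivity="true") for drag-to-split to work.
import android.app.Activity
import android.content.res.Configuration
import android.util.Log

class VideoActivity : Activity() {

    // Called whenever the activity moves into or out of multi-window mode.
    override fun onMultiWindowModeChanged(
        isInMultiWindowMode: Boolean,
        newConfig: Configuration
    ) {
        super.onMultiWindowModeChanged(isInMultiWindowMode, newConfig)
        Log.d("VideoActivity", "In split screen: $isInMultiWindowMode")
    }
}
```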
And while we’re thinking about that large-screen Android experience…
Android gesture action #9: The Keep panel resize
This next trick is one I just discovered during my Pixel 9 Pro Fold explorations the other day, and my goodness, is it a good’un:
When you’re looking at the Google Keep Android app on any large-screen setup, be it an unfolded foldable phone or a tablet, take note: You can press and hold your finger onto the line separating the app’s two panels — the note list and whatever individual note you’re actively viewing — and then slide your finger in either direction to change the panels’ sizes.
It’s the same gesture available in the standard Android split-screen interface, now possible within a single specific app’s view, too.
On a related note…
Android gesture action #10: The Calendar divide
Following that revelation last week, a thoughtful Android Intelligence reader reached out to tell me about a similarly invisible advanced gesture they’d noticed in the Google Calendar Android app — again, when it’s being used in a large-screen setup.
With Calendar, when you’re looking at any split view — showing both a full calendar view and a specific event, in other words — you can press and then slide your finger along the line separating the panels to adjust each side’s size.
Mind. Blown.
Android gesture action #11: Video vrooming
Android’s picture-in-picture system is fantastic for keeping a video or even Google Maps navigation present on your screen while you’re doing other things.
In most apps that support the function, you can start a picture-in-picture view by heading back to your home screen while the video or navigation is playing (though some apps, like YouTube, do have certain restrictions in place for when the feature can be used).
Then — here’s the fun advanced-gesture-requiring part — once that picture-in-picture box is present, with recent Android versions, you can use two fingers to pinch in or out on the box itself to make it smaller or larger.
You can also press and hold your finger onto the box to fling it around to any area of your screen — including, even, off to the side, if you want it out of the way and just barely visible for a moment — and to dismiss it entirely, too, by dragging it down to the bottommost edge of the display.
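For developers, the system handles all of that pinching and flinging; an app only has to declare android:supportsPictureInPicture="true" on its activity and ask to enter the mode. A minimal Kotlin sketch (the 16:9 aspect ratio is an arbitrary choice):

```kotlin
// Entering picture-in-picture from a video or navigation activity (API 26+).
// Once the PiP box is up, resizing and dismissal gestures are system-handled.
import android.app.Activity
import android.app.PictureInPictureParams
import android.os.Build
import android.util.Rational

fun enterPip(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val params = PictureInPictureParams.Builder()
            .setAspectRatio(Rational(16, 9)) // keep the floating box video-shaped
            .build()
        activity.enterPictureInPictureMode(params)
    }
}
```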
Android gesture action #12: The tab swipe
The next time you need to see your tabs in Chrome, swipe down from the address bar area.
From there, you can tap any tab to open it and swipe left or right on any tab in your list to dismiss and close it.
Android gesture action #13: The menu slider
Speaking of sliding, an oldie-but-a-goodie Android gesture gem that’s all too easy to forget is the slide-down gesture that’s possible in lots of app menus.
When you see a three-dot menu icon within an app, instead of pressing it, try sliding your finger downward on it. In Chrome, Gmail, and plenty of other places, that’ll open up the menu and then allow you to simply keep sliding downward and stop on the option you want.
Android gesture action #14: Camera slidin’
Before you stop slippity-sliding, take a sec to open your phone’s Camera app — then try sliding your finger up or down and left or right on the main viewfinder area.
The specifics of what happens will vary depending on who made your device, but you might just uncover some interesting possibilities you never knew existed.
That’s absolutely the case for Pixels and Samsung devices alike!
It’s an easy way to use your keyboard as a trackpad of sorts and shift the on-screen cursor in any text field simply by sliding your finger around.
And here’s all there is to it: Anytime you’ve got an active text field open, just swipe your finger side to side on the Gboard space bar. You’ll see the on-screen cursor move right along with that friendly li’l fingie of yours.
If the gesture isn’t workin’ for ya, tap the four-square menu icon in Gboard’s upper-left corner, select “Settings,” then tap “Glide typing” and make sure the toggle next to “Gesture cursor control” is in the on and active position.
And there you have it: With this and all the other advanced Android gesture actions we just went over, the power’s officially in your fingertips. Once you remember to swipe, slide, and press in all the right places, you’ll be flyin’ around your phone like never before.
Under the Digital Services Act (DSA), the European Commission has requested information from YouTube, Snapchat, and TikTok about which parameters their algorithms use to recommend social media content to users.
The Commission then wants to evaluate the extent to which these algorithms can amplify risks linked to, for example, democratic elections, mental health and children’s well-being. The authority also wants to look at how the platforms work to reduce the potential impact their recommendation systems have on the spread of illegal content, such as the promotion of drugs and incitement against ethnic groups.
The social media companies have until Nov. 15 to provide the requested information.
Most Apple watchers will have noticed that the company’s iPhone 16 marketing puts Apple Intelligence front and center, even though its home-baked breed of AaI (Artificial [Apple] Intelligence) isn’t available quite yet.
All the same, the system, which we explain in great depth here, is on the way. And in the run up to its arrival, we’re learning more about it, and when and how it will be introduced. As we wait on data about the extent to which Apple Intelligence boosts future iPhone sales, read on to learn when Apple Intelligence will come to your nation, what schedule the various tools are shipping on, and other recently revealed details concerning Apple’s hugely hyped service.
When is Apple Intelligence coming?
Apple will introduce the first of its Apple Intelligence services with the release of iOS 18.1. More tools and services will be made available later this year and across 2025, when the company will likely introduce brand-new and unannounced features. You will need an iPhone 16 series device, an iPhone 15 Pro series device, or an iPad or Mac with an M1 chip or later to run the system.
What schedule are service releases on?
A Bloomberg report tells us when to expect Apple Intelligence features to appear:
iOS 18.1:
Due in mid-October, this first set of features will include various writing tools, phone call recording and transcription, a smart Focus mode, and Memories movies. Apple tells us the feature list includes:
Writing Tools.
Clean Up in Photos.
Create a Memory movie in Photos.
Natural language search in Photos.
Notification summaries.
Reduce Interruptions Focus.
Intelligent Breakthrough and Silencing in Focus.
Priority messages in Mail.
Smart Reply in Mail and Messages.
Summaries in Mail and Messages.
And Siri enhancements, including product knowledge, more resilient request handling, a new look and feel, a more natural voice, the ability to type to Siri, and more.
iOS 18.2:
In December, we should see Apple make Genmoji and Image Playground services available.
iOS 18.4:
This is when Siri will be overhauled to become more contextually aware and capable of providing more personally relevant responses. This release is thought to be coming in March and will be preceded by a more minor update (iOS 18.3).
Where will Apple Intelligence be available?
Bad news, good news. The good news is that US iPhone owners will get to use Apple Intelligence as soon as iOS 18.1 ships. The other good news is that any user anywhere willing to set their device language to US English should also be able to run the services. The bad news: if you want to keep your iPhone running in your own language, you’ll have to wait a little while.
Apple has promised to introduce localized English support for the following countries in December: Australia, Canada, New Zealand, South Africa, and the United Kingdom.
Throughout 2025, the company has promised to introduce Apple Intelligence support for English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese. The company also promised support for “other” languages, but hasn’t announced which ones. For the moment, at least, Apple Intelligence will not be available in the EU.
How much storage does the system need?
An Apple document confirms that Apple Intelligence requires 4GB of available iPhone storage to download, install, and use. The company hasn’t disclosed how much space is required on iPads or Macs, but it seems reasonable to expect it’s close to the same. Apple also warns that the amount of required storage could increase as new features are introduced.
What else to know
Apple now sees AI as a hugely important component of its business moving forward. That means the service will work on all future iPads, Macs, and iPhones (including the iPhone SE). It also means the company is plotting a path to support the service on visionOS devices and HomePod and to deploy it in future products, including an intelligent home automation and management system it apparently plans, along with the introduction (at last) of a “HomeOS.” There’s more information here.
Although OpenAI’s revenues are increasing significantly, the generative AI (genAI) pioneer remains dependent on financial injections, according to Reuters.
The maker of ChatGPT generated revenue of $300 million in September alone, sources said — an increase of 1700% compared to the beginning of 2023. And the company expects revenue to jump to $11.6 billion next year.
Nevertheless, OpenAI expects to lose around $5 billion this year despite sales of $3.7 billion.
Expenses can only be partially traced
Various factors are responsible for the high losses, reports The New York Times. One of the biggest operating costs this year has been energy consumption, tied to the enormous upswing in usage since the launch of ChatGPT at the end of 2022. On the revenue side, the company sells subscriptions for various tools, and the startup grants licenses to numerous companies for the use of large language models (LLMs) from its GPT family.
Employee salaries and office rent also have a financial impact.
AI needs more money
In order to cover existing debts and fuel further growth, the genAI company has for some time been pursuing another round of financing, which should also help manage energy costs.
The latest financing round — led by Thrive Capital, a US venture capital firm that plans to invest $1 billion — brought in $6.6 billion and pushed the company’s valuation to $157 billion. At the same time, OpenAI is warning investors away from rivals like Anthropic, xAI and Safe Superintelligence (SSI), a startup launched by OpenAI co-founder Ilya Sutskever.
Apple, which had been in talks to join the round, ultimately chose not to participate. One reason for the change of heart could be internal turmoil caused by the board’s plans to transform OpenAI into a for-profit company. Following the announcement of those plans, there were a number of key departures at OpenAI, most notably that of CTO Mira Murati.
In the near term, the growth of OpenAI is likely to continue; according to analysts’ calculations, the company has now achieved a market share of 30%.
OpenAI has raised $6.6 billion from investors like Thrive Capital and Tiger Global, but the AI company also sought assurances that investors would avoid funding five competing firms, according to a Reuters report.
The competitors include Anthropic, Elon Musk’s xAI, and Safe Superintelligence (SSI), a startup launched by OpenAI co-founder Ilya Sutskever.
These companies directly compete with OpenAI in advancing large language models, a capital-intensive effort.
Additionally, OpenAI’s no-invest list named two AI application firms — AI search startup Perplexity and enterprise search company Glean.
On Wednesday, the San Francisco-based startup announced it completed its latest funding round, reaching a $157 billion valuation — the highest in Silicon Valley’s history.
This comes after the company revealed plans to shift from its nonprofit origins to a for-profit structure amid major leadership upheavals, including the sudden departure of several top executives.
“The new funding will allow us to double down on our leadership in frontier AI research, increase compute capacity, and continue building tools that help people solve hard problems,” OpenAI said in a statement.
The investors included chipmaker Nvidia and Microsoft. Apple, which had been in discussions to invest, ultimately chose not to participate, according to Reuters.
Impact of exclusivity deal
Exclusivity agreements, while not unheard of, are relatively rare in the tech industry, particularly within the AI venture capital space, according to Thomas George, president of Cybermedia Research.
“These arrangements have traditionally been more common in fast-moving, high-stakes industries like ridesharing, where firms like Uber and Lyft sought to secure conflict-free funding during critical growth periods,” George said. However, such agreements were typically limited to defined periods, such as six or 12 months, he added.
The move could significantly reshape the venture capital landscape, potentially intensifying competition for funding among emerging AI startups and concentrating most venture capital investments around fewer, larger companies.
“OpenAI’s move could stifle innovation in the short term,” said Nitish Mittal, partner at Everest Group. “With fewer resources available, competitors might struggle to keep pace with OpenAI’s advancements. By restricting capital flow to competitors, OpenAI could consolidate more market share and talent, thus slowing down the growth of rivals.”
However, this might also incite a counter-reaction, spurring these companies to seek alternative funding sources, forge new alliances, or innovate to reduce their reliance on heavy capital, according to George.
“While this could temporarily consolidate OpenAI’s position, it also risks creating a more aggressive competitive environment, where rivals may accelerate innovation to differentiate themselves,” George said.
Possible expansion plans
These concerns become more pronounced when considering OpenAI’s plans, particularly its possible expansion of enterprise offerings.
The inclusion of AI application developers in its portfolio suggests this direction, as the company projects revenue to rise to $11.6 billion by 2025, up from $3.7 billion this year.
“To capture deeper financial engagement, OpenAI aims to accelerate the development and rollout of enterprise-grade AI systems and large language models, making it more competitive,” George said. “This strategy appears to support OpenAI in expanding its business operations and achieving high revenue expectations sooner than anticipated.”
However, there is also the possibility of heightened regulatory scrutiny. “If successful, this strategy could boost OpenAI’s market position, but it may also provoke regulatory scrutiny or push rival firms to innovate faster through alternative funding channels,” Mittal said.
Are you a robot? Google really, really wants to know.
The answer to this question is demanded of web users 200 million times a day via CAPTCHAs (“Completely Automated Public Turing tests to tell Computers and Humans Apart”), most of them served by reCAPTCHA, a system owned and operated by Google.
With reCAPTCHA, Google kept the test of whether users were human or bots to protect websites from spam and fraud — but with a twist. Google intended to substitute the original, deliberately distorted letters (readable by people but not bots) with accidentally distorted ones — ambiguous scans from the Google Books Library Project. For example, if most users identified a blurry letter as an “E,” that would be confirmed or corrected in the digital book scan.
The vision for this project was to get the world’s web users to work for free, identifying letters while also thwarting malicious bots. Google later used reCAPTCHA to have humans identify ambiguous objects photographed for Street View and Maps, including home addresses, street signs, and business names and addresses. More recently, Google has used reCAPTCHA to support its broader AI initiatives across maps, computer vision, speech recognition, and security.
There are many kinds of CAPTCHAs — text-based, image-based, audio, math problems, word problems, time-based, honeypot, picture identification, and invisible. The most common ones are the click-the-checkbox CAPTCHAs and the click-the-pictures-that-contain-a-bus CAPTCHAs. Both are Google’s reCAPTCHA v2.
Google’s most recent version, reCAPTCHA v3, uses behavioral analysis to detect bots without explicit challenges. The user is never forced to stop and solve a puzzle. This approach makes sense and doesn’t stop users in their tracks to solve Google’s recognition problems.
So why do we still see the old kind of reCAPTCHA v2 challenges everywhere, every day?
One reason is that reCAPTCHA v2 is simpler for website owners to implement and manage. They can verify users without having to interpret complex risk scores. It’s also more tangible to website owners because they can see it (whereas v3 operates invisibly in the background). It also has more customizable options and uses fewer cookies.
Even website owners who use v3 implement v2 as a fallback system, either for especially suspicious traffic or when the v3 engine can’t capture enough data.
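To make that v2-versus-v3 difference concrete: both versions are checked the same way on the server, via a POST to Google’s documented siteverify endpoint, and the difference is in what comes back. Here’s a minimal JVM Kotlin sketch; secret-key handling and JSON parsing are left out:

```kotlin
// Server-side reCAPTCHA verification using Java 11+'s built-in HttpClient.
// The endpoint and the secret/response form fields are Google's documented
// siteverify API; everything around them is a simplified sketch.
import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun verifyCaptchaToken(secret: String, token: String): String {
    val form = "secret=${URLEncoder.encode(secret, "UTF-8")}" +
        "&response=${URLEncoder.encode(token, "UTF-8")}"
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://www.google.com/recaptcha/api/siteverify"))
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(form))
        .build()
    // v2 responses carry a simple {"success": true/false} verdict; v3 responses
    // add a "score" from 0.0 to 1.0 that the site owner must threshold itself,
    // which is the extra interpretation work that keeps many sites on v2.
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}
```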
While using reCAPTCHA v2 has clear benefits, events this month have radically changed the cost-benefit analysis.
AI defeats reCAPTCHA
Researchers from ETH Zurich published a research paper Sept. 13 demonstrating that AI can solve Google’s reCAPTCHA v2 with 100% accuracy.
The study reveals that current AI technologies can effectively defeat advanced image-based CAPTCHAs like reCAPTCHA v2. Any malicious actor anywhere in the world can easily implement an automated bot system that gets past reCAPTCHA v2 challenges.
Humans can “prove they’re human” with 71-85% accuracy. Machines can “prove they’re human” with 100% accuracy.
Fake CAPTCHAs spread malware
Fraudulent CAPTCHA pages are being shared on shady websites claiming to offer cracked versions of popular games like Black Myth: Wukong, Cities: Skylines II, and Hogwarts Legacy. The fake CAPTCHA test tricks users into performing keyboard actions that secretly paste and execute a PowerShell script that downloads and installs the Lumma Stealer malware.
The same fraudulent CAPTCHA challenges are also included in phishing emails disguised as GitHub communications about a fake “security vulnerability.”
One reason the phony CAPTCHA scam works is that CAPTCHAs are so ubiquitous. We’ve all been trained like lab rodents to engage with them, so it’s easy to convince the public to use them. The social engineering trick simply hijacks an existing widespread habit.
The ubiquity of CAPTCHAs itself is an exploitable security threat.
In the past few weeks, it’s become clear that reCAPTCHA v2 is both breakable by AI and a huge security risk. But the biggest problem with reCAPTCHA v2 has existed for years.
Unconscionable exploitation of users
I can’t stand reCAPTCHA v2 challenges. As a research-obsessed journalist, I open hundreds or thousands of web pages daily. I’ve bookmarked hundreds of pages of news searches, which I open every day to stay informed about my far-flung technical beats. I churn through web pages at high speed, hunting for information. Plus, I use a lot of browser extensions.
I’m also a digital nomad, traveling globally and constantly accessing random Wi-Fi networks in airports, cafes, restaurants, Airbnbs, and elsewhere. I often need to pretend (for some US services) to be in the United States, so of course, I use a VPN.
Each aspect of how I use the web and Google Search is deemed “suspicious,” so CAPTCHA challenges are constantly arresting my work momentum.
I’m an online speed freak. I’ve spent thousands of dollars on my laptop solely for performance. I don’t want anything slowing me down. So, for Google to stop me in my tracks and make me identify buses, stairs, and crosswalks a hundred times a day while I’m in the writing “zone” is vexing to an extreme.
The researchers note that Google might have profited as much as $888 billion from cookies created by reCAPTCHA sessions and could monetize CAPTCHA activity by tracking users, gathering behavioral data, and creating user profiles for advertising. (Google denied this charge, saying reCAPTCHA v2 user data is used only to improve the service.)
(The researchers also estimate that reCAPTCHA traffic consumed about 134 petabytes of bandwidth, which has so far burned roughly 7.5 million kWh of energy and produced 7.5 million pounds of CO2.)
Google: It’s time to pull the plug
Enough already with the CAPTCHAs that force users to stop and take a test! It’s a massive, unpaid exploitation of users for Google’s gain. The technology is easily defeated by AI. And the very existence of the CAPTCHA concept is now being exploited by malicious actors.
While reCAPTCHA v3 is probably much better, it’s now clear that reCAPTCHA v2 is beatable with AI, a security risk, and a giant pain in the ass for millions of people.
Google has killed at least 296 products since 2006, according to the Google Graveyard.
Apple has been accused of violating union rights in a complaint filed by the US National Labor Relations Board (NLRB).
The complaint, filed in May by the NLRB and released Monday, accused Apple of several federal labor law violations, including “coercively interrogating employees about their union sympathies;” “confiscating union flyers from its employee break room,” and “interfering with, restraining, or coercing employees” from exercising their rights.
It’s not the first time Apple has been accused by a US labor board of trying to illegally stop efforts to unionize. In 2021, the company was accused of interrogating workers and barring them from leaving pro-union flyers in a break room in a Manhattan store.
Apple did not immediately respond to a request for comment on the allegations.
The most recent complaint is the result of charges filed last year by Ashley Gjovik, a former Apple senior engineering manager who was “terminated” in 2021, and Cher Scarlett, who accused the company of forbidding employees from discussing wages and employment conditions.
Scarlett agreed to leave Apple and drop her NLRB complaint. Scarlett was one of the founders of the #AppleToo movement, a whistleblower group that alleged racism, sexism, and inequality at the company.
Last year, after an attempt to unionize failed at another Manhattan store, the NLRB affirmed an administrative law judge’s findings that Apple illegally interrogated workers at the store about unionization efforts and prevented them from sharing pro-union flyers. A complaint was also filed by Gjovik in a California federal court alleging Apple illegally fired, disciplined, threatened, and interrogated her for engaging in protected union activity at its headquarters in Cupertino, CA.
The NLRB complaint calls on Apple to stop the violating practices and to post notices in workplaces stating that the agency found the company violated federal labor law and that Apple agrees to obey those laws.
Apple also faces at least two other pending NLRB cases claiming it fired an employee at its headquarters for criticizing managers and illegally interfered with a union campaign at a retail store in Atlanta, according to Reuters.