Author: Security – Computerworld

Apple gets ready for app sideloading on EU iPads

Apple didn’t make a song and dance about it during this week’s iPhone 16 launch, but one other thing that’s about to change (at least in Europe) is that it will support third-party app stores with the release of iPadOS 18. (It already supports this on iPhones in the EU.)

We knew this was coming. 

European regulators decided Apple needed to open up its platform earlier this year when they imposed requirements in the Digital Markets Act (DMA). What we don’t yet know is the extent to which the move to open up iPads and iPhones to this kind of competition will leave European customers vulnerable to security and privacy attacks.

Changing the story

We also don’t yet know whether every store that appears will be legitimate, or whether their security procedures will be as rock solid as those Apple provides. 

In part, that’s because we can’t predict how stable those regimes will become, or the extent to which increasingly well-resourced hackers will identify and exploit vulnerabilities in third-party app shops. That’s the big experiment that’s really taking place here, and we won’t see the results of this regulatory dedication to market ‘liberalization’ for some time to come.

It’s hard to believe Apple is having a good time in Europe. The bloc just demanded $14 billion in tax from the company, and regulators seem resistant to giving Apple the transparency it needs before offering Apple Intelligence there. 

Your private answer

Privacy is a core commitment for Apple. It works hard to protect it. And yet, regulators say the company’s demand for transparency around how the DMA will be applied to these features in the EU shows how anti-competitive the company is.

That’s a stretch. Apple’s argument is predicated on the nature of the personal data its system can access on devices. That information is personal, and the company is committed to keeping it that way. That’s why Apple Intelligence is being developed as a super-private AI service you can use when you want to hold your data close. 

If Apple finds itself forced to make that information available to third parties, then what will be the consequences for personal privacy? When you have a regulator who seems to think it’s a victory to play ‘Fortnite’ on her iPhone, then Apple would probably prefer to negotiate with someone possessed of more nuance. Sometimes things get worse before they get better.

Opening up…

Context aside, the addition of iPads to the open market does expand the number of potential consumers third-party stores can approach. 

However, it’s fair to say that developers have so far been pretty slow at taking Apple up on the terms under which it has so far offered to open up app store access. I suspect further compromise will be reached, but I also think Apple has the right to ensure its business is sustainable; I doubt critics will get a free ride, no matter how entitled to one they believe they are. 

In the end, the big question around the matter never seems to be asked. No one yet has stuck their neck above the parapet to ask how much profit a business should legitimately make. It is amusing the extent to which business-backed political entities everywhere want to avoid defining an ethical approach to profit margins. 

Perhaps they fear losing election contributions if they do.

Let the games begin

Nevertheless, the Great European App Store experiment is under way, and while the number of third-party stores that have appeared so far is limited, this may change. As well as Apple’s App Store, European iPhone and iPad users can now pick between Setapp Mobile, AltStore PAL, Aptoide, Mobivention, and the Epic Games Store. (Two of these are games stores, one is a B2B white-label app distribution service, Setapp is an app subscription service, and Aptoide is an open-source-friendly indie app store.)

From baby acorns, new trees grow. But the way I expect this to play out is that as the number of such stores grows, the sector will become more competitive, and then grow a bit until M&A action starts. Once the inevitable market consolidation does take place, it seems reasonable to expect we’ll end up with a couple of stores that have unique USPs, and two or three larger concerns, one of which may (or may not) be Apple’s App Store. 

That’s assuming Apple’s concerns around platform security and third-party apps are never realized; if they are, consumers will flock to the only secure store they know. As of Monday, EU consumers on iPads as well as iPhones will be able to try their luck. Good luck with that.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

New brain-on-a-chip platform to deliver 460x efficiency boost for AI tasks

The Indian Institute of Science (IISc) has announced a breakthrough in artificial intelligence hardware by developing a brain-inspired neuromorphic computing platform. Capable of storing and processing data across 16,500 conductance states in a molecular film, this new platform represents a dramatic leap over traditional digital systems, which are limited to just two states (on and off).
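To put that multi-state claim in perspective, here’s a quick back-of-the-envelope calculation (a generic sketch, not tied to IISc’s actual hardware): 16,500 distinguishable conductance states work out to roughly 14 bits of information per storage element, versus a single bit for a binary on/off cell.

```python
import math

# 16,500 distinguishable conductance states correspond to roughly
# log2(16500) bits of information per storage element, versus 1 bit
# for a conventional binary (on/off) cell.
states = 16_500
bits_per_element = math.log2(states)

print(f"{bits_per_element:.2f} bits per element")          # ~14.01 bits
print(f"vs. {math.log2(2):.0f} bit for a binary cell")     # 1 bit
```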

Sreetosh Goswami, assistant professor at the Centre for Nano Science and Engineering (CeNSE), IISc, who led the research team that developed this platform, said that with this discovery, the team has been able to nail down several unsolved challenges that have been lingering in the field of neuromorphic computing for over a decade.

Decoding OpenAI’s o1 family of large language models

OpenAI said its project Strawberry has graduated to a new family of large language models (LLMs) that the company has christened OpenAI o1.

The new family of models, which also includes an o1-mini version for cost efficiency, is differentiated from the latest GPT-4o models by its reasoning abilities, according to the company.

“We’ve developed a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math,” the company wrote in a blog post, adding that the models were currently in preview.

According to OpenAI, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology, and even excels in math and coding.

“In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions,” it added.

The reasoning capabilities inside the OpenAI o1 models are expected to help tackle complex problems in the fields of science, coding, and mathematics among others, according to OpenAI.

“For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows,” it explained.

How do the models get reasoning capabilities?

The new family of o1 models gets its reasoning capabilities from the company’s large-scale reinforcement learning algorithm that teaches the models how to think productively using its “Chain of Thought” mechanism in a “highly data-efficient training process.”

“We have found that the performance of o1 consistently improves with more reinforcement learning (train-time compute) and with more time spent thinking (test-time compute),” the company said in another blog post and highlighted that this approach has substantially different constraints when compared to LLM pretraining.

In the field of AI and generative AI, experts say that any model, during training time, adjusts its parameters based on the training data it has been fed, reducing errors in an effort to increase accuracy.

In contrast, during testing time, developers and researchers expose the model to new data in order to measure its performance and how it adapts to new instances of data.

Therefore, in the case of the new models, the more time a model spends analyzing and solving a problem, the more it learns, which sharpens its reasoning abilities.

This learning is activated by the model’s Chain of Thought algorithm, which works similarly to how a human may think for a long time before responding to a difficult question, often breaking the problem into smaller chunks.
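The “more test-time compute, better answers” idea can be illustrated with a toy sketch. This is a deliberately simplified stand-in (a brute-force search over candidate answers, not an actual LLM), where a larger “thinking” budget means evaluating more candidates before committing to one:

```python
def solve(target: float, budget: int) -> tuple[float, float]:
    """Toy stand-in for test-time compute: evaluate `budget` evenly
    spaced candidate answers in [0, 100] and keep the closest one.
    More 'thinking' (a bigger budget) means a finer search and,
    typically, a better final answer."""
    best, best_err = 0.0, float("inf")
    for i in range(budget):
        candidate = 100 * i / (budget - 1)
        err = abs(candidate - target)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err

# The same problem, tackled with a small and a large thinking budget:
_, quick_err = solve(42.0, 11)    # 11 candidates -> off by 2.0
_, slow_err = solve(42.0, 101)    # 101 candidates -> exact
print(f"small budget error: {quick_err}, large budget error: {slow_err}")
```

The analogy is loose, but it captures the tradeoff OpenAI describes: accuracy bought with inference-time compute rather than with a bigger model.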

Speaking about the models’ reasoning capabilities, Nvidia senior research manager Jim Fan, via a LinkedIn post, said that the world is finally seeing the paradigm of inference-time scaling popularized and deployed in production.

“You don’t need a huge model to perform reasoning. Lots of parameters are dedicated to memorizing facts, in order to perform well in benchmarks like trivia QA. It is possible to factor out reasoning from knowledge, i.e. a small ‘reasoning core’ that knows how to call tools like browsers and code verifiers. Pre-training compute may be decreased,” Fan explained.
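Fan’s “small reasoning core that knows how to call tools” can be caricatured in a few lines. In this toy dispatcher (the tool names, routing rule, and fact table are all invented for illustration), the core stores no facts of its own; it only decides which tool should answer:

```python
import re

# Stand-in for an external knowledge tool (a browser or search API
# in Fan's framing); the "reasoning core" never memorizes this.
KNOWLEDGE = {"capital of france": "Paris"}

def lookup_tool(query: str) -> str:
    return KNOWLEDGE.get(query.lower(), "unknown")

def calculator_tool(expr: str) -> str:
    # Restrict input to digits/operators so eval is safe in this sketch.
    if re.fullmatch(r"[\d\s+\-*/().]+", expr):
        return str(eval(expr))
    return "unsupported expression"

def reasoning_core(query: str) -> str:
    # The "reasoning" here is just routing -- the point being that the
    # core needs no stored knowledge, only a policy for delegating.
    if re.search(r"\d", query):
        return calculator_tool(query)
    return lookup_tool(query)

print(reasoning_core("17 * 3"))             # delegated to the calculator
print(reasoning_core("capital of france"))  # delegated to the lookup tool
```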

Further, Fan said that OpenAI must have figured out the inference scaling law a long time ago, which academia is just recently discovering. However, he did point out that productionizing o1 is much harder than nailing the academic benchmarks and raised several questions.

“For reasoning problems in the wild, how (does the model) decide when to stop searching? What’s the reward function? Success criterion? When to call tools like code interpreter in the loop? How to factor in the compute cost of those CPU processes? Their research post didn’t share much.”

OpenAI, too, in one of the blog posts has said that the new model, which is still in the early stages of development and is expected to undergo significant iteration, doesn’t yet have many of the features that make ChatGPT useful, such as browsing the web for information and uploading files and images.

“For many common cases GPT-4o will be more capable in the near term,” the company said.

OpenAI is hiding the reasoning tokens

Although the new family of models has better reasoning, OpenAI is hiding the reasoning tokens or the Chain of Thought algorithm for the models.

While the company acknowledges that exposing the Chain of Thought could help enterprises understand how the models function and whether they show signs of manipulating a user, it has decided against making a model’s unaligned Chain of Thought, or reasoning tokens, directly visible to users.

Interfering with an unaligned Chain of Thought would be counterproductive to the model’s functioning, the company explained, adding that to understand exactly how the model is reasoning, it must have the freedom to express its thoughts in unaltered form.

This is why OpenAI cannot train any policy compliance or user preferences onto the Chain of Thought.

“We acknowledge this decision has disadvantages. We strive to partially make up for it by teaching the model to reproduce any useful ideas from the Chain of Thought in the answer,” it added.

British programmer Simon Willison, co-founder of the social conference directory Lanyrd and co-creator of the Django web framework, said in a blog post that he wasn’t happy with OpenAI’s policy decision. “The idea that I can run a complex prompt and have key details of how that prompt was evaluated hidden from me feels like a big step backward,” he wrote.

Other limitations of the o1 model

Another issue Willison pointed out is that though reasoning tokens are not visible in the API response, they are still billed and counted as output tokens.

From a technical standpoint, this means that enterprises will have to increase their prompt budgets due to the reasoning tokens.

“Thanks to the importance of reasoning tokens — OpenAI suggests allocating a budget of around 25,000 of these for prompts that benefit from the new models — the output token allowance has been increased dramatically — to 32,768 for o1-preview and 65,536 for the supposedly smaller o1-mini,” Willison wrote.

These output token allowances are an increase from the gpt-4o and gpt-4o-mini models, both of which currently have a 16,384 output token limit, the programmer added.
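A quick bit of arithmetic with the numbers quoted above shows why the allowances had to grow: if roughly 25,000 tokens of a response budget can go to hidden (but billed) reasoning, the room left for visible output shrinks accordingly.

```python
# Limits and budget as quoted above (illustrative arithmetic only).
O1_PREVIEW_LIMIT = 32_768            # o1-preview output token limit
O1_MINI_LIMIT = 65_536               # o1-mini output token limit
SUGGESTED_REASONING_BUDGET = 25_000  # OpenAI's suggested reasoning allocation

for name, limit in [("o1-preview", O1_PREVIEW_LIMIT), ("o1-mini", O1_MINI_LIMIT)]:
    visible = limit - SUGGESTED_REASONING_BUDGET
    print(f"{name}: {limit} total output tokens, "
          f"~{visible} left for visible text after reasoning")
```

On those numbers, o1-preview could have fewer than 8,000 tokens left over for the answer the user actually sees.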

OpenAI is also advising enterprises to use retrieval-augmented generation (RAG) differently for the new models.

Unlike current RAG practice, where the advice is often to cram in as many potentially relevant documents as possible, OpenAI suggests that with the new models users include only the most relevant information, to prevent the model from overcomplicating its response, Willison explained.
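Here’s a minimal sketch of that advice, using a crude keyword-overlap score in place of a real embedding-based retriever (the documents and scoring function are invented for illustration):

```python
def keyword_score(doc: str, query: str) -> int:
    """Crude relevance score: how many query words appear in the doc.
    A production system would use embeddings, but the principle --
    rank and prune, don't cram -- is the same."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def build_context(docs: list[str], query: str, top_k: int = 1) -> list[str]:
    # Instead of stuffing every document into the prompt, keep only
    # the top_k most relevant ones.
    ranked = sorted(docs, key=lambda d: keyword_score(d, query), reverse=True)
    return ranked[:top_k]

docs = [
    "Quarterly revenue grew 12% on strong cloud sales.",
    "The cafeteria menu changes every Monday.",
    "Infrastructure spending is reported separately.",
]
print(build_context(docs, "how did cloud revenue change", top_k=1))
```

Only the single best-matching document reaches the model, which is the spirit of OpenAI’s guidance for o1.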

How to get the new o1 family of models? 

ChatGPT Plus and Team users will be able to access o1 models in ChatGPT starting Thursday.

Both o1-preview and o1-mini can be selected manually in the model picker, and at launch, weekly rate limits will be 30 messages for o1-preview and 50 for o1-mini, the company said, adding that it was working to increase those rates and enable ChatGPT to automatically choose the right model for a given prompt.

Alternatively, ChatGPT Enterprise and Edu users will get access to both models beginning next week. OpenAI said that developers who qualify for API usage tier 5 can start prototyping with both models in the API starting Thursday with a rate limit of 20.

“We’re working to increase these limits after additional testing. The API for these models currently doesn’t include function calling, streaming, support for system messages, and other features,” the company said, adding that it was planning to bring o1-mini access to all ChatGPT Free users.

What North Korea’s infiltration into American IT says about hiring

American companies have unwittingly hired hundreds — maybe thousands — of North Korean workers for remote IT positions, according to the US Department of Justice, the FBI, the US State Department, and cybersecurity companies.

The sophisticated scheme, perpetrated by the North Korean government for years, partly funds that country’s weapons program in violation of US sanctions. 

Agents working for the North Korean government use stolen identities of US citizens, create convincing resumes with generative AI (genAI) tools, and make AI-generated photos for their online profiles.

Using VPNs and proxy servers to mask their actual locations — and maintaining laptop farms run by US-based intermediaries to create the illusion of domestic IP addresses — the perpetrators use either Western-based employees for online video interviews or, less successfully, real-time deepfake videoconferencing tools. And they even offer up mailing addresses for receiving paychecks. 

These North Korean government agents have landed positions at more than 300 US companies, including Fortune 500 corporations, major tech firms, cybersecurity consultant companies, and aerospace manufacturers. 

US officials estimate that the scheme generates hundreds of millions of dollars annually for North Korea, directly funding its nuclear and ballistic missile programs, as well as espionage. 

In addition to collecting the salaries, the North Korean government tasks these fake employees with stealing intellectual property (IP) and sensitive information and deploying malware in corporate networks that provides backdoors for future cyberattacks. 

Mandiant (Google Cloud’s cybersecurity division) discovered a list of email addresses created as part of a big North Korean operation targeting US companies in June 2022. Some 80 or so of these addresses were used to apply for jobs at critical infrastructure organizations in the US. At the time, Mandiant said the operation was a way to raise money for espionage and IP theft; Mandiant analyst Michael Barnhart said North Korean IT workers were “everywhere.” 

The number of North Korean agents seeking IT work in the US has increased in the past two years. 

In May, an Arizona woman named Christina Chapman was arrested and accused of conspiring with North Korean “IT workers” Jiho Han, Chunji Jin, Haoran Xu, and others (all allegedly working for the North Korean Munitions Industry Department) to illegally land remote jobs with US companies. This one band of criminals allegedly used an online background check system to steal the identities of more than 60 people to generate nearly $7 million for the North Korean government at more than 300 US companies, including a car maker, a TV network, and a defense contractor. 

Among her assigned tasks, Chapman maintained a PC farm of computers used to simulate a US location for all the “workers.” She also helped launder money paid as salaries (companies sent the paychecks to her home address).

The group even tried to get contractor positions at US Immigration and Customs Enforcement and the Federal Protective Service. (They failed because of those agencies’ fingerprinting requirements.) They did manage to land a job at the General Services Administration, but the “employee” was fired after the first meeting.

A Clearwater, FL IT security company called KnowBe4 hired a man named “Kyle” in July. But it turns out that the picture he posted on his LinkedIn account was a stock photo altered with AI. The company sent a work laptop to the address “Kyle” supplied, which was, in fact, a US-based collaborator. The “employee” tried to deploy malware on the company’s networks on his first day but was caught and fired. 

“He was being open about strengths and weaknesses, and things he still needed to learn, career path ideas,” Stu Sjouwerman, founder and CEO of KnowBe4, told The Wall Street Journal. “This guy was a professional interviewee who had probably done this a hundred times.”

What the hiring of North Korean agents says about US hiring

Statistically, it’s unlikely you or your company will hire North Korean agents. But knowing this can happen should raise questions about your corporate hiring practices and systems. Are they so inadequate that you could hire and employ someone who’s not who they say they are, doesn’t have the experience they claim, doesn’t live where they say they live, or whom it is illegal to hire?

The truth is that the world has changed, and hiring practices aren’t keeping up. Here’s what has changed, specifically, and what you should do to keep up: 

  • Remote work. Since the pandemic, remote work has been normalized. Along with this change, companies have also embraced remote interviews, hiring, and onboarding. A straightforward solution is to allow remote work, but build at least one in-person meeting into the hiring or onboarding process. Fly the would-be hire to your location and put them up in a hotel to sign the employment contract (this provides the added assurance of having their legal signature on file), or have them meet with a local representative where they are. Also: Protect access to work laptops or applications with biometrics and have them register those biometrics in person. That way, you’ll see that the applicant is who they say they are and that the ongoing work is really performed by the person you hired. You might also deploy a mobile device management solution to identify the location of provided laptops, tablets, or phones. 
  • Generative AI chatbots. One metric for gauging the communication skills of a prospective employee is to look at their resume and cover letter. But anyone can create such documents with flawless English using ChatGPT or some other chatbot. Clarity of communication in any written document tells you exactly nothing about the employee’s ability to communicate. Make a writing test part of the evaluation process, where the applicant can’t use AI help. 
  • Generative AI image tools. Thanks to widely available tools, anyone can create a profile picture that looks real. Never assume a photo shows what a person looks like. Physical characteristics shouldn’t play a part in the hiring anyway; headshots’ only role in hiring is to bias the hiring manager. 

Some things haven’t changed. It’s always been a good idea to check references to ensure prospective employees have worked where they say they’ve worked and have gotten the education and certifications they say they’ve gotten. 

Yes, malicious North Korean agents are out there trying to get a job at your company so they can funnel money to a despotic regime and hack your organization. 

But the broader crisis is that, thanks to recent developments in technology, you might only truly know who you’re hiring if you modify your hiring approach. 

Make sure you really know who you’re hiring and employing, and take the necessary steps now to be absolutely sure. 

How to bring Google’s Pixel 9 Pro Fold multitasking magic to any Android device

After spending the past couple weeks living with Google’s new Pixel 9 Pro Fold — a.k.a. the second-gen Pixel Fold — I’ve got two big thoughts swimming around my murky man-noggin:

  1. Multitasking really is a whole new game on a device like this, and that opens the door to some incredibly interesting ways to get stuff done on the go.
  2. Part of that is undoubtedly tied to the phone’s folding form — but part of it is also a result of the Android-based software enhancements Google’s built into the gadget.

More than anything, that very last part keeps coming back to the forefront and making my brain say, “Hmmmmmmmm.”

We can talk all day about advantages related to one specific device, after all (and, erm, we did, earlier this week) — but especially with a phone like the Pixel 9 Pro Fold and its hefty $1,800 price tag, most people aren’t gonna end up with it inside their paws, purses, or pantaloons.

So what if there were a way to take at least some of the folding Pixel’s multitasking magic and make it available on other Android devices — more traditional phones without the Fold’s unusual (and unusually expensive) folding screen parts?

My friend, lemme tell ya: Such a slice of sorcery absotively exists — two such slices, in fact. They’re off-the-beaten-path advanced adjustments that’d only be possible here on Android. And they can be on your own personal phone this minute, if you know where to look.

[Psst: Love shortcuts? My Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Sign up now for free!]

Prepare to be blown away.

Google Pixel 9 Pro Fold multitasking trick #1: The split-screen shortcut

We’ll start with the simpler of our two Pixel-9-Pro-Fold-inspired multitasking advantages, and that’s the newly Google-given ability to open two apps together in Android’s split-screen mode with a single tap.

Part of what makes the Fold so useful, y’see, is that splendid inner screen it sports and the way that added space serves as a canvas for viewing and even interacting with two apps side by side together at the same time.

Google Pixel 9 Pro Fold Multitasking: Split screen
Android’s split-screen interface, as seen on the inner display of a Pixel 9 Pro Fold phone.

JR Raphael, IDG

With this new second-gen Pixel Fold model, Google’s upped the ante by adding in a new native feature that lets you save specific app pairings and then have a simple on-screen shortcut for launching ’em side by side anytime with one fast tap — without all the usual hunting, opening, and arranging effort.

In the Pixel 9 Pro Fold’s software, setting up such a feat is as simple as booping a newly added button inside Android’s Overview mode, right beneath any active app pairing you’ve opened:

Google Pixel 9 Pro Fold Multitasking: "Save app pair" button
A subtle but powerful button added into the Pixel 9 Pro Fold’s Overview interface.

JR Raphael, IDG

All you’ve gotta do is tap that son of a gibbon, and bam: You get an easy-as-can-be icon right on your home screen for zipping back to that ready-to-roll pairing in the blink of an eye.

One tap, and any app pair is present and ready — exactly as you like it.

JR Raphael, IDG

It’s incredibly handy — and while you may not have the same amount of screen space as what the Pixel 9 Pro Fold provides, you’d better believe the same instant screen-splitting setup is also available for you on any reasonably recent Android phone.

The secret resides in a simple little app called, rather amusingly, Be Nice: A Tiny App Launcher. It’s free, open source, and ad-free, too, and it doesn’t require any permissions or collect any type of personal data. (Seriously — what more could you ask for?!)

And once you install the thing and set up whatever on-demand app pairs you want, you’ll probably never actively open it or think about it again.

Here’s all there is to getting your own custom Pixel-9-Pro-Fold-caliber app pair shortcut:

  • Install Be Nice from the Play Store.
  • Open it once, and tap the plus icon in the lower-right corner of its configuration interface.
  • Tap “Select first app” and pick the first app that you want to show up in your pairing.
  • Tap “Select second app” and pick the other app that you want to be included.
  • If you want, you can increase the delay between the time when the first app opens and the second app appears. There’s really no need to mess with that, though.
  • And if you want, you can adjust the text that’ll appear alongside the shortcut on your home screen as well as the style of the icon associated with it. But again, the defaults are perfectly fine.
  • Tap “Create” once you’re finished and then confirm that you want to add your newly created shortcut onto your home screen.
Google Pixel 9 Pro Fold Multitasking: Be Nice Create App Pair
Be Nice makes creating an on-demand app pair almost shockingly simple.

JR Raphael, IDG

And that’s it: Once you head back to your home screen, you’ll see that snazzy new shortcut right then and there for easy ongoing access.

Google Pixel 9 Pro Fold Multitasking: Be Nice split screen shortcut home screen
An instant app pair shortcut, as created by the independent Be Nice Android power tool.

JR Raphael, IDG

And now, whenever you’re ready to work with those two specific apps together for desktop-like mobile multitasking, a fast tap of that fresh ‘n’ friendly new icon is all that’s required. How ’bout them apples?!

Google Pixel 9 Pro Fold Multitasking: Be Nice split screen shortcut
Just like on the Pixel 9 Pro Fold, you can launch any app pair in an instant — on any device.

JR Raphael, IDG

It’s a powerful start for a smarter smartphone setup. Now, if you really want to take your Android multitasking to the next level, keep reading.

Google Pixel 9 Pro Fold multitasking trick #2: The on-demand taskbar

This second Pixel-9-Pro-Fold-inspired bit o’ multitasking magic is a little less simple — and a little more limited, too.

But if you’re using one of Google’s other Pixel phones — any ol’ Pixel, so long as it’s running 2022’s Android 13 operating system or higher — it’s already present on your phone and available for the taking. All you’ve gotta do is figure out how to find it.

And goodness gracious, it ain’t easy. This Android-exclusive productivity advantage is buried deep within Google’s Pixel software and something no mere mortal would ever encounter under ordinary circumstances.

But oh, is it ever worth the effort. It’s a way to add my absolute favorite folding Pixel feature onto whatever Pixel phone you’ve got in front of you. I’m talkin’ about the on-demand taskbar that pops up on the Pixel 9 Pro Fold whenever you swipe up gently from the bottom edge of the screen with the device in its unfolded state:

Google Pixel 9 Pro Fold Multitasking: Taskbar
The Pixel 9 Pro Fold taskbar — a true productivity-boosting treasure.

JR Raphael, IDG

That taskbar gives you a desktop-caliber dock for switching to any other app anytime, either via its customizable primary shortcut positions or via the instant access to your entire app drawer also built right into that interface. And better yet, in addition to opening any app without having to head back to your home screen, the taskbar makes it impossibly easy to switch yourself over to that Android split-screen setup we were just ogling — simply by pressing and holding any icon within the taskbar area and then dragging it up into the main area of your screen.

That Pixel 9 Pro Fold taskbar takes Android’s split-screen system to soaring new heights.

JR Raphael, IDG

And here’s the buried Android treasure to beat all buried Android treasures: While the taskbar is officially limited to appearing only on large-sized devices like the Fold, with a quick tweak to a tucked-away area of your system settings, you can actually enable it on any Google Pixel phone this minute — without dropping a single dime on any fancy new hardware.

Now, fair warning: This does require some fairly advanced and ambitious Android spelunkin’ (to use the technical term). And, again, it’ll work only on Pixel phones, as other Android device-makers like Samsung haven’t opted to implement the same feature into their software setup.

What we’ve gotta do is employ a teensy bit of virtual voodoo to trick your Pixel into thinking it’s bigger than it actually is — ’cause, again, the software is set to show that taskbar element only when it’s running on a device of a certain size.

To do that, we need to dive deep into Android’s developer settings, which house all sorts of intimidating options that aren’t intended for average phone-usin’ folk to futz around with. There’s no risk to you or your phone, and as long as you follow these instructions exactly, it’s actually quite easy. (It’s also incredibly easy to undo, if you ever decide you aren’t into it and want to go back.) But we will be pokin’ around in an area of Android that’s meant mostly for developers, and if you veer off-course and mess with the wrong setting, you could absolutely make a mess.

So proceed only if you’re comfortable — and stick closely to the directions on this page. Capisce? Capisce.

Here we go:

1. First, we need to tell your Pixel that you want to even see Android’s advanced developer options in the first place:

  • Head into your phone’s system settings (by swiping down twice from the top of the screen and then tapping the gear-shaped icon in the corner of the panel that comes up).
  • Scroll down to the very bottom of the settings menu and select “About phone.”
  • Scroll down to the very bottom of that screen and find the line labeled “Build number.”
  • Tap your finger onto that line a bunch of times in a row until you see a prompt to enable developer mode on the device. (I swear it’ll work — this isn’t a wild goose chase!) You’ll probably have to put in your PIN, pattern, or passcode to proceed and confirm that you want to continue.

2. Now, with developer mode enabled, we’re ready to make the multitasking magic happen:

  • Mosey your way back out to the main system settings menu and tap the search box at the top of the screen.
  • Type the word smallest into the search prompt. That should reveal a developer option called “Smallest width.” Tap it!
  • Tap “Smallest width” one more time, and in the prompt that comes up, first jot down the number that’s there to start — just in case you want to change it back later. Then change the value to 600 and tap “OK.”
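For the curious, there’s a reason 600 is the magic number: Android shows the taskbar only on screens that report a smallest width of 600dp or more, and the “Smallest width” developer option works by overriding the display density. The sketch below is a hedged illustration of that math, assuming a hypothetical 1080-pixel-wide panel; it prints the equivalent adb commands (for anyone who prefers the command line, with adb installed and USB debugging enabled) rather than running them.

```python
# Sketch: the math behind the "Smallest width" trick, plus the equivalent
# adb commands. The 1080px panel width is an illustrative assumption.
# Android derives smallest-width dp as: px_width / density * 160, so
# forcing a lower density makes the device report a larger sw-dp value.

def density_for_smallest_width(px_width: int, target_sw_dp: int) -> int:
    """Density override needed for a px_width panel to report target_sw_dp."""
    return px_width * 160 // target_sw_dp

# Example: a 1080px-wide phone needs density 288 to report sw600dp,
# the threshold at which the taskbar appears.
density = density_for_smallest_width(1080, 600)
print(density)                             # 288
print(f"adb shell wm density {density}")   # command that applies the override
print("adb shell wm density reset")        # command that undoes it later
```

Either route, on-device setting or density override, produces the same sw600dp result, which is why the taskbar springs to life.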
Google Pixel 9 Pro Fold Multitasking: Taskbar smallest width
This curious-seeming setting holds the key to unlocking advanced Android multitasking magic.

JR Raphael, IDG

At this point, you should see all the text on your screen get smaller. This is an unavoidable side effect of this setup, since we’re tricking your Pixel into thinking its screen is larger than it actually is, but we’ll do some things to make it more palatable and easy on the eyes in a second.

First, let’s find that splendid multitasking taskbar, shall we? Provided you’re using the current Android gesture system and not the legacy three-button navigation approach, you should be able to swipe your finger up gently from the bottom of the screen to reveal that newly unleashed productivity beast:

An on-demand Android taskbar — just like on the Pixel 9 Pro Fold.

JR Raphael, IDG

Whee! And, just like on the Pixel 9 Pro Fold, you can now tap any app icon within that taskbar to switch to it, tap the app drawer icon at the left of the bar to access your complete list of installed apps from anywhere, and press and hold any icon and then drag it upward to bring the associated app into an instant split-screen setup.

Simple Pixel-Fold-style screen-splitting, on any Android phone? Yes, please!

JR Raphael, IDG

Not bad, right?!

So, back to that tiny text that’s come along with this adjustment — here’s the fix:

  • Head back into your phone’s main settings menu.
  • Tap “Display,” then select “Display size and text.”
  • Place your finger on the slider beneath “Font size” and crank the sucker all the way over to the right.

That’ll make the text bigger and easier to read everywhere while still keeping that taskbar available whenever you want it.

You can have your Pixel-Fold-inspired taskbar without having to squint.

JR Raphael, IDG

All that’s left is to explore your newly enhanced Android environment and see whatcha think. You’ll probably notice other interesting changes sparked by this shift — like the ability to see six Android Quick Settings shortcuts instead of four when you swipe down once from the top of your screen and the presence of a more desktop-like tab interface within your Android Chrome browser.

Google Pixel 9 Pro Fold Multitasking: Chrome browser tabs
Desktop-like browser tabs on an Android phone? Eeeenteresting. Very, very eeeeenteresting.

JR Raphael, IDG

You might also notice the presence of multipaned interfaces in certain apps that allow you to see different bits of info on screen at the same time.

It’s up to you to decide if you appreciate or are annoyed by these adjustments. But now you know how to make it happen. And if you ever decide you aren’t thrilled with the overall package, all you’ve gotta do is (a) tap the “Reset settings” option within that same “Display size and text” menu and then (b) either change the “Smallest width” developer setting back to its original value or just turn off Android’s developer options entirely (via the toggle at the top of the “Developer options” menu, within the System section of your phone’s settings) to return to your standard Android setup.

The power’s in your hands — and that folding-Pixel-level multitasking magic is officially there and available for you, anytime you want to summon it.

Don’t let yourself miss an ounce of Pixel magic. Start my free Pixel Academy e-course to uncover all sorts of hidden wizardry built into your favorite Pixel phone!

Parallels 20 turns Macs into cross platform DevOps powerhouses

Here’s an exciting development that almost got missed during Apple’s heady week of iPhone news: Parallels has hit version 20 and now provides a series of powerful features designed to streamline artificial intelligence (AI) development. 

If you run Windows on your Mac, you’re likely already familiar with Parallels Desktop. It is, after all, the only solution authorized by Microsoft to run Windows in a virtualized environment on Apple Silicon. 

If you think back to when Apple introduced the M1 Macs, you might recall the entire industry was impressed by the performance Apple Silicon unleashed. One tester went on the record to say running Windows for ARM on an M1 Mac using Parallels Desktop 16 was “the fastest version of Windows” they’d ever used. “Apple’s M1 chip is a significant breakthrough for Mac users,” Nick Dobrovolskiy, Parallels senior vice president of engineering and support, told me at the time.

Parallels now says its software can run in excess of 200,000 Windows applications quite happily on Macs. With M4 Macs on the horizon, you can anticipate further performance gains — and with Parallels, Apple Intelligence has now come to Windows. 

Apple Intelligence meets Windows?

If you are running a virtualized Windows environment on your Mac using Parallels, you will be able to use Apple’s AI-powered Writing Tools once macOS Sequoia ships. 

Parallels hasn’t told us whether we’ll also be able to access other AI features from within the Windows environment, but it has said we’ll be able to sign into Apple ID across multiple macOS virtual machines on the same Mac. What this means is that developers can fully leverage virtual Macs for building and testing software in an isolated environment.

But the big hook for Parallels in this release is the AI development tools packed inside. The new Parallels AI Package is designed to make building AI models more accessible. To do so, it offers a virtual machine pre-loaded with 14 AI development tools, sample code, and instructions. The idea is that people who want to build AI solutions can install the package and run third-party small language models inside the virtual environment, even while they are offline.

This is included free in Parallels Desktop for Mac Business and Enterprise editions and is free to install in the Desktop for Mac Pro Edition for the rest of the year.

Why did Parallels do this?

“As PCs become more AI-capable, we believe AI will soon be standard on every desktop,” said Prashant Ketkar, CTO at Parallels. “This shift challenges developers to update their applications to fully leverage AI-enabled PCs.

“That’s why we created the Parallels AI Package: to equip development teams, whether experts or beginners, with accessible AI models and code suggestions. This enables ISVs to build AI-enabled applications in minutes, significantly boosting productivity for every software development team using a Mac.”

What else has improved?

Parallels, now owned by Corel Corporation, might have put a lot of effort into support for the AI wave, but the company has also delivered additional features that should improve the experience of running Windows on a Mac.

One big change: you might experience up to 80% better performance while running legacy Windows apps using the Prism emulator on Arm.

Another enhancement comes with a new shared folders technology, which makes it much easier to work with files across Mac and Windows apps. This feature also supports Linux virtual machines, which in combination with the power of Macs and the new AI toolkits from Parallels makes for a powerful DevOps machine. The Visual Studio Code extension lets you manage multiple machines, and even lets you access Microsoft Copilot when you do.

The enterprise connection

Lots of people working with Windows on a Mac work at companies in which both platforms are used. For IT, this can raise challenges around operating system licensing and deployment.

For them, Parallels now offers a new enterprise portal that IT can use to manage virtual machines, licensing issues and more. To achieve this, Parallels built new tech to make it possible to deploy Parallels Desktop without resorting to complex scripts.

“These advancements mark a significant milestone in our ongoing commitment to improving the IT admin experience. With these new features, deploying Parallels Desktop across a network of Macs is simpler and more flexible than ever before,” the company said in a blog post.

You’ll also find GitHub Actions to transform CI/CD workflows. In a related move, the software has attained a SOC 2 Type 2 report, which means it undergoes regular, rigorous audits to ensure it remains secure.

Smart for business

I’ve been watching Parallels since it first appeared on the Mac, and I’m liking the direction in which the company is going. While it remains a solid option for consumers who just want to run a few Windows apps (including games) on their Mac, it is becoming a powerful adjunct for developers and enterprise pros and, with version 20, a useful passport to AI development as well. This edition builds on the many enhancements introduced in 2023.

That’s not bad for something that costs from $99 to $149 per year (Windows licenses extra).

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Mistral releases ‘Pixtral 12B,’ its first multimodal AI model

French AI startup Mistral has released its first multimodal model, Pixtral 12B, which can handle both text and images, according to TechCrunch. The model uses 12 billion parameters and is based on Mistral’s Nemo 12B text model. Pixtral 12B can answer questions about images supplied via URLs or encoded in base64, such as how many copies of a certain object are visible.
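The base64 route simply inlines the raw image bytes into the request rather than pointing at a URL. Here’s a minimal, hedged sketch of that encoding step; the data-URI wrapper and the stand-in bytes are illustrative assumptions, since the exact request schema depends on how the model is served.

```python
import base64

# Sketch: preparing an image for a multimodal prompt the inline way.
# The data-URI format shown here is a common convention, not a documented
# Pixtral requirement; check the serving stack's docs for the real schema.

def image_to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Base64-encode raw image bytes into a data URI string."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Tiny stand-in for real image bytes (just the PNG magic header, for demo).
fake_png = b"\x89PNG\r\n\x1a\n"
uri = image_to_data_uri(fake_png)
print(uri[:30])  # the "data:image/png;base64," prefix plus encoded bytes
```

In practice you would read the bytes from a real image file and attach the resulting string to whatever message payload your client library expects.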

Most generative AI (genAI) models have been partially trained on copyrighted material, which has led to lawsuits from copyright owners. (AI companies claim that the tactic should be classified as fair use.)

It is unclear what image data Mistral used to develop the Pixtral 12B.

The multimodal model checks in at about 24 gigabytes, can be downloaded via GitHub and the Hugging Face machine learning platform, and can be used and modified under an Apache 2.0 license without restrictions.

Adobe unveils additional AI-based video-generation tools

Adobe has offered another glimpse into upcoming generative AI (genAI) video features by previewing a tool that lets users create video clips from text and still image prompts.

Adobe announced the Firefly Video model in April as the latest addition to its genAI models, which also handle image, design, and vector graphic generation. 

On Wednesday, the company released a preview video that shows how the Firefly Video model will be used in the Firefly web app when it becomes available later this year. In the web app, users can generate short video clips from text prompts, with adjustable controls for camera angles, motion and zoom. Images can also be uploaded as prompts to turn illustrations into live action clips, Adobe said.

The videos will have a maximum length of five seconds, an Adobe spokesperson told The Verge.

The Firefly video generation model is “designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation,” Ashley Still, senior vice president for Adobe’s Creative Product Group, said in a statement.

Adobe first discussed its genAI video plans earlier this year when it previewed features coming to its Premiere Pro video editing app. These include text-to-video generation, a “generative extend” tool that creates additional frames to lengthen a video clip, and “object addition and removal,” which lets editors replace items in a scene (such as changing the color of an actor’s tie) or remove them from a shot altogether, such as removing a mic boom.

The features in Premiere Pro will be available in beta later this year. 

Google faces EU investigation over AI data compliance

Ireland’s Data Protection Commission (DPC), a leading European privacy regulator, has launched an inquiry into Google over its use of the personal data of users in the region, adding to the tech giant’s growing legal challenges.

In a statement, the DPC said that the inquiry focuses on whether Google complied with its obligations under GDPR to conduct a Data Protection Impact Assessment (DPIA) before processing personal data of EU or EEA individuals in developing its AI model, Pathways Language Model 2 (PaLM 2).

A DPIA is a process designed to help data controllers identify and mitigate data protection risks associated with high-risk processing activities. It aims to ensure that the processing is necessary and proportionate and that adequate safeguards are implemented based on the identified risks.

The investigation is part of the DPC’s broader efforts to ensure generative AI adheres to privacy regulations.

Recently, the commission initiated court action and reached an agreement with social media platform X, requiring the company to stop using EU users’ personal data for AI training until they are given the option to withdraw consent.

This inquiry adds to Google’s mounting legal challenges. In August, a US District Court ruled that the search giant is a monopoly, stating it used its dominance in the online search market to suppress competition.

A separate trial focused on Google’s advertising business is also underway.

Impact on Google

Despite mounting regulatory concerns for Google, analysts do not expect the inquiry to have a significant short-term impact.

Priya Bhalla, practice director at Everest Group, noted that most large enterprises are aware of these issues and have taken internal measures to protect their AI initiatives.

These steps include investing in data and AI governance, limiting applications in high-risk areas, and using fine-tuned versions of large language models (LLMs), among others.

“Additionally, if we take a broader lens on this, enterprises understand that this is not the first company that has been put into the spotlight, and it’s not going to be the last, so I don’t see any goodwill impact for Google,” Bhalla added.

A likely scenario is Google following the example of X, which recently agreed to pause or stop using content from European users to train its models.

DPC’s impact on AI usage

In a recent blog post about large language models, the DPC said that organizations using AI products based on personal data could be classified as data controllers and should consider conducting formal risk assessments.

The commission advised that before deploying an AI system, users should understand the personal data it processes, how it is used, whether third parties are involved, how long the data is retained, and how the product complies with GDPR obligations.

This means that while localizing the training of foundation models is crucial, transparency about the data used for training is becoming a baseline requirement.

“Enterprises racing to train their AI models using foundational models from Google or Meta may need to pause and assess compliance with user privacy and local regulations,” said Neil Shah, partner and co-founder at Counterpoint Research. “This could slow AI rollouts, especially in the EU, where businesses rely on tech giants with large-scale, advertising-driven models.”

Regulatory gray areas

Enterprises partnering with the likes of Google or OpenAI would prioritize regulatory compliance, which mainly addresses consent-based data collection. However, this creates a gray area of concern, according to Faisal Kawoosa, chief analyst at Techarc.

“Legally and technically, regulations may be followed,” Kawoosa said, but he added that users often face a dilemma: without consent, the service cannot be accessed; with consent, their data is used, and they may not fully understand how.

“It’s also tricky to establish in court that there are gaps in the way data is collected and used,” Kawoosa added. “Given this, enterprises will primarily look at whether regulatory compliances have been followed. They may also check if the best practices have been adhered to, but that’s the extent of what they can do.”

Intel won’t sell off its programmable chip business: Altera CEO

Intel’s plan to spin off its Altera programmable chip business and pursue an initial public offering (IPO) by 2026 remains unchanged, despite recent speculation about the company potentially selling the unit outright.

Sandra Rivera, CEO of Altera, reaffirmed the company’s commitment to the IPO during an interview with CRN, addressing rumors reported by Reuters.

“There’s so much […] that gets written that is not true and not sourced from anyone that actually knows what’s happening,” Rivera said, clarifying that Intel remains committed to the plan it outlined over a year ago.

Rivera emphasized that Intel has always planned to sell a stake in Altera rather than fully divest it, with the goal of taking the company public by 2026.

Intel’s decision to spin off Altera and take it public could reshape the competitive dynamics of the FPGA market, according to analysts.

“Intel’s decision to spin off Altera and take it public has the potential to significantly impact the competitive landscape for FPGAs,” said Arjun Chauhan, an analyst at Everest Group. “AMD’s acquisition of Xilinx already positions it as a formidable player in this space, and a more independent Altera could enhance its ability to compete by focusing more closely on innovation and emerging use cases, particularly in artificial intelligence (AI), data centers, and cloud computing.”

Competitive impact on the FPGA market

Altera — which designs field-programmable gate arrays (FPGAs), allowing chips to be reprogrammed for diverse applications — has been operating independently of Intel since early 2024. However, the company is still decoupling from Intel’s administrative functions, a process expected to be completed by January 2025. According to Rivera, Altera is “ahead of schedule” in this transition, the CRN report added.

Intel acquired Altera for $16.7 billion in 2015, integrating the FPGA business into its operations under the name Programmable Solutions Group. In 2023, Intel announced plans to spin off the unit as a standalone company to attract private investment and support Intel’s broader financial strategy under CEO Pat Gelsinger. In early 2024, Altera was spun off from Intel as a separate business.

Rivera noted that Altera is well-positioned in the FPGA market, particularly after the acquisition of competitor Xilinx by AMD in 2022. In an earlier press briefing, Rivera had said that Altera aims to capitalize on a $55 billion FPGA market opportunity, spanning sectors like cloud, data centers, automotive, and aerospace.

The ultimate goal, according to Rivera, is to make Altera a leading player in the FPGA industry, with the IPO being a critical milestone in that journey.

Chauhan said the spinoff is likely to intensify the rivalry between Altera and AMD’s Xilinx as demand for programmable chips increases across sectors. “This move could heighten competition between Altera and Xilinx, especially as the demand for programmable chips continues to grow. While AMD/Xilinx currently has an advantage, the spinoff could allow Altera to attract strategic investments and partnerships that would help it close the gap.”

Despite these potential opportunities, Intel’s decision to sell a stake in Altera is also a strategic move to unlock liquidity, which can be reinvested in areas critical to its future growth, such as advanced process technology and AI.

“Intel’s decision to sell a stake in Altera signals a strategic shift aimed at unlocking liquidity, which could be reinvested into its core growth areas, such as advanced process technology, AI chips, and its IDM 2.0 strategy,” Chauhan said.

The risk, however, lies in how Altera’s independence could create overlap with Intel’s core businesses. “There’s a slight risk that this new competitive dynamic could lead to some overlap with Intel’s core businesses, especially if Altera establishes itself as a strong independent entity – but this risk could be mitigated if both entities coordinate their product roadmaps,” Chauhan observed.

Reports about Intel’s divestment plans

Citing undisclosed sources familiar with the situation, Reuters had earlier reported that Intel CEO Pat Gelsinger, along with top executives, is expected to present a strategic plan to the board of directors later this month, aimed at cutting non-essential businesses and revising capital expenditures.

The proposal reportedly included selling off units, such as the programmable chip division Altera, as part of broader efforts to reduce costs and refocus resources amid declining profits at the once-dominant chipmaker.

The chip giant is currently going through its worst phase and may look at alternatives to improve its financials. In its latest quarterly results, the company reported an 85% year-on-year drop in profit and announced it would cut 15,000 jobs as it grapples with significant financial difficulties.

“We are focused on reducing operating expenses, capital expenditures, and cost of sales while maintaining core investments to execute our strategy,” Intel’s CEO Pat Gelsinger said in a note to employees as part of its “next phase of our multiyear transformation strategy.” He also hinted at taking “decisive actions” to improve operating and capital efficiencies.

A few weeks later, another report suggested that Qualcomm is eyeing Intel’s struggling chip business units. Though there was no clarity on which units Qualcomm is evaluating, speculation was rife that it may look at Intel’s Altera and Movidius businesses. Analysts believed such a deal would help “fill gaps in Qualcomm’s portfolio.”

However, for now, as per the CRN report, Rivera remains focused on executing the IPO plan and positioning Altera as a specialized leader in the FPGA market.

While this spinoff could initially be seen as Intel stepping back from a rapidly growing market, the long-term view could be different. “In the short-term, the move to spin off Altera could also be viewed as Intel stepping back from a market that is seeing increasing demand, potentially giving AMD/Xilinx an upper hand, but the picture can change in the long-term view,” Chauhan said.