Month: August 2024

Meta and Universal Music Group sign deal on AI music

Social media giant Meta and the music company Universal Music Group, UMG, are expanding their previous long-term music agreement. The new deal allows Meta’s users to share songs from Universal’s music library on Meta-owned Facebook, Instagram, Horizon, Threads and WhatsApp without violating copyright, TechCrunch reports.

The pact will also cover the treatment of “unauthorized AI-generated content.” This refers to music that is scraped by AI systems as training data, often without the creator’s consent.

“We look forward to continuing to work together to address unauthorized AI-generated content that may affect artists and songwriters so that UMG can continue to protect their rights both now and in the future,” Michael Nash, Universal Music Group’s chief digital officer and executive vice president, said in a statement.

Apple’s Patreon fee will hurt the wrong people

Apple’s insistence on taking a slice of Patreon subscription sales made on iOS seems short-sighted, as it effectively takes money from the pockets of creatives who probably aren’t earning that much.

Of course, from Apple’s point of view, its rules have to apply consistently, and apparently it has not been applying them consistently to Patreon, which hasn’t had to cough up this cash until now.

The catch is that while Patreon has the scale to pay and still profit, many of the creative types using its service have nothing like those numbers. The choice they face — raise prices or absorb the fee — can (and probably will) demotivate some authors; it’s a choice they must make by November.

What makes this particularly difficult is it comes as changes in GDPR and online advertising have also eviscerated revenue at small websites, including my own. It’s hard not to think that disempowering small online publishers, while pouring investment into AI to replace them, is a form of cultural assassination — but perhaps that’s just me.

What else can Apple do?

Apple’s 30% fee only applies on subscriptions taken out using Patreon’s iOS app — no fee is applied to sales outside that app, so a subscription taken out using Patreon’s website should escape the charge.

But for creatives hoping to carve out an income on that service, the impact of Apple’s change means they will earn around $6 on a $10 subscription that used to generate roughly $9. That’s a big difference, and while potential subscribers may figure out how to sign up elsewhere to escape Apple’s fee, it adds friction to the experience. And as we all know by now, friction anywhere in the customer journey affects sales.

Is there an alternative?

I think there is a viable alternative that both builds on work Apple is already doing around App Store fees in Europe while also protecting the interests of smaller creatives on Patreon — while still maintaining company policy. Why not extend the right to place a unique link within an author’s Patreon listing that leads to an external page for subscription sign-ups? This is not a million miles away from what Apple is offering in the EU, so the technology to support this already exists. 

Apple won’t want to do this, of course, because if it does so for Patreon — which is effectively a third-party provider of digital goods that works on an agency basis — then it will need to do the same thing for any other third-party digital store. That inevitably includes third-party games sellers and music-streaming services. 

A policy to nurture small creatives

There really is a big difference between digital services backed by millions or billions of dollars and creatives trying to make a little cash writing. So perhaps Apple needs to develop a policy that recognizes that difference while also complying with Europe’s DMA.

This can’t be means-tested on income, but given that developers on the App Store pay 15% or less, and some categories (including educational publishers) pay nothing, then surely there’s an opportunity to create some wriggle room? 

The spirit of Apple’s current developer deal sees apps that generate more than $1 million paying 30%, but even if Patreon does generate that, the creatives using its service just don’t. 

Where does the transaction really happen?

Maybe Apple’s policy could look more closely at where in the transaction to place the fee. When it comes to Patreon, there are three parties in the dance: platform provider Apple, subscription service provider Patreon, and the people creating the content others sign up for.

If you think about the nature of that latter group, it is arguable that under Apple’s existing rules, if each creative offered their content on subscription via their own iOS app, they would pay nothing at all, as they would not generate a million dollars in sales each year. Patreon, however, generates around 80 cents for each $10 spent in the iOS app, according to its own data.

Assuming that fee exceeds $1 million a year, then it is appropriate for Apple to charge a fee. But even then the impact on the creatives whose content is being subscribed to would come to just 24 to 42 cents out of every $10, which is far more reflective of the actual nature of the exchange. 
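To make that arithmetic concrete, here is a minimal sketch of the fee split under the scenarios discussed above: today’s web sign-up, Apple’s 30% applied to the gross pledge, and a hypothetical 30% levied only on Patreon’s commission. The pledge amount and the 8%-14% commission range are assumptions for illustration, not published figures from either company.

```typescript
// Rough sketch of the fee arithmetic discussed above.
// Assumptions (not Patreon's or Apple's published figures): a $10/month pledge,
// a Patreon commission of roughly 8%-14% depending on plan, and Apple's 30% rate.

const PLEDGE = 10.0;
const PATREON_COMMISSION = 0.08; // assumed; the article cites roughly 80 cents per $10
const APPLE_RATE = 0.30;

// Today, outside the iOS app: only Patreon's commission comes off the top.
const creatorToday = PLEDGE * (1 - PATREON_COMMISSION); // ≈ $9.20

// Apple's 30% applied to the gross pledge inside the iOS app.
const creatorWithGrossFee = PLEDGE * (1 - APPLE_RATE - PATREON_COMMISSION); // ≈ $6.20

// The alternative floated above: Apple's 30% levied on Patreon's commission only.
const appleFeeOnCommission = PLEDGE * PATREON_COMMISSION * APPLE_RATE; // ≈ $0.24
// At a 14% commission the same calculation gives ≈ $0.42, matching the 24-42 cent range.

console.log({ creatorToday, creatorWithGrossFee, appleFeeOnCommission });
```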

Complexity, consistency, creativity

While I recognize the need to maintain consistent policies around App Store fees and appreciate the challenges Apple faces, it feels sub-optimal to effectively charge small creatives using Patreon the same 30% fee that large entities generating millions of dollars pay. 

Perhaps another approach might be to create a new category within App Store policy that pays little or no fees, a “Community Benefit” category. This would join nonprofits, schools, and governments in being fee exempt, or it could carry a much lower fee levied against Patreon’s actual commission rather than the total value of the exchange. Alternatively, Apple could treat Patreon purchases the same as those made in reader apps, which also pay lower commissions.

Whatever Apple does decide, it seems important to note that the benefit Patreon provides to creative individuals sits miles away from the business model of competing app, game, or content stores. While I understand the complexity of building a nuanced and viable policy that recognizes that difference, I hope Apple works to figure something out. 


Apple, this is the time to seize the moment

Apple’s competitors are reeling from bad news, which makes this a perfect moment for Apple to exploit their weakness and seize market share. And it’s well prepared to do so.

Just in the last few weeks, several big events have taken place that Cupertino could and should exploit.

Reasons to be cheerful

Here’s the landscape Apple faces at the moment:

  • The recent judgment against Google might eventually cost Apple as much as $20 billion in lost fees, but it will cost Android device makers money, too, since they also take cash to use Google as the default search engine. That means Android devices will inevitably become more expensive as manufacturers in that part of the highly competitive smartphone market struggle to make the numbers add up. 
  • Security and reliability matter. That’s why, when we look at the recent Microsoft/CrowdStrike failure that cost the global economy billions, generated human stress, and (in medical scenarios) might have caused real suffering, we should consider Apple’s far stronger track record on security. While the miscreants try to evade responsibility by citing loopholes buried in their T&Cs, Apple continues its “tireless work to keep our users safe.” If it is true that many in tech remain invested in Windows, Delta’s ongoing litigation over the CrowdStrike failure will further expose the weakness of such dependency. 
  • Apple’s evolving approach to artificial intelligence (AI) in its devices and its determination to prize privacy above convenience further sets the company apart. This caution matches the public mood, which is to embrace AI in ways that augment, rather than replace, the human. While opinions on Apple Intelligence differ, its expected introduction will boost iPhone, iPad, and Mac sales.

These three factors amplify other ongoing trends. For example: feedback from employee choice schemes continues to overwhelmingly favor the Mac and iPhone; customer satisfaction levels for its products lead the industry; there’s growing recognition of the TCO advantages of its platforms; and Apple’s fast-growing enterprise market share acts as a catalyst on its own — each deployment gives business users confidence to consider the platform.

Action and reaction

That confidence is also turning into action. 

  • Recent PC market data from IDC, Canalys, and Gartner confirm Apple is growing at a rate that dramatically exceeds its rivals, even while smartphone sales in those all-important not-yet-saturated emerging markets also favor iPhone. 
  • In the background, you see hundreds of millions of dollars being invested in the wider Apple-in-the-enterprise ecosystem, which is growing swiftly.
  • You also see Apple’s rapid iteration with its own Apple Silicon processors, which realize significant computational performance improvements with each new version. No one else comes close today.

To paraphrase the gravel-tinged tones of Apple co-founder Steve Jobs’ favorite, Bob Dylan, you really don’t need to be a weatherman to see which way this wind is already blowing.

So, what’s Apple doing to seize the day? 

Will Apple now change the game?

Apple is doing quite a lot, as it happens. Here’s what to expect in the next few weeks:

  • First, it is upscaling its entire product range, pimping out all its devices with faster AI-ready M4 and A18 processors.  
  • Second, Apple is reaching higher by going lower, planning to put A18 chips inside next year’s iPhone SE upgrade, which reports claim will be priced into the mid-range smartphone market at around $500.
  • Third, Apple is expected to introduce new Macs this fall, all of which are said to hold M4 chips. These will include a much smaller Mac mini, which currently costs around $599. 
  • Fourth, it looks as if Apple Intelligence isn’t going to be introduced until Apple launches new Macs (if it launches them) in fall, further reinforcing the “whole platform” advantage it enjoys.

Put it all together and within a few short weeks Apple will be offering the world’s most secure end-to-end platform ecosystem for AI priced so you can jump in for just $500. It will do this even as the reputation of its main rivals stands tarnished for reasons that really, really matter to users. It does so at a point in Apple’s history when it is already receiving the same degree of regulatory oversight as it might expect if it actually led the markets it is in.

So, why not become the market leader and make that attention worthwhile? This is the time.


X faces new challenges in Europe over AI training with personal data

Following action by the Irish Data Protection Commission (DPC), Austrian advocacy group NOYB has now filed a complaint against the social media platform X, accusing it of using personal data to train its AI systems without consent.

In a statement, NOYB said it has lodged GDPR complaints with data protection authorities in nine countries to ensure the fundamental legal concerns over X’s AI training are thoroughly addressed.

Last week, the DPC sought an order to halt or limit X’s use of user data for AI development, training, and refinement.

Following this, X agreed to temporarily suspend AI training using personal data from EU users until they had been given the opportunity to withdraw their consent.

However, NOYB considers this insufficient, saying that the DPC’s complaint primarily addresses mitigation measures and X’s lack of cooperation, without challenging the legality of the data processing itself.

“We have seen countless instances of inefficient and partial enforcement by the DPC in the past years,” Max Schrems, chairman of NOYB, said in the statement. “We want to ensure that Twitter [X] fully complies with EU law, which – at a bare minimum – requires to ask users for consent in this case.”

NOYB added that several key questions remain unanswered, including what happened to EU data already ingested into the systems and how X plans to properly separate EU and non-EU data.

Setting a precedent in GDPR enforcement

The complaint sets a significant precedent for applying GDPR to AI training, particularly in the creation and use of embeddings in AI models. Under the EU AI Act, these processes must adhere to transparency and ethical AI usage standards.

“This case highlights the tension between technological advancement and data privacy regulations,” said Sakshi Grover, senior research manager at IDC Asia Pacific. “The GDPR, alongside the EU AI Act, emphasizes user consent and transparency, leading to increased scrutiny of how personal data is used for AI training. Applications and platforms must adhere to strict data governance standards to protect user data and ensure privacy, involving measures compliant with GDPR and the EU AI Act’s data handling provisions.”

This is crucial because data serves as the lifeblood of modern organizations, driving competition, enhancing efficiency, generating business insights, and creating new revenue streams.

“In an age of continuous data generation, businesses need the ability to access, govern, and use data securely and effectively to advance their digital transformation efforts,” Grover added. “Establishing protocols for data privacy is essential to fuel this transformation securely.”

Impact on X’s operations  

For X, this could spell trouble as it needs to build a strong “social graph,” a model that tracks users’ interactions on a social media platform, according to Neil Shah, VP of research and partner at Counterpoint Research.

Using AI on user-generated content linked to demographic data is essential for the survival of social media platforms, especially those driven by advertising-led business models.

“While X can directly use data generated on the platform in the public domain, similar to others, there’s a fine line regarding how and to what extent user data is stored, used to train the AI model, and then utilized to target those users again,” Shah said. “This will require more transparency from X to ensure that the line isn’t crossed and that the proper GDPR process is followed. Until then, this case could set a precedent for most platforms leveraging user data to train their AI.”

The latest complaint could slow X’s advanced analytics ambitions with Grok, especially efforts to monetize it through a premium subscription or targeted advertising capabilities, Shah added.

GenAI compliance is an oxymoron. Ways to make the best of it

One of the biggest challenges CIOs face today is reconciling the constant pressure to deploy generative AI tools with the need to keep their organizations in compliance with regional, national, and often international regulations.

The heart of the problem is a contradiction deeply embedded into the very nature of generative AI systems. Unlike the software that most IT workers have trained on for decades, genAI is predictive, trying to guess the next logical step. 

If someone doesn’t write very explicit and prescriptive limitations about how to deal with the assigned problem, genAI will try to figure it out on its own based on the data it’s been exposed to.

The classic example of how this can go wrong is when HR feeds a genAI tool a massive number of job applications and asks it to evaluate the five candidates whose background most closely resembles the job description. In this example, genAI analyzes all current employees; finds patterns in age, gender, and demographics; and extrapolates that that must be the kind of applicant the enterprise wants. It’s then only a short walk to regulators accusing the enterprise of age, racial, and gender discrimination.

Confoundingly, genAI software sometimes does things that neither the enterprise nor the AI vendor told it to do. Whether that’s making things up (a.k.a. hallucinating), observing patterns no one asked it to look for, or digging up nuggets of highly sensitive data, it spells nightmares for CIOs.

This is especially true when it comes to regulations around data collection and protection. How can CIOs accurately and completely tell customers what data is being collected about them and how it is being used when the CIO often doesn’t know exactly what a genAI tool is doing? What if the licensed genAI algorithm chooses to share some of that ultra-sensitive data with its AI vendor parent? 

“With genAI, the CIO is consciously taking an enormous risk, whether that is legal risk or privacy policy risks. It could result in a variety of outcomes that are unpredictable,” said Tony Fernandes, founder and CEO of user experience agency UEGroup.

“If a person chooses not to disclose race, for example, but an AI is able to infer it and the company starts marketing on that basis, have they violated the privacy policy? That’s a big question that will probably need to be settled in court,” he said.

The company doesn’t even need to use those details in its marketing to get into compliance trouble. What if the system records the inferred data in the user’s CRM profile? What if that data is stolen during an attack and gets posted somewhere on the dark web? How will the customer react? How will regulators react?

Ignorance of the law (or the AI) is no excuse

Complicating the compliance issue is that there is not merely a long list of global privacy regulations for CIOs to grapple with (the most well-known of which is the EU’s GDPR), but also a ton of new AI regulations on the books or in the works, including the EU AI Act, bills under consideration in multiple US states, the White House’s Blueprint for an AI Bill of Rights, Japan’s National AI Strategy, various frameworks and proposals in Australia, the Digital India Act, New Zealand’s Algorithm Charter, and many more.

Companies must plan how they’ll comply with these and other emerging regulations — a task that becomes infinitely harder when the software they’re using is a black box.

“Enterprise CIOs and their corporate counsels are right to be nervous about genAI because, yes, they cannot truly validate or disclose the information being used to make decisions. They need to think about AI differently than other forms of data-driven tech,” said Gemma Galdón-Clavell, an advisor to the United Nations and EU on applied ethics and responsible AI, as well as founder and CEO of AI auditing firm Eticas.AI.

“When it comes to AI, transparency around information sources is not only impossible, it’s also beside the point. What’s important is not just the data going in, but the results coming out,” Galdón-Clavell said. CIOs must get comfortable with a greater lack of visibility in genAI than they would accept almost anywhere else, she said. 

It’s precisely that absence of transparency that concerns Jana Farmer, a partner at the Wilson Elser law firm. She sees a big legal problem in the lack of comprehensive and detailed information enterprises get from the AI vendors from whom they license the genAI software. Her worries go beyond the limited information about how the models are trained.

“Do we want to play with a system when we don’t know where it keeps its brain?” she asked. “When you look at the emerging regulations, they are basically saying that if you deploy [AI], you are responsible for what it does, even if it disobeys you.”

Enterprises are already being sued over their use of genAI. Patagonia recently got hit with a customer lawsuit alleging that the retailer did not disclose that genAI was listening in on customer calls, collecting and analyzing data from those calls, and storing the data on the servers of its third-party contact center software vendor. It’s unclear whether Patagonia knew everything the genAI program was doing, but ignorance is no excuse. 

The emerging rules around AI adopt the legal concept of strict liability, Farmer said. “Let’s say that you own a train company and it uses dangerous machinery. You have to make that thing safe. If you don’t, it doesn’t matter that you tried your best. Saying ‘we tested it every which way and it never did it before’ won’t satisfy regulators,” she said, adding that CIOs must perform extensive and realistic due diligence.

“If you have not done [realistic due diligence], the answer ‘I didn’t know that it would do that’ doesn’t do you much good,” Farmer said.

Indemnification is not the (full) answer

Farmer said that she has seen various businesses trying to contractually remove their liability by asking the AI vendor to indemnify them against costs or legal issues arising from their use of genAI tools. It often doesn’t help nearly as much as the enterprise executives hope it will.

She said that the AI vendor will usually stipulate that they cover all costs only if they are found to have been negligent by a recognized court or regulatory body. “If we and only if we are found to have been negligent, we will indemnify you later on,” she said, paraphrasing the AI vendor.

This brings the enterprise back to awareness of exactly what the genAI program is doing, what data it is examining, what it will be doing with its analysis, and so on. For many different reasons, Farmer said, executives often do not know what they need to know.

“The issue is not that nobody in the organization knows what data is being processed, but that understanding information practices is a ‘whole business’ issue, and the various departments or stakeholders are not communicating,” she said. “Marketing may not know what technologies IT has implemented, IT may not know what analytics vendors Marketing engaged and why, etc. Not knowing the information that privacy laws require to be disclosed is not an acceptable response.”

This then gets far trickier when genAI tries to extrapolate insights from data.

“If an AI system can make inferences from existing data, that needs to be transparently disclosed, and the standards are usually those of reasonableness and foreseeability. Deployers of genAI should make transparent disclosures to consumers that are interacting with AI — what data the system was trained on and has access to — and advise of the system’s limitations,” Farmer said.

UEGroup’s Fernandes noted that an AI’s inferences may simply be wrong, citing an example from his own life: “I get Spanish-language stuff served to me, but I don’t know a lick of Spanish. In fact, my surname is Portuguese, but to Americans, it is all the same.” Because of Portugal’s colonial past, some people in Brazil and India share his surname, so he receives ads targeted to those nationalities as well.

“There is too much nuance and context in the human condition for the algorithm writers to understand all of human history and assign accurate probabilities,” he said. “[AI] can be so damn wrong for so many reasons. At the end of the day, it is an imperfect manmade thing that embodies the biases of the programmer and data,” he said.

Given its risks, genAI isn’t right for every situation, attorney Farmer noted. “Depending on the use case and the risk assessment, the question may be whether the organization should be deploying an AI system in the first place. For example, if a genAI model is used in decision making in connection with education, employment, financial, insurance, legal, etc., those are likely going to be high risk, and the risks/compliance requirements may outweigh the benefits,” she said.

Fernandes agrees. In fact, he questions whether any organization should be deploying genAI today, given the opaque nature of the technology.

“Does it make sense to deploy software to fly a plane that will act in ways that you cannot anticipate? Would you put your child or grandchild into an autonomous vehicle alone, where the actions the software takes cannot be anticipated?” he asked. “If the answer is ‘no,’ then why would any CIO in their right mind do that with a piece of software that may put their entire organization at risk?”

4 techniques for addressing genAI compliance risk

For lower-risk scenarios (or when “just say no” isn’t an option), doing some hard prep work can help protect organizations against the legal risks associated with genAI.

Shield sensitive data from genAI

IT has to be careful to put limitations on what genAI can access. Technologically, that can be done by either setting limits on what genAI can do — known as guardrails — or by protecting those assets independently.

It is perhaps best to think of genAI as a toddler. Should a parent tell the child, “Don’t go into the basement, because you could be very badly hurt down there”? Or should they add a couple of high-security deadbolts to the basement door?

Ravi Ithal, CTO of Normalyze, a data security vendor, said that a recent prospect was experimenting with Microsoft Copilot. “Within the first day, an employee asked the system to see all documents with their name in it. Copilot returned dozens of documents, one of them being a confidential layoff list with the employee’s name on it. The system did what it was told to do, given that it was told things without context about what data could or could not be used for output to this employee.”

This problem should be very familiar to IT veterans. Back around 1994, during the early days of companies aggressively using the web for corporate intranets, it was a typical move for reporters to search for “confidential” and start reviewing the tons of sensitive documents that Yahoo delivered. (Google didn’t yet exist.)

In the same way that information security professionals of that era quickly learned ways to block search engine spiders and/or to place sensitive documents into areas that were blocked from such scanning, today’s CISOs must do the same with genAI.
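As a sketch of what that looks like in practice, the snippet below filters retrieved documents against the requesting user’s entitlements and a sensitivity label before anything reaches the model. The types and the checkAccess helper are hypothetical, assumed for illustration rather than drawn from Copilot or any particular product.

```typescript
// Minimal sketch of one way to "lock the basement door": filter retrieved
// documents against the requesting user's permissions and a sensitivity label
// before anything is handed to the model. All types and the checkAccess()
// helper are hypothetical stand-ins, not part of any specific product.

interface RetrievedDoc {
  id: string;
  sensitivity: "public" | "internal" | "confidential";
  allowedGroups: string[];
  text: string;
}

interface User {
  id: string;
  groups: string[];
}

function checkAccess(user: User, doc: RetrievedDoc): boolean {
  if (doc.sensitivity === "confidential") {
    // Confidential material never goes to the assistant, regardless of ACLs.
    return false;
  }
  return doc.allowedGroups.some((g) => user.groups.includes(g));
}

function buildPromptContext(user: User, docs: RetrievedDoc[]): string {
  const permitted = docs.filter((d) => checkAccess(user, d));
  return permitted.map((d) => d.text).join("\n---\n");
}
```

The point is that the access decision happens outside the model, so a prompt like “show me every document with my name in it” can only surface what the requester was already entitled to read.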

Learn everything you can from the AI vendor

Robert Taylor, an attorney with Carstens, Allen & Gourley, said that even though most AI vendors do not disclose everything, some CIOs don’t make the effort to identify every informational morsel that the AI vendor does disclose. 

“You need to look at the vendor’s documentation, their service terms, terms of use, service descriptions, and privacy policy. The answers you may need to disclose to your end users may be buried there,” Taylor said.

“If the vendor has disclosed it to you but you fail to disclose it to your end users, you are likely on the hook. If the vendor hasn’t proactively made these disclosures, the onus is on you to ask the questions — just as customers routinely do with vendor security assessments,” he said.

Some enterprises have explored minimizing the vendor visibility issue by building their genAI programs in-house, said Meghan Anzelc, president of Three Arc Advisory, but that merely reduces the unknowns without eliminating them. That’s because even the most sophisticated enterprise IT operations are going to be leveraging some elements created by others.

“Even in the ‘build in-house’ scenario, they are either using packages in Python or services from AWS. There is almost always some third-party dependence,” she said. 

Keep humans in the loop

Although having human employees be part of genAI workflows can slow operations down and therefore reduce the efficiency that was the reason for using genAI in the first place, Taylor said sometimes a little spot checking by a human can be effective.

He cited the example of a chatbot that told an Air Canada customer they could buy a ticket immediately and get a bereavement credit later, which is not the airline’s policy. A Canadian civil tribunal ruled that the airline was responsible for reimbursing the customer because the chatbot was presented as part of the company’s website.

“Although having a human in the loop may not be technically feasible while the chat is occurring, as it would defeat the purpose of using a chatbot, you can certainly have a human in the loop immediately after the fact, perhaps on a sampling basis,” Taylor said. “[The person] could check the chatbot to see if it is hallucinating so that it can be quickly detected to reach out to affected users and also tweak the solution to prevent (hopefully) such hallucinations happening again.”
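A minimal sketch of that kind of after-the-fact review, assuming a simple transcript format, an arbitrary 5% sample rate, and an illustrative keyword list, might look like this:

```typescript
// Sketch of sampling-based human review along the lines Taylor describes.
// The transcript shape, the 5% sample rate, and the keyword list are
// illustrative assumptions, not a recommended configuration.

interface Transcript {
  id: string;
  messages: string[];
}

const SAMPLE_RATE = 0.05; // review roughly 1 in 20 conversations
const POLICY_KEYWORDS = ["refund", "bereavement", "credit", "discount"];

function needsHumanReview(t: Transcript): boolean {
  // Always review conversations that touch policy-sensitive topics,
  // since that is where a hallucinated commitment is most costly.
  const touchesPolicy = t.messages.some((m) =>
    POLICY_KEYWORDS.some((k) => m.toLowerCase().includes(k))
  );
  // Otherwise, sample at random to keep the reviewer workload bounded.
  return touchesPolicy || Math.random() < SAMPLE_RATE;
}

function selectForReview(transcripts: Transcript[]): Transcript[] {
  return transcripts.filter(needsHumanReview);
}
```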

Prepare to geek out with regulators

Another compliance consideration with genAI is going to be the need to explain far more technical details than CIOs have historically had to when talking with regulators. 

“The CIO needs to be prepared to share a fairly significant amount of information, such as talking through the entire workflow process,” said Three Arc’s Anzelc. “‘Here is what our intent was.’ Listing all of the underlying information, detailing what actually happened and why it happened. Complete data lineage. Did genAI go rogue and pull data from some internet source or even make it up? What was the algorithmic construction? That’s where things get really hard.”

After an incident, enterprises have to make quick fixes to avoid repeats of the problem. “It could require redesign or adjustment to how the tool operates or the way inputs and outputs flow. In parallel, fix any gaps in monitoring metrics that were uncovered so that any future issues are identified more swiftly,” Anzelc said. 

It’s also crucial to figure out a meaningful way to calculate the impact of an incident, she added. 

“This could be financial impact to customers, as was the case with Air Canada’s chatbot, or other compliance-related issues. Examples include the potentially defamatory statements made recently by X’s chatbot Grok or employee actions such as the University of Texas professor who failed an entire class because a generative AI tool incorrectly stated that all assignments had been generated by AI and not by human students,” Anzelc said.

“Understand additional compliance implications, both from a regulatory perspective as well as the contracts and policies you have in place with customers, suppliers, and employees. You will likely need to re-estimate impact as you learn more about the root cause of the issue.”

Microsoft warns of serious vulnerability in Office

Microsoft is urging all users of Office and Microsoft 365 to update the software as soon as possible, because hackers have started exploiting a serious vulnerability to access sensitive information on computers.

To be fully protected against the vulnerability, designated CVE-2024-38200, users need to install a security fix that will be released to the public on Aug. 13, this month’s Patch Tuesday, according to The Hacker News.

Tuesday’s security fixes will also close other publicized vulnerabilities, including CVE-2024-38202 and CVE-2024-21302, that could be used by hackers to downgrade Windows to an earlier version.

EY exec: In three or four years, ‘we won’t even talk about AI’

July was another rough month for the tech sector, according to a worse-than-expected jobs report from the US Bureau of Labor Statistics (BLS) and industry experts’ analysis of that data.

And on top of that, uncertainties around tech talent remain, according to a recent pulse survey from consultancy and professional services firm Ernst & Young (EY) — uncertainty exacerbated by the arrival of artificial intelligence (AI) tools and platforms. Half of IT leaders expect AI adoption to contribute to a roiling mix of hirings and firings into the fall, the survey found.

Even with hiring plans in place, 61% of tech leaders surveyed say the rapidly evolving technology has made it more challenging for them to source top talent. “One thing is certain: Companies are reshaping their workforce to be more AI savvy,” EY’s report said.

Ken Englund, who leads EY’s Americas Technology Growth sector, said companies are now concerned with how they should restructure teams to meet new demands, a restructuring that could mean the end of the most unique hiring market he’s seen in a decade. Englund spends most of his time evaluating and advising up-and-coming companies in IPO and pre-IPO stages — and there are a lot of such firms in the AI, software, and semiconductors space, he said.

Computerworld spoke with Englund about how AI is affecting hiring, how enterprises are restructuring, and what employees need to do to stay relevant as the marketplace undergoes dramatic shifts.


Why do you believe the July jobs report was worse than expected, especially for tech? “Two things. When we talk about tech jobs, we’re talking about jobs in tech companies or technical jobs anywhere. I think about jobs in tech, for the most part. I think the other thing to keep in mind, depending on where you look, net, we’re down about 10,000 jobs [in July]. In the scheme of the whole population of tech — there are several million jobs in tech — what we’re seeing is still very strong demand in technical roles — developers, cyber, data scientists, and lighter roles in service and support and marketing.

“In any given month, we continue to see the workforce in evolution, given AI as a driver of upskilling and reskilling of the employee base. In some months, the net is positive and in some months, negative. I look a lot at layoffs.fyi, that’s sort of the data point I look at out in the market, and the trend line is getting smaller. Aside from a few major restructuring layoffs in tech over the past couple of weeks, the outflow seems to be getting smaller in magnitude.

“We’re seeing a lot of new companies, a lot of new start-ups, angel seed, round A [firms]. So those aren’t hiring that many people, but new company formation is growing.”

What kinds of start-ups are dominating? “AI and analytics, software and cloud — definitely on the digital side of tech versus hardcore infrastructure. In the bigger picture, as we’ve looked at tech over the years, things move from hardware to software over time. We’re still going to need a lot of hard-core infrastructure, semiconductors, hardware, but more and more will move toward the software and apps layer.”

Can you explain the mixture of hirings and firings that we’ve seen over the past few years? “I actually think we’re starting to get close to a balance. If you go back over the past 24 months, you saw a lot of layoffs that were right sizing. Over-hiring around the time of the pandemic was driving no-regret-hiring as the overall tech sector was moving up, and ZIRP (zero interest rate policy) was allowing the tech companies to hoard talent. Now, there’s much more scrutiny around job requisitions and tying them to specific business needs, goals. Are they really needed? Are they directly aligned to some business initiative or value?

“I think we’re just getting back to what was considered normal behavior before the pandemic.”

How has AI changed the state of tech jobs? “…In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But part of our pulse survey said about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also say most employers and workers have a very good idea that AI is going to improve their business and their work.

“Really, we’re seeing its use mainly in development, software, testing, quality, customer care service as initial use cases. So, it’s slowly getting woven into everyone’s work.”

How are organizations restructuring their employee teams? “Everyone varies a bit. Probably two-thirds of these companies have some sort of reskilling or upskilling program. So, this isn’t about out with the old and in with the new. We did talk about rebalancing the workforce, but a lot of this will be employees being upskilled or retrained. That’s the most critical item going forward.

“I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have built embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”

Why do you think this has been one of the most unique hiring periods over the past decade or so, and how has AI affected that lately? “I do fundamentally think we’ve had a platform shift. We had this around mobile. We had this around e-commerce. Or, if you go back far enough, we had this shift from mainframes to client-servers. So, I do believe this [AI] is fundamentally a platform shift.

“From that perspective, the most critical thing when I sit down with clients, I always ask them, ‘How’s your data doing?’ We all know nobody has perfect data. In the AI world, data is going to become even more important. If it was difficult to manage your data before — think about graph databases and vector databases — really we see a lot of investment by enterprises into getting their data right for AI; that translates into ensuring you have the right resources: data architects, analysts, AI engineers and all those sort of positions as driving it.”

A lot of organizations are relying on cloud-based AI services from the likes of Microsoft, Oracle, Amazon and Google. Are you seeing an increase in the use of proprietary small language models based on open source versus these large language models (LLMs) offered through SaaS-style services? “I think it’s both. I think it’s still very early days. I think most enterprises are continuing to work with large language models and I think that will be the trend over the near horizon. Most of the key cloud providers, even frontier [companies], are building small models, too. I believe over time, you will see specialization, verticalization and small models being of distinct value.

“On the small language model side, I think — go back 24 months where large language models were — that’s where small models are now. But for early adoption among enterprises today, most are using large language models and doing RAG work. Not a lot of them are building their own proprietary models. But I do think it’s realistic to believe that most enterprises will have some level of proprietary models built out in the future.

“We think about cloud around workloads. I think in the future, when thinking about what models you’ll use — small, large, proprietary, open source — it will be all around use cases. Most of our clients are starting on a single foundational model, but we always tell them to architect in some flexibility, because we think it’ll be models of models in the future.”

What does “models of models” mean? “At the end of the day, to get an answer to a particular set of use cases, you may need more than one big foundational model; it may be an open-source model…. [or] you’ll go to different models for different needs in the enterprise. I definitely don’t think it’s one-size-fits-all. Build for flexibility.

“Most of these big frontier models really have commercial models around APIs. This idea of being fit for purpose for the kind of information or response you need for an inference will be the case going forward. You can think of yourself as … being a smart router for how you direct your AI inferences.”
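A minimal sketch of that “smart router” idea, with placeholder model names and use-case labels rather than any vendor’s actual API, might look like this:

```typescript
// Minimal sketch of the "smart router" idea Englund describes: pick a model
// per use case rather than hard-wiring one foundation model everywhere.
// The model names and use-case labels below are placeholders, not recommendations.

type UseCase = "code-review" | "customer-support" | "contract-summary" | "general";

const ROUTES: Record<UseCase, string> = {
  "code-review": "in-house-small-model",      // cheap, domain-tuned
  "customer-support": "hosted-llm-with-rag",  // needs the company knowledge base
  "contract-summary": "large-frontier-model", // highest accuracy, highest cost
  general: "default-foundation-model",
};

function pickModel(useCase: UseCase): string {
  return ROUTES[useCase];
}
```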

How can IT professionals ensure they’re not left behind as organizations modernize? “Just start trying these tools, even if it’s in your personal direct to consumer life. People who have some familiarity with these tools off the bat will have a leg up. I think most companies will have a set of certifications, training and upskilling programs in their organizations. A lot of them already have ‘AI 101’ courses. I think as a tech worker, it’s up to you to take advantage of all those resources your company is offering you as a starting point, let alone all the other things out there in the open-source world.

“The thing I always hear is AI is not going to take your job. Somebody using AI will take your job.”

How are organizations upskilling or reskilling, and does that apply differently depending on the worker’s job, like line-of-business vs technologist? “When I think of functions that will use AI, whether that’s marketing or finance or customer care support or product development, they’re very different situations. I think the first three of those, they will have a much more applied use of this.

“My thought is if we fast-forward three or four years from now, we won’t even talk about AI. It will be embedded into the marketing automation software and ERP platforms out there. We’re going to get to a point where it just is.

“I think that’s the case for those line-of-business folks. For me, what’s important for business users, as these models get better — and I like to tell folks today is the worst AI will ever be, every day it gets a little better — these models, whether they overlay deterministic models on top of probabilistic models, outcome and solution quality will get better. But understanding how that works in the meantime: there will be some judgment for what business workers need. What we’ll see mainly there are assistants — copilots that’ll make recommendations for business users like a marketer, but probably not human-out-of-the-loop at this point.

“Where we’re really seeing much more hard-core work is around software development, testing, quality and those areas where you’re in the nitty-gritty of activities. Those learnings that come out of classical enterprise IT or engineering product teams will flow into other parts of the enterprise.”

But AI is going to affect all of that, correct? “Absolutely. If you think about the most structured language in the world, it’s coding. If you think about these large language models, genAI, they’re really language based. AI’s ability to determine how code gets built, tested, released — that’s ground zero for all this stuff.”

So, more than anywhere else in the IT space, AI is being used to produce software, test it and deploy it, correct? “I think on the corporate IT side, yes. If you think about the rest of corporate IT functions, probably the two areas where we’re seeing the most interest in AI are customer service and customer care. These chatbots have been reasonably good initial products. We think anything that can handle customer care and service requests, we’re seeing move super well.

“Then, [it’s] things around employee workforce experience. So think about how you onboard a new employee — a lot of things that are rules-based and document based are the leading functions for AI.

“I think the last thing I’d add from our pulse poll is that these same concerns around AI adoption are the same ones we continue to see: cybersecurity, privacy, intellectual property are the biggies. The top line in our survey is really around skill-based development around AI expertise. This whole idea around certifications and upskilling is a really critical item.

“I know we’re all focused on the technology part of it, but this will continue — as always — to come down to the people and whether they can use it. That has never changed.”

Seeking DMA compliance, Apple gets to business

Apple has once again tweaked its terms of business for developers as it continues to seek alignment with Europe’s Digital Markets Act (DMA) while looking to protect its business. 

The latest changes followed accusations from the European Commission that the concessions Apple had made so far to meet the DMA did not go far enough. Regulators felt the terms prevented developers from freely guiding customers to alternative ways to pay, and they threatened very costly legal action for non-compliance with the law. In hopes of avoiding a large fine, Apple has now completely relaxed those rules while introducing a new fee structure. 

As usual, the changes still won’t satisfy the company’s fiercest critics. But at this stage of the game, it appears very little will — though for the vast majority of developers Apple’s EU offer is better than before.

What changes has Apple made?

The primary change involves relaxed restrictions on how apps in the EU can link out to external sites. While some of the changes are too complex to summarize easily, the tweaks give developers a lot more flexibility as to where and how to promote external offers, including via competing app stores.

Apple is permitting developer links to open inside the app, rather than in a web browser. The company has also changed the way it charges fees for the service. Among the tweaks:

  • First, it is introducing an Initial Acquisition Fee (5%), which must be paid for the first 12 months subsequent to a new customer being won on Apple’s platforms. This reflects the value of Apple’s platform as a way to find new customers and ends after 12 months.
  • An additional 10% Store Services fee is charged for all sales of digital goods and services across 12 months following any app install, update, or reinstall, though the vast majority of developers will pay just 5%. The way this fee is structured means Apple will continue to collect it in future.
  • Apple also takes a €0.50 Core Technology Fee for apps distributed via the App Store, Web distribution or alternative app marketplaces. This fee is paid for each first annual install over 1 million first annual installs in the year, and reflects a contribution to maintaining the company’s platforms.
  • Users can opt out of reading the disclosure sheet Apple provides to warn people when they are about to make purchases outside the protection of the Apple platform.
  • Apple revised its fee calculator to help developers understand the consequences of the new fee structure.
  • All the changes are described in full in Apple’s revised guidance on apps distributed in the European Union.

The guidance also notes that developers can communicate and promote offers for purchases at a destination of their choice (not just their own website) and can design those in-app promotions as they wish. This gives developers a lot more flexibility as to where and how to promote external offers and where those offers are made available.

There are plenty of nuances to the guidance that might apply to you or your business, but the basic outcome is most developers will be paying less and developers of free apps will continue to pay nothing at all. Fee-based apps with fewer than 1 million downloads (which is most of them) will pay just 5% Store Services Fee, or 7% for developers remaining in the App Store ecosystem.
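For developers trying to gauge what this means in practice, here is a rough sketch of the calculation as summarized above. It deliberately ignores the caveats in Apple’s guidance (the 7% in-store tier, reader apps, reinstall windows, and so on), and the field names are illustrative; Apple’s revised fee calculator remains the tool to use for real numbers.

```typescript
// Rough sketch of the new EU fee structure described above, heavily simplified.
// Rates and thresholds follow the article's summary; edge cases are ignored.

interface EuSaleInputs {
  digitalSalesEur: number;     // digital goods/services revenue in a year
  newCustomerSalesEur: number; // portion from customers acquired in the last 12 months
  firstAnnualInstalls: number; // first annual installs in the year
  storeServicesRate: number;   // 0.05 for most developers, 0.10 at the top tier
}

const INITIAL_ACQUISITION_RATE = 0.05; // applies only in the first 12 months after acquisition
const CORE_TECH_FEE_EUR = 0.5;         // per first annual install
const CTF_FREE_INSTALLS = 1_000_000;   // the first million installs are exempt

function estimateAppleFees(s: EuSaleInputs): number {
  const acquisitionFee = s.newCustomerSalesEur * INITIAL_ACQUISITION_RATE;
  const storeServicesFee = s.digitalSalesEur * s.storeServicesRate;
  const coreTechFee =
    Math.max(0, s.firstAnnualInstalls - CTF_FREE_INSTALLS) * CORE_TECH_FEE_EUR;
  return acquisitionFee + storeServicesFee + coreTechFee;
}
```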

How much is fair?

For all the complexity, it seems reasonable to believe Apple’s problems with regulators will inevitably coalesce around the question of how much is appropriate to charge for access to its ecosystem. It’s not as if globally accepted and used computing platforms create themselves; they are the sum of decades of work, investment, and effort that requires reward. Otherwise, why bother trying? 

Apple’s biggest critic, Epic CEO Tim Sweeney, doesn’t see it that way, arguing that Apple’s top-rate 15% fee is an “illegal junk fee.” But it is difficult within that argument to discern any recognition of the value provided by Apple’s platforms. It can’t be that Sweeney doesn’t understand this intrinsic value. After all, Epic charges application developers using Unreal Engine 5% of revenue after the first $1 million. Is that a “junk fee”? 

Logically therefore, it makes sense that those who profit from the existence of the platforms should compensate platform providers for the tools they use to build on them. You cannot warm yourselves beside the fire if you don’t go out and seek some fuel for those flames from time to time. 

While critics seem to think Apple (and by inference, every Apple customer) should bear all the costs of maintaining the platforms, that seems unreasonable. A competitive marketplace cannot and should not demand one entity stokes the fire, while everyone else casts happy shadows in the smoke. It requires at least some shared reward, and shared risk.

Where is the value?

With this new fee system, Apple has taken fresh steps toward defining the value of its business, by which I mean addressing what it brings in terms of customer introductions, platform creation and development, and tools and support for developers. All three of these are uniquely provided by Apple and have inherent value. The only stumbling block is, and always has been, how much should that value be?

Apple, meanwhile, continues to work with EU regulators. The company has been in talks with them for years over these matters and will continue to engage as it works toward building a viable business proposition that works for Apple, the EU, developers who value its platforms, and Apple’s European customers. 

We must now wait and see whether Europe feels Apple’s new changes meet its expectations of the company’s behavior under the DMA.


AnitaB.org takes steps to protect attendees at this year’s Grace Hopper Celebration

AnitaB.org has announced new measures it’s taking to avoid a repeat of the debacle at last year’s Grace Hopper Celebration (GHC).

The nonprofit organization’s annual event, which supports the advancement of women and nonbinary technologists, was named for computing pioneer Rear Admiral Grace Hopper. It combines conference sessions with an expo and job fair.

At GHC 2023, the job fair was invaded by large numbers of men, some of whom had lied about their gender identity when registering, and who monopolized recruiters from large tech employers, butting into line and preventing the conference’s target attendees from getting interview slots. Attendees reported being physically pushed, demeaned, and sexually harassed by some of the men.

In a LinkedIn post after the conference, AnitaB.org pledged to address the problem. It said, “We are dedicated to bringing structural changes to ensure that GHC continues to be an uplifting experience and provides opportunities for women and non-binary technologists.”

A tale of two events

Bo Young Lee, president of AnitaB.org advisory, said this week in an email interview, “GHC 23 was a tale of two events. Those conference attendees who largely participated through attendance at sessions and talks had the same joyful, celebratory, and community-based experience that GHC has come to be known for.

“The most problematic behavior we witnessed was concentrated in our Expo Hall. It was there that we had a minority of attendees, mostly students and male, engage in aggressive behavior that violated our code of conduct.”

Lee cited three factors, revealed by the organization’s subsequent investigation:

  • A scarcity mindset brought on by reduced recruiting at universities and colleges that, Lee said, resulted in a larger number of job seekers than in previous years and “resulted in more aggressive behavior than we’ve seen in the past.”
  • A larger number of male job-seeking attendees than in years past. “These male attendees were not at GHC to participate in any of the content sessions, and instead stayed fixed in the Expo Hall,” Lee said.
  • Coordinated efforts: An investigation conducted after GHC 23 revealed that there was a coordinated effort by far-right anti-DEI groups “to undermine and disrupt GHC, both in person and online.”

Actions for GHC 2024

“Our commitment to inclusivity remains strong, focusing on engaging members, participants, and attendees who support the advancement of women, nonbinary technologists, and the LGBTQIA+ community,” AnitaB.org said in a recent email to members. “Our goal is to ensure that everyone involved in our celebration feels safe and valued.”

The email outlined a list of process changes for GHC 24, which will be held October 8 – 11 both virtually and in person in Philadelphia, Pennsylvania, that the organization believes will prevent the recurrence of last year’s issues.

First, it is modifying its registration procedure to require valid ID, such as a driver’s license, when registering. It will also require proof of student status if appropriate.

But, Lee said, “GHC has always been open to women, nonbinary, and ally technologists. We will never discriminate against who can buy a registration and participate.”

At the event, there will be stricter badge checks and ID verification for entry to the venue, as well as when entering the expo. In addition, attendees will be assigned to timed expo entry groups to allow everyone to experience the expo without having to fight crowds.

Finally, an update to the code of conduct holds everyone accountable for behavior that aligns with the organization’s mission. Attendees must agree to abide by it when registering.

Lee said there will also be enhanced cybersecurity monitoring to detect any coordinated efforts early, so they can be dealt with, and onsite security personnel to handle problems that might arise at the venue. These measures were created in consultation with external security consultants, local law enforcement, and cybersecurity consultants.

Why events like GHC are needed

The events at GHC 23 underscore the need for industry events aimed at underrepresented communities as a means to build and develop diverse talent, said Erin Pierre, principal analyst at Gartner.

“Our research has shown that women make up nearly half of the global workforce, and they only represent about 26% of IT employees. I’m not sure what the numbers are for nonbinary talent, but the numbers show us that more than half — a majority, at least — of IT employees are predominantly male,” she said. “So these types of events, where women and nonbinary talent can come together and learn and develop their skill sets and get some networking opportunities or even potential interviewing opportunities, are incredibly important.”

A spokesperson for QueerTech, an organization that focuses on breaking down barriers, creating spaces, and connecting communities to support and empower 2SLGBTQ+ people to thrive, agreed.

“At QueerTech we recognize that many industries — including the tech industry — have been shaped by and for cisgender men, resulting in a system that largely overlooks and excludes diverse communities. This systemic bias has created significant barriers for underrepresented communities, including members of the 2SLGBTQIA+ community, ranging from discrimination and a stark lack of representation, to limited access to mentorship and professional networks,” they said in a statement. 

“Equity is not about treating everyone the same; it’s the recognition that existing barriers require varying levels and types of support in order to ensure fair and equal access to opportunities,” the QueerTech spokesperson added.

Creating safe event and career-building environments is crucial to empowering underrepresented communities, they said. “In order to create safe, equitable environments, we must always remember who it is we aim to serve, thoroughly understand their lived experiences and barriers to success, and work tirelessly to ensure these values, and understandings, are reflected in every single programming decision.”

It is all the more jarring for participants when a supposedly safe environment turns out not to be, as happened at GHC 23.

Said Pierre, “When something like this happens, it is usually a symptom of a larger issue. So even if we could wave our magic wand and magically change this, and they could change the celebration for this year to be a little more safe and inclusive, we still have a larger issue at play here. And that’s why it feels so catastrophic when it happens, because really what this shows us is that there’s still a severe lack of resources and opportunities for female and nonbinary talent.”

Organizations need to do a better job of attracting and retaining a diverse workforce, Pierre added. We need to look at diversity, equity, and inclusion (DEI) as something that benefits everyone, not just  female and nonbinary talent, she noted, since many of the things that make an employer attractive for underrepresented groups, including flexibility, work-life balance, and development opportunities, are good for all employees.

“I think we need to have more of an actionable approach and making sure that we’re really embedding DEI into our overall culture,” she said.

18-year-old browser bug still allows access to internal networks

Security company Oligo is warning that hackers can bypass firewalls and gain full access to local networks with the help of a bug found in most browsers for macOS and Linux. According to Oligo, the bug has been around for 18 years and all the hackers have to do is use the 0.0.0.0 address instead of 127.0.0.1.

Recently, more and more hackers have started exploiting the bug; updates to block the 0.0.0.0 address are on the way for Safari, Firefox, Chrome and Edge.

Note: the bug is not found in browsers for Windows.
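To illustrate why the address matters, the hypothetical snippet below shows the kind of request a script on a public page could make; the port and path are placeholders, and the pending browser updates are intended to block exactly this pattern.

```typescript
// Illustration only: a script on a public web page trying to reach a service
// listening on the visitor's own machine. Browsers increasingly block requests
// from public sites to "localhost" and "127.0.0.1", but on macOS and Linux
// "0.0.0.0" has routed to the same loopback services, which is the gap the
// pending updates close. The port and path below are placeholders, not a real target.

async function probeLocalService(): Promise<void> {
  try {
    // 0.0.0.0 resolves to the visitor's own machine, skirting checks
    // that only match 127.0.0.1 or localhost.
    const response = await fetch("http://0.0.0.0:8080/status", { mode: "no-cors" });
    console.log("local service responded", response.type);
  } catch {
    console.log("blocked or no service listening");
  }
}
```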