
Are humans reading your AI conversations?

Generative AI (genAI) is taking over the tech industry. From Microsoft’s genAI-assistant-turned-therapist Copilot being pinned to the Windows taskbar to Google’s Android operating system being “designed with AI at its core,” you can’t install a software update anymore without getting a new whizz-bang AI feature that promises to boost your productivity.

But when you talk to AI, you’re not just talking to AI. A human might well look at your conversations, meaning they aren’t as private as you might expect. This is a big deal both for businesses working with sensitive information and for individuals asking questions about medical issues, personal problems, or anything else they might not want someone else to know about.

Some AI companies train their large language models (LLMs) based on conversations. This is a common concern — that your business data or personal details might become part of a model and leak out to other people. But there’s a whole other concern beyond that, and it could be an issue even if your AI provider promises never to train its models on the data you feed it.

Want more about the future of AI on PCs? My free Windows Intelligence newsletter delivers all the best Windows tips straight to your inbox. Plus, you’ll get free in-depth Windows Field Guides as a special welcome bonus!

Why humans are reviewing AI conversations

So why are humans looking at those conversations? It’s all about quality assurance and spotting problems. GenAI companies may have humans review chat logs to see how the technology is performing and to identify errors when they occur. Think of it as a series of spot checks: feedback from the human reviewers is then used to train the genAI model and improve how it responds in the future.

Companies also review conversations when they suspect abuse of their service. It’s easy to imagine that the companies could also use AI tools themselves to dig through the masses of chat logs and find ones where there seems to be some sort of problem or a safety issue.

This isn’t new to AI. For example, Microsoft has had contractors listening to people’s Skype audio conversations for quality assurance purposes as well. Yikes.

A real privacy concern

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day.

But people might share data that’d be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots, with many banning their employees from using ChatGPT at work. They might have specific AI tools they require employees to use. Clearly, they realize that any data fed to a chatbot gets sent to that AI company’s servers. Even if it isn’t used to train genAI models in the future, the very act of uploading data could be a violation of privacy laws such as HIPAA in the US.

For many knowledge workers, it’s tempting to give ChatGPT a big data set of customer details or company financial documents and have it do some of that informational grunt work. But, again, a human reviewer might see that data. The same is true when these tools are put to personal use.

Copilot on Windows 11
Humans may review your conversations with Microsoft Copilot.

Chris Hoffman, IDG

Do ChatGPT, Copilot, and Gemini use human reviewers?

To be clear, all signs suggest humans are not actively reading the vast majority of conversations with AI chatbots. There are far too many conversations to make that possible. Still, the main genAI tools you’ve heard of do at least occasionally use human review.

For example:

  • ChatGPT lets you turn off chat history by activating a “temporary chat.” With chat history on, the conversations will be used to train OpenAI’s models. With a temporary chat, your conversations won’t be used for model training, but they will be stored for 30 days for possible review by OpenAI “for safety purposes.” ChatGPT’s Enterprise plans provide more data protections, but human reviewers are still involved at times.
  • Microsoft says Copilot conversations are also reviewed by humans in some situations: “We include human feedback from AI trainers and employees in our training process. For example, human feedback that reinforces a quality output to a user’s prompt, improving the end user experience.”
  • Google’s Gemini also uses human reviewers. Google spells it out: “Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.”
ChatGPT Temporary Chat
ChatGPT’s Temporary Chat option provides more privacy, but humans may still review some of your conversations.

Chris Hoffman, IDG

How to ensure no one is reading your AI conversations

Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.

In the long run, AI models that run locally could prove to be the ideal answer. The dream is that an AI chatbot would run entirely on your own computer or mobile device and wouldn’t ever need to “phone home.” Companies could run their own AI software in their own data centers, if they so chose — keeping all of their data entirely under their own control.
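To make that concrete, here is a minimal sketch of what a fully local chat can look like in code. It assumes a local runtime such as Ollama listening on its default port and exposing its /api/generate endpoint, with a model already downloaded; the model name is a placeholder, and your own setup may differ.

# Minimal sketch: querying a locally hosted model so prompts never leave the machine.
# Assumes a local server such as Ollama on its default port (11434) exposing
# /api/generate, with a model (placeholder name below) already pulled.
import requests

def ask_local_model(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json().get("response", "")

# The prompt and the reply stay on this machine; no cloud service (and no human
# reviewer) ever sees the conversation.
print(ask_local_model("Summarize this quarter's sales notes in three bullet points."))

Because nothing is transmitted off the device, the human-review question simply never comes up; the trade-off is that you supply the hardware and manage the model yourself.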

Despite all the criticism of Microsoft’s Recall tool, which will let you search through your Windows 11 desktop activity on a Copilot+ PC when it launches, Recall gets the right idea in many ways: it does everything on your own PC without sending anything to Microsoft, so human reviewers won’t see it.

On the flip side, Google recently launched AI history search for Chrome — and, again, human reviewers might examine your browser history searches if you try it out.

Chrome AI History Search
Google warns that humans may see your browsing history if you turn on AI history search in Chrome.

Chris Hoffman, IDG

Two sides of the AI-human question

Let’s come back to earth. I don’t mean to be Chicken Little here: The average person’s ChatGPT conversations or Copilot chats probably aren’t being reviewed. But what’s important to remember is that they could be. That’s part of the deal when you sign up to use these services. And now more than ever, that’s something critical for everyone to keep in mind — from businesses using AI professionally to people chatting with Copilot about their hopes and dreams.

Let’s stay in touch! My free Windows Intelligence newsletter delivers all the best Windows advice straight to your inbox. Plus, get free Windows Field Guides just for subscribing!

5 advanced Gboard tricks for smarter Android typing

QWERTY, QWERTY, QWERTY. QWERTY.

Oh — hi there! Sorry for the slightly nonsensical greeting. I’ve been thinking a lot about keyboards this week, y’see, and how that trusty ol’ QWERTY surface has evolved in our lives.

Also, saying “QWERTY” over and over again is surprisingly fun to do. Go ahead and try it. I’ll wait.

Back? Cool. So, about that QWERTY contemplation: ‘Twas a time not so long ago that our QWERTY interactions on the mobile-tech front revolved almost entirely around actual physical keys. (Drooooooid, anyone?) Then, even when we started relying on on-screen QWERTY surfaces, we were hunting and pecking and doing an awful lot of correcting.

I remember when Google bought a promising, now largely forgotten Android keyboard project called BlindType. BlindType’s entire premise was that it was smart enough to figure out what you were trying to type, even when your fingers didn’t hit all the right letters.

The concept seemed downright revolutionary at the time — which is funny now, of course, ’cause that feels like such a common and expected feature in the land o’ Android keyboards. But my goodness, have we come a long way.

These days, you can absolutely type like a clumsy caribou and still see your thoughts come out mostly the way you’d intended. You can seamlessly switch between tapping and swiping, too, and you can even speak what you want to write with surprisingly decent reliability (…most of the time).

But when it comes to Google’s Gboard keyboard, your options for intelligent text input don’t end there. In addition to its many useful shortcuts and shape-shifting form choices, Gboard has some out-of-sight options for advanced text interactions that’ll save you time and make your days significantly easier.

They aren’t things you’ll use all the time, in place of the now-standard sloppy-tappin’, wild-swipin’, and hopeful-speaking methods. Rather, they’re specific tools you’ll use alongside those other Android text input options — like smart supplements for the especially enlightened among us.

Check ’em out for yourself and see which of these Gboard goodies are either new to you or maybe just gems you’ve gotten out of the habit of using.

[Psst: Love shortcuts? My free Android Shortcut Supercourse will teach you tons of time-saving tricks for your phone. Get your first lesson this instant!]

Gboard Android trick #1: The on-demand scan

First up is a super-handy way to import text from the real world and then use it as a starting point for whatever it is you’re typing.

It’s all too easy to overlook or forget, but Gboard has a simple built-in trick for snagging text from a monitor, document, or anything else around you and then importing it directly into your current text field.

Just find the icon that looks like a document with arrows on its corners — either in Gboard’s top row or in the menu of options that comes up when you tap the four-square menu button in the keyboard’s upper-left corner. (And remember: You can always change what’s in that Gboard top row by touching and dragging any icons in that full menu area and placing ’em in whatever position you want.)

Tap that bad boy, point your phone at the text in question — and hey, how ’bout that?!

Google Gboard Android text input: Scan
Scanned words, in a text field and ready — with precisely three taps in Google’s Gboard Android keyboard.

JR Raphael, IDG

You’ve got words from the real world right in front of you — ready to write around or edit as you see fit.

Gboard Android trick #2: Write right

Few mere mortals realize it, but in addition to tapping, swiping, and talking, you can also enter text into any field on Android with some good old-fashioned handwriting on your fancy phone screen.

It’s an interesting option to keep in mind for moments when you feel like your own scribbly scrawling might be more efficient than any other text input method.

This one takes a little more legwork to get going the first time, but once you do that, it’ll never be more than a quick tap away:

  • First, head into Gboard’s settings by tapping the four-square menu icon in the keyboard’s upper-left corner and then tapping the gear-shaped Settings option in the full Gboard menu.
  • Select “Languages” followed by the Add Keyboard button. Type “English (US)” (or whatever language you prefer), then make sure “Handwriting” is active and highlighted at the top of the screen.
  • Tap the Done button to apply the changes.

Now, make your way to any open text field to pull up Gboard, and you should be able to either press and hold the space bar or hit the newly present globe icon next to it to toggle between the standard keyboard setup and your snazzy new handwriting recognition system.

And once you’ve got that handwriting canvas open, all that’s left is to write, write, write away and watch Google’s virtual genie translate your illegible squiggles into regular text almost instantly.

Google Gboard Android text input: Handwriting
Gboard’s handy handwriting option in action. (Clarity not required.)

JR Raphael, IDG

As you can see above, it works even if your handwriting resembles the harried scrawls of a clumsy caribou. (No offense intended to my caribou comrades.)

Gboard Android trick #3: Quick clips

One of my all-time favorite Gboard tricks is the keyboard’s intelligent integration of the Android system clipboard — and some incredibly helpful tricks that come along with that.

Look for the clipboard-shaped icon either in the keyboard’s top row or within the main Gboard menu to get started. The first time you tap it, you might have to activate the system (via the toggle in the upper-right corner of its interface) and also grant Gboard permission to access your system clipboard. You may also need to mosey back into the Gboard settings to find the “Clipboard” section and enable all the options there to get every piece of the puzzle up and running.

Once you do, though, good golly, is this thing amazing. It’ll automatically show every snippet of text and any images you’ve copied recently, for one-tap inserting into whatever text field you’re working in — and it’ll show your recently captured screenshots for the same purpose, too.

Google Gboard Android text input: Clipboard
Gboard’s clipboard integration makes it easy to find anything you’ve copied and insert it anywhere.

JR Raphael, IDG

Perhaps most useful of all, though, is the Gboard clipboard’s capability to store commonly used items and then make ’em readily available for you to insert anytime, anywhere. You could use that for email addresses, physical addresses, Unicode symbols, snippets of code, or even just phrases you find yourself typing out often in Very Important Work-Related Emails™.

Whatever the case may be, just copy the item in question once, then pull up the Gboard clipboard and press and hold your finger onto the thing you copied. Tap the “Pin” option that pops up, and poof: That text (or image) will be permanently stored in the bottom area of your Gboard clipboard for easy retrieval whenever you need it.

Google Gboard Android text input: Clipboard pin
Pinned items in the Gboard clipboard are like your own on-demand scratchpad for easy inserting anywhere.

JR Raphael, IDG

As an extra bonus, Gboard also now syncs your pinned clipboard data and continues to make any pinned items available on any Android device where you sign in.

Gboard Android trick #4: Your personal editor

When you’re banging out a Very Important Business Email And/Or Document™ on your phone, it’s all too easy to mix up a word or inadvertently accept an errant autocorrect. We’ve all been there — and all had the same ducking reaction — right?

You may not always have a second set of human (or even caribou) eyes to look over your words whilst composing on the go, but Gboard’s recently added proofreading feature can at least give you a second layer of assurance before you hit that daunting Send button.

To find it, tap the four-square menu icon in Gboard’s upper-left corner and look for the Proofread button — with an “A” and a checkmark on it.

Tap that bad boy and tap it good, and in a split second, Gboard will analyze whatever text you’ve entered and offer up suggestions to improve it.

Google Gboard Android text input: Proofread
Need a quick confirmation that your text makes sense? Gboard’s proofreading feature’s got your back.

JR Raphael, IDG

Not bad, Gboard. Not bad. You is clearly the one who is gooder at typings today.

Gboard Android trick #5: The translation station

Last but not least in our text input improvement list is a serious time-saver and communication-booster, and that’s the Gboard Android app’s built-in translation engine.

Hit that four-square menu icon in your keyboard’s upper-left corner once more, find the Translate button, and tap it — then select whatever languages you want and type directly into the Gboard translate box.

Gboard will translate your text in real time and insert the result, in whatever language you selected, directly into whatever text field you’re working in.

Google Gboard Android text input: Translate
Any language, anytime, with Gboard’s on-demand translation system.

JR Raphael, IDG

Pas mal, eh? 

Keep all these advanced input tricks in mind, and you’ll be flyin’ around your phone’s keyboard like a total typing pro — with or without a caribou at your side.

Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for your phone!

Asana launches AI Studio, a no-code tool for building AI agents

Asana has launched AI Studio, a no-code tool for building generative AI agents that can automate work planning tasks. 

The company first unveiled its “AI teammate” plans in June, promising to enable customers to build autonomous agents that can perform tasks independently within the work management app.  

On Tuesday, Asana said that the AI agent builder — renamed Asana AI Studio — is now out of beta and available to customers on its Enterprise and Enterprise+ plans in “early access.” There are two options for accessing AI Studio at this stage: a free plan with daily limits on usage, and a paid add-on. (Asana declined to provide specifics on pricing.)

Customers trialing AI Studio during the beta noted several advantages when deploying AI agents, said Paige Costello, head of AI at Asana. “The key benefits we’re seeing are the speed of decision-making and the overall acceleration of work and reduction in administrative and busy work,” she said.

“There is tremendous potential in AI-based agents to expedite workflow,” said Wayne Kurtzman, research vice president covering social, communities and collaboration at IDC. “The ability to deploy agents in the stream of work, where teams work, and without code becomes a powerful proposition.”  

With the launch, Asana also announced additional features for AI Studio. These include a wider variety of potential AI agent actions, more control over smart workflow capabilities such as data access and costs, and an increase in the amount of context the AI agent can reference in its decision-making. 

Users can also view a record of an AI agent’s actions and decisions. “You can actually dig into work that has happened and understand why the custom agent that you’ve built made a specific choice and undo the selection that it’s done,” said Costello. 

Users can choose from four language models to power AI agents: Anthropic’s Claude 3.5 Sonnet and Claude 3 Haiku, and OpenAI’s GPT-4o and GPT-4o mini. 

With AI agents able to complete tasks autonomously, the propensity for language models to “hallucinate” and provide inaccurate outputs could be a concern for businesses. Costello said there are safeguards in place to help reduce the likelihood of AI agents generating and acting on incorrect information, and argued that those designing the AI workflows are “in the driver’s seat.”

For example, a user can require an AI agent to seek human approval before carrying out actions deemed higher risk, such as sending an external email to a customer. “People are the decision makers –– they’re the ones ultimately accountable for work,” said Costello.

Adoption of AI agents is at an early stage for most organizations, but it’s accelerating, said Margo Visitacion, vice president and principal analyst at Forrester, covering application development and delivery. Successful deployments will require “experimentation, failing fast, and learning from those experiments,” she said.

“It takes the right level of oversight, focus on the problems you’re solving, and gathering feedback to ensure you’re using the right model that suits your needs,” said Visitacion.

AI dominates Gartner’s 2025 predictions

Artificial Intelligence continues to dominate this week’s Gartner IT Symposium/Xpo, as well as the research firm’s annual predictions list. 

“It is clear that no matter where we go, we cannot avoid the impact of AI,” Daryl Plummer, distinguished vice president analyst, chief of research, and Gartner Fellow, told attendees. “AI is evolving as human use of AI evolves. Before we reach the point where humans can no longer keep up, we must embrace how much better AI can make us.”

Continue reading on Network World for Gartner’s top predictions for 2025.

The iPad mini is still the best small tablet around

We waited three years for Apple to update the iPad mini, and what we got is more of the same — boosted with a much better processor. Apple’s iPad mini has consistently been positioned as either the most versatile or the most underpowered tablet in the company’s lineup. That’s changed with the latest iteration, which remains small enough to take anywhere but also becomes more powerful than ever, thanks to a new A17 Pro chip.

Whole lotta processor

If you’ve used an iPad mini before, you know what you’re getting: a reliable, powerful, and eminently mobile device with a large display you can slip inside an inner coat pocket and use just about anywhere, except in the rain.

The tablet is available in two new colors (four in all: space gray, starlight, purple, and blue), but the chip is the real substance here. Apple promises a 30% faster CPU, a 25% increase in GPU performance, and double the machine-learning performance of the last-generation iPad mini. Compared with the last model, you really will feel the difference in performance — and the internal memory has been beefed up to 8GB, which also makes a big difference.

Geekbench 6 tests on the review device produced a single-core score of 2,520 and a multi-core score of 6,440. In comparison, the iPad mini 6 with an A15 Bionic chip achieved 2,121 single-core and 5,367 multi-core. While aggregate test results will be more accurate, my own testing confirms a substantial improvement in the processor. What you are getting is an iPad that will (a) run Apple Intelligence, and (b) deliver the performance you require to run apps for the next few years.
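For those who want the math, here is a quick back-of-the-envelope comparison of those scores; treat it as illustrative, since benchmark runs vary from device to device.

# Rough arithmetic behind the Geekbench 6 comparison above (scores from this review
# and a previous iPad mini 6 test; your own runs will differ slightly).
a17_single, a17_multi = 2520, 6440   # iPad mini 7, A17 Pro
a15_single, a15_multi = 2121, 5367   # iPad mini 6, A15 Bionic

single_gain = (a17_single / a15_single - 1) * 100
multi_gain = (a17_multi / a15_multi - 1) * 100
print(f"Single-core gain: {single_gain:.0f}%")   # roughly 19%
print(f"Multi-core gain: {multi_gain:.0f}%")     # roughly 20%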

That’s great for consumer users, but also important to education and enterprise deployments. This is, after all, an iPad that seems a good fit for real-world business applications such as retail, hospitality, warehousing, or distribution. You’ve probably already seen it used in one of those fields, so the processor upgrade will be a significant benefit for firms seeking to deploy AI solutions within their digital transformation efforts.

I recently learned that pilots sometimes fly with an iPad mini so they can use it to check flight maps and flight support apps while they are in the air. They (and their passengers) should be happy with the lack of lag and hardware-accelerated ray tracing they get while using those apps in flight. 

The song remains the same

The display remains the same — in this case, the well-received Liquid Retina system we saw last time around. You can expect a wide range of supported colors at 2255×1488-pixel resolution (at 326ppi) and a nice and bright 500 nits. What changes is you now also gain Wide Color P3 support and support for True Tone. (The latter uses advanced sensors to adjust the color and intensity of your display to match the ambient light, so images appear more natural.)

I’ve been testing the iPad mini for a few days and have not experienced any cases of so-called “jelly-scrolling,” when one side of the screen refreshes at a different rate than the other when scrolling up and down.

When it comes to most of the hardware, the music hasn’t changed. You’ll still find it an easy-to-hold device because it is light, thin, and small. Dimensions remain 7.7 x 5.3 x 0.25 inches, and the weight is 0.65 pounds, same as the last generation. You get USB-C (which now supports data transfers at 10Gbps, twice as fast as before), Touch ID, and no headphone port.

Under-the-hood changes are all about networking — Wi-Fi 6E, 5G support, and Bluetooth 5.3. The cameras remain more or less the same, though images captured on the device use machine intelligence to optimize the results. One older image Apple Intelligence selected as my Home screen picture really pops thanks to the AI tweaks.

Good times, AI times

Now, for most mortals, Apple Intelligence remains something that’s nice to have, rather than something essential. That’s how it will remain until the first services under that moniker appear next week. On this device, one thing Apple Intelligence is already good at is summarizing emails and helping write better ones, while the Siri improvements in iOS 18.1 bode well for additional contextual intelligence expected to appear next year.

Given that Apple Intelligence isn’t available yet and won’t be available for some time in China or Europe, it’s too early to surmise the extent to which it will change the user experience. But even while that jury remains out, the iPad mini continues to keep its promise of a good balance between mobility and usable display space. Its processor also gives third-party AI developers a platform on which to build other, non-Apple AI-augmented experiences. That could be useful to some businesses. (And business and education users might appreciate that the camera takes excellent document scans.)

Bad times

One complaint concerns the screen refresh rate. While other tablets have already jumped to 120Hz, the iPad mini remains confined to 60Hz. I don’t know if that decision is based on costs, heat dissipation, energy management, or parsimony, but I have seen enough people commenting on this to know that it’s something Apple will need to deal with in the future. 

If you are using an Apple Pencil with your current iPad mini, I have bad news: the new model will not support any Apple Pencil other than the newish Apple Pencil Pro and the USB-C Apple Pencil. If you’ve been using a second-generation Apple Pencil, you’re out of luck. (First-generation pencil support in iPad mini disappeared with iPad mini 6.)

It is also interesting that Apple continues to avoid kitting out its iPad mini or standard iPad with the M-series chips found inside the iPad Air and iPad Pro. The decision makes it clear that Apple is ultimately using the built-in processor as its way to compete with itself, as no one else in the tablet market is really in its league. Want a more performant tablet? Apple has them, but not in the mini range.

Finally, when it comes to the built-in camera, you probably have a better camera in your smartphone than what’s here. It’s good enough, has been updated to deliver better results, and Apple’s image intelligence software means it takes good pictures, if you want to take them with a tablet. It is more than suitable for video conferencing, of course.

What else to know

There is one more thing: the price. Just as in 2021, the entry-level model still costs $499 (Wi-Fi only; add $150 for the cellular model), but it now ships with 128GB of storage rather than 64GB; 256GB and 512GB models are also available. More storage is always a win, but don’t ignore that Apple Intelligence requires at least 4GB of that space for its own use.

Should you buy it?

I’ve used iPads since the beginning. Over that time, my preferences have kind of coalesced around the iPad Air, which I think provides a brilliant balance between power and affordability, and the iPad mini, which I rate highly for its inherent portability, but always deserved a little more power. 

The move to boost internal storage and memory while pepping things up with an A17 Pro chip means the small device can handle most tasks; you probably won’t be using it extensively for video processing in the field or advanced image editing, but you could use it for some of both of those tasks. You’ll also enjoy reading books, gaming, watching videos, or listening to music. (Apple’s audio teams really know how to create great soundscapes in these devices.)

The processor upgrade means that if you are using an iPad mini 6 or earlier, a move to this model makes sense — particularly if you want to explore what Apple Intelligence is all about. While it’s mostly the same, it is overall better, and if you’ve convinced yourself you have a solid reason to invest in a smaller tablet, the iPad mini 7 remains the best small tablet around.

Please follow me on LinkedIn or Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Two-thirds of leaders feel genAI will reshape entry-level jobs

Generative AI is on track to transform entry-level roles across industries, with 64% of leaders expecting these positions to evolve from creation to review and refinement of outputs within the next three years. The percentage of managers holding a similar view stood at 71%, according to new research from the Capgemini Research Institute.

The findings suggest the traditional model of junior employees manually creating content, data, or code is rapidly being replaced by generative AI, which can generate these outputs in a fraction of the time. As a result, employees in these positions will focus more on quality control, critical analysis, and ensuring AI outputs meet business standards, the report added.

The findings are based on Capgemini Research Institute’s May 2024 survey that involved 1,500 executives and 1,000 entry-level employees from 500 organizations with over $1 billion in revenue, to explore generative AI adoption.

The report revealed that employees expect genAI will manage an average of a third of their tasks (32%) within the next 12 months, indicating a potential productivity boost. This shift is expected to be felt most acutely in roles that traditionally involve manual content creation, data entry, or routine customer service tasks.

“Generative AI tools are becoming more adept at assisting with complex managerial tasks, which could challenge the status quo of organizational structure and ways of working,” Roshan Gya, CEO of Capgemini Invent and a member of the Group Executive Committee, said in a press note. “This shift allows employees to focus on higher-value activities, unlocking new perspectives and challenging assumptions.”

Increased autonomy for junior employees

The widespread integration of AI into the workplace will not only change the nature of entry-level work but also grant more autonomy to junior employees.

According to the data, 52% of leaders and managers expect entry-level positions to gain greater independence as AI becomes more embedded in daily workflows. For example, in industries such as supply chain and logistics, AI will take on tasks like inventory management and order processing, allowing junior analysts to focus on strategic tasks and project management.

As AI continues to manage mundane tasks, junior employees will have more opportunities to make decisions that were previously the domain of higher-level staff. This will enable them to fast-track their careers and assume greater responsibilities early on, the report stated.

The report reveals that 51% of leaders believe AI will accelerate the career progression of entry-level employees. With AI automating routine functions, junior employees will gain exposure to more strategic elements of their roles, moving into management positions much faster than traditional career paths have allowed. AI will facilitate this shift by providing employees with the tools and data necessary to make informed decisions and take on supervisory roles.

“It should be noted that this shift depends on several factors: clarity on skills requirements at higher levels; the ability of junior employees to develop these skills (often tied to experience, which cannot be fast-tracked); and the availability of opportunities available for the shift,” the report pointed out.

It further added, “Organizations must prioritize building the skills and readiness of junior employees as part of a clear roadmap for employees’ journeys to people leadership or functional/technical leadership. This requires proactive steps around talent acquisition, development, skilling, and review and reward mechanisms.”

This transformation is already visible in fields such as marketing, customer service, and even technical domains like software development.

“We’re seeing AI take over foundational tasks in these sectors, and junior employees are becoming curators of AI work, rather than creators,” the report said.

With AI’s involvement, the proportion of managers within teams could expand from 44% to 53%, reflecting a broader move toward specialized roles that focus on managing AI-human collaborations.

Productivity gains, but upskilling gaps remain

The findings from the report suggest that while AI adoption promises significant productivity gains — potentially saving 18% of time for entry-level workers — there are concerns about the readiness of employees to leverage these tools.

Despite the optimism about AI’s role in career acceleration, the report highlights a significant gap: only 16% of employees feel they are receiving adequate training in AI-related skills from their organizations. This gap poses a major challenge for companies that want to fully harness the benefits of AI, especially at the entry level.

“Without the proper training and resources, employees won’t be able to maximize the potential of AI,” the report cautioned, urging organizations to prioritize formal training programs to ensure their teams are ready for the AI-driven future​.

The path ahead

Despite the promise of genAI, adoption remains nascent. While 64% of workers already use genAI tools, only 20% use them daily, the report stated.

This gap between AI’s potential and actual usage underscores the need for clearer guidelines, comprehensive training, and better integration of AI tools into existing workflows. As organizations continue to explore AI’s capabilities, defining roles and responsibilities for human-AI collaboration will be key to ensuring accountability and cohesion across teams.

With 81% of leaders expecting new roles like AI ethics specialists and data curators to emerge, the landscape of entry-level work is on the verge of a significant transformation. The future of work, it seems, will be less about replacing human effort and more about enhancing it through strategic collaboration with AI.

GenAI surges in law firms: Will it spell the end of the billable hour?

Saying the legal profession is document-intensive is like claiming that a library is filled with books. Unlike researchers using the Dewey Decimal System, though, lawyers face an ocean of big data from which they must fish out case evidence.

For more than a decade, law firms have been using machine learning and artificial intelligence tools to aid the discovery process, helping them hunt down paper trails and digital content alike. But it wasn’t until the arrival two years ago of OpenAI’s generative AI (genAI) conversational chatbot, ChatGPT, that the technology became common and easy enough to use that even first-year associates straight out of law school could rely on it for electronic discovery (eDiscovery).

The interest in genAI for legal discovery and automating other tasks is unprecedented, according to Ron Friedmann, a Gartner senior director analyst.

“There’s way more piloting that I’ve seen, especially in large law firms. So, there’s been a lot of expense, especially the allocating of staff and paying out of pocket for licensing fees,” Friedmann said.

“Part is keeping up with the Joneses, part of it is marketing, and part of it is just getting over the adoption challenges,” he continued. “In eDiscovery, before the advent of genAI, you needed some training to know how to interact with a discovery database. There were a lot of tools, but they all had the same issue: You had to be pretty technically adept to tackle the database yourself.”

Law firms and corporate legal departments are adopting genAI for a myriad of purposes, ranging from document discovery and analysis to contract lifecycle management. GenAI can be used to categorize and summarize documents, draft new ones, and generate client communications.
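To make the mechanics concrete, here is a minimal, purely illustrative sketch of what “categorize and summarize” can look like in code. It is not any firm’s actual workflow: the model name, category list, and prompt are placeholder assumptions, and a real deployment would run only through tools covered by a firm’s data-protection agreements and attorney review.

# Illustrative sketch only: tag a discovery document with a category and a short
# summary via a general-purpose LLM API. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; model, categories, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["contract", "email", "financial record", "policy", "other"]

def triage_document(text: str) -> str:
    prompt = (
        "Classify the following document into one of these categories: "
        f"{', '.join(CATEGORIES)}. Then summarize it in one paragraph, noting any "
        "clauses, dates, or parties that might matter in litigation.\n\n"
        f"Document:\n{text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; firms typically use vetted, access-controlled models
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# In practice, output like this is a starting point for attorney review,
# never a substitute for it.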

A 2023 American Bar Association survey found that over 20% of large law firms (500+ attorneys) are using AI tools, with nearly 15% considering purchasing them. And over the past year, AI adoption in the legal sector has jumped from 19% to 79%, according to legal tech firm Clio.

All areas of law will use genAI, according to Joshua Lenon, Clio’s Lawyer in Residence, because AI content generation and task automation tools can help both the business side and the practice side of law firms.

Lenon said AI is expected to have the most immediate impact in areas like civil litigation, where repetitive workflows and large volumes of documents make it ideal for tasks such as eDiscovery.

“Additionally, practice areas that commonly rely on flat-fee billing, such as traffic offenses and immigration, are already leading in the adoption of genAI tools. The efficiency gains in these sectors are driving their early and widespread use of AI,” Lenon said.

In legal departments, for example, genAI has allowed workers to query digital contracts and get accurate answers to questions about things like existing clauses.

“There are all sorts of things buried inside contracts that once people can easily get access to will have a big impact on how companies operate, the risk they have, and how to mitigate those risks,” said David Wilkins, director of the Center on the Legal Profession at Harvard Law School.

Wilkins and other experts say that because law is document-intensive, people have long sought ways to use technology to streamline and cut down on the work involved in discovering, creating, and analyzing documents.

“What we’re seeing now is lots of use of technologies of various kinds in contract formation and end-to-end contract lifecycle management. That is a huge area,” Wilkins said.

Corporate legal departments are not as far down the genAI adoption path as law firms, because companies primarily see those business units as cost centers — so the purse strings are cinched tighter. Nonetheless, legal departments are kicking the tires on genAI.

A clear win for pre-trial discovery

Multinational law firm Cleary Gottlieb said it has been able to dramatically cull the number of attorneys used for pre-trial discovery and has even launched a technology unit and genAI legal service: ClearyX.

In the past, it wasn’t uncommon for 150 or more attorneys to be assigned to a case to discover all the documents and other content, and it could take them months to complete the task.

While Cleary readily admits that genAI isn’t perfect in retrieving 100% of the documents related to a case or always creating an accurate synopsis of them, neither are humans. At this point in the technology’s development, it’s good enough most of the time to reduce workloads and costs.

Christian “CJ” Mahoney, global head of Cleary’s e-Discovery and Litigation Technology group, said he was recently part of a lawsuit that involved analyzing 50 million documents (15 terabytes of data). “And we had to do it in a matter of weeks to find out what we had to provide to the opposing party.

“We’re using more complex workflows using AI. I saw a 60-person to 45-person reduction. But on this kind of case, I would have had probably 150 attorneys doing this 15 years ago. Back then, it would just be like ‘OK, guys, here’s a mountain of evidence — go through it,’” Mahoney said.

Traditional ways to look through case documents simply aren’t feasible anymore. “You need to incorporate AI into the process for analysis now,” Mahoney said.

While his firm has been using machine learning and AI for about a decade, with the introduction of genAI, there’s ubiquity and ease of use, Mahoney said. That has allowed even junior associates to be able to use the tech for eDiscovery and other tasks.

“There’s a bit of an expectation that with the advent of genAI, things should be quicker and cheaper,” he said.

Carla Swansburg, CEO of ClearyX, the firm’s AI tech subsidiary, said that as recently as a year and a half ago, clients were telling her AI is too risky, but those same clients are now asking how Cleary is using AI to benefit them and make their services more efficient.

“Nobody went to law school to do this. I used to go through banker’s boxes with sticky notes as a litigator. Nobody wants to do that. Nobody wants to read 100 leases to highlight an assignment clause for you,” Swansburg said. “The good thing is [genAI is] moving up the value chain, but it’s starting with things that people really don’t want to be doing anyways.”

The interest in genAI hasn’t been lost on those selling traditional legal services and software. For example, legal research tools such as LexisNexis, Westlaw, and vLex and legal document automation software from Harvey.ai and Clio have built genAI into their products. Contract lifecycle management and analytics vendors such as Icertis, Sirion, LinkSquares, and Ironclad have also added AI to their lines.

The number of proven and routine use cases for genAI in legal fields is limited, however, because of ongoing accuracy and hallucination problems, according to Friedmann. And genAI isn’t always less expensive than using people.

“We are still trying to collectively figure out what the economics of it is. I’ve spoken to friends who say in the end genAI took more time and cost than doing it the old-fashioned method,” Friedmann said. “But people are remaining open to it and continuing to experiment.”

The death of the billable hour?

Over the past two decades, the vast volumes of structured and unstructured data generated through traditional means (contracts, records, corporate policies, and so on) have been joined by electronic communications, adding new challenges in eDiscovery.

Once only a paper chase, legal discovery now involves scouring emails, messaging, social media records — even video and photos — in the lead-up to a trial.

Nearly three-quarters of a law firm’s hourly billable tasks are exposed to AI automation, with 81% of legal secretaries’ and administrative assistants’ tasks being automatable, compared to 57% of lawyers’ tasks, according to a Clio survey of 1,028 legal professionals and 1,003 adults in the U.S. general population.

Hourly billing has long been the preference of many professionals, from lawyers to consultants, but AI adoption is upending this model where clients are charged for the time spent on services. 

In 2023, 19% of law firms reported using AI. Now, 79% of legal professionals are using AI in their practice, according to legal tech company Clio.

As AI adoption continues to accelerate in the legal industry, executives may need to rethink key elements of their business, including their billing models. Billable work could be automated by AI, according to experts.

This month, Clio released the results of a survey showing that law firms are charging 34% more of their cases on a flat-fee basis compared to 2016.

“The billable hour will continue, but the frequency of use and types of activities that will be billed hourly will diminish. Automatable tasks will switch to flat fees, with the AI output being reviewed at hourly rates,” said Clio’s Lenon. “The billable hour is unlikely to be fully eliminated, but its dominance in the legal industry is expected to decrease. AI-generated outputs, particularly those requiring human review, may still be billed on an hourly basis.

“Ultimately, the billable hour will remain, but in a more selective capacity,” Lenon continued.

Clio’s analysis of more than 7 million time entries found that 74% of billable legal work activities will be impacted by AI automation.

While hourly billing remains predominant in law firms, their clients are driving the shift towards flat fees, with 71% now preferring to pay a flat fee for their entire case, and 51% favoring flat fees for individual activities, according to Clio’s report.

In addition, law firms using flat fees benefit from quicker billing cycles and faster payment collection, as they are five times more likely to send bills — and nearly twice as likely to receive payments — as soon as they complete their work for clients.

Last year, firms were testing on average as many as three to five genAI models in the hope of reducing workloads, and that also meant employing supportive resources such as innovation teams and knowledge management professionals, Gartner’s Friedmann said.

People have been talking about the demise of the billable hour for about 30 years “and nothing’s killed it yet,” said Ryan O’Leary, research director for privacy and legal technology at IDC. “But if anything will, it’ll be this.”

However, there are still a lot of issues with genAI that need to be settled before it could automate legal services, O’Leary cautioned — not least of which is how much genAI may cost to use and how accurate and secure it can be.

“The cost of using AI may be as much as using an associate,” O’Leary said.

Is genAI cheaper and more accurate than an attorney?

Along with AI’s ability to perform tasks previously accomplished by attorneys and other legal workers, there remains a big concern over accuracy, security, and hallucinations. As in the healthcare industry, the stakes are high when it comes to client confidential information.

“There are big issues around copyright protection and whether these large language models are being trained on copyrighted materials,” Harvard’s Wilkins said. “So, what you’re seeing is a lot of experimentation with trying to build customized AI models and large language models. AI providers claim their models are trained exclusively on legal materials, cutting down on hallucinations.”

While law firms are aware of AI’s pitfalls, attorneys are still going to use the technology, Wilkins said, whether or not that’s in line with a corporate policy. GenAI is simply too “transformative” a technology to not use simply because there are risks, he said.

One problem in comparing human workers to the technology is that the bar is often set too high for AI, Wilkins said.

“I’ve heard people say, ‘We could never use this unless it’s 98% effective and reliable.’ I said, ‘Well, does it have the reliability of sending an associate to a windowless warehouse in Phoenix, Arizona to find documents related to a case? Is that 99% accurate?’” Wilkins said.

In the end, whether genAI assists in a task or not, ultimately the attorneys involved will be held responsible for the outcome — good or bad. Whether the technology will replace attorneys and legal aides remains to be seen.

“Our experience has been — and we’ve kicked tires on a lot of language models and purpose-designed tools — [genAI tools] are not good enough to replace people for a lot of the work we do,” ClearyX’s Swansburg said. “For something like due diligence…, you often must be [100%] right. You need to know whether you can get consent to transfer something. In other use cases, such as summarization and initial drafting, that sort of thing is a little more accessible.

“In my world, it’s not really replacing jobs yet, but it’s changing how you do jobs,” she continued. “So, it’s allowing people to move up the value chain a little bit. It’s taking away rote and repetitive work.”

Harvard’s Wilkins placed the adoption of AI by law firms and other legal entities as still being “in the Stone Age” but with massive potential.

“The potential efficiencies are great,” he said. “We’re just working out what are the real advantages.”

One of the world’s largest book publishers adds AI warnings to its books

To make generative AI (genAI) tools work as well as possible, tech companies have chosen to train their large language models (LLMs) on large amounts of text, even though doing so could run afoul of copyright laws.

Most recently, book publisher Penguin Random House has chosen to include a warning in its books stating the content may not be used or reproduced for the purpose of training AI models. And, according to The Bookseller, the AI warning will not only be added to new books but also to reprints of older titles.

The move is likely to spur more publishers to follow suit and add similar warnings to their books.