Will AI help doctors decide whether you live or die?

Dr. Adam Rodman, a physician and researcher at Beth Israel Deaconess Medical Center in Boston, was shocked by the results of an experiment he ran comparing the diagnostic abilities of doctors to generative artificial intelligence (genAI) tools. The data showed that physicians, even when assisted by the technology, were worse at correctly diagnosing patients than genAI alone — a lot worse.

“I reran the experiment, and my jaw dropped,” said Rodman, who is also director of AI Programs at Beth Israel. “AI alone was almost 20% better, like 90% accurate.

“We assumed that the humans would be better than the AI, and quite frankly we were shocked when we realized the AI really didn’t improve the physicians, and actually the AI alone was much better at diagnosing correctly,” he said.

The genAI model used, GPT-4 Turbo, came from OpenAI; it’s the same technology that powers Microsoft’s Copilot chatbot assistant. Not only did the model outperform physicians, but it also outdid every single AI system that had been developed for healthcare over the last 50 years, Rodman said. And it did so without any medical training.

The study, by Rodman and Dr. Jonathan Chen, an assistant professor of Medicine at Stanford University, was performed in 2023; it’s not unusual for study results to be published more than a year after completion.

The results of another study, to be published in Nature in two months, will be even more startling, Rodman said. “We’re also working on next-generation models and AI tools that try to get physicians to be better. So, we have a number of other studies that will be coming out.”

Rodman noted that the original study was performed around the same time health systems were rolling out secure GPT models for doctors. So, even though the technology was new to the workplace, Rodman and Chen both believed combining physicians and genAI tools would yield better results than relying on the technology alone.

“The results flew in the face of the ‘fundamental theorem of informatics’ that assumes that the combination of human and computer should outperform either alone,” Chen said. “While I’d still like to believe that is true, the results of this study show that deliberate training, integration, and evaluation is necessary to actually realize that potential.”

Chen compared physicians’ use of genAI to the public’s early understanding of the Internet, noting that daily activities like searching, reading articles, and making online transactions are now taken for granted, though they were once learned skills. “Similarly,” Chen said, “I expect we will all need to learn new skills in how to interact with chatbot AI systems to nudge and negotiate them to behave in the ways we wish.”

Dr. Jonathan Chen, an assistant professor of Medicine at Stanford University, reviews data from the study of GPT-4 in diagnosing and recommending patient care.

Dr. Jonathan Chen

AI is nothing new in healthcare

Healthcare organizations have utilized machine learning and AI since the early 1970s. In 1972, AAPhelp, a computer-based decision support system, became one of the first AI-based assistants developed to help in diagnosing appendicitis.

Two years ago, when OpenAI released ChatGPT into the wild, things began to change, and adoption among healthcare providers grew quickly as natural language processing made AI tools more user friendly.

By 2025, more than half (53.2%) of genAI spending by US healthcare providers will focus on chatbots and virtual health assistants, according to Mutaz Shegewi, IDC’s senior research director for healthcare strategies. “This reflects a focus on using genAI to personalize patient engagement and streamline service delivery, highlighting its potential to transform patient care and optimize healthcare operations,” Shegewi said.

According to IDC, 39.4% of US healthcare providers see genAI as a top-three technology that will shape healthcare during the next five years.

Large language models like GPT-4 are already being rolled out across the country by businesses and government agencies. While clinical decision support is one of the top uses for genAI in healthcare, there are others. For example, ambient listening models are being used to record conversations between physicians and patients and automatically write clinical notes for the doctor. Patient portals are adopting genAI, too, so when patients message their physicians with questions, the chatbot writes the first draft of responses.

In fact, the healthcare industry is among the top adopters of AI technology, according to a study by MIT’s Sloan School of Management.

In 2021, the AI in healthcare market was worth more than $11 billion worldwide, with a forecast for it to reach around $188 billion by 2030, according to online data analysis service Statista. Also in 2021, about one-fifth of healthcare organizations worldwide were already in early-stage initiatives using AI.

Today, more than 70% of the 100 US healthcare leaders surveyed — including payers, providers, and healthcare services and technology (HST) groups — are pursuing or have already implemented genAI capabilities, according to research firm McKinsey & Co.

Where AI can be found in healthcare today

AI is mostly being used for clinical decision support, where it can analyze patient information against scientific literature, care guidelines, and treatment history and offer physicians diagnostic and therapeutic options, according to healthcare credentialing and billing company Medwave.

AI models, such as deep learning algorithms, can predict the risk of patient readmission within 30 days of discharge, particularly for conditions like heart failure, according to Colin Drummond, assistant chair of the Department of Biomedical Engineering at Case Western Reserve University.
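
To make that idea concrete, here is a minimal sketch of how a 30-day readmission risk model might be prototyped with scikit-learn. The features, synthetic data, and model choice are illustrative assumptions, not details from any system or study mentioned in this article.

```python
# Minimal illustrative sketch: a 30-day readmission risk classifier.
# All features and data below are synthetic and hypothetical; real models
# are trained on de-identified clinical records under strict governance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical discharge features: age, prior admissions, length of stay,
# ejection fraction (relevant to heart failure), and discharge-med count.
X = np.column_stack([
    rng.normal(70, 10, n),   # age
    rng.poisson(1.5, n),     # admissions in the prior year
    rng.normal(6, 2, n),     # length of stay (days)
    rng.normal(45, 10, n),   # ejection fraction (%)
    rng.poisson(8, n),       # number of discharge medications
])

# Synthetic label loosely tied to prior admissions and ejection fraction.
logits = 0.6 * X[:, 1] - 0.05 * X[:, 3] + rng.normal(0, 1, n)
y = (logits > np.median(logits)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]   # per-patient risk scores
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```

In practice, scores like these feed risk-stratification and early-intervention workflows rather than replacing clinical judgment.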

Natural language processing models can analyze clinical notes and patient records to extract relevant information, aiding in diagnosis and treatment planning. And AI-powered tools are already being used to interpret medical images with a high degree of accuracy, according to Drummond.

“This can streamline the workflow for clinicians by reducing the time spent on documentation,” he said. “These do, of course, need to be vetted and verified by staff, but, again, this can expedite decision-making.

“For instance, AI systems can detect diabetic retinopathy from retinal images, identify wrist fractures from X-rays, and even diagnose melanoma from dermoscopic images,” Drummond said. “Imaging seems to lead in terms of reimbursable CPT coding, but many other applications for screening and diagnosis are on the horizon.”

AI also enables early intervention to reduce readmission rates and enhances risk calculators for patient care planning. It helps stratify patients by risk levels for conditions like sepsis, allowing for timely intervention.

“AI today seems more prevalent and impactful on the operational side of things,” Drummond said. “This seems to be where AI is being most successfully monetized. This involves examining operational activity and looking for optimal use and management of assets used in care. This is not so much aligned with clinical decision-making, but underpins the data available for decisions.”

For example, Johns Hopkins Medical Center researchers created an AI tool to assist emergency department nurses in triaging patients. The AI analyzes patient data and medical condition to recommend a care level, coupled with an explanation of its decision — all within seconds. The nurse then assigns a final triage level.

Dayton Children’s Hospital used an AI model to predict pediatric leukemia patients’ responses to chemotherapy drugs; it was 92% accurate, which helped inform patient care.

A second use of AI in healthcare is in operational analytics, where the algorithms analyze complex data on systems, costs, risks, and outcomes to identify gaps and inefficiencies in organizational performance.

The third major use is in workflow enhancement, where AI automates routine administrative and documentation tasks, freeing clinicians to focus on higher-value patient care, according to Drummond.

Beth Israel’s Rodman understands the skepticism that comes from having a computer algorithm influence physicians’ decisions and care recommendations, but he’s quick to point out that healthcare professionals aren’t perfect either.

“Remember that the human baseline isn’t that good. We know 800,000 Americans are either killed or seriously injured [each year] because of diagnostic errors [by healthcare providers]. So, LLMs are never going to be perfect, but the human baseline isn’t perfect either,” Rodman said.

GenAI will become a standard tool for clinical decisions

According to Veronica Walk, a vice president analyst at Gartner Research, there is “huge potential and hype” around how genAI can transform clinical decision-making. “Vendors are incorporating it into their solutions, and clinicians are already using it in practice — whether provided or sanctioned by their organizations or not,” she said.

Healthcare has primarily focused on two types of AI: machine learning (ML), which learns from examples instead of predefined rules, and natural language processing (NLP), which enables computers to understand human language and convert unstructured text into structured, machine-readable data. (An example of ML in use is when it suggests purchases based on a consumer’s selections, such as a book or shirt, while NLP analyzes customer feedback to identify sentiment trends and guide product improvements.)
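
As a rough illustration of the NLP half of that split, the sketch below trains a toy sentiment classifier on a few invented feedback snippets, turning unstructured text into structured features a model can learn from. The phrases, labels, and model choice are assumptions made purely for demonstration.

```python
# Toy NLP sketch: classify customer feedback as positive or negative.
# The example phrases and labels are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

feedback = [
    "The portal made scheduling my appointment effortless",
    "I waited weeks and never got a callback",
    "Billing was confusing and the charges were wrong",
    "The care team answered my questions quickly",
]
labels = ["positive", "negative", "negative", "positive"]

# TF-IDF converts unstructured text into structured, machine-readable
# features, which a simple classifier can then learn from.
sentiment_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
sentiment_model.fit(feedback, labels)

print(sentiment_model.predict(["The nurses responded right away"]))
```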

“So, we didn’t just look at clinical accuracy,” Rodman said. “We also looked at things that in the real world we want doctors to do, like the ability to figure out why you could be wrong.”

Over the next five years, even just using today’s technology, AI could result in savings of 5% to 10% of healthcare spending, or $200 billion to $360 billion annually, according to a study by the National Bureau of Economic Research (NBER).
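
As a sanity check on those numbers, the quoted dollar range implies an annual US healthcare spending baseline of roughly $3.6 trillion to $4 trillion. The snippet below simply restates that arithmetic; the implied baseline is an inference from the quoted figures, not a number taken from the NBER study.

```python
# Back-of-the-envelope check of the NBER estimate: what baseline spend
# do the quoted savings figures imply? (An inference, not a study figure.)
savings_low, savings_high = 200e9, 360e9   # dollars per year, from the study
pct_low, pct_high = 0.05, 0.10             # 5% to 10% of healthcare spending

implied_high = savings_low / pct_low       # about $4.0 trillion
implied_low = savings_high / pct_high      # about $3.6 trillion
print(f"Implied annual US healthcare spend: "
      f"${implied_low / 1e12:.1f}T to ${implied_high / 1e12:.1f}T")
```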

Savings for hospitals primarily come from use cases that enhance clinical operations (such as operating room optimization) and improve quality and safety, which involves detecting adverse events. Physician groups benefit mainly from improved clinical efficiencies or workload management and continuity of care (such as referral management).

Insurance companies see savings, too, from improved claims management; automatic adjudication and prior authorization; reductions in avoidable readmissions; and provider relationship management.

Case Western Reserve’s Drummond breaks AI in healthcare into two categories (a brief sketch contrasting the two follows the list):

  • Predictive AI: using data and algorithms to predict some output (e.g., diagnosis, treatment recommendation, prognosis, etc.)
  • Generative AI: generating new output based on prompts (e.g., text, images, etc.)
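
To show the contrast in code, the sketch below runs one predictive call and one generative call using Hugging Face pipelines. The model choices, example inputs, and clinical flavor of the text are illustrative assumptions only.

```python
# Toy contrast between the two categories, using Hugging Face pipelines.
# Model choices and example inputs are illustrative, not clinically validated.
from transformers import pipeline

# Predictive AI: map an input to a predicted label (here, sentiment).
classify = pipeline("sentiment-analysis")
print(classify("The new medication relieved the patient's pain"))

# Generative AI: produce new output from a prompt (here, free text).
generate = pipeline("text-generation", model="gpt2")
print(generate("A discharge summary should include", max_new_tokens=20))
```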

The problem with genAI models is their chatbots can mimic human language and quickly return detailed and coherent-seeming responses. “These properties can obscure that chatbots might provide inaccurate information,” Drummond said.

Risks and AI biases are built in

One of the things GPT-4 “was terrible at” compared to human doctors is causally linked diagnoses, Rodman said. “There was a case where you had to recognize that a patient had dermatomyositis, an autoimmune condition responding to cancer, because of colon cancer. The physicians mostly recognized that the patient had colon cancer, and it was causing dermatomyositis. GPT got really stuck,” he said.

IDC’s Shegewi points out that if AI models are not tuned rigorously and with “proper guardrails” or safety mechanisms, the technology can provide “plausible but incorrect information, leading to misinformation.

“Clinicians may also become de-skilled as over-reliance on the outputs of AI diminishes critical thinking,” Shegewi said. “Large-scale deployments will likely raise issues concerning patient data privacy and regulatory compliance. The risk for bias, inherent in any AI model, is also huge and might harm underrepresented populations.”

Additionally, AI’s increasing use by healthcare insurance companies doesn’t typically translate into what’s best for a patient. Doctors who face an onslaught of AI-generated patient care denials from insurance companies are fighting back — and they’re using the same technology to automate their appeals.

“One reason the AI outperformed humans is that it’s very good at thinking about why it might be wrong,” Rodman said. “So, it’s good at what doesn’t fit with the hypothesis, which is a skill humans aren’t very good at. We’re not good at disagreeing with ourselves. We have cognitive biases.”

Of course, AI has its own biases, Rodman noted. Sex and racial biases have been well documented in LLMs, but the technology is probably less prone to bias than people are, he said.

Even so, bias in classical AI has been a longstanding problem, and genAI has the potential to exacerbate the problem, according to Gartner’s Walk. “I think one of the biggest risks is that the technology is outpacing the industry’s ability to train and prepare clinicians to detect, respond to, and report these biases,” she said. 

GenAI models are inherently prone to bias due to their training on datasets that may disproportionately represent certain populations or scenarios. For example, models trained primarily on data from dominant demographic groups might perform poorly for underrepresented groups, IDC’s Shegewi said.

“Prompt design can further amplify bias, as poorly crafted prompts may reinforce disparities,” he said. “Additionally, genAI’s focus on common patterns risks overlooking rare but important cases.”

For example, research literature that’s ingested by LLMs is often skewed toward white males, creating critical data gaps regarding other populations, Shegewi said. “Due to this, AI models might not recognize atypical disease presentations in different groups. Symptoms for certain diseases, for example, can have stark differences between groups, and a failure to acknowledge such differences could lead to delayed or misguided treatment,” he said.

With current regulatory structures, LLMs and their genAI interfaces can’t accept liability and responsibility the way a human clinician can. So, for “official purposes,” it’s likely a human will still be needed in the loop for liability, judgment, nuance, and the many other layers of evaluation and support patients need.

Chen said it wouldn’t surprise him if physicians were already using LLMs for low-stakes purposes, like explaining medical charts or generating treatment options for less-severe symptoms.

“Good or bad, ready or not, Pandora’s box has already been opened, and we need to figure out how to effectively use these tools and counsel patients and clinicians on appropriately safe and reliable ways to do so,” Chen said.

US expands curbs on China’s AI memory and chip tools, raising supply chain concerns

The US has announced sweeping new measures targeting China’s semiconductor sector, restricting the export of chipmaking equipment and high-bandwidth memory. This move has sparked concerns over potential supply chain disruptions.

The rules impose export restrictions on equipment from manufacturers in countries including Israel, Malaysia, Singapore, South Korea, and Taiwan, while granting exemptions to firms in Japan and the Netherlands.

Panasonic revives its late founder as AI

Japanese electronics company Panasonic has created an AI version of its founder, Kōnosuke Matsushita, who died in 1989, The Japan Times reports. Matsushita was one of the most respected figures in the Japanese business world, and his book, The Path, is widely considered mandatory reading for Japanese entrepreneurs.

According to Panasonic, the number of people Matsushita personally trained at the company has decreased, making an AI version necessary. “We believe it is important for our employees to understand our founder Kōnosuke Matsushita’s leadership philosophy, on which our basic management policy is based, and to pass it on through generations,” Panasonic said in a statement, according to The Register.

Panasonic reportedly collaborated with Tokyo University’s Matsuo Institute to develop the AI Matsushita, which will be trained on 3,000 recordings of Matsushita, as well as his writings, lectures, and digitized interviews.

The plan is to do the same with Matsushita’s direct contacts and researchers so they can help users make management decisions based on what the founder might have thought or felt about a situation.

It looks like Apple’s forthcoming M5 chips are all about AI

What processors will be running the Apple bargains we’ll be seeking to purchase on Cyber Monday 2025? Presumably, Apple’s M5 chips will be inside some of them, with work already under way on the next amazing Apple Silicon processor.

Apple has asked TSMC to begin development of the M5 chips, with the aim of starting production in late 2025, The Elec claims, supporting speculation from analyst Ming-Chi Kuo. (We saw similar claims to this effect from Trendforce earlier this year, when trial production allegedly began.)

These new processors will maintain the impressive legacy Apple now has with its own silicon; expect faster 3nm processors with even better GPUs, artificial intelligence (AI) support, and impressively low energy requirements. If the reports are correct, this will be the third year running in which Apple has deployed 3nm chips, though this isn’t a bad thing — the company has decided to apply a little TSMC chip magic to push next year’s big performance gains.

The TSMC magic show

That magic comes in the form of TSMC’s System on Integrated Chip (SoIC) technology. Both AMD and Nvidia already collaborate with TSMC on this for use in chips to drive AI — and that kind of support has become equally important to Apple, which is converting its entire ecosystem into a full-fledged edge AI delivery system.

Of course, Apple already recognizes the challenges of building intelligence at the edge — principally, that some tasks require more processing power than you can provide in a handheld smartphone, no matter how advanced the chip.

This is why Apple Intelligence uses its own Private Cloud Compute system to handle some tasks, and offloads others to third-party vendors such as OpenAI’s ChatGPT. No surprise, then, that these SoIC-supporting Apple M5 chips will see service in Apple’s ultra-secure Private Cloud Compute servers, as well as across Macs and other devices.

So, what’s so magical about SoIC?

More transistors, lower energy too

There’s a lot to learn about TSMC’s SoIC technology. The technology significantly increases transistor density — boosting speed and power efficiency — and increases the maximum number of cores per chip. To achieve this, TSMC uses its own state-of-the-art packaging solution to bond multiple chips to a single wafer, crafting processors that are smaller, thinner, and more performant as a result.

We don’t know how much performance these new chips will deliver inside future Apple hardware, but we can expect that AI will be Apple’s primary focus — in part, because AI seems to be Apple’s primary focus everywhere.

That’s got to mean the final design will focus on enhancements in the GPU and Neural engine, and likely also means additional cores for both. It could also be that Apple’s huge investments in 5G networking will make even more sense by the time these chips ship, as network performance in terms of data downloads and uploads will be of major significance when it comes to perceived performance of its own AI technologies.

Bringing the team together, as it were, could further widen Apple’s competitive moat, though the lack of 5G chips from Apple could complicate things. (They are expected in next year’s iPhone SE, but that remains to be seen.) The speculation about a connection between processor advances and new 5G radios could be off, but I feel that when it comes to edge AI, probably not by much.

All about AI?

It remains to be seen which faster Apple Intelligence services will support future product sales, of course, but what does seem true is that these new M5 chips will begin to roll out in Macs and iPads toward the end of next year. When they make their appearance, they will inevitably deliver performance increases similar to what we’ve seen so far with the M4 family of chips, which has already propelled all of Apple’s latest Macs to the top of the list.

The recently introduced super-powered M4 Mac mini and MacBook Pro models already illustrate the huge leaps in processor design Apple has been able to pull off, thanks to its work with ARM and TSMC. That journey is remarkable in itself, and it underlines the commitment to chip production the company has made.

That commitment means any enterprise migrating to Apple’s AI ecosystem (PC, smartphone, tablet) can do so in full expectation that they will not need to migrate again for at least a decade. That has to be a good thing for any business seeking stability amid change and wanting to deploy a trusted and secure platform for AI.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe. (New BlueSky user? Check this collection of useful tips and services).

Intel CEO Pat Gelsinger retires

Intel CEO Pat Gelsinger retired Sunday, after more than 40 years in the industry.

The company quickly named two interim co-CEOs to hold the fort while it searches for a long-term replacement.

The two co-CEOs are David Zinsner, who currently serves as CFO, and Michelle Johnston Holthaus, who has also been named CEO of Intel Products. In this newly created role, she will oversee the company’s client computing group, data center and AI group, and network and edge group.

2025 will be a bad year for remote work

“Mr. Musk Goes to Washington.” And he’s planning to roll back remote work in the federal government. 

Tech entrepreneur Elon Musk and one-time presidential candidate Vivek Ramaswamy are expected to be appointed by incoming president Donald J. Trump to run an advisory commission called the Department of Government Efficiency (DOGE). Trump wants Musk to do to the federal bureaucracy what he did to Twitter — dramatically reduce the number of employees. 

After Musk bought Twitter (in addition to changing its name to X, killing verification, enabling revenue sharing, increasing the character limit of posts, and other changes), he laid off 80% of the staff and issued an edict banning remote work.

The first step in the nearly impossible mission of radically downsizing government while using those agencies and their employees to govern the nation (CBS Daily Report newsman John Dickerson likened it to dismantling an airplane while flying it) is to implement a draconian mandate to work in offices rather than from home.

The expectation is that the in-office mandate will inspire thousands of employees to quit, reducing the number of employees who will have to be laid off. (Hey, if it worked for Twitter, it should work for history’s biggest-budget and most complex organization.) 

A policy ending remote work mandated by Musk (who might well be ineligible for such a role given that his task will be to advise on downsizing or eliminating agencies charged with regulating or even currently investigating Musk-owned or Musk-run companies) contradicts findings by the government itself. 

The US Bureau of Labor Statistics found that remote work increased productivity across 61 private business sectors. Even more intuitively, non-labor costs, like office space, decreased more in industries with more remote work. Workers benefited from not commuting, which saved them time and money. 

The Government Accountability Office (GAO) found in a study that telework in US businesses improved employee productivity and morale and helped companies hire and retain employees — something the DOGE is not interested in. (The GAO acknowledged that the long-term effects are unknown.) 

In short, the government found that remote work is more efficient. The Department of Government Efficiency’s first order of business will be to make government less efficient. 

The most efficient, cost-effective policy would be to lay off whomever you’re going to cull, but mandate remote work for the remaining employees. 

According to the GAO, government employees in the Executive branch (more than four million people, including military personnel) often work from home. That practice varies by department, ranging from 11% of total hours worked at the Farm Service Agency to around 66% at the Veterans Benefits Administration. 

One impact of a new government mandate to work in offices, beyond the expected employee downsizing, will be more employees on the market looking for work, which could affect hiring, salaries, and corporate remote work policies in the private sector.

Changes in attitudes, but fewer changes in latitudes

Meanwhile, there’s been an enormous shift in attitudes among business leaders about remote work in recent months (and by extension, all its variants of flex work, digital nomad living, workations, and bleisure travel). According to the KPMG 2024 CEO Outlook report, which surveyed more than 1,300 CEOs globally — including 400 in the United States — the pendulum has swung decisively against CEOs supporting remote work. 

According to the survey, 34% of CEOs favored an in-office model earlier this year. Now, 79% do, which is a massive change. 

CEOs are increasingly willing to offer incentives to encourage employees to return to the office. A big majority (86%) of CEOs said they would reward employees who try to come into the office with benefits such as favorable assignments, raises, or promotions.

Why CEOs like back-to-office mandates

CEOs (like Musk, Ramaswamy, and, for that matter, Trump) typically favor remote work for themselves, but oppose it for their employees. These leaders have both good reasons and bad for opposing remote work. 

Among the good reasons, they believe that in-person collaboration generates more and better ideas than remote collaboration. 

While this may be true, it reveals a bias in favor of collaboration and against deep work (long, quiet, uninterrupted, and focused solitary work). Different companies, professions, departments, and industries benefit in varying ways from collaboration and deep work, and they’re both valuable. Office work best enables the former, and remote work the latter. But deep work is the more monetarily valuable kind of work, according to Cal Newport, the author of Deep Work: Rules For Focused Success In a Distracted World.

CEOs also point out that physical proximity is better for mentoring, innovation, and maintaining company culture.

On the flip side, CEOs don’t trust their employees to work hard at home and fear they’re watching daytime TV in their pajamas while on the clock. They treat office presence, and the sight of employees who appear to be working, as a proxy for productivity. They can feel personally more comfortable when they can walk around, interact with employees, and manage and supervise in person. Some CEOs also feel the need to justify their spending on office space, office equipment, and other costs associated with office work.

Whatever the reasons, there’s a general disagreement between employees, who mostly want the option to work from home, and CEOs, who mostly want to require employees to come into the office.

A prediction

The remote work revolution will take a serious hit next year, both in government and business. Then, with new generations of workers and leaders gradually rising in the workforce in the coming decade, plus remote work-enabling technologies like AI (specifically agentic AI) and augmented reality growing in capability, remote work will make a slow, inevitable, and permanent comeback. 

In the meantime, 2025 will be a rough year for remote workers. But it also represents a huge opportunity for startups and even established companies to hire the very best employees who are turned away elsewhere because they insist on working remotely.

Two years of ChatGPT: the conversation that never ends

Two years can be an eternity for the technology industry: plenty of time for enterprises to innovate, launch a new product, peak on the stock market… and then plummet again.

Think about the bubbles that briefly surrounded 3D printing, smart glasses, or the metaverse.

But somehow ChatGPT has escaped that fate: two years after its launch — around the time when enthusiasm for the metaverse began to collapse — it is still on everyone’s lips and has managed to revolutionize the way many of us live and work.

OpenAI’s well-known chatbot has put generative artificial intelligence (genAI) firmly in the public sphere, prompting a wave of imitators and even moving the agendas of the highest political bodies.

The European Union, for example, had been working for several years on a new regulation for AI, but this was completely disrupted by the appearance of generative AI. It was renegotiated in record time, resulting in the AI Act approved last December.

All of this shows that the technology is not only about possibilities, but also about laws, ethics and philosophy, and security and privacy challenges. In addition, it has revealed the opposing strategies of the geopolitical blocs in the race for the digital economy.

All this, due in large part to the explosion of ChatGPT. In fact, six months after the chatbot’s release, the Future of Life Institute asked for a pause in its development in an open letter, saying its risks could not be controlled, even going so far as to say that it could pose a danger to our civilization as we know it if systems were built that surpassed humans. More than 31,000 people signed the letter, including industry figures such as Apple cofounder Steve Wozniak and OpenAI cofounder Elon Musk.

ChatGPT broke all predictions. A study by UBS found that it was the fastest consumer application to reach 100 million users, doing so in just two months, although it has since been surpassed by Meta’s social network Threads. At the business level, it has one million licenses. In total, it had more than 180.5 million monthly active users as of April of this year, and its page received about 1.625 billion visits in February, according to PrimeWeb.

“It has transformed the way we interact with technology,” says Fernando Maldonado, an independent analyst. “Today, anyone can access AI without the need for advanced knowledge or intermediaries, something that was previously reserved for specialists.” 

Sara Robisco, a data scientist and author of the book Historia de la Inteligencia Artificial, adds that there has also been a great marketing movement to get it used by everyone.

Evolving intelligence

It has been possible to reach this point, the two experts say, through the use of vast computing capacity, fed by new sources of data from a multitude of forums, documents, and social networks.

“Generative AI stands out because its improvements are due to the intensive use of resources, which depend directly on these two variables. For example, the model may process more contextual information or have access to more up-to-date or specialized cases,” says Maldonado.

Thus, as far as society is concerned, ChatGPT has caused people to start looking for “more or less acceptable” information in a chatbot, in Robisco’s eyes.

Now we are fully in the ‘GPT-4 era.’ The latest version of the system improves text handling and speech recognition, can even generate code, and has given rise to multimodal models. “It is possible to create videos from text,” says Maldonado. “In particular, this year we have seen how we have been able to ask it to draw something, thus expanding the ways of communicating with AI.”

Its evolution is clear, Robisco adds. “It is a model that has already been trained, which does not have to be started from scratch, which means that in a short time we can see significant improvements.” But ChatGPT still hallucinates a lot. “You have to ask it very specific questions and keep in mind that you can’t ask for something too current.”

And Maldonado sees the evolution continuing: “We are at a stage in which generative AI is developing reasoning capabilities, understood as planning and solving problems autonomously. These are the so-called AI agents, which can be understood as an evolution of virtual assistants. Although there is still a long way to go, I think it is useful to think of it as going from being something you consult to a collaborator that does things for you.”

Risks and challenges

Given generative AI’s potential and upward progress, it raises many questions. One of the most controversial and feared is that it may take away jobs, if it is not already doing so — at least the most repetitive and automatable. Forrester estimates that generative AI replaced about 90,000 jobs globally in 2023, and that by 2030 the figure will increase to 2.4 million.

Maldonado believes that it is not doing so massively or directly. “In reality, it does not seek to replace people, but to empower them. However, as these models become more sophisticated and numerous, worker productivity will grow exponentially. As a result, fewer people will be needed to perform the same tasks.”

Robisco, on the other hand, is optimistic about this and believes that it will only remove the most repetitive tasks, leaving the most creative, important and value-added part to humans.

But this is not the only issue of concern about generative AI: there are also the hallucinations themselves, bias in the data, or the lack of transparency and traceability. “This is going to limit some use cases, covered by current regulations and those that are to come,” says Maldonado.

And let’s not forget the security and privacy of the data those models are being fed, and how attackers use them to refine their threats. “There will even be people who can get private information just by knowing how to interrogate the machine,” Maldonado says.

OpenAI, in the middle of the maelstrom

If ChatGPT’s trajectory over the past two years has been dizzying and not without debate, that of its creator company is not far behind. OpenAI was founded as a non-profit; then, as it began to release products, Microsoft became its main investor. OpenAI CEO Sam Altman left abruptly, briefly joined Microsoft and, shortly after, returned.

Among the company’s other founders, Elon Musk, who had already left the company, sued its directors for breaking with the original statutes and becoming a for-profit company. He was right, as the organization’s latest moves confirm, with many executives leaving and the company searching for more funding.

There are also those who wonder whether the illustrious Altman has become a liability for OpenAI itself.

In any case, Robisco summarizes, the company’s still-brief history corresponds to a “typical case of someone who wants to innovate with a toy that they see no future for. But people have started using it and want it. The product is no longer a toy, and now they want to put a price on it.”

This article was first published, in Spanish, on Computerworld España as Dos años de ChatGPT: la conversación que no cesa.

Q&A: Can chiplets save the US semiconductor industry?

The effort to re-shore chip manufacturing in the US could be in peril as a new presidential administration has signaled a shift in direction, all while the semiconductor industry seems at times to be struggling.

President-elect Donald J. Trump has indicated the CHIPS and Science Act passed under the Biden Administration could be on the chopping block once he takes office on Jan. 20, 2025.

Just last week, the federal government said it plans to cut back by more than $500 million the funding it planned to divvy out to Intel to build new fabrication facilities as the company has undergone layoffs in the face of financial challenges.

Brandon Lucia, a Carnegie Mellon professor of electrical and computer engineering and CEO of chip startup Efficient Computer, believes the success of the CHIPS Act will hinge on three factors: substantial funding, advancements in manufacturing capabilities, and a thriving ecosystem of innovators in the U.S. 

Efficient Computer is planning to launch its first commercial chip — an energy-efficient, general-purpose processor — in the first half of 2025. As demand for more powerful chips grows in tandem with the evolution and adoption of artificial intelligence, Lucia predicts chipmakers will prioritize energy efficiency for improved longevity and performance; they’ll also be forced to address sustainable manufacturing in semiconductor fabs to address environmental issues such as water runoff and carbon footprints.

Computerworld spoke with Lucia about the state of the CHIPS Act and the future of chip manufacturing. The following are excerpts from that interview.

Why do you believe President Biden has held off on actually disbursing the CHIPS Act funding? “I wouldn’t want to speculate on anything related to the enactment of government policy. I can tell you relative to the enactment of CHIPS Act…that I think there’s a big opportunity, whether it’s the CHIPS Act or something else. There’s a lot of opportunity for big-time innovation, but you need a lot of money to get this stuff done.

“So, while I’m not a political pundit, I think a big allocation of resources going into the domestic semiconductor industry is a great way to support innovation and to level up across the entire industry — from innovation to manufacturing and everything in between.”

How can the current administration get the funds distributed over the next two months? “I believe he’s on a very tight schedule. I think that you have to get the funding out there. It’s important to support the innovation economy around the semiconductor industry. It would be a real boon for the industry, whether through the CHIPS Act or not; it’s important to have that big allocation of resources into the semiconductor industry…. That means university innovation, basic research and start-up companies.

“It’s also about growing the ecosystem, and this is where I think the resources really begin to pay off — in developing new standards and new processes, supporting things like advanced manufacturing and new technologies like chiplets and advanced packaging.

“When I choose a process node through which I will implement my semiconductor product, I’m compelled by pricing, competitive performance and efficiency, time to market, and the complexity on the business side for manufacturing.”

How do you see chiplet-based processor packaging playing into the future success of the US semiconductor industry? “For different components of my system, I may choose to make one chiplet in one technology node and a second chiplet in another technology node because they offer different advantages from a technology perspective. The opportunity with advanced packaging of chiplet technology is you can integrate multiple heterogeneous chiplets together; they can come from different fabs. It’s very cool.

“Then you put it into a layer called an interposer and that’s something you use to glue the chiplets together and also communicate between them. So it gives them channels and wires to talk back and forth. When you do that, you can produce very sophisticated designs that can take advantage of the best options in the market.

“It’s also supportive of an ecosystem where a company like Efficient can produce a chiplet — the biggest value in our design — and distribute that broadly. In the old days, even today, the way that typically happens is by selling licenses to the IP inside of your chip. So, that means I talk to a customer developing a chip and I can sell them design code resources and resources to use our architecture.

“Chiplets change the game. They say we can now produce a piece of hardware and sell that as a bare die off the manufacturing line, and you can integrate that into your heterogeneous chiplet-based product. So, it supports this innovation ecosystem where you can have many suppliers of chiplets with different capabilities and you can have a much simpler integration path. That path is emerging and it’s a very important piece of the innovation ecosystem moving forward.”

An example of a chiplet-based semiconductor where multiple smaller processors can work in tandem to address varying applications that are all tied together to serve a single purpose.

Shutterstock/Pete Hansen

How do sustainability efforts factor into the future of semiconductor production? “There are several dimensions to sustainability when it comes to manufacturing and operating semiconductors. The first is carbon emissions [which is] wrapped up in production and distribution of semiconductor parts. In the academic research, we call that the embodied system in production. That’s all the carbon emissions accounted for in the production and distribution.

“Then you have operational carbon emissions. This gets a lot of attention in the media because of the enormous amounts of energy that go into running data centers handling AI functions. So, you have those two categories and any system represents a point in the tradeoff of the space between those. You can produce more specialized chips that increase the amount of embodied carbon. You have special purpose accelerators for every function of AI you may want to do in a data center. Each of them would have potentially higher speed and efficiency, so that decreases their operational carbon.

“But the embodied carbon in a system that has that many chips is higher because we had to design, manufacture and distribute each of those different designs, and so the costs go up.”

Which semiconductor manufacturer has the best chance of re-shoring its manufacturing in the US? “I think there [are] a lot of interesting things going on inside Intel right now. I think they’re at an important moment in their existence. I think they’re pushing innovation in trying to develop what I think is the next generation and the generation after that in retail foundry services and manufacturing tech nodes. The conversation tends to turn toward their 18A node, which is a very advanced tech node. I think that will be a big win when Intel stands up manufacturing in the US. That turns into a roadmap for the future.

“Then you can’t have [that] conversation without talking about TSMC in the southwest with their massive fabrication implantation effort going on there. When that comes online, that will bring some of the most advanced tech nodes that exist today to domestic production, and that’s important for a variety of reasons. I think over the next few years the onshoring of semiconductor manufacturing for defense and security applications will be very important given the geopolitical state of the world today.”

Brandon Lucia holds one of his company’s prototype chips.

Brandon Lucia

Trump wants the federal government to put tariffs on overseas semiconductor makers who ship their products to the US instead of funding companies to incentivize chip manufacturing here. Do you agree? “I think in order to answer that question, I’d have to be more tied into the function of government and the foundation for the tariffs. What I can say is that both approaches attempt to get at the same end point.

“I think getting the resources in to support the semiconductor manufacturing industry is incredibly important. Those are two ways to do that, but there are many other ways to grow the domestic ecosystem through policy, industry efforts and advanced packaging techniques.

“Regardless of the mechanism, I think the need is there. I think what we need are resources to do basic research and take those ideas and put them into innovative start-up companies, even in support of incumbent [companies] in expanding their efforts in advanced manufacturing and foundries. Again, we’re seeing that with Intel and their 18A chip and TSMC nodes that will be manufactured in the southwest.”

What kind of a thriving ecosystem is needed to support a healthy domestic semiconductor industry? “I have a strong belief in fundamental research in academia and industry as a driver of this. You have successful examples in some of the bigger companies with research arms and many examples of big ideas emerging from more basic research that happens in academia settings.

“It’s really driven economically, but also by things like the National Science Foundation directly supporting early-stage research that can be 10 years out from commercialization of a product sometimes. Much of what we’re doing at Efficient…was funded through the National Science Foundation. It goes back almost a decade.

“The other leg here is a thriving ecosystem of start-ups and an environment in which they can grow and produce value. You put all those things together and it’s a very big roadmap to how we could revolutionize the semiconductor industry. We can do fabrication of advanced nodes and packaging. This a la carte idea allows for new markets to emerge around this chiplet packaging technology.

“Then you have the ability for a startup to launch in a straightforward way to bring technology like that to market. Over a five-year window, a company could emerge and tap into all that research and tap into fabrication and manufacturing ecosystem to produce something that’s really new and has a lot of value.

“This is really our origin story. We came from the result of research that went on for nearly a decade. We realized we were doing something new that was untapped in the market today.”

Is your company’s design based on chiplets? “Right now, ours is not a chiplet-based design. In the future, we see a big opportunity for going into chiplet integration. Our architecture is called Fabric and it’s a way of mapping computation in space across computing resources implemented in a chip. The basic idea is we have our architecture and it’s scalable to include more computing resources, and without decreasing efficiency, we can increase the performance in a chip.

“With chiplet technology, we can have multiple chiplets on our fabric architected together, which is a big opportunity for us to scale up from where we are today, which is focused on embedded applications and things like infrastructure, and wearables and space and defense applications. We can scale up further toward the edge, maybe even edge-cloud and, some day, data center applications.”

My spur-of-the-moment Chromebook surprise

If you’ve read this column for long, you probably know that when it comes to tech purchases — and tech decisions in general, really — I’m typically not one to be hasty.

It’s practically in my blood at this point. I’ve spent so many years studying, researching, and obsessing over this stuff (both personally and professionally!) that it’s tough for me to commit to buying a new product or even using a new service without really digging in and thinking through the implications.

Plain and simple, I like to feel confident that whatever I’m using is not only “the best” in some broad, general sense — but is, critically, the best for me and my specific work purposes. It’s the same thing I encourage everyone else to do, too, when considering new tech twists and turns (whether via my own recommendations or any other source).

That’s why I really surprised myself when I happened to be walking through a Best Buy the other day and ended up walking out with a brand spankin’ new $600 laptop — a Chromebook, to be specific.

But Goog almighty, am I ever glad I made that uncharacteristically fast decision.

[Get fresh Googley insight in your inbox with my free Android Intelligence newsletter. Three new things to know and try each Friday!]

Google’s Pixelbook and my ChromeOS journey

Before I get to the specific Chromebook I purchased and why, let me back up real quick for a pertinent bit of perspective.

I’ve been both using and writing about Google’s ChromeOS platform since its very earliest days — back before the devices around it were even called Chromebooks.

I’ve personally owned and relied on Google’s precious Pixelbook since that device’s debut in 2017. While I’d had plenty of other ChromeOS vessels before it, the Pixelbook was the first Chromebook I truly fell for — thanks to its rare combination of power, practicality, and design. The device’s sleek and minimalist form and in-a-league-of-its-own keyboard made it a singular treat to use and served as the perfect match for the more-than-capable computing power inside.

The Google Pixelbook Chromebook, released in 2017.

Google

For years, the Pixelbook left me with little to ask for. But despite the fact that the laptop is still technically being supported with regular ongoing ChromeOS updates — and will continue to be all the way through August of 2027 (!) — the system has more recently started showing its age.

I’ve been thinking about a replacement for it for a while now. But while I’ve tried out tons of perfectly capable and decent Chromebook options, I’ve yet to find one that really speaks to me and stands out in the same way.

The reason, I’ve come to realize, is that more and more, current Chromebooks are mostly about being good enough. For most people and purposes — whether businesses, schools, or just budget-conscious individual device-buyers — that’s perfectly fine and probably makes a lot of sense. But for those of us who place an emphasis on design and device quality both inside and out, the options have been a little lackluster lately. And so despite my motivation to find a suitable replacement for my rapidly aging Pixelbook, nothing I’ve considered had quite fit the bill.

At least, that was the case — until an alluring new digital vixen caught my eye. And, as you’ve no doubt realized by now, it didn’t take long to realize it was the one I’d been waiting for.

ChromeOS, take 2: My next Chromebook chapter

I’d been intrigued by the new Samsung Galaxy Chromebook Plus since I first heard about it ahead of its arrival this autumn. But it wasn’t until I spent several minutes with the system in a physical store that I realized just how special it actually was — and how much it filled the void I’d been seeking to satisfy since my poor Pixelbook started growing a little rusty.

So, first things first: The Samsung Galaxy Chromebook Plus is far less clunky than its awkward name suggests. It’s sleek and almost shockingly minimal in its design — with an eye-catchingly subtle blue casing and an understated Samsung logo on the outer cover but no branding of any sort on the inside. (One of my biggest tech-nerd pet peeves is paying hundreds of dollars for a device and then being forced to stare at an ugly company tramp stamp right atop or beneath the screen for every second that I use the damn thing.)

Samsung’s Galaxy Chromebook Plus is refreshingly sleek and free from over-the-top branding.

Samsung

That alluring appearance is definitely what drew me in. But the way the Chromebook feels is what ultimately won me over.

No two ways around it: The Galaxy Chromebook Plus is a skinny fella. This thing is thin in a way I haven’t experienced on a laptop since — well, the Pixelbook.

Thin is in with Samsung’s Galaxy Chromebook Plus.

JR Raphael, IDG

In a sharp-as-can-be contrast to the typical utilitarian Chromebook of our current moment, the Galaxy Chromebook Plus feels light and luxurious. It’s a body that makes you want to carry and, dare I say, caress it constantly. All over-the-top tenderness aside, it’s a premium computer through and through and the closest thing I’ve seen to a spiritual successor to Google’s fading Pixelbook star.

Now, do all those surface-level superficialities matter, you might ask? As with so many things in the tech decision-making matrix, the answer depends entirely on you. Some people are perfectly satisfied with a utilitarian tech approach and knowing that their laptop has what counts on the inside — and hey, that’s okay! But some of us also appreciate the design and form and how those factors affect the overall experience of carrying and using a device.

And my goodness, when it comes to the Galaxy Chromebook Plus, is that experience ever exceptional.

The sweet surprise of Samsung’s Galaxy Chromebook Plus

No exaggeration: Samsung’s Galaxy Chromebook Plus is just a delight to work on — a true treat that makes you want to carry it everywhere and never put it down. The display is an AMOLED panel, which results in richer colors and deeper blacks than most standard laptop screens, and the 15.6″ display size is such a pleasant change after years of squeezing into smaller laptop dimensions. That arrangement also includes the side perk of a larger, less cramped keyboard, complete with a number pad at the right.

Even the keyboard is unusually roomy on the Galaxy Chromebook Plus.

Samsung

All in all, the Galaxy Chromebook feels lavishly spacious, and the slim frame keeps it from seeming at all bulky or even the least bit unwieldy. The whole “thin” race can slide into silly territory quickly, but I’m tellin’ ya: This laptop’s slimness is such a sweet surprise and something you really do appreciate. It’s an absolute pleasure to use — and, provided your budget can support it, adding that pinch of pleasure into your workday can make a world of difference when it comes to both quality of life and productivity.

I’m also happy to share that the qualities of the Galaxy Chromebook that gave me pause early on haven’t proven to be particularly problematic in real-world use: 

  • I worried that the system’s 8GB of RAM might be insufficient for my multitasking-heavy, resource-intensive style of work — but so far, at least, the computer’s been quite capable of handling anything I throw at it.
  • As reported, Samsung has been permitted to preload some of its own apps on the Chromebook — something that hasn’t traditionally been permitted in the ChromeOS ecosystem — but outside of the presence of a single preinstalled Samsung Notes app (which, thankfully, can easily be removed), there hasn’t been anything out of the ordinary or concerning.
  • And while Google’s move to replace the signature ChromeOS Launcher/Search/Everything key with a weird new Gemini-connected “Quick Insert” key still strikes me as misguided, it’s an easy enough change to work around for anyone in the know. In fact, a quick tweak of the system keyboard settings is all it takes to restore the Launcher key to its proper position and to bump “Quick Insert” down to the keyboard’s bottom row, in a newly added “G” key that seems like a much more sensible spot.
The Search/Launcher/Everything key can be returned to its proper place with a couple quick clicks in the ChromeOS settings.

JR Raphael, IDG

After growing accustomed to having a convertible Chromebook that sports a touchscreen, I’d also been hesitant to go with a model that’s more of a traditional clamshell form without any touch capabilities. But over the past year or so, I’ve added a Pixel Tablet into my personal tech lineup and started relying on that for more casual video-watching and other such “lean-back”-style activity. This has pushed the laptop back into a more narrowly defined role of active keyboard-involving work for me, and consequently, I don’t find myself missing the touch and converting factors much at all — certainly nowhere near what I would’ve expected if I’d considered this same shift a couple years ago.

The only big question lingering in my mind now is what’s in the cards for ChromeOS from a longer-term perspective. As you may have read, rumors suggest Google could be looking at essentially replacing ChromeOS with Android — yes, again — or at least replacing it with some future version of Android that’s designed to provide a similar sort of desktop-friendly computing experience.

While I have my doubts about how effectively Google can pull such a feat off, I’m cautiously optimistic that a ChromeOS-Android combo could actually be a good thing at this point — and, potentially, could even be less of a drastic front-facing change than most of us might expect.

But regardless, if such a move ends up happening — and to be clear, that’s still a hefty “if” at this point — it’s almost certainly still years away. For now, I very much enjoy using ChromeOS in its current form. This laptop is guaranteed ongoing operating system updates through June of 2032, at a minimum, and one has to imagine that if Google were to start bringing a future Android version into Chromebooks down the line, it’d phase that change in gradually and avoid or at least make optional any sort of dramatic switch in how existing devices operate.

Long story short, I’m not too worried about what ChromeOS (or whatever we’re calling it) might look like a decade from now. If it stays the current course — hey, cool. I’m content! If an enticing option comes along to shift over to a more Chromebook-like version of Android in the future, maybe that could be interesting, too. For the moment, though, I couldn’t be happier with the laptop I’m using.

And, remember: This is coming from someone who had been stubbornly hanging onto his aging Pixelbook and refusing to accept any of the alternatives that had come along in the years since.

For now, this Chromebook is the one to beat. And this is the first time in a long time I’ve felt fully confident saying that — confident enough to pick up this system myself and happily head home with it the very same day I touched it for the first time.

Want even more Googley goodness? Come check out my free weekly newsletter to get next-level tips and insights delivered directly to your inbox.