Month: December 2024

IT at Tokio Marine means continuous learning — and many hats

The insurance industry is known for being slow and steady, but at Tokio Marine North America Services (TMNAS), a dogged emphasis on growth, diversity of thought, and continuous learning has helped accelerate an IT culture predicated on innovation and career mobility.

As a shared services organization supporting the Tokio Marine family of insurance providers, the 300-plus-person IT group operates in a consulting-like mode. This means that staffers are encouraged to stay abreast of disciplines such as agile development, artificial intelligence (AI), and user experience. They’re comfortable wearing many hats.

To promote a culture of continuous learning, TMNAS has opened up new career pathways, reworked 190 job descriptions, and refreshed its individual development programs. Long-standing IT job families — including programmer and systems analyst — have been updated to reflect current work patterns, and new roles have been added for data science, DevSecOps, intelligent automation, and more.

“It was time to run a comb through our job families to correct and add branches to reflect various modern disciplines,” explains Bob Pick, executive vice president and CIO at TMNAS, which ranked second overall and No. 1 in both the career development/training and DEI categories among small companies in Computerworld’s “Best Places to Work in IT 2025” survey.

With AI and generative AI in the spotlight, TMNAS also is standing up training programs and pilot initiatives to prepare IT staffers for responsible use of the new technologies. The company has formed two generative AI (genAI) working groups, one focused on technology and the other on risk. “We’re making incremental staff investments as well as looking to system integration and consulting partners to learn about genAI and do things safely,” Pick says.

TMNAS sponsors and promotes philanthropic activities throughout the year to strengthen local communities.

Beyond technical disciplines, TMNAS offers several additional pathways for career growth. One takes aim at IT professionals interested in individual advancement without the responsibility of managing people. The positions offer pay and prestige commensurate with a management-level post. A second gives seasoned managers a chance to advance even if they’re not ready for a top officer role or there isn’t an available C-level position. A third, delivered by Tokio Marine’s Global Training program, offers rotational experience and training for individuals in business roles interested in crossing over to IT, providing entrée to new careers in areas such as cybersecurity without having to leave the firm.

Flexibility and diversity of thought

The TMNAS IT organization has always supported flexible work arrangements, and post-COVID, the majority of TMNAS’s IT staff (75%) is now fully remote. IT employees and managers collaboratively establish the best working arrangement, based on job requirements, and the firm has implemented numerous technologies — from collaboration spaces to in-office A/V equipment — to make hybrid collaboration easier and fully productive. The hybrid model has also opened up IT recruiting in areas outside of the company’s Pennsylvania headquarters. “We now have 50-state recruiting, and the number and quality of résumés have shot through the roof,” Pick says.

Fostering diversity and community is ingrained in the TMNAS culture, and the company stood up a number of employee resource groups (ERGs) last year focused on women, generational workers, caregivers, and LGBTQ+ staff members. The efforts are moving the needle on diversity in IT: 30% of the ERG leaders hail from the IT department. Many of the top company leaders, including the CFO and the CHRO & Chief Legal Officer, are women, and historically the percentage of women working in IT at TMNAS has been above market averages. In addition, the presence of previously underrepresented groups in IT, such as the Black, Indigenous, and people of color (BIPOC) community, is increasing. At TMNAS, nearly half of IT management and non-management staff identify as BIPOC.

“We focus on finding the best people for the job,” Pick says, “and the best people come from a variety of backgrounds.”


OpenAI expands multimodal capabilities with updated text-to-video model

OpenAI has released a new version of its text-to-video AI model, Sora, for ChatGPT Plus and Pro users, marking another step in its expansion into multimodal AI technologies.

The original Sora model, introduced earlier this year, was restricted to safety testers in the research preview phase, limiting its availability.

The new Sora Turbo version offers significantly faster performance compared to its predecessor, OpenAI said in a blog post.

Sora is currently available to users across all regions where ChatGPT operates, except in the UK, Switzerland, and the European Economic Area, where OpenAI plans to expand access in the coming months.

ChatGPT, which gained global prominence in 2022, has been a driving force behind the widespread adoption of generative AI. Sora reflects OpenAI’s ongoing efforts to maintain a competitive edge in the rapidly evolving AI landscape.

Keeping pace with rivals

The move positions OpenAI to compete with similar offerings from rivals like Meta, Google, and Stability AI.

“The true power of GenAI will be in realizing its multi-model capabilities,” said Sharath Srinivasamurthy, associate vice president at IDC. “Since OpenAI was lagging behind its competitors in text to video, this move was needed to stay relevant and compete.”

However, both Google and Meta outpaced OpenAI in making their models publicly reviewable, even though Sora was first previewed back in February.

“OpenAI likely anticipated becoming a target if it launched this service first, so it seems probable that they waited for other companies to release their video generation products while refining Sora for public preview or alpha testing,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “OpenAI is offering longer videos, whereas Google supports six-second videos and Meta supports 16-second videos.”

Integration remains a work in progress, though OpenAI is expected to eventually provide data integration for Sora comparable to its other models, Park added.

Managing regulatory concerns

Sora-generated videos will include C2PA metadata, enabling users to identify the content’s origin and verify its authenticity. This is significant amid global regulatory efforts to ensure AI firms adhere to compliance requirements.

“While imperfect, we’ve added safeguards like visible watermarks by default, and built an internal search tool that uses technical attributes of generations to help verify if content came from Sora,” OpenAI said in the post.

Even with such safeguards, the use of data in training AI models continues to spark debates over intellectual property rights. In August, a federal judge in California ruled that visual artists could proceed with certain copyright claims against AI companies like Stability AI.

“As with all of OpenAI’s generative tools, Sora faces challenges related to being trained on commercial data, which is often subject to copyright and, in some cases, patents,” Park said. “This could create opportunities for vendors like Anthropic and Cohere, which have been more focused on adhering to EU governance guidelines.”

Extensive testing is critical for video-based generative AI applications due to concerns such as the rise of deepfakes, which likely contributed to the time it took OpenAI to release the model, according to Srinivasamurthy.

China launches anti-monopoly probe into Nvidia amid rising US-China chip tensions

China has initiated an investigation into Nvidia over alleged violations of the country’s anti-monopoly laws, signaling a potential escalation in the ongoing tech and trade tensions between Beijing and Washington, the Global Times reported.

The probe, announced by China’s State Administration for Market Regulation (SAMR), aims to assess whether the US chipmaker breached conditions tied to its 2019 acquisition of Israeli chip designer Mellanox Technologies.

Cloudflare Radar Year in Review 2024: big source of traffic is AI crawlers

The internet is increasingly where we live today. In fact, global internet traffic grew 17.2% this year alone, according to Cloudflare.

The network provider has released its fifth annual Radar Year in Review report, offering insights into connectivity, security, outage frequency, device usage, and a multitude of other trends.

Not surprisingly, Google, Facebook, Apple, TikTok, and Amazon Web Services (AWS) are the most popular internet services worldwide, while Chrome led the pack (65.8%) as the most popular web browser globally.

One big source of traffic, it noted, is AI crawlers, which are increasingly under scrutiny as they scan the web and gobble up voluminous amounts of data to train large language models (LLMs). A big concern is that some take data even when they’re not supposed to, as opposed to “verified” good bots that are transparent about who they are and typically come from search engines and other known services (such as Googlebot, GPTBot, Qualys, and Bingbot).

Cloudflare tracks AI bot traffic to determine which are the most aggressive, which have the highest volume of requests, and which perform crawls on a regular basis. Researchers found that “facebookexternalhit” accounted for the most traffic throughout the year (27.16%) — the bot is notorious for creating excessive traffic — followed by Bytespider (from TikTok owner ByteDance) at 23.35%, Amazonbot (13.34%), Anthropic’s ClaudeBot (8.06%), and GPTBot (5.60%).
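These crawlers announce themselves through the user-agent tokens named above, and traffic classification typically starts with matching against a list of known tokens. A minimal sketch of that idea (note this is illustrative, not Cloudflare’s implementation; real bot verification also validates source IPs and reverse DNS, since a User-Agent string alone can be spoofed):

```python
# Map the AI crawler tokens named in the report to their operators.
AI_CRAWLER_TOKENS = {
    "facebookexternalhit": "Meta",
    "Bytespider": "ByteDance",
    "Amazonbot": "Amazon",
    "ClaudeBot": "Anthropic",
    "GPTBot": "OpenAI",
}

def classify_user_agent(user_agent: str):
    """Return the operator name if the UA contains a known AI crawler token,
    else None. Matching is case-insensitive on the token substring."""
    ua = user_agent.lower()
    for token, operator in AI_CRAWLER_TOKENS.items():
        if token.lower() in ua:
            return operator
    return None
```

For example, `classify_user_agent("Mozilla/5.0 (compatible; GPTBot/1.2)")` returns `"OpenAI"`, while an ordinary browser string returns `None`.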

Interestingly, Bytespider traffic gradually declined over the year, ending roughly 80% to 85% lower than at the start of the year, while Anthropic’s ClaudeBot traffic saw a spike in the middle of the year, then flattened out. GPTBot traffic, for its part, remained pretty consistent throughout 2024.

How we connect (or don’t)

Hypertext Transfer Protocol (HTTP) is the backbone of web data transmission; HTTP/1.0 was first standardized in 1996, HTTP/2 was released in 2015, and HTTP/3 rolled out in 2022. Cloudflare found that HTTP/2 still accounts for nearly half of web requests (49.6%), while 29.9% use the older HTTP/1.x and 20.5% use HTTP/3.

Cloudflare also keeps close track of another critical communications standard, the Transmission Control Protocol (TCP), which ensures reliable data transfer between network devices. The company found that 20.7% of TCP connections were unexpectedly terminated before any useful data was exchanged. TCP anomalies can occur due to denial-of-service (DoS) attacks, network scanning, client disconnects, connection tampering, or “quirky client behavior,” Cloudflare pointed out.

The largest share of TCP connection terminations identified by Cloudflare took place “post-SYN”: after a server received a client’s synchronization (SYN) request, but before it received the acknowledgement (ACK) that completes the three-way handshake.
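The staging can be made concrete with a small classifier keyed on which client packets the server actually saw. This is an illustrative sketch, not Cloudflare’s methodology; only the “post-SYN” label comes from the report, and the other stage names here are assumptions for the example:

```python
def termination_stage(saw_syn: bool, saw_ack: bool, saw_data: bool) -> str:
    """Label where in the TCP handshake a connection died, based on which
    client packets the server observed."""
    if not saw_syn:
        return "no connection attempt"
    if not saw_ack:
        # Server got the SYN but never the ACK completing the handshake.
        return "post-SYN"
    if not saw_data:
        # Handshake completed, but no useful data was ever exchanged.
        return "post-ACK"
    return "established with data"
```

Under this scheme, the dominant case Cloudflare describes maps to `termination_stage(True, False, False)`, i.e. `"post-SYN"`.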

On the security front, Cloudflare found that, of the trillions upon trillions of emails sent this year, an average of 4.3% were malicious. These most commonly contained deceptive links (42.9%) and deceptive identities (35.1%). Both methods were found in up to 70% of analyzed emails at different times throughout the year.

Cloudflare also noted that the Log4j vulnerability remains a tried-and-true attack method, with exploit activity anywhere from 4x to 100x higher than that of other common vulnerabilities and exposures (CVEs).

In addition, nearly 100% of email messages processed by Cloudflare from the .bar (bar and pub), .rest (restaurant), and .uno (Latin America) domains were found to be either spam or outright malicious.

Beyond CrowdStrike

While many accuse CrowdStrike of breaking the internet — the July outage will undoubtedly go down as one of the largest in history — Cloudflare noted that there were actually 225 major internet outages around the world this year. The majority occurred in Africa, the Middle East, and India.

More than half of these outages were the result of government-directed shutdowns; others were caused by cable cutting, power outages, technical problems, weather, maintenance, and cyberattacks. Cloudflare reported that many were short-lived (lasting just a few hours) while others “stretched on for days or weeks,” such as one in Bangladesh that lasted over 10 days in July.

Who has the fastest internet (and what are they connecting on)?

Cloudflare ranked countries across the globe on internet quality, based on upload speed, download speed, idle latency, and loaded latency. Who leads the pack? Spain, which boasts download speeds of 292.6 Mbps and upload speeds of 192.6 Mbps. All of the top-ranked countries had download speeds above 200 Mbps.
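One simple way to fold those four metrics into a single ranking is to rank countries on each metric separately and average the ranks. Cloudflare does not publish its exact weighting here, so the equal-weight scheme below is purely a hypothetical sketch (the metric names and sample figures are assumptions for illustration):

```python
def rank_countries(metrics: dict) -> list:
    """Rank countries by average per-metric rank (lower average = better).
    Speeds rank high-is-better; latencies rank low-is-better."""
    higher_better = {"download_mbps", "upload_mbps"}
    names = list(metrics)
    sample = next(iter(metrics.values()))  # metric names from any country
    avg_rank = {}
    for name in names:
        ranks = []
        for metric in sample:
            ordered = sorted(
                names,
                key=lambda n: metrics[n][metric],
                reverse=metric in higher_better,
            )
            ranks.append(ordered.index(name))
        avg_rank[name] = sum(ranks) / len(ranks)
    return sorted(names, key=avg_rank.get)
```

Fed the article’s figures for Spain plus a slower, higher-latency comparison country, Spain comes out on top.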

As for how people around the world connect, 41.3% of global internet traffic came from mobile devices, and 58.7% from laptops and PCs. However, in roughly 100 regions of the world, the majority of traffic came from mobile devices. Cuba and Syria had the largest mobile device traffic share (accounting for 77%), with other high-mobile-usage areas including the Middle East/Africa, Asia Pacific, and South/Central America.

Cloudflare pointed out that these traffic measurements are similar to those of 2023 and 2022, “suggesting that mobile device usage has achieved a steady state.” This should come as no surprise, as roughly 70% of the world’s population uses smartphones today.

Microsoft’s Copilot Vision assistant can now browse the web with you

Microsoft’s Copilot Vision feature is now available for users to test in a limited preview.

Built natively into Microsoft’s Edge browser, Copilot Vision analyzes and understands the contents of web pages you visit. You can then ask the AI assistant for information and guidance about what appears on screen. 

“It is a new way to invite AI along with you as you navigate the web, tucked neatly into the bottom of your Edge browser whenever you want to ask for help,” the Copilot team said in a blog post Friday.  “It’s almost like having a second set of eyes as you browse, just turn on Copilot Vision to instantly scan, analyze, and offer insights based on what it sees.”   

The feature, which is opt-in, will function only on select websites to begin with.

Copilot Vision was announced as part of an overhaul to make the consumer Copilot more of a personal AI assistant. This also included the introduction of Copilot Voice, with four voice options aimed at enabling more natural interactions. 

“Increasingly, generative AI assistants are becoming multi-modal (language, vision and voice) and have personalities that can be configured by the consumers,” Jason Wong, distinguished vice president analyst at Gartner, said about the Copilot redesign at the time. “We will see even more anthropomorphism of AI in the coming year.” 

Copilot Vision is rolling out to a limited number of Copilot Pro customers in the US via Copilot Labs. Copilot Pro costs $20 per month. 

On Friday, Microsoft also announced an expanded preview for Windows Recall, its searchable timeline tool. Having made Recall available to Windows Insiders on Copilot+ PCs running Qualcomm’s Snapdragon processors, Microsoft has now expanded access to devices with AMD and Intel chips. 

Apple’s iPhone SE 4 will matter very much indeed

It might not be the biggest-selling or most expensive product in Apple’s basket, but a very important part of Apple’s future will be defined by the upcoming iPhone SE upgrade in 2025. That’s because it is expected to bring in a new Apple-made 5G modem, impressive camera improvements, and support for Apple Intelligence.

And all of those will require more memory and a much faster processor.

To recap recent claims, here’s what we expect for the iPhone SE 4:

An Apple-made 5G modem

Apple has been working on its own 5G modem for years and has spent billions on the task. Bloomberg tells us the company is almost ready to go with its home-developed modem, though it will continue using Qualcomm modems in some devices for a while yet, in part because they support mmWave, which the new Apple modems allegedly do not.

Apple’s first modems will appear in the iPhone SE 4 and iPhone 17 Air. The good news is that the new modem will enable Apple to make thinner devices; the bad news is that it might deliver slower download speeds than Qualcomm modems on some networks. The plan is to deploy Apple modems across all iPhones and iPads by around 2028 — and we might also see 5G arrive in Macs, at long last.

And a better camera

One report claims the iPhone SE 4 will include a single-lens 48-megapixel rear camera and a 12-megapixel TrueDepth front camera. That’s a big improvement on the current model, which offers just a 12-megapixel rear camera and a measly 7-megapixel front camera. These improvements should make for better photography and videoconferencing, and hint at good support for camera-driven object recognition using Apple Intelligence.

The phone is also expected to support Face ID and to feature a 6.1-inch OLED display.

Apple Intelligence

That the fourth-generation iPhone SE will support Apple Intelligence isn’t surprising; on its current path, all Apple hardware is expected to integrate AI to some extent. In hardware terms, that means the new iPhone will need a higher-capacity battery (because running large language models is thirsty work), 8GB of memory, and a faster processor. That almost certainly means an A18 chip, as fielding an A17 processor would date the product before it even joined the race.

For Apple Intelligence to truly succeed, Apple needs to invest in growing the size of the ecosystem, which is why it makes sense to go for the A18. We shall see, of course.

Made in India?

There are a handful of additional improvements expected, including a built-in eSIM, USB-C, and a better battery. Much of the reporting suggests the company will roll out its lowest-priced iPhone sometime around March 2025, which means mass production has probably begun. We don’t yet know whether the phones will be manufactured in India, which could matter if Apple wants to keep the price at around $500 or below.

It seems possible. 

After all, rumor has it that Apple hopes to manufacture around 25% of all its smartphones in India by the end of 2025. It’s also true that India’s traditionally value-conscious consumers are increasingly prepared to invest in pro smartphones; even so, there is a massive market of people who don’t yet have these devices, with smartphone penetration at around 40%.

With the economy growing fast, a lower-cost but powerful India-made iPhone equipped with a fast processor and support for AI could resonate strongly in India, where Apple’s efforts to build the market are already having a positive impact. A range of cool colors and a ‘Made in India’ label on the box could help Apple convince some of those who don’t yet have smartphones to ready their rupees for an AAPL stock-saving smartphone sale. And even if that doesn’t happen, the device itself could prove critical to the company’s 2025 efforts in that market.

What about the modem?

The 5G modem is, of course, the big Apple story here. Bloomberg has claimed Apple is working on three models at the moment: the first, to be introduced in the iPhone SE, lacks mmWave support; a second does support mmWave; and a third “Pro” modem aims to match or exceed what the best available 5G chips can do.

The thing is, 5G isn’t the only story in town. Apple continues to make big investments in satellite communications, as recently confirmed in a series of investor reports from its preferred network supplier, Globalstar. The company already offers a range of satellite-based services in several nations through that partnership, and it’s reasonable to expect whatever 5G chips Apple comes up with to continue and enhance support for these life-saving services.

Apple’s “whole widget” approach to communication services pretty much demands that its network of satellites and accompanying smartphone modems sing from the same hymn sheet, and it will be interesting to see if the song remains the same once they do. I think this connection, along with the ability to maintain current price points by swapping out Qualcomm kit for something else, will remain strategic imperatives for Apple through 2028. Is it possible Apple’s AI servers could reduce their environmental impact by being based in, and cooled by, space?

That’s a very long shot, of course, but feasibility studies to do just that have already taken place. 

You can follow me on social media! Join me on Bluesky, LinkedIn, Mastodon, and MeWe.