OpenAI expands multi-modal capabilities with updated text-to-video model

OpenAI has released a new version of its text-to-video AI model, Sora, for ChatGPT Plus and Pro users, marking another step in its expansion into multimodal AI technologies.

The original Sora model, introduced earlier this year, was restricted to safety testers in the research preview phase, limiting its availability.

The new Sora Turbo version offers significantly faster performance compared to its predecessor, OpenAI said in a blog post.

Sora is currently available to users across all regions where ChatGPT operates, except in the UK, Switzerland, and the European Economic Area, where OpenAI plans to expand access in the coming months.

ChatGPT, which gained global prominence in 2022, has been a driving force behind the widespread adoption of generative AI. Sora reflects OpenAI’s ongoing efforts to maintain a competitive edge in the rapidly evolving AI landscape.

Keeping pace with rivals

The move positions OpenAI to compete with similar offerings from rivals like Meta, Google, and Stability AI.

“The true power of GenAI will be in realizing its multi-modal capabilities,” said Sharath Srinivasamurthy, associate vice president at IDC. “Since OpenAI was lagging behind its competitors in text-to-video, this move was needed to stay relevant and compete.”

However, both Google and Meta outpaced OpenAI in making their models available for public review, even though Sora was first previewed back in February.

“OpenAI likely anticipated becoming a target if it launched this service first, so it seems probable that they waited for other companies to release their video generation products while refining Sora for public preview or alpha testing,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “OpenAI is offering longer videos, whereas Google supports six-second videos and Meta supports 16-second videos.”

Integration remains a work in progress, though OpenAI is expected to eventually provide data integration for Sora comparable to its other models, Park added.

Managing regulatory concerns

Sora-generated videos will include C2PA metadata, enabling users to identify the content’s origin and verify its authenticity. This is significant amid global regulatory efforts to ensure AI firms adhere to compliance requirements.

“While imperfect, we’ve added safeguards like visible watermarks by default, and built an internal search tool that uses technical attributes of generations to help verify if content came from Sora,” OpenAI said in the post.

Even with such safeguards, the use of data in training AI models continues to spark debates over intellectual property rights. In August, a federal judge in California ruled that visual artists could proceed with certain copyright claims against AI companies like Stability AI.

“As with all of OpenAI’s generative tools, Sora faces challenges related to being trained on commercial data, which is often subject to copyright and, in some cases, patents,” Park said. “This could create opportunities for vendors like Anthropic and Cohere, which have been more focused on adhering to EU governance guidelines.”

Extensive testing is critical for video-based generative AI applications due to concerns such as the rise of deepfakes, which likely contributed to the time it took OpenAI to release the model, according to Srinivasamurthy.

China launches anti-monopoly probe into Nvidia amid rising US-China chip tensions

China has initiated an investigation into Nvidia over alleged violations of the country’s anti-monopoly laws, signaling a potential escalation in the ongoing tech and trade tensions between Beijing and Washington, the Global Times reported.

The probe, announced by China’s State Administration for Market Regulation (SAMR), aims to assess whether the US chipmaker breached conditions tied to its 2019 acquisition of Israeli chip designer Mellanox Technologies.

Cloudflare Radar Year in Review 2024: AI crawlers are a big source of traffic

The internet is increasingly where we live today. In fact, global internet traffic grew 17.2% this year alone, according to Cloudflare.

The network provider has released its fifth annual internet radar report, offering insights into connectivity, security, outage frequencies, device usage, and a multitude of other trends.

Not surprisingly, Google, Facebook, Apple, TikTok, and Amazon Web Services (AWS) are the most popular internet services worldwide, while Chrome led the pack (65.8%) as the most popular web browser globally.

One big source of traffic, it noted, is AI crawlers, which are increasingly under scrutiny as they scan the web and gobble up voluminous amounts of data to train large language models (LLMs). A big concern is that some take data even when they’re not supposed to, as opposed to “verified” good bots that typically come from search engines and are transparent about who they are (such as GoogleBot, GPTBot, Qualys, and BingBot).
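
As an aside for readers who run websites: the standard opt-out mechanism is robots.txt, which well-behaved crawlers honor. Below is a minimal sketch, using Python’s standard-library parser and hypothetical rules, of how a publisher might disallow AI training bots by user agent (the bot names match those in Cloudflare’s list; the URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules a publisher might serve to opt out of AI
# training crawls while still welcoming everyone else.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved ("verified") bot checks permission before fetching a page.
for bot in ("GPTBot", "Bytespider", "Googlebot"):
    allowed = parser.can_fetch(bot, "https://example.com/article")
    print(f"{bot}: {'allowed' if allowed else 'disallowed'}")
```

The catch, as the report implies, is that these rules are only advisory: crawlers that ignore them have to be blocked at the network level instead.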

Cloudflare tracks AI bot traffic to determine which are the most aggressive, which have the highest volume of requests, and which perform crawls on a regular basis. Researchers found that “facebookexternalhit” accounted for the most traffic throughout the year (27.16%) — the bot is notorious for creating excessive traffic — followed by Bytespider (from TikTok owner ByteDance) at 23.35%, Amazonbot (13.34%), Anthropic’s ClaudeBot (8.06%), and GPTBot (5.60%).

Interestingly, Bytespider traffic gradually declined over the year, ending roughly 80% to 85% lower than at the start of the year, while Anthropic’s ClaudeBot traffic saw a spike in the middle of the year, then flattened out. GPTBot traffic, for its part, remained pretty consistent throughout 2024.

How we connect (or don’t)

HyperText Transfer Protocol (HTTP), the backbone of web data transmission, was first standardized in 1996. HTTP/2 was released in 2015, and HTTP/3 rolled out in 2022. Cloudflare found that HTTP/2 still accounts for nearly half of web requests (49.6%), while 29.9% use the older HTTP/1.x and 20.5% use HTTP/3.
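
For the curious, it is easy to check which version your own client negotiates with a given server. Here is a minimal sketch using the third-party httpx library, which supports HTTP/1.1 and HTTP/2 (not HTTP/3); the target URL is a placeholder:

```python
import httpx  # third-party: pip install 'httpx[http2]'

# Offer HTTP/2 during the TLS handshake; httpx falls back to HTTP/1.1
# if the server doesn't support it.
with httpx.Client(http2=True) as client:
    resp = client.get("https://example.com")  # placeholder URL
    # http_version reports what was actually negotiated.
    print(resp.http_version, resp.status_code)
```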

Cloudflare also keeps close track of another critical communications standard, the Transmission Control Protocol (TCP), which ensures reliable data transfer between network devices. The company found that 20.7% of TCP connections were unexpectedly terminated before any useful data was exchanged. TCP anomalies can occur due to denial-of-service (DoS) attacks, network scanning, client disconnects, connection tampering, or “quirky client behavior,” Cloudflare pointed out.

The largest share of TCP connection terminations identified by Cloudflare took place “post SYN,” or after a server received a synchronization request, but before it received an acknowledgement.
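
To make the “post SYN” case concrete: the client sends a SYN, the server answers with a SYN-ACK, and the connection dies before the final ACK ever arrives. A rough scapy sketch of that half-finished handshake (the target address is a documentation placeholder; this needs root privileges and should only be aimed at hosts you control):

```python
from scapy.all import IP, TCP, sr1  # third-party: pip install scapy

# Hand-craft a bare SYN and wait for the server's SYN-ACK.
syn = IP(dst="192.0.2.10") / TCP(dport=80, flags="S", seq=1000)
syn_ack = sr1(syn, timeout=2)

# Withholding the final ACK leaves the server in exactly the state Cloudflare
# describes: it received a synchronization request but no acknowledgement.
# (In practice the OS kernel, which never opened this connection, may even
# answer the SYN-ACK with a RST, terminating the attempt "post SYN".)
if syn_ack is not None and syn_ack[TCP].flags == "SA":
    print("Got SYN-ACK; no ACK sent, so the connection ends post SYN.")
```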

On the security front, Cloudflare found that, of the trillions upon trillions of emails sent this year, an average of 4.3% were malicious. These most commonly contained deceptive links (42.9%) and deceptive identities (35.1%). Both methods were found in up to 70% of analyzed emails at different times throughout the year.

Cloudflare also noted that the Log4j vulnerability is still a tried-and-true attack method, being anywhere from 4x to 100x more active than other common vulnerabilities and exposures (CVEs).

In addition, nearly 100% of email messages processed by Cloudflare from the .bar (bar and pub), .rest (restaurant), and .uno (Latin America) top-level domains were found to be either spam or outright malicious.

Beyond CrowdStrike

While many accuse CrowdStrike of breaking the internet — the July outage will undoubtedly go down as one of the largest in history — Cloudflare noted that there were actually 225 major internet outages around the world this year. The majority occurred in Africa, the Middle East, and India.

More than half of these outages were the result of government-directed shutdowns; others were caused by cable cutting, power outages, technical problems, weather, maintenance, and cyberattacks. Cloudflare reported that many were short-lived (lasting just a few hours) while others “stretched on for days or weeks,” such as one in Bangladesh that lasted over 10 days in July.

Who has the fastest internet (and what are they connecting on)?

Cloudflare ranked countries across the globe on internet quality, based on upload speed, download speed, idle latency, and loaded latency. Who leads the pack? Spain, which boasts download speeds of 292.6 Mbps and upload speeds of 192.6 Mbps. All of the top-ranked countries saw download speeds above 200 Mbps.

As for how people around the world connect, 41.3% of global internet traffic came from mobile devices and 58.7% from laptops and PCs. However, in roughly 100 regions of the world, the majority of traffic came from mobile devices. Cuba and Syria had the largest mobile device traffic share (77%), with other mobile-heavy regions including the Middle East/Africa, Asia-Pacific, and South/Central America.

Cloudflare pointed out that these traffic measurements are similar to those of 2023 and 2022, “suggesting that mobile device usage has achieved a steady state.” This should come as no surprise, as roughly 70% of the world’s population uses smartphones today.

Microsoft’s Copilot Vision assistant can now browse the web with you

Microsoft’s Copilot Vision feature is now available for users to test in a limited preview.

Built natively into Microsoft’s Edge browser, Copilot Vision analyzes and understands the contents of web pages you visit. You can then ask the AI assistant for information and guidance about what appears on screen. 

“It is a new way to invite AI along with you as you navigate the web, tucked neatly into the bottom of your Edge browser whenever you want to ask for help,” the Copilot team said in a blog post Friday. “It’s almost like having a second set of eyes as you browse. Just turn on Copilot Vision to instantly scan, analyze, and offer insights based on what it sees.”

The feature, which is opt-in, will function only on select websites to begin with.

Copilot Vision was announced as part of an overhaul to make the consumer Copilot more of a personal AI assistant. This also included the introduction of Copilot Voice, with four voice options aimed at enabling more natural interactions. 

“Increasingly, generative AI assistants are becoming multi-modal (language, vision and voice) and have personalities that can be configured by the consumers,” Jason Wong, distinguished vice president analyst at Gartner, said about the Copilot redesign at the time. “We will see even more anthropomorphism of AI in the coming year.” 

Copilot Vision is rolling out to a limited number of Copilot Pro customers in the US via Copilot Labs. Copilot Pro costs $20 per month. 

On Friday, Microsoft also announced an expanded preview for Windows Recall, its searchable timeline tool. Having made Recall available to Windows Insiders on Copilot+ PCs running Qualcomm’s Snapdragon processors, Microsoft has now expanded access to devices with AMD and Intel chips. 

Apple’s iPhone SE 4 will matter very much indeed

It might not be the biggest-selling or most expensive product in Apple’s lineup, but a very important part of Apple’s future will be defined by the upcoming iPhone SE upgrade in 2025. That’s because it is expected to bring a new Apple-made 5G modem, impressive camera improvements, and support for Apple Intelligence.

And all of those will require more memory and a much faster processor.

To recap recent claims, here’s what we expect for the iPhone SE 4:

An Apple-made 5G modem

Apple has been working on its own 5G modem for years and has spent billions on the task. Bloomberg tells us the company is almost ready to go with its home-developed modem, though it will continue using Qualcomm modems in some devices for a while yet, in part because they support mmWave, which the new Apple modems allegedly do not.

Apple’s first modems will appear in the iPhone SE 4 and iPhone 17 Air. The good news is that the new modem will enable Apple to make thinner devices; the bad news is it might deliver slower download speeds than Qualcomm modems on some networks. The plan is to deploy Apple modems across all iPhones and iPads by around 2028 — and we might also see 5G arrive in Macs, at long last.

And a better camera

One report claims the iPhone SE 4 will include a single-lens 48-megapixel rear camera and a 12-megapixel TrueDepth front camera. That’s a big improvement on the current model, which offers just a 12-megapixel rear camera and a measly 7-megapixel front camera. These improvements should make for better photography and videoconferencing, and they hint at good support for camera-driven object recognition using Apple Intelligence.

The phone is also expected to support Face ID and to sport a 6.1-inch OLED display.

Apple Intelligence

That the fourth-generation iPhone SE will support Apple Intelligence isn’t surprising, as on its current path all Apple’s hardware is expected to integrate AI to some extent. What that means in hardware terms is that the new iPhone will have a higher-capacity battery (because running large language models is thirsty work), 8GB of memory, and a faster processor. That almost certainly means an A18 chip, as fielding an A17 processor would date the product before it even joined the race.

For Apple Intelligence to truly succeed, Apple needs to invest in growing the size of the ecosystem, which is why it makes sense to go for the A18. We shall see, of course.

Made in India?

There are a handful of additional improvements expected, including a built-in eSIM, USB-C, and a better battery. Much of the reporting suggests the company will roll out its lowest-price iPhone sometime around March 2025, which means mass production has probably begun. We don’t yet know whether the phones will be manufactured in India, which seems likely if Apple wants to keep the price at around $500 or below.

It seems possible. 

After all, rumor has it that Apple hopes to manufacture around 25% of all its smartphones in India by the end of 2025. It’s also true that India’s traditionally value-conscious consumers are increasingly prepared to invest in pro smartphones; even so, there is a massive market of people who don’t have these devices yet, with smartphone penetration at around 40%.

With the economy growing fast, a lower-cost but powerful India-made iPhone equipped with a fast processor and support for AI could resonate strongly in India, where Apple’s efforts to build the market are already having a positive impact. A range of cool colors and a ‘Made in India’ label on the box could help Apple convince some of those who don’t yet have smartphones to ready their rupees for an AAPL stock-saving smartphone sale. And even if that doesn’t happen, the device itself could prove critical to the company’s 2025 efforts in that market.

What about the modem?

The 5G modem is, of course, the big Apple story here. Bloomberg has claimed Apple is working on three models at the moment: the first, to be introduced in the iPhone SE, lacks mmWave support; a second does enjoy such support; and a third “Pro” modem matches or exceeds what the best available 5G chips can do.

The thing is, 5G isn’t the only story in town. Apple continues to make big investments in satellite communications, as recently confirmed in a series of investor reports from its preferred network supplier, Globalstar. The company already offers a range of satellite-based services in several nations through that partnership, and it’s reasonable to expect whatever 5G chips Apple comes up with to continue and enhance support for these life-saving services.

Apple’s “whole widget” approach to communication services pretty much demands that its network of space satellites and the accompanying smartphone modems sing from the same hymn sheet, and it will be interesting to see if the song remains the same once they do. I think satellite connectivity, along with the ability to maintain current price points by swapping out Qualcomm kit for something else, will remain two strategic imperatives for Apple through 2028. Is it possible Apple’s AI servers will reduce their environmental impact by being based in, and cooled by, space?

That’s a very long shot, of course, but feasibility studies to do just that have already taken place. 

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Has Microsoft finally agreed to pay for intellectual property to train its genAI tools?

To train the large language models (LLMs) that power generative AI (genAI) technology, Microsoft and other AI companies need to use massive amounts of data. The more data, and the higher its quality, the more effective LLMs will be.

So it’s not surprising that Microsoft, OpenAI, and other AI companies have become embroiled in lawsuits claiming they steal intellectual property (IP) from newspapers, magazines, writers, publishers, and others to train their tools. It could take years to resolve the suits, but if the courts rule against the AI companies, they could be liable for billions of dollars and forced to retrain their models without the use of that property.

Now, though, there are signs Microsoft, OpenAI, and other tech firms might be willing to pay for the property. These are only initial steps, but they could set in motion the resolution of one of genAI’s thorniest legal issues.

Will that happen, or will the fight over AI and intellectual property drag on for years? Let’s look at the legal issues involved, then delve into the agreements themselves to find out how this fight might unfold.

Intellectual property theft or fair use? 

Microsoft’s Copilot and OpenAI’s ChatGPT (on which it’s based) are trained on text, much of which is freely available on the Internet. OpenAI hoovers up whatever it finds online and uses that for training. And it doesn’t pay for it. As far as Microsoft and OpenAI are concerned, it’s open season on intellectual property.

A great deal of what they find is free for the taking and not covered by intellectual property laws. However, they also take a lot of material that is copyright-protected, including articles in newspapers and magazines, as well as entire books.

OpenAI and Microsoft claim that despite copyright protection, they can use those articles and books for training. Their lawyers argue the material is covered by the fair use doctrine, a complicated and confusing legal concept. For years there’s been an endless stream of lawsuits over what’s fair use and what isn’t; it’s widely open to interpretation.

The New York Times claims its articles aren’t covered by fair use and has sued Microsoft and OpenAI for intellectual property theft. The suit claims Copilot and ChatGPT have been trained on millions of articles without asking The Times’ permission or paying a penny for it. Beyond that, it claims that ChatGPT and Copilot “now compete with the news outlet as a source of reliable information.” It’s seeking “billions of dollars in statutory and actual damages” because of the “unlawful copying and use of The Times’ uniquely valuable works.”

The Times isn’t alone. Many other copyright holders are suing Microsoft, OpenAI, and other AI firms as well.

You might think that billions of dollars overstates the value of the articles. It doesn’t. Several years ago, Meta held internal discussions about whether to buy one of the world’s largest publishers, Simon & Schuster, for the sole purpose of using the publisher’s books to train its genAI. The publisher wouldn’t have come cheap: Simon & Schuster was sold in 2023 for $1.62 billion. Meta eventually decided not to try to buy the company.

Paying to play

With that background, it’s noteworthy that 2024 has seen several agreements between Microsoft, OpenAI, and publishers that could be the beginning of the end of the fight over intellectual property. The first, struck in May, was between OpenAI and News Corp, allowing OpenAI to use News Corp’s many publications, including the Wall Street Journal, the New York Post, Barron’s, and others, to train OpenAI applications and answer people’s questions.

It’s a multi-year deal whose precise length hasn’t been publicly disclosed, although most observers believe it will last five years. News Corp gets $250 million, a combination of cash and credits for the use of OpenAI technology.

Other media companies have signed similar agreements with OpenAI, including The Associated Press, People owner Dotdash Meredith, and others.

In November, the other shoe dropped. Microsoft cut a deal with the publisher HarperCollins (owned by News Corp) to let it use non-fiction books to train a new genAI product that hasn’t yet been publicly disclosed. It appears that the new tool will be one that Microsoft creates itself, not something based on OpenAI’s ChatGPT.

It’s not yet clear how much money is involved. Individual authors have to agree to let their books be used for training. If they do, they and HarperCollins each get $2,500 per book for the three-year term of the deal. The deal is non-exclusive, so the rights can also be sold to others. If authors don’t agree, the books can’t be used for AI training.

The deal takes into account many thorny issues unique to book publishing. Only so-called “back-list” books are involved — that is, newly published books won’t be used for a certain amount of time. The books can only be used for LLM training, so Microsoft and its new genAI can’t create new books from them. The new tool also can’t output more than 200 consecutive words of any book, as a way to guard against intellectual property theft.
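
The deal doesn’t describe how a 200-consecutive-word cap would be enforced, but the underlying check is easy to sketch: compare the model’s output against the source text and measure the longest run of consecutive shared words. A hypothetical illustration in Python (the function name and sample texts are mine, not Microsoft’s):

```python
def longest_shared_run(source_words: list[str], output_words: list[str]) -> int:
    """Length of the longest run of consecutive words common to both texts."""
    # Classic longest-common-substring dynamic program, over words not characters.
    best = 0
    prev = [0] * (len(output_words) + 1)
    for sw in source_words:
        curr = [0] * (len(output_words) + 1)
        for j, ow in enumerate(output_words, start=1):
            if sw == ow:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

book = "call me ishmael some years ago never mind how long precisely".split()
draft = "it began some years ago never mind the date".split()
print(longest_shared_run(book, draft))  # 5; a guard would flag any run >= 200
```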

Do these deals point towards the future?

The big question is whether agreements like these will ultimately resolve the intellectual property issues involved in training genAI models. I think that’s unlikely, and that’s the way Microsoft and other AI companies want it. At the moment, they’re playing divide and conquer, buying off opponents one by one. That gives Microsoft and other tech companies the upper hand. Intellectual property owners might feel that unless they settle now with the big tech firms, those companies will simply take what they want, and the owners will lose out on big money.

The issues involved are too important to be handled that way. The courts should rule on this and rule quickly — and they should side with those who own the intellectual property, not those who want to steal it. 

Low-tech solutions to high-tech cybercrimes

You might hear that 2025 will be the year of artificial intelligence (AI) cybercrime. But the trend really began in 2024.

AI crime will prove so overwhelming that some say the only way to fight it is through AI security software. But two incredibly simple, low-tech, and common-sense techniques have emerged recently that should become everyone’s default in business and personal contexts. (I’ll tell you about those below.)

First, let’s understand how the bad guys are using AI. 

The clear and present danger of AI-powered attacks

Already, we’re seeing attackers use AI to generate phishing emails with perfect grammar and personalized details for each victim. Not only is the English grammar perfect, but with AI, an attack can be delivered in any language.

AI is even “democratizing” the ability to launch thousands of simultaneous attacks, a feat formerly possible only for nation-states. The use of swarming AI agents in 2025 will create a new and urgent risk for companies.

Phishing and malware, of course, facilitate multifaceted ransomware attacks that have caused havoc with healthcare organizations, supply chains, and other targets. Global ransomware attacks are predicted to cost more than $265 billion annually by 2031, thanks in part to the power of AI in these attacks. 

The growing quality of deepfakes, including real-time deepfakes during live video calls, invites scammers, criminals, and even state-sponsored attackers to convincingly bypass security measures and steal identities for all kinds of nefarious purposes. AI-enabled voice cloning has already proved to be a massive boon for phone-related identity theft. AI enables malicious actors to bypass face recognition protection. And AI-powered bots are being deployed to intercept and use one-time passwords in real time.

More broadly, AI can accelerate and automate just about any cyberattack. Automated vulnerability exploitation, which allows malicious actors to identify and exploit weaknesses fast, is a huge advantage for attackers. AI also boosts detection evasion, enabling attackers to maintain a persistent presence within compromised systems while minimizing their digital footprint — magnifying the potential damage from the initial breach.

Once large amounts of data are exfiltrated, AI is useful for extracting intelligence on that data’s value, enabling fast, thorough exploitation of the breach. 

State-sponsored actors — especially Russia, Iran, and China — are using AI deepfakes as part of their broader election interference efforts in democracies around the world. They’re using AI to create memes impersonating or slandering the candidates they oppose and to create more convincing sock-puppet accounts, complete with AI-generated profile pictures and AI-generated bot content at a massive scale; the goal is to create astroturf campaigns that can sway elections.

Rise of AI-augmented spyware

A new HBO documentary by journalist Ronan Farrow, “Surveilled,” investigates the rapidly growing multi-billion-dollar industry of commercially available spyware. The most prominent, and probably most effective, of these products is NSO Group’s Pegasus spyware. 

According to the documentary, Pegasus can enable an attacker to remotely turn on a phone’s microphone and camera, record audio and video — all without any indication on the phone that this recording is taking place — and send that content to the attacker. It can also copy and exfiltrate all the data on the phone. 

While Pegasus itself does not contain or use AI, it is used in conjunction with AI tools for targeting, face recognition, data processing, pattern recognition, and other jobs.

NSO Group claims it sells Pegasus only to governments, but this claim has yet to be independently verified, and no regulation governs its sale. 

Two simple solutions can defeat AI-powered attacks

The advice for protecting an organization from AI-powered cyberattacks and fraud is well known.

  • Implement a robust cybersecurity policy and employ strong authentication measures, including multi-factor authentication.
  • Regularly update and patch all software systems.
  • Educate employees on cybersecurity awareness and best practices.
  • Deploy firewalls and endpoint protection solutions.
  • Secure perimeter and IoT connections.
  • Adopt a zero-trust security model and enforce the principle of least privilege for access control.
  • Regularly back up critical data and encrypt sensitive information.
  • Conduct frequent security audits and vulnerability assessments.
  • Implement network segmentation to limit potential damage from breaches.
  • Develop and maintain an up-to-date incident response plan.
  • Consider a people-centric security approach to address human error, a significant factor in successful cyberattacks. 

Combine these practices and you can significantly enhance your organization’s cybersecurity posture and reduce the risk of successful attacks.

Though effective, those solutions are expensive, require expertise, and demand ongoing, iterative effort from large numbers of employees. They’re not something one person alone can do.

So what can each of us do to better protect against AI-enhanced attacks, fraud, and spyware tools on our smartphones? In addition to the usual best practices, the FBI and Farrow emphasize two simple, easy, and completely free techniques for powerful protection. Let’s start with the FBI. 

The FBI recently issued a warning about criminals exploiting generative AI to commit financial fraud on a larger scale. The warning is aimed at consumers rather than businesses, but their solution can work on a small scale within a team or between an executive and their assistant.

After listing the many ways fraudsters can use AI to steal identities, impersonate people, and socially engineer their way into scams and theft, the agency says one effective way to verify identity quickly is to use a secret word.

Once established (not in writing…), the secret word can serve as a fast, powerful way to instantly identify someone. And because it isn’t digital or stored anywhere on the internet, it can’t be stolen. So if your “boss” or your spouse calls to ask for data or a funds transfer, you can ask for the secret word to verify it’s really them.

The FBI offers other advice, such as limiting the audio, video, and pictures you post online, and always hanging up and calling the person back on a known number. But the secret word is the most useful tip.

Meanwhile, in his documentary, Farrow emphasizes a simple way to foil spyware: reboot your phone every day. He points out that most spyware is purged by a reboot, so a daily restart sharply limits how long any spyware can persist on your phone.

He also stresses the importance of keeping your OS and apps updated to the latest versions. That’s my advice as well: follow good security practices generally, as far as your budget allows. But do establish a secret word with co-workers, bosses, and family members.

And reboot your phone every day. 

OpenAI announces ChatGPT Pro, priced at $200 per month

The $200 monthly pricing OpenAI has set for a subscription to its recently launched ChatGPT Pro is definitely “surprising,” Gartner analyst Arun Chandrasekaran said on Friday, but at the same time it’s indicative that the company is betting organizations will ultimately pay more for enhanced AI capabilities.

In an announcement on Thursday, OpenAI said the plan, priced at nearly 10 times more than its existing corporate plans, includes access to OpenAI o1, as well as to o1-mini, GPT-4o, and Advanced Voice.

Part of the company’s “12 Days of Shipmas” campaign, the plan also includes OpenAI o1 pro mode, a version of o1 that, the company said, “uses more compute to think harder and provide even better answers to the hardest problems. In the future, we expect to add more powerful, compute-intensive productivity features to this plan.”

For considerably less, OpenAI’s previously most expensive subscription, ChatGPT Team, offers a collaborative workspace with limited access to OpenAI o1 and o1-mini plus an admin console for workspace management, and costs $25 per user per month. And ChatGPT Plus, which also offers limited access to o1 and o1-mini, plus standard and advanced voice, is $20 per user per month.

ChatGPT Pro also costs far more than its competitors are charging. A 12-month commitment to the enterprise edition of Gemini Code Assist, which Google describes as “an AI-powered collaborator that helps your development team build, deploy and operate applications throughout the software development life cycle (SDLC),” costs $45 per user per month.

Monthly pricing plans for Anthropic’s Claude AI range from $18 for Claude Pro to $25 for the Claude Team edition, while the cost per user per month with an annual subscription for Microsoft 365 Copilot, which contains Copilot Studio for the creation of AI agents and the ability to automate business processes, is $30.

Small target market

With its new plan, said Chandrasekaran, OpenAI is not “targeting information retrieval use cases, because the chatbot is actually pretty effective for them.”

This latest salvo, he said, is “more about potentially using [ChatGPT Pro] as a decision intelligence tool to automate tasks that human beings do. That’s kind of the big bet here, but nevertheless, it’s still a very big jump in price, because GPT Plus is $20 per user per month. And even the ChatGPT Enterprise, which is the enterprise version of the product, is $60 or $70, so it’s a very, very big jump in my opinion.”

Thomas Randall, director of AI market research at Info-Tech Research Group, said, “the persona for ChatGPT’s ‘Pro’ offering will be very narrowly scoped, and it isn’t quite clear who that is. This is especially the case as ChatGPT has an ‘enterprise’ plan for organizations that can still take advantage of the ‘Pro’ offering. ‘Pro’ will perhaps be for individuals with highly niche use cases, or small businesses.”

‘Plus’ remains competitive

But, he said, “the value add between ‘Plus’ and ‘Pro’ is not currently clear from a marketing perspective. The average user of ChatGPT will still do well with the free option, perhaps being persuaded to pay for ‘Plus’ if they are using it more extensively for content writing or coding. When priced against other tools, ChatGPT’s ‘Plus’ will remain very competitive against its rivals.”

According to Randall, “Anthropic is still trying to achieve market share (though it has recently fumbled with an ambiguous marketing campaign), while Gemini is not currently accurate enough in its outputs to effectively position itself. As an example, when I asked ChatGPT, Anthropic’s Claude, and Gemini to give me a list of 100 historical events for a certain country, ChatGPT and Anthropic were comparable, but Gemini would list only up to 40, yet still call it a list of 100.”

As for Microsoft Copilot, he said, it “still struggles to showcase the value-add of its rather expensive licensing. While Microsoft certainly needs to show revenue return from the amount it has invested in Copilot, the product has not been immediately popular, and was perhaps released too early. We may end up seeing a rebrand, or Copilot eventually being packaged with Microsoft’s enterprise plans.”