Why companies investing in generative AI want OpenAI alternatives

AI companies are blending in large language models from companies like Meta and Google, testing OpenAI's GPT dominance
OpenAI spurred the AI boom.
Photo: Leah Millis (Reuters)

It’s not just OpenAI’s world of large language models (LLMs) after all.

To be sure, OpenAI’s GPT-4, its latest model released in March, is far superior to anything else on the market—in part because it is so much larger than competitors’ models. OpenAI’s LLM, which powers chatbots like ChatGPT, can field requests on almost anything, from baseball stats to résumé advice. But in a fast-moving industry, companies building AI applications are diversifying their LLM dependencies, testing the limits of OpenAI’s dominance in generative AI.

When it comes to disrupting OpenAI, “many smart people with a lot of money are trying, so we may expect change sometime next year,” said Alexandre Lebrun, CEO of Nabla, a Paris-based startup that builds AI note-taking software for clinical settings.

The growing market of LLM providers

That comes as Big Tech companies like Meta and Google offer their own LLMs. Alongside upstarts like Anthropic, Stability AI, and Mosaic, that gives AI companies a host of models to choose from.

Given the velocity of the AI industry, companies have started using multiple LLMs to hedge against the risk of another player overtaking OpenAI—and to position themselves to adapt quickly if that happens.

For instance, telecom and networking company Cisco has contracts with multiple LLM providers, including OpenAI and Google—and it’s open to picking up more. “What we find is that there is not a single model that works well for every single scenario,” Anurag Dhingra, Cisco’s chief technology officer, told Quartz. He added that Cisco regularly tests LLMs to see how they perform. For summarization use cases, for example, the company opts for GPT-3.5. But for evaluating trends in customer calls, a smaller model suffices; the company uses its own models or open-source ones that it fine-tunes.

“Instead of picking one [model] and saying this is the One Ring to rule them all, we are taking a more flexible approach,” Dhingra said.

Cisco is just one of many AI-using companies that have been building their products for flexibility. The takeaway: if one LLM surpasses the others in quality, they can simply plug the new model into their AI products to stay competitive.

Cheaper models from companies like Meta and Mosaic are on the market

For other companies, it comes down to saving money, as training and running AI models can incur high computational costs. That’s the case for cash-strapped startups like Numerade, a Los Angeles-based online education company building an AI tutor.

To cut costs, those companies often use a blend of LLMs that are less expensive than OpenAI’s, then fine-tune them to meet their needs. It’s especially effective when a company has proprietary data, said Alex Lee, CTO of Numerade, which uses a blend of models from providers including Google, Meta, Mosaic, and Anthropic.

High demand for OpenAI’s services

There are also more immediate concerns to address, like capacity limits on the infrastructure that runs AI models.

For instance, demand for OpenAI’s GPT-4 is so high that its servers can reach full capacity, requiring additional GPUs to run effectively. Nearly all of those GPUs come from Nvidia, which has struggled to keep up with demand of its own. And when a startup is scaling up its customer base, ensuring uninterrupted access to GPT-4 becomes crucial. In those situations, companies find themselves working their connections with larger corporations, like Microsoft, which hosts OpenAI’s models, to secure additional capacity.

“[If] you’re HP or IBM or a very important customer, you get what you want from your people at Microsoft, but if you’re just a normal company, it may be very, very difficult to get the right capacity,” Lebrun said. To avoid capacity overload, Nabla uses open-source LLMs from Meta and Mistral.

He added that using different models also protects against the risk of future price increases from OpenAI, which may start raising rates once the competition catches up.