AI Pricing: Some Alternative Large Language Models

Steven Forth is a Managing Partner at Ibbaka. See his Skill Profile on Ibbaka Talio.

One of the results of the Ibbaka survey on AI Monetization in 2024 is some insight into the variety of approaches being taken to AI development. This is true even within the world of transformer architectures and Large Language Models. Given the developments at OpenAI over the past few weeks, having options is proving to be a good thing.

In the survey, we asked about the approach companies are taking to AI development. The full results will be provided in the survey report in December, but here is an interesting slice. Of the 307 respondents, 85% were using generative AI for text generation. We broke this down into three different approaches.

  • External LLM, such as GPT-4

  • Open-source LLM, such as Llama

  • Custom LLM being developed internally or with a technology partner

Here is the distribution of results:

Not surprisingly, the majority of companies are using an external model. Large Language Models are, well, large, and expensive to develop, requiring a great deal of expertise, software tools, and specialized hardware such as GPUs (as it turns out, Graphics Processing Units are well suited to the kinds of computation these models need) and TPUs (Google has charted its own course developing Tensor Processing Units).

There are other approaches. Almost 25% of companies responded that they are developing custom models while just over 22% said they are using open-source models.

Let’s look into these three options and ask how they might impact pricing.

One way to get to pricing is through the cost to the buyer. Theory Ventures has estimated the cost of using different LLMs and found that the cost varies by 120X! See "How much does it cost to use an LLM" by Tomasz Tunguz.

Mapping the model names to the companies associated with them, we get the following.

Different applications will have different ratios of input to output, so this becomes an important consideration in pricing. Among the critical pricing metrics for LLMs are

  • Input tokens

  • Output tokens

  • Context size

  • Training surcharges (expect to see more of these)

The details of how the underlying LLM is priced have a big impact on the cost of operating a service. A service with a small input and large output will have a cost function quite different from one with a large input and small output.
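To make that concrete, here is a minimal sketch of a per-request cost model built on input and output tokens. The per-million-token rates and the token counts are illustrative placeholders, not any provider's published prices.

```python
# Minimal sketch of a per-request cost model based on input and output tokens.
# Rates and token counts are illustrative placeholders, not published prices.

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Cost of one request given per-million-token rates for input and output."""
    return (input_tokens / 1_000_000) * input_rate_per_m \
         + (output_tokens / 1_000_000) * output_rate_per_m

# A summarization service: large input, small output.
summarize = request_cost(8_000, 500, input_rate_per_m=10.0, output_rate_per_m=30.0)

# A drafting service: small input, large output.
draft = request_cost(500, 8_000, input_rate_per_m=10.0, output_rate_per_m=30.0)

print(f"Summarization request: ${summarize:.4f}")
print(f"Drafting request:      ${draft:.4f}")
```

With output tokens priced higher than input tokens, the output-heavy service in this sketch costs roughly two and a half times as much per request, even though total token volume is identical. This is why the input-to-output ratio matters so much when modeling the cost of a service.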

CB Insights has some data from before the November 17 events at OpenAI that show how and where people are spending money.

External Options

The most important providers here are:

  • OpenAI - pricing page

  • Anthropic - pricing page

  • AI21 Labs - pricing page

  • Cohere - pricing page

The two other companies in the CB Insights figure are not really model providers. They support custom development and, in the case of Mosaic ML, offer an open-source LLM.

Mosaic ML is part of Databricks and provides open-source models plus tools.

Hugging Face supports the model development community.

Microsoft, Google, Amazon, and Salesforce also offer LLMs, sometimes from third parties (Microsoft resells OpenAI's models, for example).

Comparing pricing across the four independent companies (assuming OpenAI remains independent) gives the following.
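The comparison depends on each provider's current published rates, which change frequently. As a rough way to reproduce it yourself, here is a minimal sketch; the workload figures are arbitrary and the zeroed-out rates are placeholders to be filled in from the pricing pages above, not actual prices.

```python
# Sketch: comparing the monthly cost of one workload across providers.
# Fill in current per-million-token rates from each provider's pricing page;
# the zeros are placeholders, not actual prices, and the workload is arbitrary.

WORKLOAD = {"requests": 100_000, "input_tokens": 1_500, "output_tokens": 400}

provider_rates = {
    # provider: (input $ per 1M tokens, output $ per 1M tokens)
    "OpenAI":    (0.0, 0.0),
    "Anthropic": (0.0, 0.0),
    "AI21 Labs": (0.0, 0.0),
    "Cohere":    (0.0, 0.0),
}

def monthly_cost(input_rate: float, output_rate: float) -> float:
    per_request = (WORKLOAD["input_tokens"] / 1e6) * input_rate \
                + (WORKLOAD["output_tokens"] / 1e6) * output_rate
    return per_request * WORKLOAD["requests"]

for name, (in_rate, out_rate) in provider_rates.items():
    print(f"{name:10s} ${monthly_cost(in_rate, out_rate):>12,.2f} per month")
```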

Custom Options

A surprising number of companies are electing to roll their own model. This is an expensive proposition. One has to invest in the talent and computing resources to develop, operate, and maintain the model. Why would one do so? What we have heard from companies that are taking this approach is that …

  • Other approaches do not give them enough control

  • Other approaches have too many data privacy and security issues

  • They need to be able to integrate with other applications in ways not supported by third-party approaches

  • They are in ‘learn’ mode and rolling their own is the best way to learn

One of the best resources for custom LLM development is Hugging Face, an open-source community for LLM development.

Even if you are not planning to deploy a custom model, you can learn a great deal about your content and data by using it to build one.
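As one concrete way to do that, here is a minimal sketch of fine-tuning a small open model on your own text with the Hugging Face transformers and datasets libraries. The model name, file path, and hyperparameters are assumptions for illustration, not recommendations from the survey.

```python
# Minimal sketch: fine-tuning a small open causal language model on your own
# text with Hugging Face. Model name, file path, and hyperparameters are
# illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # any small causal LM works for a first experiment
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "company_docs.txt" is a placeholder for your own content.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even a small experiment like this quickly surfaces how clean, how long, and how repetitive your own documents are, which is the kind of learning described above.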

Open Source Options

I had expected more companies to be leveraging open source models than building completely custom models, but in fact, the custom approach edged out the open source approach in this survey.

The best-known open-source LLM is Llama, but there are many alternatives.

Why develop with an open-source model? It is expensive to build an LLM and most of us do not have access to enough data or computing power to build a foundation model (Foundation models are AI models designed to produce a wide and general variety of outputs. They are capable of a range of possible tasks and applications, such as text, image, or audio generation. They can be standalone systems or can be used as a 'base' for many other applications.) Many of the emergent properties of LLMs come from having billions of parameters. Additionally, there are many tools and services available to leverage these models, augment, customize, and tune them (again, Hugging Face is a great place to start).

One still has to host and operate the open-source model, which will not be cheap.
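A quick back-of-envelope calculation shows why. The GPU rate, throughput, and utilization below are assumptions for illustration only.

```python
# Back-of-envelope sketch: what self-hosting an open-source model costs per
# million tokens served. All numbers are illustrative assumptions.

gpu_hourly_rate = 2.50      # $/hour for one cloud GPU instance (assumption)
tokens_per_second = 50      # sustained generation throughput (assumption)
utilization = 0.30          # share of the day serving real traffic (assumption)

hours_per_month = 24 * 30
tokens_per_month = tokens_per_second * utilization * 3600 * hours_per_month
hosting_cost_per_month = gpu_hourly_rate * hours_per_month

cost_per_million_tokens = hosting_cost_per_month / (tokens_per_month / 1e6)
print(f"Hosting:          ${hosting_cost_per_month:,.2f} per month")
print(f"Self-hosted cost: ${cost_per_million_tokens:.2f} per 1M tokens")
```

In this illustrative case, low utilization is what drives the per-token cost up; amortizing the same hardware over heavier traffic changes the comparison quickly.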

In the long run, most companies will probably use a mix of custom, external, and open-source models and will find creative ways to combine them. We are just beginning to explore the power of the transformer architecture.

 