Cloud Pricing No Longer Works in an AI World
Back in my younger years there was a hilarious Saturday Night Live skit about a bank that only made change (you can see it here). It ends with a customer asking, "How do you make money at this?" The bank's answer was absurd, and an LOL moment for this then second-year MBA student: "It's simple. Volume."
We are entering the third major shift in software pricing in my retail career. When I started over 30 years ago, everything was on-prem: payment up front for everything, often a client license plus a server license. About 10-12 years ago things began to change to a SaaS, pay-as-you-go model, with hybrid costs for clients and usage. This was a huge disruption for existing software providers, who faced the classic "innovator's dilemma": do we destroy our profitability in the short term for a longer-term benefit? Now it is time for the cloud innovators themselves to be disrupted, because pay-as-you-go no longer works. How quickly will companies adapt?
The emergence and adoption of artificial intelligence (AI) technologies pose a new challenge for cloud pricing models. AI is not just another software application. It is a resource-intensive process that requires massive amounts of computing power, data, and storage. AI workloads can vary significantly in their complexity and duration, making it difficult to predict and budget for their resource consumption. Moreover, AI queries can require up to 10 times more processing power than traditional cloud workloads, resulting in higher and more volatile costs for cloud customers. Thus, when it comes to computing, not all users are the same.
The Rise of AI and Its Resource Demands
How can cloud providers and SaaS companies adapt their pricing models to the AI era? How can they offer their customers a fair and flexible pricing strategy that aligns with the resource demands of AI? AI workloads can vary significantly in their complexity and duration, depending on the type, size, and quality of the data, the algorithm, the model, and the task.
To illustrate the resource demands of AI, let us look at some examples of AI use cases and their resource requirements. According to a report by Deloitte, a typical natural language processing task, such as sentiment analysis, can require up to 100 gigabytes (GB) of data and a sustained processing rate of 10 GFLOPS (billions of floating-point operations per second). A typical computer vision task, such as face recognition, can require up to 10 GB of data and 100 GFLOPS. A typical machine learning task, such as image classification, can require up to 1 terabyte (TB) of data and 1,000 GFLOPS. These numbers are orders of magnitude higher than the resource requirements of traditional cloud workloads, such as web hosting, email, or databases, which can get by with around 1 GB of data and 1 GFLOPS.
These examples show that AI workloads can require orders of magnitude more processing power than traditional cloud workloads, and that AI resource requirements vary significantly with the use case and the task. This poses a challenge for traditional cloud pricing models, which assume fixed and predictable resource consumption. And that is just processing power; electrical power consumption and data storage must be considered as well.
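Just to make that spread concrete, here is a quick sketch that restates the figures quoted above as multiples of the traditional workload baseline (the workload names and numbers come straight from the Deloitte figures cited; nothing else is assumed):

```python
# Workload figures from the Deloitte report cited above, expressed as
# (GB of data, GFLOPS of compute), compared against a traditional
# cloud workload baseline of roughly 1 GB and 1 GFLOPS.
workloads = {
    "web hosting / email / database": (1, 1),
    "sentiment analysis (NLP)":       (100, 10),
    "face recognition (vision)":      (10, 100),
    "image classification (ML)":      (1_000, 1_000),
}

base_data, base_compute = workloads["web hosting / email / database"]
for name, (data, compute) in workloads.items():
    print(f"{name}: {data / base_data:.0f}x data, {compute / base_compute:.0f}x compute")
```

The spread from 10x to 1,000x is exactly why a single flat price per seat or per instance cannot fit every workload.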
Limitations of Traditional Cloud Pricing Models in the Age of AI
The traditional cloud pricing models are based on the assumption that the resource consumption of cloud customers is fixed and predictable. They offer customers two main options: pay-as-you-go or reserved instances. Pay-as-you-go pricing allows customers to pay only for the resources they use, such as CPU, memory, disk, or network, on an hourly or per-second basis. Reserved instances pricing allows customers to pay upfront for a fixed amount of resources for a fixed period of time, such as one or three years, and receive a discount on the hourly rate.
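To see how these two models behave against a spiky AI workload versus a steady traditional one, here is a minimal sketch. All rates, the reserved capacity, and the usage patterns are invented for illustration; no provider's actual pricing is implied:

```python
# Hypothetical comparison of pay-as-you-go vs. reserved-instance cost
# for workloads with different usage patterns. All figures are illustrative.

PAYG_RATE = 0.40          # $ per instance-hour, on demand (assumed)
RESERVED_RATE = 0.25      # $ per instance-hour, effective discounted rate (assumed)
RESERVED_CAPACITY = 10    # instance-hours reserved per hour (assumed)

def payg_cost(hourly_usage):
    """Pay only for what is used, at the on-demand rate."""
    return sum(h * PAYG_RATE for h in hourly_usage)

def reserved_cost(hourly_usage):
    """Pay for reserved capacity whether used or not; overflow bills on demand."""
    total = 0.0
    for h in hourly_usage:
        total += RESERVED_CAPACITY * RESERVED_RATE          # committed spend
        total += max(0, h - RESERVED_CAPACITY) * PAYG_RATE  # burst above reservation
    return total

# A spiky AI workload: mostly idle, with occasional heavy training hours.
spiky = [2, 1, 0, 30, 2, 1, 0, 25]
# A steady traditional workload that uses the reservation fully.
steady = [10] * 8

print(f"spiky:  pay-as-you-go ${payg_cost(spiky):.2f} vs reserved ${reserved_cost(spiky):.2f}")
print(f"steady: pay-as-you-go ${payg_cost(steady):.2f} vs reserved ${reserved_cost(steady):.2f}")
```

Under these assumed numbers, the steady workload is cheaper reserved, while the spiky AI workload is cheaper on demand yet wildly unpredictable hour to hour. Neither model fits the AI pattern well, which is the problem the next paragraphs unpack.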
However, these pricing models have limitations when it comes to AI workloads. First, they do not account for the variability and unpredictability of AI resource consumption. AI workloads can fluctuate significantly in their resource requirements, depending on the type, size, and quality of the data, the algorithm, the model, and the task. This makes it difficult for customers to accurately predict and budget for their AI resource consumption, and exposes them to the risk of cost overruns and unpredictable expenses. For example, a customer on pay-as-you-go pricing may pay more than expected if their AI workload consumes more resources than anticipated, or if the cloud provider raises resource prices. A customer on reserved-instance pricing may pay more than necessary if their AI workload consumes fewer resources than reserved, or if the cloud provider lowers resource prices.
Second, they do not offer a granular, flexible pricing approach tailored to AI workloads. AI workloads can have different resource requirements depending on the use case and the task; for example, training an AI model may require more CPU and memory than running inference on it. Yet traditional cloud pricing models do not let customers pay only for the specific resources their AI workloads need. A customer on pay-as-you-go pricing may pay for CPU and memory they do not use, or for disk and network they do not need. A customer on reserved-instance pricing may pay for resources they neither use nor need for the entire duration of the reservation.
What Are Retailers Willing to Pay For, and When Will They Adopt AI Options in Solutions?
Adding to the complexity, retailers will likely not want to use AI features for all users, all at the same time. The typical cloud solution treats every user the same. Some users at the store level need one level of flexibility, others need AI all the time, and still others fall in the middle. If vendors continue with a one-size-fits-all approach to licensing, the result could be devastating: a death knell to their profitability on one side, or a risk to top-line revenue on the other as customers push back.
Rethinking Licensing Models for AI
How can cloud providers and SaaS companies rethink their pricing models for AI? How can they offer their customers a pricing strategy that aligns with the resource demands of AI, and that offers them granular and flexible pricing options based on their actual resource usage? We propose a consumption-based pricing model for AI workloads, which is based on the following principles:
- Granular pricing based on actual resource usage. Customers pay only for the resources they use, such as CPU, memory, disk, or network, on a per-second or per-query basis. Customers can also choose the type and quality of the resources they need for their AI workloads, such as standard, premium, or custom.
- Pricing tiers or packages tailored to different AI use cases and resource requirements. Customers can choose from different pricing tiers or packages that offer different levels of resources, performance, and features for their AI workloads, such as basic, standard, advanced, or enterprise. Customers can also customize their pricing tiers or packages according to their specific needs and preferences.
- Transparency and predictability in pricing models. Customers have full visibility and control over their resource consumption and costs, and can monitor and manage their AI workloads and budgets in real time. Customers can also estimate their costs and compare different pricing options before committing to a plan.

By adopting a consumption-based pricing model for AI workloads, cloud providers and SaaS companies can offer their customers a number of benefits, such as:
- Cost savings and efficiency. Customers can save money and optimize their resource utilization by paying only for the resources they use, and by choosing the type and quality of the resources they need for their AI workloads.
- Flexibility and scalability. Customers can adjust their resource consumption and costs according to their changing needs and demands, and scale their AI workloads up or down without any penalties or lock-ins.
- Customer satisfaction and loyalty. Customers can enjoy a fair and transparent pricing strategy that aligns with their resource demands and usage, and that offers them granular and flexible pricing options tailored to their AI use cases and requirements.
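The principles above can be sketched in a few lines of code. This is a toy model, not any vendor's billing engine: the tier names, the per-GFLOP-second and per-GB rates, and the `Meter` class are all invented for illustration:

```python
# Minimal sketch of consumption-based, per-query metering with pricing
# tiers and real-time budget visibility. All tiers and rates are
# hypothetical, chosen only to illustrate the mechanics.

TIERS = {
    # $ per GFLOP-second of compute, $ per GB of data scanned (assumed)
    "basic":    {"compute": 0.00001, "data": 0.02},
    "standard": {"compute": 0.00002, "data": 0.04},
    "premium":  {"compute": 0.00004, "data": 0.08},
}

def query_cost(tier, gflop_seconds, gb_scanned):
    """Bill a single AI query by the resources it actually consumed."""
    rates = TIERS[tier]
    return gflop_seconds * rates["compute"] + gb_scanned * rates["data"]

class Meter:
    """Accumulates per-query charges so a customer can watch spend in real time."""
    def __init__(self, tier, budget):
        self.tier, self.budget, self.spent = tier, budget, 0.0

    def record(self, gflop_seconds, gb_scanned):
        self.spent += query_cost(self.tier, gflop_seconds, gb_scanned)
        return self.budget - self.spent   # remaining budget, usable for alerts

meter = Meter("standard", budget=100.0)
remaining = meter.record(gflop_seconds=50_000, gb_scanned=10)
print(f"spent ${meter.spent:.2f}, ${remaining:.2f} of budget left")
```

The design choice that matters is the unit of billing: a query, not an instance-hour, so a store-level user who touches AI once a day and a power user who runs it constantly each pay for exactly what they consume.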
No doubt this will be incredibly challenging and complex. Vendors may have to use AI for that! But the status quo is simply not going to work. Those that adapt may take a short-term hit to profitability, but they will be the big winners in AI in the future.