In 2019, I wrote a post on how companies should price their AI-enabled software. I focused on SaaS companies that were developing their own AI and highlighted pricing considerations as they work to improve their models.
Since then, there’s been a meteoric rise of third-party foundational model providers like OpenAI, MosaicML and more. These “AI as a service” vendors have enabled any SaaS player to integrate powerful AI into their application, creating a mad dash to sprinkle AI pixie dust across the SaaS ecosystem. We’ve seen this among countless newly minted startups and more established public companies alike.
The proliferation of this technology raises many questions, including how to deploy it safely, who will win (focused startups or incumbents with existing distribution?) and more. One important area that hasn’t yet been discussed much: how it should be priced.
Below, I lay out a working framework on how to think about pricing the AI in your SaaS application. The space is evolving rapidly, so I’ll update this thinking in future posts.
How much differentiated value do your AI features create?
By definition, these foundational models are accessible to every SaaS provider, so how should you think about pricing what is, in effect, a commodity you’ve integrated into your product? Start with first principles: How much differentiated value does this AI feature create?
By integrating AI features into the flow of your broader platform, you save users from having to leave their flow to go to the underlying model directly (ChatGPT, etc.). Keeping the user in context can be a powerful unlock.
However, be honest with yourself about how much value your AI is actually creating. Many AI features in SaaS today are seeing a flood of initial tire-kicking from curious users but aren’t seeing meaningful sustained adoption. Start by understanding retention and value creation.
Then ask yourself how differentiated your AI offerings are. If the majority of the value your AI feature creates can be garnered by going directly to ChatGPT, don’t try to make a significant margin on that feature. Reselling is not a sustainable value creation strategy (nor a differentiation strategy, though that’s a topic for another post).
Even if you aren’t able to charge much for your AI features today, they can create meaningful value by making your current product more valuable and perhaps stickier. They can also be used to drive upsell to higher tiers, all of which can result in increased net dollar retention.
Over time, you can leverage initial features that may today be just a thin wrapper around a third-party model to build more differentiated value (more on how below). When you get to that point, you can consider a more value-extractive pricing approach.
AI SaaS pricing is in its early days
As you launch your AI features, your priorities should be to learn about how/where they add value and to drive user adoption. To that end, many SaaS companies are starting by offering their AI features for free. The leaders we’ve chatted with behind these strategies envision monetizing over time as they learn more about how much value is being created and for whom.
That said, some SaaS providers have already begun to charge for their AI features. Most start with a freemium approach, typically gated on usage volume (not features), so the user gets a full taste of the feature’s power before seeing a paywall. The pricing models they use prioritize simplicity, which helps drive adoption. GitHub Copilot and Notion AI, which have both seen strong adoption, charge on a per-seat basis.
The aforementioned SaaS providers charge an incremental fee for their AI features on top of their core product offerings. There are also SaaS providers whose entire product is focused on providing AI-enabled services. Companies like Jasper and Copy.ai leverage OpenAI to deliver copywriting as a service. They charge higher prices per seat, though likely have a higher cost of goods sold (COGS) given their bills from OpenAI.
There’s an open question around how much of these foundational model COGS SaaS providers will be able to pass along to their customers over time. Those that provide truly differentiated value may be able to pass all of these costs along (and add a healthy margin).
Those that provide more commoditized value may not be able to, and will thus have lower gross margin profiles than the 70%-80% typical of traditional SaaS providers. These players may be able to leverage AI to improve their internal operations (e.g., automating major parts of sales and marketing) and recover some of the gross margin loss in their operating expenses.
Examples of pricing for AI features in SaaS (May 2023)
| Company | Featured AI product element | Current pricing approach for AI feature (May ’23) |
|---|---|---|
| HubSpot | ChatSpot | Free for now |
| Doximity | DocsGPT | Free for now |
| Ironclad | AI Assist | Free for now |
| Guru | Guru Answers | Free for now |
| GitHub | Copilot | $10-$19/seat/month |
| Notion | Notion AI | $10/seat/month |
| Jasper | Jasper | $39-$49/seat/month |
| Copy.ai | Copy.ai | $36+/seat/month |
How SaaS companies should consider COGS for AI features
The pricing offered by underlying model vendors is rapidly evolving, not just in price level but in approach (some charge per token, some per unit of compute, etc.). SaaS companies should assume that, with more model providers and better technology, the cost of inference will continue to decline over time (though there may be some near-term stagnation given GPU supply constraints).
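To build intuition for how token-based vendor pricing translates into per-seat COGS, here's a minimal back-of-the-envelope sketch. All rates and usage figures below are hypothetical placeholders, not any vendor's actual prices:

```python
# Rough per-seat monthly inference cost under token-based pricing.
# All rates and usage numbers here are hypothetical placeholders.

def monthly_inference_cost(
    requests_per_user_per_month: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    input_price_per_1k: float,   # $ per 1K input tokens (hypothetical)
    output_price_per_1k: float,  # $ per 1K output tokens (hypothetical)
) -> float:
    per_request = (
        avg_input_tokens / 1000 * input_price_per_1k
        + avg_output_tokens / 1000 * output_price_per_1k
    )
    return requests_per_user_per_month * per_request

# Example: 200 requests/month, 500 input + 300 output tokens per request,
# at $0.0015/1K input and $0.002/1K output (placeholder rates).
cost = monthly_inference_cost(200, 500, 300, 0.0015, 0.002)
print(f"${cost:.2f} per seat per month")  # → $0.27 per seat per month
```

At placeholder rates like these, the inference bill for a typical summarization or editing feature is pennies per seat, which is consistent with the observation below that most integrations aren't breaking the bank.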
In our portfolio of SaaS companies, we’re seeing that most AI features being integrated today (summarization, editing, text generation, etc.) aren’t breaking the bank. Thus, most SaaS players aren’t gating usage for their paid AI features, though they do reserve the right to throttle certain extreme users (via intentionally vague “fair use” policies).
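A "fair use" policy like this can be implemented as a simple soft cap: serve everyone unmetered and throttle only the extreme outliers. This is a hypothetical sketch (the cap value and class are illustrative, not any vendor's actual policy):

```python
# Minimal sketch of a "fair use" throttle: unmetered for normal users,
# rate-limited only for extreme outliers. The cap is a hypothetical value.
from collections import defaultdict

class FairUseThrottle:
    def __init__(self, daily_cap: int = 500):  # hypothetical threshold
        self.daily_cap = daily_cap
        self.counts = defaultdict(int)  # user_id -> requests today

    def allow(self, user_id: str) -> bool:
        """Return True if this request should be served normally."""
        self.counts[user_id] += 1
        return self.counts[user_id] <= self.daily_cap

    def reset_day(self) -> None:
        """Clear counters at the start of each day."""
        self.counts.clear()

# Demo with a tiny cap so the throttle kicks in quickly.
throttle = FairUseThrottle(daily_cap=3)
results = [throttle.allow("u1") for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```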
As you consider COGS mitigation strategies over time, one important approach will be optimizing which underlying model you use for a given task, trading off cost and performance. As an example, very few portfolio companies are leveraging GPT-4 today given cost and latency, despite its tremendous power; GPT-3.5 Turbo is sufficient for their use cases.
While these model-specific points will evolve quickly, the core idea of choosing the right model for the job will likely become the dominant approach SaaS vendors take to delivering AI features. As open source models in particular continue to rapidly improve, we think this dynamic will get even more favorable for SaaS vendors.
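The "right model for the job" idea can be sketched as a simple router that defaults to the cheaper model and reserves the premium one for tasks that need it. The relative-cost figures and the routing criterion here are illustrative assumptions, not benchmarks:

```python
# Illustrative cost/performance routing between a cheaper and a premium
# model. Relative costs below are assumptions for demonstration only.

MODELS = {
    "gpt-3.5-turbo": {"relative_cost": 1},   # cheap, low latency
    "gpt-4": {"relative_cost": 20},          # powerful, costly, slower
}

def pick_model(task: str, needs_deep_reasoning: bool) -> str:
    """Route to the cheapest model that can handle the task."""
    if needs_deep_reasoning:
        return "gpt-4"
    # Summarization, editing, and text generation rarely need the
    # premium model, so default to the cheaper, lower-latency one.
    return "gpt-3.5-turbo"

print(pick_model("summarize this thread", needs_deep_reasoning=False))
# → gpt-3.5-turbo
print(pick_model("multi-step contract analysis", needs_deep_reasoning=True))
# → gpt-4
```

In practice the routing signal might come from task type, prompt length, or a confidence score rather than a boolean flag, but the cost logic is the same.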
Driving toward differentiated value creation
Ultimately, your goal for your AI features should be to have them drive differentiated value. You can start with a simple integration of a third-party model, but you should be using this approach to capture proprietary data specific to your “job to be done” (JTBD).
You’ll want to track and correlate which suggestions are surfaced by the model, what the user does with the suggestion and what the business outcome is. You can then use this proprietary data to tune your model, ultimately delivering JTBD-specific value that can’t be obtained from a generic third-party model. We call this approach a “coaching network.”
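One way to start building that proprietary dataset is to log each suggestion alongside the user's action and the eventual business outcome. The event schema below is a hypothetical sketch of what such a record might capture; the field names and values are assumptions:

```python
# Hypothetical event schema for "coaching network" data capture:
# which suggestion was shown, what the user did, and the outcome.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SuggestionEvent:
    user_id: str
    model_version: str     # which underlying model produced the suggestion
    suggestion: str        # what the AI proposed
    user_action: str       # e.g., "accepted" | "edited" | "rejected"
    business_outcome: str  # JTBD-specific result, e.g., "meeting_booked"
    timestamp: str

def record(event: SuggestionEvent) -> dict:
    """Serialize an event for the analytics pipeline (sketch)."""
    return asdict(event)

evt = SuggestionEvent(
    user_id="u42",
    model_version="gpt-3.5-turbo",
    suggestion="Follow up with a pricing one-pager",
    user_action="accepted",
    business_outcome="meeting_booked",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
row = record(evt)
print(row["user_action"])  # → accepted
```

Correlating `user_action` with `business_outcome` over many events is what lets you tune toward JTBD-specific value a generic model can't offer.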
As SaaS companies move toward this nirvana state, I suspect many will move away from per-seat pricing. As discussed in my 2019 post, per-seat pricing may limit data gathering and ultimately cannibalize the SaaS provider as its AI features improve and make additional user seats less necessary to drive the same value. SaaS companies will likely move toward more value-aligned models at this point. As the industry gets there, we’ll share more thoughts on what we’re seeing.
In the interim, SaaS companies should be solving for simplicity and adoption in their AI feature pricing. This is a time for learning and iteration.
We’re all learning quickly as the space evolves at blinding speed. Feel free to reach out if you’re building an AI-enabled SaaS product and want to jam on these ideas.