Every week, someone publishes a piece about whether SaaS companies will survive the shift to usage-based pricing. Meanwhile, most of them are already doing it.

For almost 10 years, I’ve been tracking an index of 600 companies with meaningful recurring revenues — a cross-section of SaaS and other industries called the “Subscription Economy Index” (forthcoming in April). Of the SaaS companies that charge by seat in that dataset, 84% also carry some form of usage-based charge. Not a handful of early adopters. Not just the infrastructure companies or AI-native platforms that everyone always cites — 84% of seat-based SaaS companies (!).

At this point, the question isn’t whether to add usage. For most companies, that decision is already made. The more interesting question is this: With such widespread adoption, why is the marketplace still treating this as uncharted territory? 

My answer: Having a usage model and running one well are two entirely different things.

The Difference between Doing It and Doing It Well

When I talk with companies about usage models — and lately, nearly every conversation I have touches upon this topic — most of the energy goes into finding the right metric. What unit best reflects the value we deliver? API calls, actions taken, assets processed, outcomes achieved? The closer you can get to something the system actually does, the more it feels like a fair and precise measure of value.

That’s the science of pricing, and it matters. The problem is that pricing is both art and science, and the art piece — customer psychology — tends to get crowded out. Behind every usage metric is a human being who has to ask for budget, explain spend to their boss, and manage a team of people interacting with the system. That person’s experience of the pricing model is what determines whether it actually works. Most of the design energy goes into the measurement. The person receiving it is almost an afterthought.

The question that cuts through is simple: What does this metric feel like from the other side of the table? 

A good usage metric meets four conditions — not for the vendor, but for the customer:

  • Simplicity. Does the customer immediately understand what they’re buying and what they’re paying for? If they have to model it out or call their rep to interpret their bill, the metric has already failed.

  • Predictability. Can the customer budget for this with reasonable confidence? Usage models that expose customers to wide variance — especially when it’s driven by events outside of their control — create anxiety that erodes the relationship, regardless of the product’s actual value.

  • Transparency. Can customers see where they stand in real time? The answer can’t just be “yes, it’s on the invoice.” It must be visible inside the product, even before the bill arrives.

  • Control. When a customer sees usage trending in a direction they didn’t expect, can they do something about it? Transparency without control just gives customers a better view of a problem they can’t solve.

These four conditions are where well-intentioned usage models break down — not because the science was wrong, but because the art didn’t get the same attention. That’s understandable. Pricing teams are operating with full pipelines, boards demanding AI upside, and executives who want a monetization answer before the next earnings call. A metric that’s mathematically defensible is often the only viable response to that pressure. But the consequences don’t show up in the spreadsheet where the decision was made. Instead, they emerge inside the customer’s organization, in behavior that nobody planned for.

Beware the Butterfly Effect

A pricing decision made in a product or finance meeting doesn’t stay in that room. It travels — into how managers brief their teams; how analysts decide what work to prioritize; how buyers justify renewals. Small choices about what to measure set off chains of behavior inside customer organizations that nobody anticipated and nobody can easily see. The butterfly effect of a usage metric is real, and in my experience, it’s almost never part of the conversation when the metric gets chosen.

I’ve been in enough of these conversations to have seen the pattern repeat. A company lands on a metric that feels like a genuine proxy for value delivered, and moves forward. What they haven’t asked is what customers will actually do when that metric shows up on their bill.

Consider a data analytics platform that prices on queries run: Queries are a reasonable proxy for how deeply customers are engaging with the tool — the more questions they're asking of their data, the more value they're getting. But here's what's actually happening in the background: an analyst is telling her team to consolidate their analyses, run fewer exploratory queries, and think twice before pulling a report. The net effect: The pricing model has turned the product's core value proposition into something customers are rationing.

That suggests a fifth condition: Does the metric create behaviors that get customers more of what the product is there to deliver — or less? 

Because a model that satisfies simplicity, predictability, transparency, and control can still fail this test. And, because the consequences play out inside the customer’s organization rather than in the vendor’s data, they’re the hardest failure mode to see coming.

So, what does a well-designed usage model actually look like? The answer isn’t less usage — it’s a structure deliberately built for both sides of the table.

Finding the Hybrid Zone

The data adds one more dimension worth sitting with: There’s an optimal range for how much of your revenue should come from usage, and it’s nowhere near 100%.

Companies whose usage charges represent between 25% and 75% of total revenue consistently outperform those at either extreme — delivering stronger ARPA growth, better revenue expansion, and lower churn. Where that lands for your company will depend on several factors, including company maturity and the type of SaaS you deliver.

While pure subscription models leave monetization on the table when customers scale or AI disrupts, pure usage models expose both sides to volatility, and customers who can’t predict their costs don’t commit confidently. The Hybrid Zone delivers both — a stable base that funds the business, and a usage component that scales with customer value.

Critically, usage in this context rarely means pure pay-as-you-go. The companies doing this well have packaged their usage into prepaid drawdown buckets, committed use tiers, and pooled resources with annual true-up conversations. These structures give customers the flexibility of usage and the predictability of subscription simultaneously. The packaging around the metric, it turns out, matters as much as the metric itself.
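To make the packaging concrete, here is a minimal sketch of how one of those structures — a fixed base plus a prepaid drawdown bucket with an overage rate — shapes a monthly bill. The function name, fields, and all dollar and unit figures are illustrative assumptions, not any vendor's actual model.

```python
# Sketch of a hybrid bill: a fixed subscription base plus usage drawn
# down against a prepaid bucket, with overage charged at a unit rate.
# All names and numbers are hypothetical, for illustration only.

def monthly_bill(base_fee: float, prepaid_units: int, used_units: int,
                 overage_rate: float) -> dict:
    """Return the charge breakdown for one billing period."""
    overage_units = max(0, used_units - prepaid_units)
    overage_charge = overage_units * overage_rate
    return {
        "base": base_fee,  # predictable: funds the business
        "units_remaining": max(0, prepaid_units - used_units),  # transparency
        "overage": overage_charge,  # scales with the value the customer draws
        "total": base_fee + overage_charge,
    }

# A customer who stays inside the bucket pays only the base...
print(monthly_bill(base_fee=1000, prepaid_units=50_000,
                   used_units=42_000, overage_rate=0.02))
# ...while heavy usage adds a bounded, visible overage on top.
print(monthly_bill(base_fee=1000, prepaid_units=50_000,
                   used_units=65_000, overage_rate=0.02))
```

Notice how each of the four conditions maps to a field: the base is the predictable floor, `units_remaining` is what an in-product dashboard would surface for transparency, and the overage only begins once the committed bucket is exhausted — which is exactly the behavior a customer can see coming and control.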

Building for Both Sides of the Table

The seat isn’t disappearing. Rather, for most SaaS companies it’s already sharing the architecture with something else, which is exactly where it should be. The conversations worth having now are about whether the usage models already in place were designed with the customer’s psychology in mind — whether they meet the four conditions, whether they pass the fifth test, and whether the mix between fixed and variable was chosen deliberately or just accumulated as new features got priced and launched.

Eighty-four percent of seat-based SaaS companies have made the move. The ones pulling ahead are the ones who treated the balance as a design decision — and remembered that the person on the other side of that bill is the point of the whole exercise.

Amy Konary runs the Subscribed Institute at Zuora, a think tank focused on how companies design and evolve their revenue models. She came up as an industry analyst at the dawn of SaaS and hasn’t stopped paying attention since. Amy loves finding the story in the data — and the data in the story.


This Week across Topline

Why Intercom Destroyed Its Predictable SaaS Revenue

Intercom gave up $60M in contracted, seat-based ARR to bet the entire company on an AI agent named Fin. The result? They passed $400M in total ARR, with Fin driving $100+M in revenue and 35% growth. The crew breaks down what went into this massive turnaround.

My Team Drives 4X Revenue per AE vs. Competitors | Aviv Canaani, CRO @ Datarails

Aviv Canaani joins Kyle Norton to break down exactly how to build a highly predictable, inbound-led revenue machine.

From Kodak Film to AI Tokens: Why AI-Native GTM Is Rewriting Commissions

What AI-native sales orgs know about paying reps that most GTM teams are still getting wrong.

Become a Topline insider by joining our Slack channel.

We want to hear your feedback! Let us know your thoughts.
