A framework for revenue, unit economics, and investor credibility
If you’re building an AI company, you’ve probably felt the tug-of-war: the product is real, customers are paying, and usage is growing, but the usual B2B frameworks (seats, ARR, net retention, “land and expand”) don’t fit your AI business model.
We’re early in the category. The “standard” AI business model is not settled yet, and token/inference costs are falling rapidly. That combination makes rigid frameworks premature.
So, the goal isn’t to force a single narrative. It’s to grow the business while staying flexible and credible as the market evolves:
- price for adoption and learn fast
- keep cost-side unit economics on a reasonable path as usage scales and inference costs fall
- maintain trust with investors through transparent reporting and communication
Why This Matters Right Now
Venture capital is still figuring out how to underwrite AI businesses. In classic SaaS, investors learned a reliable pattern: recurring subscription revenue is reasonably predictable; serving one more user is close to free; gross margins are high and stable; and “ARR” is a convenient shorthand for recurring revenue.
AI breaks key parts of that pattern:
- Revenue mixes can include pilots, credits, usage, services, outcomes, and hybrids.
- Costs scale with usage (inference, tool calls, human-in-the-loop).
- Inference costs keep decreasing.
The Two Structural Breaks (Why AI Feels Different)
Break 1: Revenue looks different.
In SaaS, it’s usually easy to point to a subscription contract and say, “that’s recurring.” In AI, the revenue mix is frequently more complex:
- pilot or POC fees
- credits-based deals
- usage-heavy contracts (with or without minimums)
- outcome-based pricing
- services (implementation, onboarding, custom work)
When all of that gets blended into one “ARR” number, you end up with a metric that needs footnotes. If your ARR slide needs three minutes of verbal clarification, it’s probably not making the point you were hoping for.
Break 2: Margins look different.
In classic SaaS, there is essentially no marginal cost for additional users. In AI, costs increase with each user, task, or workflow. The drivers are usually straightforward:
- inference / model provider costs
- tool/API calls triggered by agents
- infrastructure costs
- human-in-the-loop review (sometimes)
- support burden as edge cases appear
This doesn’t make AI businesses worse, but it does mean you may need a different GTM plan, and that cost guardrails need to be built into the pricing and/or the product.
How To Present Revenue While Establishing Credibility
ARR is a definition before it’s a metric. It isn’t a GAAP metric; it’s a convention. In AI, that convention gets messy when you annualize usage spikes, blend pilots and services into “recurring,” or treat credits as if they’re high-quality paid usage.
A practical fix is to report revenue in layers. Instead of forcing everything into ARR, present a layered view that separates what’s contractually committed from what’s usage-driven and what’s truly one-time.
1) ARR
Contract-backed, renewable commitments you can point to in an agreement. Every invoice should trace back to a contract.
2) Usage run-rate (non-committed usage)
Annualized trailing paid usage that is not contractually committed. It’s valuable, but call it what it is.
3) Services / one-time
Implementation, onboarding, custom work.
4) Pilots / POCs
Call pilots out separately, and treat them as one-time revenue unless there is a clear path to renewal. Don’t annualize them into ARR; instead, track pilots as pipeline and measure their conversion separately.
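As a sketch of the layered view, here is a minimal Python example that buckets invoices into the four layers. The field names, categories, and numbers are illustrative assumptions, not a standard; adapt them to your own billing data.

```python
# Illustrative revenue-layering sketch. Invoice fields, categories, and
# amounts are assumptions for demonstration, not a reporting standard.

INVOICES = [
    {"customer": "Acme",  "amount": 120_000, "type": "committed"},  # annual contract
    {"customer": "Acme",  "amount": 8_000,   "type": "usage"},      # monthly non-committed usage
    {"customer": "Beta",  "amount": 15_000,  "type": "services"},   # onboarding
    {"customer": "Gamma", "amount": 25_000,  "type": "pilot"},      # 3-month POC
]

def revenue_layers(invoices):
    """Sum invoice amounts into the four reporting layers."""
    layers = {"committed": 0, "usage": 0, "services": 0, "pilot": 0}
    for inv in invoices:
        layers[inv["type"]] += inv["amount"]
    return layers

layers = revenue_layers(INVOICES)

# Annualize only what belongs in each layer:
arr = layers["committed"]                         # contract-backed ARR
usage_run_rate = layers["usage"] * 12             # annualized trailing paid usage
one_time = layers["services"] + layers["pilot"]   # never annualized
```

The point of the sketch is the separation: only the committed layer is called ARR, usage is annualized but labeled as run-rate, and services and pilots never get multiplied by twelve.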
To keep your definitions consistent as pricing evolves, write a one-page revenue policy and share it in your data room with investors. For bonus points, maintain a contract log that supports your ARR number by tying each contract to the invoices included in ARR.
How to Model Costs + Unit Economics (And Show Where Costs Are Going)
If revenue is one credibility trap, margins are the other. In SaaS, founders can sometimes delay deep margin work because gross margins are usually high and stable. In AI, gross margins are lower and more volatile.
As the founder of an AI company, you need to understand these costs. A simple way to model them is to build from the real cost driver up to the numbers investors and operators care about (the average user and the average client/account).
Step 1: Pick a unit of work (the cost driver).
Examples: ticket resolved, document processed, workflow run, or 1,000 actions.
Step 2: Calculate fully-loaded variable cost per unit.
Break out costs like:
- inference/model provider costs
- tool/API costs (search, enrichment, integrations)
- human-in-the-loop costs (if applicable)
- usage-linked infrastructure (if it truly scales with volume)
- optionally: a variable slice of support if it scales with usage
This breakdown helps you see which costs matter most today, whether you are trending in the right direction, and what you expect to improve over time. Being detailed in this cost analysis is valuable.
Step 3: Roll unit costs up to a user, then an account.
- units per user per month × cost per unit = COGS per user
- users per account × COGS per user = COGS per account
Illustrative example (focus: COGS roll-up)
Imagine an AI support agent.
Assumptions (illustrative):
- cost per action: $0.030
  - inference: $0.015
  - tool calls: $0.010
  - human review: $0.005
- average actions per user per month: 3,000
- average users per account: 50
COGS roll-up:
- COGS per user/month: 3,000 × $0.030 = $90
- COGS per account/month: 50 × $90 = $4,500
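The roll-up above is a few lines of arithmetic; this sketch simply re-derives the illustrative numbers (all inputs are the article's assumed figures, not real benchmarks):

```python
# COGS roll-up for the illustrative AI support agent.
# All inputs are assumed example numbers, not benchmarks.

inference = 0.015      # $ per action, model provider cost
tool_calls = 0.010     # $ per action, tool/API calls
human_review = 0.005   # $ per action, human-in-the-loop

cost_per_action = inference + tool_calls + human_review   # ~$0.030

actions_per_user = 3_000   # average actions per user per month
users_per_account = 50     # average users per account

cogs_per_user = actions_per_user * cost_per_action      # ~$90 per user/month
cogs_per_account = users_per_account * cogs_per_user    # ~$4,500 per account/month
```

In practice you would pull these inputs from your provider bills and usage logs rather than hard-coding them, and recompute monthly so you can see the trend as inference prices fall.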
Once you can calculate COGS per user and per account reliably (and understand where those costs are going), you will be better positioned to judge whether you have the right pricing in place.
Implications For Pricing + GTM (Where the Category is Still Forming)
Pricing in AI is a balancing act: you want to optimize for adoption and growth, but you also have to keep costs in mind. Once you have clarity on unit costs, that balancing act gets much easier.
In the ideal scenario, your pricing model maps to your cost model. More usage and more cost should produce more revenue (with some margin).
But if that complexity slows adoption, lean toward a pricing strategy that drives growth. Then pressure-test your pricing options against unit costs, including how you expect those costs to evolve. If you can explain the logic clearly, you’re probably in a good position (for now). The market is changing rapidly, so perfection isn’t the goal; experimentation and flexibility are what matter.
A few common patterns we’re seeing:
- per-action / per-workflow-run pricing
- per-query pricing
- per-conversion / outcome-based pricing (usually with constraints)
- minimum commitment + usage (predictable floor + aligned upside)
- platform fee + overage
- hybrids (and plenty of experimentation)
Two practical product/GTM rules:
1) use the lowest-cost model that’s reasonable for each task, and benefit as model costs drop
2) put guardrails in place so you don’t have unbounded provider costs, and customers can’t get surprised by a massive bill (caps, budgets, alerts, rate limits, clear included usage)
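Rule 2 can be sketched in a few lines. This is a minimal, illustrative budget check (the budget amounts, alert threshold, and in-memory spend ledger are assumptions; a real system would persist spend and enforce this at the dispatch layer):

```python
# Minimal guardrail sketch: block new work once a customer's month-to-date
# provider spend would exceed their budget, and alert near the threshold.
# Budgets, threshold, and the spend ledger are illustrative assumptions.

BUDGETS = {"acme": 500.00}   # $ per customer per month
ALERT_AT = 0.80              # warn at 80% of budget

spend = {"acme": 0.0}        # month-to-date provider spend

def check_guardrail(customer, estimated_cost):
    """Return (allowed, alert) for a proposed unit of work."""
    budget = BUDGETS[customer]
    projected = spend[customer] + estimated_cost
    allowed = projected <= budget           # hard cap: refuse work over budget
    alert = projected >= ALERT_AT * budget  # soft signal: nearing the cap
    return allowed, alert

def record(customer, cost):
    """Record actual cost after the work completes."""
    spend[customer] += cost
```

The same shape works for rate limits and included-usage tiers: estimate cost before dispatch, compare against a per-customer ceiling, and surface the soft alert to the customer before the hard cap ever triggers.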
Expansion can look different too. In many AI products, growth is driven less by “more seats” and more by:
- more workflows in production
- more volume of work processed
- deeper integrations into systems of record
- rollout to additional teams
These should also be considered in your pricing strategy.
How to Maintain Credibility with Investors (and Show Your Plan)
Investors know this category is still evolving. They don’t need a perfect pricing and margin story right now. They do need clarity in the numbers and confidence that you understand your unit costs and how they will change over time.
A simple credibility pack can do most of the work:
1) Revenue Definitions: committed recurring vs usage run-rate vs services (and any other categories you use). Put this in a one-pager you can hand to investors.
2) Revenue layers: the actual revenue breakdown based on those definitions (committed, run-rate, services, etc.).
3) Unit economics (cost side): unit of work → user → account. Include current unit costs, what you expect to change over time, and why.
4) Pricing strategy: how pricing drives adoption and growth, how it aligns with customer value, and how it connects to your unit economics. If there is a gap on that last point, that can be fine as long as you have strong guardrails in place.
Optionally (but powerful in diligence), show an ARR reconciliation: contracts → billing → cash → accounting.
Bottom Line
AI business models are still evolving. Trying to force them into a single legacy template usually creates confusion in the two places that matter most: revenue quality and margin structure.
You don’t need to invent the perfect AI model. You do need to:
- report revenue in layers so your numbers are easy to trust
- model unit economics from the unit-of-work level up to the average account
- put cost/pricing guardrails in place so usage growth doesn’t quietly kill margins
Do those three things and you’ll make better decisions internally, and maintain credibility externally, even while the category continues to evolve.

