
Revenue Quality Metrics That Separate Real Growth From Vanity Numbers

A SaaS company closes the quarter at $47M ARR, up 180% year-over-year. The deck looks immaculate. Board members nod approvingly. Then we pull the revenue waterfall and find that $14M came from deals with net-negative gross margins after you account for custom development, and another $8M sits in contracts where the customer hasn’t deployed the product in four months.

This happens more often than anyone wants to admit.

Revenue quality due diligence isn’t about whether the numbers add up arithmetically. It’s about whether the revenue represents actual economic value that will compound, or whether it’s a house of cards held together by accounting elections and sales desperation.

The Revenue Recognition Shell Game

We’ve watched management teams play creative games with ASC 606 for years now. The technical complexity of the standard gives companies enormous latitude to front-load revenue recognition, and many take full advantage.

Start with the contract term manipulation. A company books a three-year contract as “ARR” by dividing total contract value by three, even when the payment terms are heavily front-loaded and the customer has a unilateral out-clause after year one. The revenue appears recurring on the dashboard, but the cash profile and actual commitment look nothing like a true recurring relationship.
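The arithmetic is trivial, which is the point. A toy sketch (all figures invented) of the gap between dashboard "ARR" and what the customer has actually committed to:

```python
# Hypothetical three-year deal: front-loaded payments, out-clause after year one.
payments = [2_400_000, 600_000, 600_000]  # heavily front-loaded schedule
tcv = sum(payments)                       # $3.6M total contract value

dashboard_arr = tcv / 3                   # what the deck shows: $1.2M of "recurring" revenue
firm_commitment = payments[0]             # only year one survives the out-clause

print(dashboard_arr, firm_commitment)     # 1200000.0 vs 2400000
```

The dashboard shows a smooth $1.2M recurring line; the contract actually delivers $2.4M of cash up front and zero guaranteed dollars after month twelve.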

Then there’s the professional services mask. Companies bundle implementation fees, custom development work, and actual software licenses into a single performance obligation. They recognize the entire amount ratably over the contract term, which makes services revenue look like software revenue in the P&L. Pull the actual cash collections and cost structure, and you’ll find gross margins in the 30s instead of the 80s they’re claiming.

We always request the full ASC 606 technical memo for any company over $10M in revenue. Not the summary. The actual technical accounting documentation that lays out each performance obligation, the standalone selling price analysis, and the revenue recognition pattern for each contract type.

Most management teams hate this request. That tells you something.

The Cohort Economics Test

Revenue charts that go up and to the right mean nothing if the unit economics are deteriorating underneath. The only way to see this is cohort analysis with enough granularity to expose the trends.

We look at monthly cohorts for at least 24 months back. Not annual cohorts. Monthly. Because companies with degrading unit economics will show you annual cohorts that blend strong Q1 vintages with terrible Q4 vintages, and the averaging hides the death spiral.

The metrics we track by cohort: CAC, time to payback, net revenue retention at 6/12/18/24 months, and gross margin contribution after fully loaded costs. Not just COGS. Fully loaded with allocable overhead.

Here’s what dying growth looks like. January 2024 cohort: $8K CAC, 14-month payback, 105% NRR at 12 months. January 2025 cohort: $19K CAC, 31-month payback, 87% NRR at 12 months. The revenue number went up because they’re pouring more into the top of the funnel. The business got dramatically worse.
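The degradation checks above can be sketched in a few lines. Field names, thresholds, and cohort figures are illustrative, not an actual diligence model:

```python
from dataclasses import dataclass

@dataclass
class Cohort:
    label: str
    cac: float            # fully loaded acquisition cost per customer
    monthly_margin: float # gross margin dollars per customer per month
    nrr_12m: float        # net revenue retention at 12 months

    def payback_months(self) -> float:
        # Months of gross margin needed to recover CAC
        return self.cac / self.monthly_margin

def degradation_flags(old: Cohort, new: Cohort) -> list[str]:
    flags = []
    if new.cac > old.cac * 1.3:        # >30% CAC inflation year-over-year
        flags.append("CAC inflation above 30%")
    if new.payback_months() > 18:      # payback past the 18-month line
        flags.append("payback past 18 months")
    if new.nrr_12m < 1.0 <= old.nrr_12m:
        flags.append("NRR slipped below 100%")
    return flags

jan_2024 = Cohort("2024-01", cac=8_000, monthly_margin=8_000 / 14, nrr_12m=1.05)
jan_2025 = Cohort("2025-01", cac=19_000, monthly_margin=19_000 / 31, nrr_12m=0.87)

print(degradation_flags(jan_2024, jan_2025))  # all three flags fire for the 2025 vintage
```

On the numbers from the example above, every flag trips for the January 2025 cohort while the January 2024 cohort passes clean.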

When we see CAC inflation above 30% year-over-year without corresponding ARPA expansion, we start asking hard questions about competitive dynamics and market saturation. When we see payback periods extending past 18 months, we want to understand the cash runway math in detail.

Companies will fight you on this analysis. They’ll say their cohort tracking isn’t clean, or they’ve changed the methodology, or the data isn’t reliable for older periods. This is usually smoke.

Concentration and Customer Health Flags

A $30M revenue company with $12M coming from two customers isn’t really a $30M company. It’s an $18M company with two large services relationships that could evaporate.

We use a modified Herfindahl-Hirschman Index to measure revenue concentration risk. Any customer above 10% of total revenue gets flagged. Any customer segment above 30% gets deep scrutiny.
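A minimal version of this screen, assuming a plain revenue-share HHI (the exact "modified" weighting isn't reproduced here) and invented customer figures:

```python
# Concentration screen: squared revenue shares plus a per-customer threshold flag.
def revenue_hhi(revenues: dict[str, float]) -> float:
    total = sum(revenues.values())
    return sum((r / total) ** 2 for r in revenues.values())

def concentration_flags(revenues: dict[str, float], threshold: float = 0.10) -> list[str]:
    total = sum(revenues.values())
    return [name for name, r in revenues.items() if r / total > threshold]

# The $30M book from the example: two big accounts plus a 40-customer tail.
revenues = {"Acme": 7_000_000, "Globex": 5_000_000,
            **{f"smb_{i}": 450_000 for i in range(40)}}

print(round(revenue_hhi(revenues), 3))  # 0.091
print(concentration_flags(revenues))    # ['Acme', 'Globex']
```

The tail keeps the index low, but the two flagged accounts are still 40% of revenue — which is why the raw index needs the per-customer flag alongside it.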

But the raw concentration numbers hide half the story. You need to look at concentration trajectories and customer health signals simultaneously.

Map out concentration over the last eight quarters. Is it improving or getting worse? A company that was 40% concentrated on top-five customers two years ago and is now at 25% has fundamentally de-risked. A company going the other direction is building a time bomb.

Then layer in the customer health data. For the top 20 customers by revenue: what’s the product usage trend over the last six months? What’s the support ticket volume relative to baseline? How many executive sponsor changes have there been? What’s the actual renewal probability based on account manager assessments, not the bullish forecast in the board deck?

We’ve killed deals where 60% of revenue came from customers showing red health scores. Management insisted renewals would be fine. We walked. Six months later, three of the five top customers churned.

The Working Capital Trap

Revenue quality problems show up in working capital before they show up anywhere else. Watch the balance sheet.

DSO is the obvious metric, but most people look at it wrong. They calculate simple averages over a quarter or a year. That smooths out the disasters. Calculate DSO by cohort and by month. Look at the distribution, not just the average.

A company with 45-day average DSO might have 25% of invoices collecting in under 30 days and 35% sitting past 90 days. That’s not a healthy sales process. That’s a sales team closing deals with customers who don’t have budget approval or don’t see the value.

Then track unbilled revenue and deferred revenue movements. Unbilled revenue growing faster than bookings means you’re recognizing revenue ahead of invoicing, which either means you have strange contract structures or you’re stretching the accounting rules. Deferred revenue shrinking relative to bookings means you’re pulling revenue forward faster than you’re replacing it with new commitments.

We also look at the ratio of deferred revenue to next-quarter revenue. For a healthy SaaS company, deferred revenue should cover at least 95% of next quarter’s revenue forecast. When that ratio drops below 80%, you have a problem. It means you’re dependent on in-quarter bookings to hit your revenue target, which means the revenue isn’t actually recurring.
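The coverage check is nearly a one-liner; the thresholds follow the heuristics above and the dollar figures are invented:

```python
# Deferred revenue on the balance sheet vs. next quarter's revenue forecast.
def deferred_coverage(deferred_revenue: float, next_q_forecast: float) -> str:
    ratio = deferred_revenue / next_q_forecast
    if ratio >= 0.95:
        return "healthy"
    if ratio >= 0.80:
        return "watch"
    return "problem: revenue depends on in-quarter bookings"

print(deferred_coverage(11_400_000, 12_000_000))  # 0.95 -> healthy
print(deferred_coverage(9_000_000, 12_000_000))   # 0.75 -> problem
```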

Gross Margin Archaeology

Reported gross margins lie constantly. You have to reconstruct them from first principles.

Start by breaking out every component of COGS. Hosting costs, third-party licenses, support team fully loaded comp, implementation team comp if they’re in COGS, payment processing fees if you’re running transactions, and any revenue share or royalty arrangements.

Then allocate shared costs properly. If your engineering team spends 30% of cycles on customer-specific customization, that’s a COGS component. If you’re running a scaled customer success function that’s really just break-fix support under a different name, that’s COGS.

After full allocation, we see reported 75% gross margins turn into actual 52% gross margins regularly.
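A sketch of the reconstruction, with invented line items sized to mirror a 75%-to-52% adjustment:

```python
# Rebuild gross margin from components instead of trusting the reported figure.
def true_gross_margin(revenue: float, cogs: dict[str, float]) -> float:
    return 1 - sum(cogs.values()) / revenue

revenue = 10_000_000
reported_cogs = {"hosting": 1_500_000, "third_party_licenses": 1_000_000}
full_cogs = {
    **reported_cogs,
    "support_comp": 1_000_000,       # fully loaded support team comp
    "implementation_comp": 800_000,  # services work buried in opex
    "eng_customization": 500_000,    # engineering cycles on customer-specific work
}

print(true_gross_margin(revenue, reported_cogs))  # 0.75 as reported
print(true_gross_margin(revenue, full_cogs))      # 0.52 after full allocation
```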

The next step is cohort-level gross margin analysis. New customers are almost always lower margin because of implementation overhead. If your mature cohorts aren’t expanding to 80%+ gross margins by year two, you don’t have a software business. You have a services business with a software veneer.

Geographic analysis matters too. A company with 70% margins in North America and 35% margins in Europe probably has a partnership or reseller model in Europe that’s eating the economics. That’s fine if it’s disclosed and if the expansion strategy accounts for it. It’s not fine when management presents blended metrics and pretends the unit economics are uniform.

Churn Forensics

Net revenue retention is the most gamed metric in SaaS. Everyone reports NRR. Almost nobody reports it honestly.

First problem: timeframe manipulation. Companies report annual NRR because it blends 12 months of churn and expansion together. Monthly cohorts show the real story. If NRR is 115% annually but degrading from 118% to 108% over the last six quarters, the trajectory is what matters.

Second problem: survivor bias in expansion calculations. Companies calculate expansion as a percentage of retained customers, not as a percentage of the starting cohort. If you start with 100 customers, lose 30, and expand the remaining 70 by 30%, you can report a 130% expansion rate. But you actually went from 100 customers generating $X to 70 customers generating $0.91X. That’s 91% NRR, not 130%.
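The same arithmetic worked in code, assuming equal-sized customers for simplicity:

```python
# Honest NRR: end-of-period revenue from the ORIGINAL cohort over its starting revenue.
def honest_nrr(start_rev: float, end_rev_same_cohort: float) -> float:
    return end_rev_same_cohort / start_rev

start = [1_000.0] * 100           # 100 customers at $1k each
survivors = [1_000.0 * 1.3] * 70  # 30 churn; the remaining 70 expand 30%

# The gamed version measures expansion only against the retained base
gamed = sum(survivors) / (len(survivors) * 1_000.0)
real = honest_nrr(sum(start), sum(survivors))

print(gamed, real)  # 1.3 vs 0.91
```

Same cohort, same dollars: a "130% expansion rate" and 91% NRR describe the identical business.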

Third problem: logo churn versus revenue churn. A company will report 5% logo churn and 95% revenue retention, which sounds fine until you realize the churned customers were small, and the “retained” base includes three customers in the final stages of non-renewal who are still paying but have ripped out the software.

We always ask for gross dollar retention and logo retention by customer segment. Small customers, mid-market, enterprise. If the company can’t produce this in under two hours, they’re not tracking it, which means they’re not managing the business properly.

For any company with GDR below 85%, we dig into the churn reasons. Not the Salesforce picklist reasons. The actual reasons from win-loss interviews and exit surveys. When you see “product gaps” show up in 40% of churn events, that’s not a sales execution problem. That’s a product-market fit problem.

The Revenue Mix Breakdown

Not all revenue is created equal. A dollar from a multi-year enterprise contract is worth three times a dollar from a month-to-month SMB subscription.

We break revenue into tiers based on contract characteristics: contract length, payment terms, annual prepay percentage, cancellation terms, auto-renewal provisions, and price escalation clauses.

Tier 1 revenue: multi-year contracts, annual prepay, automatic renewal, hard commits. Tier 2: annual contracts, quarterly or monthly pay, soft commits with notice periods. Tier 3: month-to-month or quarterly with easy outs.

Then we calculate what percentage of revenue sits in each tier and how that’s trending. A company shifting from 60% Tier 1 to 40% Tier 1 over two years is getting riskier, even if absolute revenue is growing.
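A sketch of the tiering and mix calculation; the classification rules paraphrase the tier definitions above, and every contract field and figure is invented:

```python
from dataclasses import dataclass

@dataclass
class Contract:
    arr: float
    term_months: int
    annual_prepay: bool
    auto_renew: bool
    notice_period_days: int

def tier(c: Contract) -> int:
    # Tier 1: multi-year, prepaid, auto-renewing; Tier 2: annual; Tier 3: easy outs
    if c.term_months >= 24 and c.annual_prepay and c.auto_renew:
        return 1
    if c.term_months >= 12:
        return 2
    return 3

def mix(contracts: list[Contract]) -> dict[int, float]:
    total = sum(c.arr for c in contracts)
    out = {1: 0.0, 2: 0.0, 3: 0.0}
    for c in contracts:
        out[tier(c)] += c.arr / total
    return out

book = [
    Contract(600_000, 36, True, True, 90),
    Contract(300_000, 12, False, False, 30),
    Contract(100_000, 1, False, False, 0),
]
print(mix(book))  # {1: 0.6, 2: 0.3, 3: 0.1}
```

Run this quarterly on the same book and the Tier 1 share over time is the trend line that matters.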

We also track revenue by sales motion: inbound, outbound, partner channel, product-led growth. Each motion has different cost structures and scale characteristics. A company that’s 80% outbound field sales will never have the capital efficiency of a 60% PLG company, and the valuation should reflect that.

The Special Situations

Some revenue patterns are automatic red flags.

Round-tripping: when a company invests in or lends money to a customer, and the customer uses that capital to buy the company’s product. We’ve seen this in fintech, crypto infrastructure, and vertical SaaS. It’s technically legal under ASC 606 if structured correctly, but it’s economic nonsense.

Barter and contra revenue: when companies exchange products or services instead of cash. This happens in adtech and martech all the time. Both sides book revenue, both sides book expenses, no actual economic value changes hands. The accounting is compliant, but the revenue quality is zero.

Channel stuffing: when a company with a reseller model books revenue on sell-in to the channel rather than sell-through to end customers. The resellers have return rights or side agreements that effectively make the inventory a consignment arrangement, but the revenue gets recognized immediately.

We caught one software infrastructure company doing this systematically. They’d ship $5M worth of licenses to a reseller on the last day of the quarter, book the revenue, then watch the reseller return $3M of it in the first month of the next quarter. The reported revenue was compliant with the contract terms, but the business reality was fiction.

The Cash Reconciliation

At the end of every revenue quality analysis, we reconcile reported revenue to actual cash collected. This is the forcing function that exposes everything.

Build a simple bridge: revenue recognized in period X, minus the change in AR, plus the change in deferred revenue and other contract liabilities, should equal cash collected from customers in period X (adjusted for timing).
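The bridge in code, with the standard signs written out: growing AR reduces cash collected, growing deferred revenue adds to it. Figures are illustrative:

```python
# Expected cash collections implied by reported revenue and balance sheet movements.
def expected_cash(revenue: float, delta_ar: float, delta_deferred: float) -> float:
    return revenue - delta_ar + delta_deferred

reported_revenue = 10_000_000
delta_ar = 3_000_000       # AR grew by $3M in the period
delta_deferred = 0
cash_collected = 6_000_000

gap = expected_cash(reported_revenue, delta_ar, delta_deferred) - cash_collected
print(gap)  # $1M of recognized revenue with neither cash nor AR behind it
```

When the gap is nonzero and timing doesn't explain it, that residual is where the digging starts.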

When this bridge doesn’t work, something is wrong. Either the revenue recognition is too aggressive, the AR is building up with uncollectible balances, or there are off-balance-sheet arrangements.

We’ve found phantom revenue this way more times than we can count. A company reports $10M in quarterly revenue but only collects $6M in cash, and AR grows by $3M. The math says there’s $1M missing. Start digging, and you find that $1M was recognized on a contract that’s in dispute, or on a deal that hasn’t actually closed, or on an amendment that the customer never signed.

The Sustainability Question

Revenue quality analysis ultimately answers one question: can this growth continue without driving the company into the ground?

We’ve seen countless companies grow to $50M or $100M ARR on the back of negative unit economics, terrible customer fit, and unsustainable sales and marketing spend. The revenue is real in the sense that it shows up in the financial statements. It’s not real in the sense that it represents a durable, profitable business.

The test is simple. Model the business forward 24 months assuming current cohort economics, current churn rates, and current CAC payback periods. Don’t assume things will magically improve. Don’t model scale efficiencies that haven’t shown up yet. Just run the math on what’s actually happening.

If the model shows cash flow positive within 18 months at current burn, the revenue is probably real. If it shows the company needs to raise $30M in the next year to keep growing at the same rate, the revenue is expensive.

What We Do With This

When we run full revenue quality due diligence, we’re not trying to find reasons to kill deals. We’re trying to understand what we’re actually buying.

A company with 60% gross margins after adjustments, 18-month CAC payback, 90% GDR, and 110% NRR is a good business even if the reported metrics were inflated. A company with 80% reported gross margins that turn into 45% after adjustments, with CAC payback at 30 months and GDR at 78%, is uninvestable at any reasonable valuation.

The work matters because the difference between real growth and vanity metrics is the difference between building compounding value and lighting money on fire. And after a few hundred deals, you learn to see the difference in the first hour of diligence.

The companies that pass this scrutiny don’t fight the process. They have the data ready, they know their unit economics cold, and they’re proud to walk through the details. The companies that fail spend the whole time explaining why their business is special and why normal metrics don’t apply.

That’s usually all you need to know.

Evaluating an acquisition?

We conduct operational due diligence for investors and acquirers across software, technology, and services. If the financial model looks right but something feels off, we find out why.

Book a conversation