Most CROs track the revenue number. The top line. Closed won. Quota attainment. Those are outcomes. They are the results of decisions that were made 6 to 18 months ago. By the time they appear in the revenue report, the window to influence them has closed.
The 10 metrics in this post are different. They are the leading and lagging indicators that tell a CRO what the revenue number will be before it materializes, where the pipeline is leaking, and which operational failures are costing measurable revenue right now. They are the metrics that revenue operations exists to produce and that every CRO should be reviewing weekly.
1. Marketing-sourced pipeline percentage

What it measures: The percentage of open and closed pipeline where marketing was the first touch on the opportunity.
Why it matters: This metric is the primary evidence for marketing's contribution to revenue. Stage 4 Revenue Marketing organizations source 40 to 60% of total pipeline from marketing. Most mid-market and enterprise B2B companies entering a RevOps engagement are at 15 to 30%. The gap between where you are and where the benchmark sits is the business case for marketing investment.
What drives it: ICP alignment between marketing and sales, demand generation program quality, attribution model completeness, and the quality of the MQL-to-SQL handoff. When this metric is low, the root cause is usually one of those four, not simply low marketing spend.
How to use it: Track it monthly. When it declines, diagnose whether the cause is reduced marketing program volume, attribution infrastructure degradation (UTM tags not being applied, sync failures), or ICP drift between marketing's target and the sales team's target.
2. Pipeline velocity

What it measures: The dollar value of pipeline moving toward revenue on any given day: (number of opportunities x average deal value x win rate) / average sales cycle length in days.
Why it matters: Pipeline velocity is the single metric that connects marketing, sales, and RevOps operations work to revenue timing. Every change to any of the four variables in the formula changes when revenue lands. A campaign that improves win rate by 5 percentage points has a specific and calculable impact on pipeline velocity. RevOps decisions should be made against pipeline velocity impact, not against individual metric optimization.
What drives it: Lead quality (affects win rate), content and enablement quality (affects cycle length), ICP alignment (affects deal size), and demand generation volume (affects number of opportunities).
How to use it: Track it weekly. When velocity slows, identify which of the four variables changed. Each one points to a different operational fix. Fewer opportunities points to a demand generation problem. A smaller average deal size points to an ICP or pricing problem. A lower win rate points to a sales process or competitive problem. A longer cycle points to a buyer enablement or qualification problem.
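The formula above can be sketched in a few lines. The figures below are illustrative, not benchmarks, and are chosen only to show how a single-variable change (here, a 5-point win-rate improvement) translates into a calculable velocity impact.

```python
def pipeline_velocity(num_opps, avg_deal_value, win_rate, cycle_days):
    """Dollar value of pipeline moving toward revenue per day."""
    return (num_opps * avg_deal_value * win_rate) / cycle_days

# Hypothetical baseline: 200 open opportunities, $50k average deal,
# 25% win rate, 90-day average sales cycle.
baseline = pipeline_velocity(200, 50_000, 0.25, 90)

# Same pipeline with a 5-point win-rate improvement (25% -> 30%).
improved = pipeline_velocity(200, 50_000, 0.30, 90)

print(f"Baseline velocity: ${baseline:,.0f}/day")  # ~$27,778/day
print(f"Improved velocity: ${improved:,.0f}/day")  # ~$33,333/day
```

Holding the other three variables constant and moving one at a time is also a quick way to rank which operational fix buys the most velocity.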
3. Marketing-sourced win rate

What it measures: The percentage of opportunities where marketing was the first touch that close as won.
Why it matters: Marketing-sourced pipeline that wins at a lower rate than sales-sourced pipeline is a signal that marketing is generating interested contacts who are not well-qualified, or that the sales team is not effectively converting well-qualified marketing-sourced leads. The diagnosis matters because the fix is different in each case.
What drives it: ICP precision in marketing targeting, lead scoring model accuracy, the quality of the MQL-to-SQL handoff, and sales team effectiveness with marketing-sourced leads specifically.
How to use it: Compare win rate on marketing-sourced pipeline to win rate on sales-sourced pipeline quarterly. If marketing-sourced win rate is meaningfully lower, run a cohort analysis on the closed-lost marketing-sourced deals to identify the most common loss reasons. If the loss reasons are qualification-related, the lead scoring model needs revision. If the loss reasons are competitive or process-related, the issue is downstream of marketing.
4. MQL-to-SQL conversion rate

What it measures: The percentage of marketing-qualified leads that sales accepts as sales-qualified.
Why it matters: A low MQL-to-SQL conversion rate is not always a lead quality problem. It is frequently a definition problem, a process problem, or an alignment problem. If sales is rejecting 70% of what marketing sends, either the MQL criteria are wrong, the SLA is not being enforced, or sales and marketing are not operating from the same ICP.
What drives it: ICP alignment, lead scoring model accuracy, MQL handoff process quality, and sales team follow-up consistency.
How to use it: Track by month and by lead source. A declining trend signals either degrading lead quality (review scoring model) or declining sales follow-up (review SLA compliance). A sudden drop often signals an ICP change on the sales side that marketing has not been informed about. Benchmark: 13 to 25% in high-performing B2B organizations.
5. Lead response time

What it measures: The average time between an MQL being passed to sales and the first sales activity on that lead.
Why it matters: Lead response time is one of the highest-leverage, most-ignored RevOps metrics. Research consistently shows that B2B leads contacted within 5 minutes of conversion are significantly more likely to qualify than leads contacted after 30 minutes. Most enterprise B2B organizations have lead response times measured in hours or days, not minutes.
What drives it: MQL handoff workflow design, lead routing logic, sales rep notification systems, and CRM adoption rate. If leads are being routed correctly but response time is still slow, the issue is usually CRM notification adoption: sales reps are not opening or acting on the notification.
How to use it: Track it weekly and report it at the revenue operations governance meeting. Set a defined SLA: for example, all MQLs receive first contact within 4 business hours. Report SLA compliance rate monthly. When compliance drops below 80%, investigate whether the issue is routing, notification, or workload.
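A minimal sketch of the SLA compliance calculation, assuming you can export MQL handoff and first-touch timestamps from CRM. The 4-hour window matches the example SLA above; for simplicity this uses elapsed wall-clock hours, not business hours, which a production version would need to handle.

```python
from datetime import datetime, timedelta

SLA_HOURS = 4  # example SLA from the text: first contact within 4 hours

def sla_compliance(leads, sla_hours=SLA_HOURS):
    """Share of leads whose first sales touch landed inside the SLA window.

    `leads` is a list of (handoff_time, first_touch_time) pairs.
    Simplified to wall-clock hours; a real version would count business hours.
    """
    within = sum(
        1 for handoff, first_touch in leads
        if (first_touch - handoff) <= timedelta(hours=sla_hours)
    )
    return within / len(leads)

# Illustrative data, not real leads.
leads = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 45)),   # 45 min -> in SLA
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 13, 30)),  # 3.5 h  -> in SLA
    (datetime(2024, 5, 1, 11, 0), datetime(2024, 5, 2, 9, 0)),    # 22 h   -> breach
    (datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 1, 15, 0)),   # 1 h    -> in SLA
]
print(f"SLA compliance: {sla_compliance(leads):.0%}")  # 75%
```

A compliance rate below the 80% threshold from the text would trigger the routing / notification / workload investigation described above.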
6. Stage-level sales cycle length

What it measures: The average number of days opportunities spend in each pipeline stage, from creation through close.
Why it matters: Aggregate sales cycle length is a useful metric. Stage-level sales cycle length is a diagnostic tool. When the aggregate cycle is lengthening, stage-level analysis identifies exactly where deals are stalling. A deal that spends too long in "Proposal" is a different problem than one that spends too long in "Negotiation," and each requires a different intervention.
What drives it: Content and enablement quality at each stage, sales process execution, deal qualification rigor at entry, and buyer engagement patterns.
How to use it: Review stage-level cycle length monthly. When average time in a specific stage increases, review the deal notes for a sample of deals that stalled in that stage and identify the most common stall reason. Build enablement or process changes to address the specific stall pattern.
7. Net revenue retention (NRR)

What it measures: The percentage of revenue retained from the existing customer base after accounting for churn, downgrades, and expansion over a defined period. Calculated as (beginning ARR + expansion ARR - churned ARR - downgraded ARR) / beginning ARR.
Why it matters: NRR above 100% means the existing customer base is growing without acquiring new customers. For B2B SaaS companies, NRR is the single most important indicator of product-market fit and customer success effectiveness. Investors evaluate NRR as a primary signal of business health. A CRO who is focused exclusively on new business pipeline while NRR is below 100% is filling a leaky bucket.
What drives it: Product adoption depth, customer success engagement quality, expansion program effectiveness, and pricing and packaging design.
How to use it: Track monthly. When NRR trends below 100%, diagnose whether the driver is increased churn (CS and product problem), reduced expansion (marketing and CS program problem), or increased downgrades (pricing or value realization problem). Each has a different RevOps intervention.
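The NRR formula above is straightforward to compute once the four ARR components are available. The cohort numbers below are invented for illustration.

```python
def net_revenue_retention(beginning_arr, expansion_arr, churned_arr, downgraded_arr):
    """NRR = (beginning + expansion - churn - downgrades) / beginning."""
    return (beginning_arr + expansion_arr - churned_arr - downgraded_arr) / beginning_arr

# Hypothetical cohort: $10M beginning ARR, $1.5M expansion,
# $800k churned, $200k downgraded over the period.
nrr = net_revenue_retention(10_000_000, 1_500_000, 800_000, 200_000)
print(f"NRR: {nrr:.0%}")  # 105% -- the base grew without new logos
```

Tracking the three negative-pressure components (churn, downgrades, missing expansion) separately, as the text recommends, tells you which RevOps intervention applies when the headline number dips below 100%.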
8. Expansion pipeline contribution

What it measures: The value of upsell and cross-sell pipeline generated from the existing customer base, expressed as a percentage of total pipeline.
Why it matters: Expansion revenue typically costs one-half to one-third as much to generate as new logo revenue. B2B technology companies that treat the existing customer base as a demand generation source consistently achieve higher NRR and more predictable revenue growth than companies that focus exclusively on new logo acquisition. If expansion pipeline is not a tracked metric in the CRO's weekly review, it is almost certainly being underinvested.
What drives it: CS platform adoption signal data, marketing expansion programs targeting the existing customer base, product-led growth motions that surface expansion opportunities from usage data, and sales team time allocation between new logo and expansion.
How to use it: Track monthly and set a target expansion pipeline percentage. Most B2B SaaS companies should be generating 30 to 50% of total pipeline from the existing customer base by the time they reach $30M ARR. If the actual percentage is below that, diagnose whether the gap is a CS capacity issue, a marketing program gap, or a sales team time allocation issue.
9. Forecast accuracy

What it measures: The variance between the revenue forecast committed at the start of each quarter and the actual revenue closed by the end of that quarter.
Why it matters: Forecast accuracy is the operational score for the RevOps function as a whole. A CRO who is consistently within 5% of their committed forecast has a RevOps infrastructure that is producing reliable pipeline data, accurate stage progression signals, and disciplined deal qualification. A CRO who is regularly missing or exceeding their forecast by 20% or more has a RevOps infrastructure that is producing unreliable data or a qualification process that is not enforcing stage criteria.
What drives it: CRM data quality, deal qualification rigor at each stage, pipeline inspection process discipline, and the forecasting methodology applied to the pipeline data.
How to use it: Track quarterly and set a target accuracy range (typically plus or minus 5% of committed forecast). When accuracy degrades, diagnose at the deal level: which specific deals surprised in either direction and why. The most common causes of forecast misses are deals that were in "late stage" but had never engaged the economic buyer, and deals that were marked as lost after missing the close date but were actually still active.
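As a quick sketch, the quarterly variance check might look like the following. The quarters and dollar figures are hypothetical; the ±5% band is the target range named above.

```python
TARGET_BAND = 0.05  # plus or minus 5% of committed forecast

def forecast_variance(committed, actual):
    """Signed variance of actual revenue vs. the committed forecast."""
    return (actual - committed) / committed

# Illustrative quarters: (committed forecast, actual closed revenue).
quarters = {
    "Q1": (12_000_000, 11_700_000),
    "Q2": (13_000_000, 15_900_000),
}
for q, (committed, actual) in quarters.items():
    v = forecast_variance(committed, actual)
    status = "within target" if abs(v) <= TARGET_BAND else "investigate"
    print(f"{q}: {v:+.1%} ({status})")
# Q1 lands inside the band; Q2's large positive surprise still counts
# as a miss and triggers the deal-level diagnosis described above.
```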
10. Channel-level customer acquisition cost (CAC)

What it measures: The total marketing and sales investment required to acquire one new customer, calculated by channel.
Why it matters: Aggregate CAC is a finance metric. Channel-level CAC is a RevOps metric. It tells the revenue team which channels are producing customers efficiently and which are producing customers at a cost that cannot be justified by the average contract value of the customers they generate.
What drives it: Marketing spend by channel, sales time allocated to opportunities sourced from each channel, the conversion rate from MQL to closed-won by channel, and the average deal size by channel.
How to use it: Calculate quarterly. Compare channel-level CAC to channel-level average contract value. Channels where CAC is greater than 20% of first-year ACV are producing customers with negative unit economics. Channels where CAC is below 10% of first-year ACV are underinvested relative to their efficiency. Marketing budget allocation decisions should be driven by channel-level CAC analysis, not by channel-level lead volume or MQL volume.
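The CAC-to-ACV screen above can be sketched as a simple classifier. The channel names, CAC figures, and ACV figures are invented for illustration; the 10% and 20% thresholds are the ones stated in the text.

```python
def classify_channel(cac, first_year_acv):
    """Classify a channel by its CAC as a share of first-year ACV.

    Thresholds from the text: above 20% -> negative unit economics;
    below 10% -> underinvested relative to its efficiency.
    """
    ratio = cac / first_year_acv
    if ratio > 0.20:
        return "negative unit economics"
    if ratio < 0.10:
        return "underinvested"
    return "healthy"

# Hypothetical channels: (channel-level CAC, channel-level first-year ACV).
channels = {
    "paid_search": (14_000, 50_000),
    "events":      (4_000,  60_000),
    "partner":     (7_500,  55_000),
}
for name, (cac, acv) in channels.items():
    print(f"{name}: CAC/ACV = {cac / acv:.0%} -> {classify_channel(cac, acv)}")
```

Run quarterly, this is the output that should feed budget allocation: shift spend away from channels flagged as negative unit economics and test increased spend in underinvested ones.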
These 10 metrics should be available on a single dashboard that the CRO reviews weekly and the revenue leadership team reviews in the monthly revenue review. The dashboard should update automatically from CRM and MAP data without manual pulls.
The most common reason RevOps dashboards are not used is that they require manual assembly before each meeting. A dashboard that requires a data pull is not a dashboard. It is a report template. Build the infrastructure that produces these numbers on demand, then the conversation in the revenue review shifts from "did we get the data" to "what does the data tell us to do."
Which of these 10 metrics should a revenue operations function focus on first? Marketing-sourced pipeline percentage and MQL-to-SQL conversion rate. They are the two metrics that most directly measure whether the marketing-to-sales handoff is functioning and whether the demand generation investment is producing pipeline. Organizations that can produce those two numbers reliably and weekly have the minimum viable RevOps measurement infrastructure in place.
How do these metrics change at different stages of company growth? At early stage (below $10M ARR), lead response time and MQL-to-SQL conversion rate are the highest-leverage metrics because they are most directly connected to the sales velocity of a small sales team. At growth stage ($10M to $50M ARR), pipeline velocity and win rate by channel become the primary optimization levers. At scale ($50M and above), NRR and expansion pipeline contribution deserve equal attention to new logo metrics because the existing customer base is large enough to materially move the revenue number.
What is the minimum data infrastructure required to track all 10 of these metrics? A MAP connected to CRM with reliable attribution field mapping, a UTM taxonomy applied consistently across all marketing channels, deal stage criteria enforced in CRM, and a CS platform connected to CRM. That is the minimum. Organizations missing any of those four elements will find that one or more of these metrics is either unavailable or unreliable.
The Pedowitz Group has been building revenue marketing and revenue operations infrastructure for B2B technology companies since 2007. The RM6 diagnostic assesses your current RevOps measurement infrastructure and identifies which of these metrics your current stack can and cannot produce reliably. Talk to TPG.