Method / Engagement
Traditional services pricing rewards time. Modern services pricing, increasingly, rewards outcomes. Around 30% of enterprise SaaS and services pricing now incorporates an outcome-based element, and forecasts put the figure at 40% by 2026. For engagements where the outcome is measurable and both sides benefit from aligned economics, we structure pricing as fixed-plus-upside, shared savings, or a pure performance fee. The pricing mechanism is matched to the problem, not to the firm's preferred model.
Most enterprise buyers have been on both sides of a bad pricing structure: T&M engagements where the vendor's incentive was to stretch the hours, or fixed-fee engagements where the vendor's incentive was to pull back on scope. Outcome pricing inverts both incentives, when the outcome is measurable and the measurement is honest. We use it where it works: claims automation (cycle-time measurable), fraud detection (savings measurable), customer service AI (resolution rate measurable), productivity gains (velocity measurable). Where the outcome isn't cleanly measurable, or where the measurement would distort the engineering in unhealthy ways, we don't use it, and we say so.
Signature Visual
A five-column comparison (fixed fee, time & materials, shared savings, performance fee, hybrid fixed + upside) with small cost/outcome charts showing the economic shape of each. Annotations across the bottom make risk to vendor, risk to client, alignment level, and when we use the model explicit. CFO-memo aesthetic. Coming soon.
Four tests we apply before structuring an engagement on outcome pricing.
Measurability. The outcome must be measurable by a method that both sides trust. Cycle-time reduction is usually measurable. “Customer satisfaction improvement” often isn't, or is measurable only through a proxy that can be gamed. If we can't define a clean metric that's honest under adversarial conditions, we don't use outcome pricing.
Agreed measurement. Measurement needs an agreed baseline, an agreed method, agreed source data, and an agreed dispute-resolution path. Outcome pricing without agreed measurement becomes the most contentious part of the engagement. We document measurement before we start.
Balanced risk. A performance-fee model that gives the vendor 100% upside and 100% downside isn't aligned. It's a gambling product. Good outcome structures have floors (the vendor's base fee covers delivery costs) and caps (the vendor's upside has a reasonable ceiling). The math has to make sense for both sides at the worst and best outcomes.
Healthy engineering incentives. Sometimes the most aligned pricing model would push the vendor to optimize for the wrong engineering decision. Example: a pricing model tied to cycle-time could incentivize skipping QA. If the structure pushes the team toward unhealthy engineering choices, we don't use it — a slightly less-aligned structure with healthier engineering wins.
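The floor-and-cap arithmetic behind the balanced-risk test can be sketched as a minimal calculation. All figures here are hypothetical, not Sprout rate cards; the point is checking the structure at both ends of the outcome range.

```python
# Illustrative sketch of a floor-and-cap outcome fee, checked at the
# worst and best plausible outcomes. All figures are hypothetical.

def outcome_fee(verified_savings, floor_fee, share, cap_fee):
    """Vendor fee = floor plus a share of verified savings, bounded by a cap."""
    upside = share * max(verified_savings, 0.0)
    return min(floor_fee + upside, cap_fee)

DELIVERY_COST = 100_000                         # vendor's cost to deliver (hypothetical)
FLOOR, SHARE, CAP = 120_000, 0.30, 400_000      # hypothetical terms

worst = outcome_fee(0, FLOOR, SHARE, CAP)           # outcome misses entirely
best = outcome_fee(2_000_000, FLOOR, SHARE, CAP)    # outcome lands big

# The structure passes the test only if the floor covers delivery cost
# (no gambling-product downside) and the cap bounds the client's exposure.
assert worst >= DELIVERY_COST
assert best <= CAP
print(f"worst={worst:,.0f} best={best:,.0f}")  # worst=120,000 best=400,000
```

The same check generalizes: run the fee function at the pessimistic and optimistic outcome, and confirm neither party is ruined at either end.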
Four specific structures we use, matched to engagement type.
Shared savings. Client pays a floor fee that covers Sprout delivery; upside is 20–40% of verified savings over an agreed baseline. Common for: claims automation, invoice automation, fraud detection — engagements where savings are cleanly measurable.
Performance fee. A base fee plus a performance fee triggered at accuracy, resolution, or deployment thresholds. Common for: AI model deployments with clearly defined quality gates.
Hybrid fixed + upside. A reduced fixed fee covers delivery; an upside fee is paid on outcome achievement. The most widely used outcome-pricing structure in practice — balances vendor risk with aligned incentive.
Revenue share. For co-build and venture engagements where Sprout takes a share of revenue alongside or instead of equity. Different legal treatment; different risk structure. See Equity Partnerships for the equity-specific structures.
Where the model is landing, both in Sprout engagements and the broader market.
30% of enterprise SaaS and services pricing now includes an outcome-based component, up from single digits five years ago, with forecasts reaching 40% by 2026. The shift is driven by buyers seeking aligned incentives and vendors seeing competitive differentiation from it. For services firms, the shift is existential — fixed-fee and T&M alone are no longer sufficient.
AI implementation engagements (automation, fraud detection, document intelligence, customer service) produce outcomes that are cleanly measurable — cycle-time reduction, savings, resolution rate, accuracy. This makes AI delivery a natural category for outcome-based pricing, and the category where growth in outcome-pricing adoption is fastest.
Enterprise procurement teams — particularly in OJK-supervised financial services — now audit outcome claims, with measurement discipline, baseline agreement, and dispute resolution included in vendor onboarding paperwork. Outcome pricing without documented measurement agreement is increasingly flagged in vendor reviews.
The economic reasoning behind the hybrid model — why it's adopted more than pure shared-savings or pure performance fees — and how to structure the base + upside split fairly.
Outcome pricing is only as good as its measurement. Baseline agreement, measurement method, source data authority, dispute resolution — the parts that make or break an engagement.
The pricing structures that push engineering teams toward unhealthy decisions vs. the ones that align incentive with quality. How to tell the difference at engagement design time.
Tell us the engagement — the outcome you're targeting (cycle time, savings, accuracy, resolution rate, revenue). We'll assess whether outcome pricing is the right fit, propose a structure (floor + upside / performance fee / hybrid), and agree measurement, baseline, caps, and dispute resolution in writing. If the outcome isn't cleanly measurable, we'll tell you and propose a different pricing structure. Transparent pricing is the only kind worth defending.
Start a project