
ROI Defensibility Checker

A free ROI stress test. Pressure-test your ROI model against the questions a CFO will actually ask, surface weak assumptions before finance does, and get a defensibility score in three minutes.

Why a strong ROI number still gets rejected

Most ROI models are not rejected because the math is wrong. They are rejected because the model behind the number cannot be reconstructed by the person reviewing it. A 312% return looks unbeatable on a slide and dissolves the moment a CFO asks where the productivity figure came from, what happens if adoption is half of plan, or whether the headcount savings are already in next year's operating budget. The arithmetic was never the problem — the chain of evidence was.

Defensibility is the discipline of building an ROI model that survives that review. It means every input traces to a source, every benefit ties to an operational metric the buyer already reports on, every assumption has a sensitivity range, and the downside case is on the same page as the base case. It also means being honest about dependencies — the integrations, the change management, the data quality — that quietly determine whether the projected return ever lands.

The ROI Defensibility Checker is a structured way to find the gaps before they cost you the deal. It does not generate an ROI for you and it is not a calculator. It evaluates the model you already have against the patterns finance teams use to challenge vendor business cases — opaque benefit drivers, missing baselines, mismatched payback windows, double-counted savings, and a dozen other recurring failure modes — and gives you a defensibility score with the specific items to fix.

Run it before the meeting, not after. The cost of running it is three minutes. The cost of not running it is rework, delay, and a buyer who has lost confidence that the numbers were ever real.


How the defensibility score works

The diagnostic asks nine questions across four dimensions: assumption quality, scenario coverage, dependency disclosure, and benefit attribution. Each answer maps to a weighted contribution to the overall score. The weights reflect how often each dimension drives a finance rejection in practice — assumption quality and scenario coverage carry the most weight because they are the most common failure modes in vendor-built ROI models.

Inside each dimension the checker looks for specific signals. On assumption quality it asks whether benefit drivers cite a baseline metric, whether savings figures are anchored to a current cost line, and whether productivity claims correspond to a headcount or capacity decision the buyer is actually willing to make. On scenario coverage it asks for the existence of a downside case, the size of the spread between best and worst, and whether the sensitivity range was derived or assumed. On dependency disclosure it surfaces integration, data, and change-management prerequisites that are often left out of the deck. On benefit attribution it tests for double-counting between revenue uplift, margin expansion, and cost avoidance.

The output is not a pass-fail. It is a score, a per-dimension breakdown, and a list of the specific weaknesses the diagnostic detected. The list is ordered by impact — fix the top three items and the model will survive most reviews.
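The weighting described above can be sketched in a few lines. The four dimension names come from the text; the specific weights, the per-dimension scores, and the impact-ranking rule are illustrative assumptions, not the tool's actual values.

```python
# Illustrative sketch of a weighted defensibility score.
# The four dimensions come from the text; the weights and scores
# below are assumptions, not the tool's real values.

WEIGHTS = {
    "assumption_quality": 0.35,   # heaviest: most common failure mode
    "scenario_coverage": 0.30,
    "dependency_disclosure": 0.20,
    "benefit_attribution": 0.15,
}

def defensibility_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (0-100) into a weighted 0-100 total."""
    return round(sum(WEIGHTS[d] * s for d, s in dimension_scores.items()), 1)

def ranked_weaknesses(findings: list[tuple[str, str]]) -> list[str]:
    """Order detected issues by the weight of the dimension that flagged them."""
    return [msg for dim, msg in sorted(findings, key=lambda f: -WEIGHTS[f[0]])]

scores = {
    "assumption_quality": 48,
    "scenario_coverage": 55,
    "dependency_disclosure": 70,
    "benefit_attribution": 75,
}
print(defensibility_score(scores))
```

The design choice worth noting is the ranking: issues are sorted by the weight of the dimension that flagged them, which is why fixing the top items moves the score the most.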

When to run the check, and who it is for

The right time to run the defensibility check is in the window between drafting the ROI model and presenting it externally — typically two or three days before a finance review or executive readout. By that point the inputs are stable enough to evaluate, but there is still time to fix what the diagnostic surfaces. Running it after the meeting is informative but rarely useful.

The tool is designed for the people who own the model. That includes value engineers and value consultants building deal-specific business cases, account executives writing ROI summaries for procurement, customer success managers preparing renewal or expansion cases, finance business partners scrubbing a vendor proposal before approval, and product marketing teams maintaining the ROI templates the field uses. It is equally useful for buyers — anyone reviewing an inbound vendor business case can use it to identify which sections to challenge.

It is less useful for very early-stage discovery work where a directional ROI is appropriate. The checker assumes you have a model with explicit assumptions; it cannot evaluate a one-line back-of-envelope estimate.

What the output looks like

After the nine questions, the diagnostic returns a single defensibility score from 0 to 100, a breakdown across the four dimensions, and a ranked list of weaknesses. A typical output looks like this.

Defensibility Score: 62 / 100

  • Assumption quality: 48 / 100
  • Scenario coverage: 55 / 100
  • Dependency disclosure: 70 / 100
  • Benefit attribution: 75 / 100

Top issues to fix

  • Productivity savings are expressed as "25% efficiency gain" but no baseline hours-per-task metric is cited.
  • No downside case. Sensitivity range is implied ("conservative estimate") rather than modeled.
  • Payback period (8 months) is shorter than the stated implementation window (10 months) — the math implies benefits before go-live.

The annotated weaknesses are the most useful part of the output. Each item names the specific assumption or omission that triggered it, which makes it easy to take back to the spreadsheet and fix. The score itself is secondary — the goal is not to maximize the number, it is to fix the items that would have lost the deal.
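The third issue in the list above, benefits implied before go-live, is a purely mechanical consistency check, and it is the kind of test you can run on your own model before the diagnostic does. A minimal sketch, with hypothetical field names:

```python
# Sanity checks you can run on your own model before a review.
# Field names are hypothetical, for illustration only.

def payback_consistent(payback_months: float, implementation_months: float) -> bool:
    """Payback cannot complete before the system is even live."""
    return payback_months >= implementation_months

def has_downside_case(scenarios: dict[str, float]) -> bool:
    """A defensible model states a modeled downside, not just a base case."""
    return "downside" in scenarios and scenarios["downside"] < scenarios.get("base", float("inf"))

# The example from the output above: 8-month payback, 10-month rollout.
print(payback_consistent(8, 10))  # prints False: the inconsistency the checker flags
```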

What to do with the result

Treat the diagnostic as a punch list, not a grade. The two highest-leverage moves after running it are usually the same: replace any percentage-only benefit driver with an absolute number tied to a baseline metric, and add an explicit downside case to every benefit category. Those two changes alone move most models from rejected to approved.
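The first of those fixes, anchoring a percentage claim to a baseline metric, is simple arithmetic. A sketch with made-up numbers:

```python
# Converting a percentage-only benefit driver into an absolute figure
# anchored to a baseline metric. All numbers are made up for illustration.

baseline_hours_per_task = 2.5      # from the buyer's current reporting
tasks_per_year = 40_000
efficiency_gain = 0.25             # the "25% efficiency gain" claim
loaded_hourly_cost = 65.0          # fully loaded cost per labor hour

hours_saved = baseline_hours_per_task * tasks_per_year * efficiency_gain
annual_savings = hours_saved * loaded_hourly_cost
print(f"{hours_saved:,.0f} hours saved, ${annual_savings:,.0f} per year")
```

The point is not the arithmetic; it is that every term now names a metric the buyer can verify.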

The remaining issues tend to be longer-tail and deal-specific. If the diagnostic flagged dependency disclosure, add an explicit prerequisites section to the business case. If it flagged double-counting between revenue and margin, separate the two streams in the model and reconcile to a single bottom-line figure. If it flagged a payback inconsistency, restate the timeline so the benefits do not start before go-live.

Once the fixes are in, run the diagnostic a second time. A model that scores above 80 on the second pass is in good shape for finance review. A model that still scores below 70 needs more than copy-edits — it needs a structural rebuild, and a tool like the Business Case Readiness Diagnostic is a better starting point than another revision.

Frequently asked questions

Common questions about ROI defensibility, how the diagnostic works, and how to use the result.

What is a defensible ROI model?

A defensible ROI model is one whose assumptions, data sources, and scenarios survive scrutiny from finance and procurement without rework. It cites where each input came from, ties benefits to operational metrics the buyer already tracks, includes a downside case as well as a base case, and isolates the dependencies the projected return hinges on. If a model cannot be reconstructed by the CFO from the inputs alone, it is not defensible.

How is this different from an ROI calculator?

A standard ROI calculator computes a number. The ROI Defensibility Checker evaluates the model behind the number. Two models can produce the same headline ROI and have very different odds of approval — the difference is in assumption quality, sensitivity coverage, and the chain of evidence behind each input. This tool scores those qualitative attributes, not the arithmetic.

What kinds of assumption weaknesses does the checker flag?

Common failure modes include: benefits expressed only as a percentage with no anchor metric; productivity gains that imply a headcount reduction the buyer will not actually take; cost-avoidance figures with no baseline; payback periods shorter than the implementation window; risk-free discount rates on outcomes that are clearly risky; and double-counting between revenue uplift and margin expansion. The diagnostic surfaces each of these patterns with the specific question that exposed it.

How long does the diagnostic take?

About three minutes for nine questions. The questions are short, the inputs are multiple-choice, and you do not need to upload your model. The output is generated immediately on the same page — there is no email gate and no waiting period.

Who should run an ROI defensibility check?

Anyone whose ROI model will be reviewed by someone with budget authority: value engineers preparing a deal, AEs writing a business case for procurement, customer success building an expansion case, or finance partners scrubbing a vendor proposal. It is most useful before the model is presented externally — running it after rejection is too late.

What does the defensibility score mean?

The score reflects how many of the standard CFO objections the model already answers. A high score means the model can survive a typical finance review with limited rework. A low score does not mean the ROI is wrong — it means the case behind it is incomplete and a finance reviewer will have unanswered questions, which materially reduces the odds of a clean approval.

Will this replace my ROI model or my business case?

No. The checker is diagnostic, not generative. It tells you where your existing model is weak and what to fix. The fixes themselves — better baselines, sensitivity scenarios, cleaner attribution — still require you to gather the underlying inputs.

Why do CFOs reject ROI models with strong-looking numbers?

The most common reasons are not arithmetic errors. They are: assumptions the CFO cannot trace to a source, benefits that are not measurable in the buyer's reporting system, missing downside cases, dependencies that were not disclosed, and projected savings that conflict with the headcount or budget plan already in place. Strong numbers built on opaque inputs lose to weaker numbers backed by clear evidence.

Does the tool save my data?

No personal information is collected to use the tool. The diagnostic runs entirely client-side — your answers are not stored on our servers and there is no form to submit before seeing your result.