Value Maturity Lens

A free value engineering maturity assessment. Answer four questions about how your value work happens today and see where you sit on the spectrum from ad-hoc to productized — and whether software helps or adds overhead at your current stage.

Why value maturity decides whether software helps or hurts

Most leaders evaluating value engineering platforms ask the wrong first question. They ask whether the platform is good. The better question is whether their team is at the maturity stage where a platform compounds the work or formalizes chaos. Software is leverage on top of an operating model — it amplifies the model that already exists. If the model is ad-hoc, the platform wraps tooling around an ad-hoc process and adds overhead. If the model is repeatable but inconsistent, the platform exposes the inconsistency without resolving it. The teams that get the most out of value infrastructure are the ones that arrive with the operational discipline to feed it.

The Value Maturity Lens places your team on a four-stage spectrum — ad-hoc, repeatable, systemized, productized — across the four dimensions that most often determine whether value work scales: repeatability of the underlying motion, standardization of outputs, independence from specific individuals, and the frequency at which the team delivers. The shape of the result matters as much as the headline stage. Uneven dimensions usually surface the actual constraint: a team with strong repeatability but weak standardization has templates that no one enforces; a team with strong scale but weak independence has burned-out experts holding the system together.

The Lens is deliberately direct about what each stage means and what to do next. If the right answer is to keep working manually for another quarter, that is what the result says. If the right answer is to invest in platform infrastructure because the cost of maintaining the current system has overtaken its benefit, that is what the result says. The output is a directional read, not a certification — designed to start a sharper internal conversation rather than to score the team.

Run it before you evaluate vendors, before you pitch a platform business case internally, or before you scope the next iteration of your value program. Two minutes of honesty about where the work actually sits today saves quarters of misdirected investment.


The four stages of value maturity

Every value team sits somewhere on this spectrum. There is no wrong answer — only a realistic read of where you are today and what the next stage actually requires.

Stage 1: Ad-hoc value work

Value software creates structure before you know what's worth structuring. The constraint is lack of repeatability, not tooling. Most teams at this stage benefit more from running the work manually until patterns emerge.

Stage 2: Repeatable but inconsistent

Your value work happens, but it depends on people remembering how. Documents exist. Templates exist. But every delivery drifts slightly from the last, and no one notices until it compounds.

Stage 3: Systemized but constrained

Process exists. The problem shifts to maintaining it: QA takes longer, governance gets heavier, and reuse requires manual effort. You've solved repeatability but created overhead.

Stage 4: Productized value work

Value delivery is infrastructure, not a service. The constraint is extending it — more people, more clients, more contexts — without degrading quality or losing control.

How the Lens works

The assessment evaluates four dimensions that consistently differentiate immature value functions from mature ones. Repeatability asks whether the underlying motion can be re-run without re-deriving the logic each time. Standardization asks whether outputs look the same across deals, customers, and consultants. Independence asks how reliant delivery is on specific individuals — the failure mode is a team where two or three experts are the system. Scale asks how often the work happens, because cadence exposes whether the operating model holds up under volume.

Each dimension is scored on a one-to-four scale and the four scores are averaged to place you on the overall stage. The radar chart in the result is the more useful artifact: stages summarize, but the radar shows the specific imbalance. A flat-but-low shape points to early-stage work where the priority is documenting any pattern at all. A spikier shape — strong on two dimensions, weak on two — points to a specific bottleneck that is worth addressing before adding tooling.
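The scoring described above can be sketched in a few lines. The dimension names and the averaging come from the Lens itself; the exact stage boundaries are assumptions for illustration (the tool does not publish them), chosen so that the 3-2-2-3 example profile shown later lands at stage two.

```python
# Hypothetical sketch of the Lens scoring model: four dimensions,
# each scored 1-4, averaged, then mapped to an overall stage.
STAGES = ["Ad-hoc", "Repeatable", "Systemized", "Productized"]

def score(repeatability, standardization, independence, scale):
    dims = {
        "Repeatability": repeatability,
        "Standardization": standardization,
        "Independence": independence,
        "Scale": scale,
    }
    avg = sum(dims.values()) / 4
    # Illustrative stage boundaries (assumed, not published by the Lens).
    if avg < 1.75:
        stage = 1
    elif avg < 2.75:
        stage = 2
    elif avg < 3.75:
        stage = 3
    else:
        stage = 4
    return stage, STAGES[stage - 1], dims

# The example result profile: 3-2-2-3 averages 2.5 and lands at stage two.
stage, label, dims = score(3, 2, 2, 3)
```

The per-dimension `dims` breakdown is what the radar chart renders; the averaged `stage` is only the headline.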

The dimensions are deliberately narrow. They do not cover team size, tooling stack, or commercial model — those are downstream of operational maturity, not substitutes for it. The Lens is a read on the work itself, not the wrapper around it.

When to run the Lens, and who it is for

The Lens is most useful at three moments: before evaluating value engineering platforms, when a team that has been operating manually starts asking whether tooling would help; before pitching value infrastructure internally, when a leader needs to articulate why the investment is timely rather than premature; and as a periodic calibration — running it once a quarter across a leadership team typically surfaces disagreement about where the function actually sits, which is itself a useful signal about how aligned the team is on the next set of priorities.

It is built for the people who own the value function and its operating model. That includes heads of value engineering and value consulting, customer success leaders building a value program, RevOps owners scoping value tooling, sales engineering directors building shared business case practice, and external consultants advising clients on whether to invest in value infrastructure. It is also useful for executives evaluating a vendor pitch — a vendor whose pitch assumes you are at stage four when you are at stage two will sell you a platform that does not pay back.

It is less useful when a team is still defining what value work means in its context. The Lens assumes value engagements happen in some form today and asks how they happen. If the function does not yet exist, the right starting point is scoping what it should produce — come back to the Lens once there is a motion to assess.

What the result looks like

The result has three components: the headline maturity stage, a per-dimension radar chart, and a stage-specific framing of what the next move should be. A typical result for a team at stage two looks like this.

Current stage: Repeatable but inconsistent
Repeatability: 3 / 4
Standardization: 2 / 4
Independence: 2 / 4
Scale: 3 / 4

What it means

Your value work happens, but it depends on people remembering how. Templates exist. Patterns exist. But every delivery drifts slightly from the last, and the drift is invisible until it compounds into a quality issue. The next move is locking in consistency before scaling — platform tooling at this stage typically formalizes the inconsistency rather than resolving it.

The radar matters more than the stage label. A team scoring 3-2-2-3 has a different problem than a team scoring 2-3-3-2 even though both land at stage two on the average — the first needs standardization work, the second needs scale before standardization pays off. Read the shape, not just the headline.
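The shape-versus-headline point can be made concrete. This sketch uses the Lens's dimension names, but the helper itself is hypothetical: it simply surfaces the lowest-scoring dimensions, which is the bottleneck the radar makes visible.

```python
# Two teams with the same average score but different shapes.
# Hypothetical helper: returns the lowest-scoring dimensions.
DIMS = ["Repeatability", "Standardization", "Independence", "Scale"]

def bottlenecks(scores):
    low = min(scores)
    return [dim for dim, s in zip(DIMS, scores) if s == low]

team_a = [3, 2, 2, 3]  # standardization and independence lag
team_b = [2, 3, 3, 2]  # repeatability and scale lag

# Both average 2.5, so both land at the same headline stage,
# but the weak dimensions (and the next move) differ.
print(bottlenecks(team_a))  # ['Standardization', 'Independence']
print(bottlenecks(team_b))  # ['Repeatability', 'Scale']
```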

What to do with the result

At stage one (ad-hoc), resist the instinct to buy tooling. The constraint is pattern discovery, not enforcement — running the work manually for another quarter and writing down what worked produces more leverage than any platform can. At stage two (repeatable but inconsistent), the highest-impact move is enforcing the templates that already exist: pick one workflow, define what "done" looks like, and require every delivery to meet it before the team adds new motions.

At stage three (systemized but constrained), the question changes. The system works, but maintaining it has become its own job. This is the inflection where platform infrastructure tends to pay back: the manual overhead of governance, QA, and reuse has caught up to the cost of formal tooling. Audit where the time is going — if more than a third of the value team's effort is going into maintaining the system rather than producing outputs, the platform conversation is timely.

At stage four (productized), the constraint is extension rather than discipline. The work runs cleanly internally; the question is how to extend it across more people, more clients, and more contexts without losing control. The next move is rarely more process — it is infrastructure that lets the existing process scale without manual intervention at every step.

Frequently asked questions

Common questions about value maturity, how the Lens works, and how to use the result.

What is value maturity?

Value maturity is the degree to which a team's value engineering and value delivery work is repeatable, standardized, independent of specific individuals, and operating at scale. A mature value function produces consistent outputs across deals, customers, and consultants without re-deriving the underlying logic each time. An immature one runs every engagement as a one-off, with quality gated by the experience of whichever person happens to lead the work. The Value Maturity Lens places your team on a four-stage spectrum — ad-hoc, repeatable, systemized, productized — using four short questions about how value work happens in practice today.

What are the four stages in the value maturity model?

Ad-hoc value work: every engagement is bespoke, knowledge lives with individuals, and outputs vary widely. Repeatable but inconsistent: the work happens reliably, templates exist, but each delivery drifts slightly from the last and the drift is invisible until it compounds. Systemized but constrained: process is in place, governance and QA enforce consistency, but maintaining the system is now its own overhead. Productized value work: value delivery is infrastructure rather than a service — repeatable, governed, and extensible across people, clients, and contexts without quality degradation.

Why does value maturity matter before adopting value engineering software?

Software amplifies the operating model it is dropped into. If the underlying work is ad-hoc, a platform formalizes chaos and adds tooling overhead on top. Most teams at stage one or two get more leverage from documenting patterns and stabilizing the manual workflow first. Platform infrastructure pays off once repeatability exists and the constraint shifts from "can we do this consistently" to "can we do this at scale without burning out the team that maintains it." The Lens is designed to surface that distinction honestly rather than push every visitor toward the same answer.

How is the assessment scored?

Four multiple-choice questions, one per dimension: repeatability, standardization, independence from key individuals, and scale of delivery. Each answer maps to a 1–4 score on its dimension. The Lens averages the four dimensions to place you on the overall maturity stage and renders the per-dimension breakdown as a radar chart. The radar matters as much as the headline — uneven dimensions point to the specific bottleneck (for example, high repeatability with low standardization usually means templates exist but no one is enforcing them).

How long does the assessment take?

Around two minutes. Four questions, multiple-choice, no documents to upload, no email gate. The result and the per-dimension breakdown are generated immediately on the same page.

Who is the Value Maturity Lens for?

Heads of value engineering, value consulting leaders, customer success leaders building a value program, RevOps owners evaluating value tooling, and consultants advising clients on whether to invest in value infrastructure. It is also useful internally as a calibration tool — running it across a leadership team typically surfaces disagreement about where the function actually sits today, which is itself a useful signal.

How is this different from a generic capability maturity model?

Generic CMMI-style models cover process maturity for any function. The Value Maturity Lens is specific to value engineering and value delivery work — the dimensions (repeatability, standardization, independence, scale) reflect the actual failure modes of value teams, not software development or quality management. The output is not a certification grade; it is a directional read on whether your current constraint is operational discipline or infrastructure leverage.

What should I do with the result?

Treat it as a starting point for a conversation, not a verdict. If you land at ad-hoc or repeatable, the highest-leverage next step is documenting the patterns that work and standardizing outputs before adding tooling. If you land at systemized, the question is whether the overhead of maintaining the system has started to outweigh its benefits — that is the inflection point where platform infrastructure typically pays back. If you land at productized, the constraint is usually extension: more people, more clients, more contexts, without losing control. The result page surfaces the next-step framing for whichever stage you land on.

Does the Lens save my answers?

No personal information is required to use the Lens, and the assessment runs entirely in your browser. There is no form to submit before seeing your result, and your answers are not stored on our servers.

Wherever you land, we're available to talk through what comes next.

Book a short discovery call