The premise.
Most AI conversations begin in the wrong place. They begin with a model, a vendor, or a use case someone read about. The work that gets funded looks like the work that got funded somewhere else, which means companies pay to discover that what worked at a peer is a poor fit for their operating reality. The cost is not the model. The cost is six to twelve months of organizational attention spent on the wrong augmentation.
The Process Diagnostic inverts the order. It begins with the function as it operates today, maps where time and judgment compound, and then asks which of those compounding points can be augmented at acceptable risk. The model selection comes last. By the time the diagnostic is complete, the augmentation choice is almost mechanical: the work that survives the value-feasibility filter is the work that gets built first.
Phase one: process inventory.
The first phase is descriptive, not prescriptive. The objective is to map the function as it actually operates, distinct from how it is described in the org chart or the documentation. A typical inventory captures, for each process: the trigger that initiates it, the people who touch it, the decisions that get made along the way, the artifacts produced, the cycle time, and the volume per period. The unit of analysis is the process, not the role.
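As a minimal sketch, one inventory row might be encoded like this in Python; the field names and the invoice example are illustrative placeholders drawn from the list above, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessRecord:
    """One row of the inventory; the unit of analysis is the process, not the role."""
    name: str
    trigger: str                                         # what initiates the process
    people: list[str] = field(default_factory=list)      # everyone who touches it
    decisions: list[str] = field(default_factory=list)   # judgments made along the way
    artifacts: list[str] = field(default_factory=list)   # what the process produces
    cycle_time_hours: float = 0.0                        # elapsed time, trigger to artifact
    volume_per_month: int = 0                            # how often the process runs

# A hypothetical entry from a finance function:
invoice_exceptions = ProcessRecord(
    name="Invoice exception review",
    trigger="Invoice fails automated three-way match",
    people=["AP analyst", "Controller"],
    decisions=["Approve override", "Escalate to vendor"],
    artifacts=["Exception log entry", "Approved or rejected invoice"],
    cycle_time_hours=2.5,
    volume_per_month=140,
)
```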
The output of this phase is a process map with somewhere between fifteen and forty processes for a single function. The map is unglamorous to produce and unglamorous to read. It is also the artifact the rest of the diagnostic depends on. Skipping it produces the same recommendations every other firm in the market produces, because the recommendations default to the same generalized template.
Phase two: value sizing.
The second phase converts the process inventory into a value-weighted list. For each process, three numbers matter: how much time it consumes per period, how much of that time sits in the senior-judgment band rather than in routine execution, and how much error or rework is currently in the process. The product of those three numbers is the augmentable value of the process. High volume by itself is insufficient; high senior-judgment time by itself is insufficient; high error rates by themselves are insufficient. The processes worth augmenting first are the ones where all three signals are high simultaneously.
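The sizing arithmetic is simple enough to state directly. A sketch, assuming the three signals are expressed as hours per period and two fractions between zero and one; the diagnostic does not prescribe units, and the numbers below are illustrative:

```python
def augmentable_value(hours_per_month: float,
                      senior_judgment_share: float,
                      error_rework_rate: float) -> float:
    """Product of the three signals described above.

    hours_per_month       -- time the process consumes per period
    senior_judgment_share -- fraction of that time in the senior-judgment band (0-1)
    error_rework_rate     -- fraction of output that is error or rework (0-1)
    """
    return hours_per_month * senior_judgment_share * error_rework_rate

# All three signals must be high for the product to be high:
print(augmentable_value(400, 0.6, 0.15))  # 36.0 -- strong candidate
print(augmentable_value(400, 0.6, 0.01))  # 2.4  -- volume alone is not enough
```

Because the signals multiply, a weak signal anywhere pulls the whole product down, which is exactly the filter this phase is meant to apply.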
This phase is also where the most common organizational instinct gets corrected. Buyers tend to point at the most visible processes — the ones that consume the most calendar time at the senior level — and assume those are the right augmentation targets. They sometimes are. They are often not. The most visible processes are visible precisely because they involve senior judgment, but senior judgment is the part that is hardest to augment safely. The right augmentation targets are usually one layer down: the senior-adjacent work that consumes senior attention without requiring senior judgment.
Phase three: feasibility scoring.
The third phase tests each high-value process against three feasibility dimensions. The first is data: does the process have enough structured, accessible, and recent data for an augmentation to be reliable? The second is decision boundary: is the augmentation supporting a decision the human will still make, or is it being asked to make the decision autonomously? Augmentation is usually safe; autonomy usually is not, particularly in regulated or judgment-heavy domains. The third is governance: is there a named human owner who will sign off on the augmentation's outputs, manage exceptions, and own the model's performance over time?
Each process gets a feasibility score on each dimension. Processes that fail any dimension are removed from the near-term roadmap, regardless of value. The temptation to keep them in because their value is high is the same temptation that produces the failed augmentations the industry is now full of. Removal is not abandonment; processes removed for feasibility reasons are revisited in the next planning cycle when the underlying constraint has been resolved.
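One way to encode the fail-any rule, assuming a zero-to-five score per dimension and a pass threshold of three; the scale and the example processes are ours, not the diagnostic's:

```python
from dataclasses import dataclass

@dataclass
class FeasibilityScore:
    data: int               # structured, accessible, recent data
    decision_boundary: int  # does the human still make the decision?
    governance: int         # named owner for outputs, exceptions, performance

PASS_THRESHOLD = 3  # assumed cutoff; the diagnostic does not fix a scale

def survives(score: FeasibilityScore) -> bool:
    """A process must pass every dimension; failing any one removes it
    from the near-term roadmap, regardless of value."""
    return all(s >= PASS_THRESHOLD
               for s in (score.data, score.decision_boundary, score.governance))

candidates = {
    "Invoice exception review": FeasibilityScore(4, 4, 3),
    "Vendor contract drafting": FeasibilityScore(4, 1, 4),  # autonomy, not augmentation
}
near_term = [name for name, s in candidates.items() if survives(s)]
deferred  = [name for name, s in candidates.items() if not survives(s)]
```

Note that the contract-drafting example fails on a single dimension and drops out of the near-term list despite scoring well on the other two; that is the rule working as intended.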
Phase four: the roadmap.
The roadmap is the deliverable. It sequences the surviving processes into a thirty-, sixty-, and ninety-day plan, with named owners, named augmentation patterns, and named success metrics. The roadmap also explicitly names what is being deprioritized: the high-value, low-feasibility processes that will be revisited in six or twelve months when the data infrastructure or the governance is in place to support them.
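The shape of that deliverable can be sketched directly; the owner, pattern, and metric names below are placeholders:

```python
from dataclasses import dataclass

@dataclass
class RoadmapItem:
    process: str
    owner: str             # named human owner
    pattern: str           # named augmentation pattern
    success_metric: str    # named success metric
    wave_days: int         # 30, 60, or 90

roadmap = [
    RoadmapItem("Invoice exception review", "Controller",
                "draft-and-review triage", "exception cycle time", wave_days=30),
]

# High-value, low-feasibility processes are named, not dropped:
deprioritized = {
    "Vendor contract drafting":
        "decision boundary: the augmentation would be making the call autonomously",
}
```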
The deprioritized list is often more useful than the prioritized one. It gives the leadership team a coherent answer to the question of why a particular augmentation is not being pursued, without leaving the impression that the company is uninterested in AI. The answer is that the company is interested in augmentation that works, and a clear-eyed assessment is what separates the augmentations that work from the ones that do not.
Closing.
The Process Diagnostic typically takes two to four weeks for a single function. The output is a roadmap, not a model. The model selection follows in a separate phase, informed by what the diagnostic identified as the right augmentation pattern. The implementation follows that. The order matters: companies that begin with a model and work backward to a use case spend money on tooling that does not fit the work; companies that begin with the work and choose tooling last spend money on tooling that compounds.
The diagnostic is also the part of an AI Transformation engagement that is least visible to the rest of the organization. There is no demo, no dashboard, no chat interface. There is a roadmap. That is, in our experience, the part that determines whether the rest of the program works.