Opinion

Enable before Build: why most teams aren't ready for agent systems yet

A common conversation: a growth-stage company has decided this is the year they get serious about AI. Leadership has read the right essays, attended the right conferences, watched the right demos. They have a budget and a mandate. They want to commission a multi-agent system that owns an operational function within the quarter.

The honest answer, in roughly two-thirds of these conversations, is: not yet. You should spend the next two to six months getting your operators fluent with the tools that already exist before you commission anything bespoke. You will save money, build internal capability, and (crucially) be in a position to commission the right thing rather than the obvious thing when you do.

This is unwelcome advice. It sounds like consultants finding reasons not to take a project. It sounds like the conservative path when the moment seems to call for ambition. It sounds, depending on your priors, like either healthy caution or willful underselling. We’ve made the case enough times now to know how it lands.

It is, nonetheless, the right answer.

The hidden prerequisite

Every successful agent system we’ve shipped has shared a precondition that nobody mentions in the public discourse: the operators who will work alongside it were already fluent with general-purpose AI tools before the bespoke system arrived. They knew what Claude or GPT could and couldn’t do. They had developed instincts for when to trust an LLM’s output and when to verify it. They had built their own personal workflows around AI and had opinions about which patterns were saving them time and which were costing time disguised as saving it.

When the agent system landed in their hands, they had the cognitive scaffolding to evaluate it. They could spot a bad output for what it was. They could tune prompts when the system needed adjustment. They could explain to colleagues, accurately, what the system did and didn’t do. They were, in a meaningful sense, ready to be partners to the system rather than recipients of it.

When the agent system lands instead in the hands of a team that has not been through this fluency-building, the failure pattern is predictable. The operators don’t know what to compare the system to. They alternately over-trust it (because it sounds confident) and under-trust it (because the first noticeable error feels disqualifying). They can’t tune it because they don’t have the mental model. They escalate everything to the engineering team, which kills the operational economics. Within six months the system is either being run by engineers (defeating the purpose) or quietly retired (defeating the investment).

The fluency precondition is not optional. The only question is whether you build it before the agent system arrives or after, and “after” is the more expensive path by a wide margin.

What fluency actually means

Operator fluency is not “knows how to type a prompt.” It is a specific cluster of intuitions developed over months of using AI tools on real work:

  • Understanding the difference between tasks where the model is reliable and tasks where it confabulates, with calibration that updates as you encounter new tasks.
  • Knowing when to trust a single output versus when to ask for two and compare.
  • Recognising the kinds of mistakes the model makes (rather than the kinds you’d make), so you know what to verify.
  • Having a feel for context window economics: when adding more context helps and when it hurts.
  • Building a small library of prompts and patterns that work for your specific function.
  • Knowing the difference between “the model can’t do this” and “you haven’t found the framing it can do this in yet.”

None of this is teachable in a workshop. It develops through use, with feedback, on real tasks that matter to the operator. The right unit of fluency-building is not a course; it is a set of guided pilots on the operator’s actual work, with someone alongside who has done it before and can shorten the loop on the wrong assumptions.

The cost asymmetry

Building operator fluency before commissioning an agent system costs something. Two to six weeks of the team’s attention, a modest engagement fee, some workflow disruption while pilots run. Real costs, not pretend ones.

Skipping it costs more. The agent system you commission is probably the wrong one (because nobody on the team has the experience to know which workflow is the right first target). The integration is rougher (because operators haven’t internalised what they actually want from a system). The handover lands badly (because the operators meet the system as foreign equipment rather than familiar territory). The retainer that follows is more expensive (because the system needs more hand-holding from the build team than it should). The next engagement is harder to win internal sponsorship for (because the first one underperformed in ways nobody can quite articulate).

The cost asymmetry is something like 1:5 in our rough accounting. Spend the small amount up front, save the large amount downstream. The pattern is familiar from consulting of any kind; it is just unusually pronounced here because the technology is new enough that most teams still carry a large fluency gap.

The signals that a team is ready to skip Enable

Some teams are ready. The signals are specific and worth naming, because if you have these signals you should not waste time on enablement work you don’t need.

You’re probably ready to start with Diagnose or Build directly if:

  • A meaningful fraction of your team (say, a third or more) is already using AI tools daily on substantive work, not just for code completion or copy editing.
  • At least one operator can articulate, with examples, where AI is and isn’t reliable in your specific function.
  • Someone on the team has shipped at least one small AI-augmented workflow that other people use and rely on.
  • Leadership has a working theory of which functions the agent system would replace versus augment, grounded in observation rather than aspiration.
  • Your data is already in a state where structured access is possible without a multi-month cleanup project.

If three or more of those are true, you can probably commission a Diagnosis directly. If fewer, the right first step is Enable, even if you have the budget for more.
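The "three or more" rule above is simple enough to sketch as a self-assessment. This is a hypothetical illustration, not a tool we ship: the signal names paraphrase the bullets, and the threshold comes straight from the text.

```python
# Hypothetical sketch of the readiness checklist above.
# Signal names paraphrase the five bullets; the 3-of-5 threshold is from the text.

READINESS_SIGNALS = [
    "third_or_more_use_ai_daily_on_substantive_work",
    "operator_can_articulate_reliability_boundaries",
    "someone_shipped_an_ai_workflow_others_rely_on",
    "leadership_has_grounded_replace_vs_augment_theory",
    "data_allows_structured_access_without_major_cleanup",
]

def recommend_first_step(signals_true: set[str]) -> str:
    """Return 'Diagnose' if three or more signals hold, else 'Enable'."""
    score = sum(1 for signal in READINESS_SIGNALS if signal in signals_true)
    return "Diagnose" if score >= 3 else "Enable"

# A team with only two of the five signals is pointed at Enable first.
print(recommend_first_step({
    "third_or_more_use_ai_daily_on_substantive_work",
    "someone_shipped_an_ai_workflow_others_rely_on",
}))  # prints Enable
```

The point of the sketch is the shape of the decision, not the arithmetic: readiness is a count of observed behaviours, not a feeling, and the default on a low count is Enable even when budget exists for more.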

Why this is unpopular

The argument is unpopular for two reasons that have nothing to do with whether it’s correct.

First, it sounds like underselling. Consultancies are supposed to want to sell you the largest possible engagement. Telling clients to start smaller looks like either bad business or a pitch for bigger work later. Both interpretations miss the point. We sell agent systems. We want to sell them to teams that will get value from them. Teams that skip the enablement step do not get value from agent systems. The path through Enable is not a smaller sale; it is the path to a larger sale that actually works.

Second, it sounds like the conservative answer. Agent systems are exciting. Operator training in AI tools is not. Telling leadership that the right first move is the unexciting one feels like dampening enthusiasm at exactly the moment the team has it. We’ve learned to make the case anyway, because the cost of building enthusiasm into a system that fails is much higher than the cost of channelling enthusiasm into the right preparation. The teams that get it tend to thank us six months in. The teams that don’t, and commission an oversized first build anyway, tend to come back twelve months later having learned the same lesson the expensive way.

What we actually do in an Enable engagement

For the curious: an Enable engagement is two to six weeks. We start with an audit of where your team is on AI fluency today, function by function. We pick the right tools for your specific stack and workflows, set them up, train operators on real work (not synthetic exercises), and run one guided pilot on a high-value workflow so the team has a shared concrete experience to reason from. We leave you with a roadmap for what comes next, including (often) a recommendation that you’re now ready for a Diagnosis, and what that Diagnosis should focus on.

The output is not a polished deck. It is a team that has done the work, knows what AI can do for their specific function, and is positioned to make good decisions about the next step. That’s the whole product.

It’s the least glamorous engagement we offer. It is also the one whose inappropriate omission costs the most across the rest of the engagement chain. Build the fluency first. The agent systems will be there when you’re ready, and they’ll work better when you are.