How Broad AI Adoption Fails
I recently attended an AI strategy webinar for executives to hear how senior leaders are framing the problem of AI adoption inside their organizations. There was healthy enthusiasm, of course, which was unsurprising. What stood out was how quickly the conversation defaulted to breadth. AI was discussed as something to deploy widely and quickly, as if success were primarily a function of coverage rather than judgment. The dominant concern seemed to be that the real risk lay in moving too slowly rather than in choosing poorly.
At one point, an attendee who ran legal for a family office asked how AI should be helping her in general. The presenter's answer was, more or less, that there are lots of legal AI tools out there and she should go explore them. That was it. No discussion of tradeoffs or unique data handling requirements, no process specifics, no consideration of what mattered about how her function actually worked. The entire conversation operated at that altitude.
That altitude is understandable; the pressures behind the posture are familiar. Boards are asking questions, competitors are making announcements, and nobody wants to be the executive who looks hesitant while everyone else is signaling momentum. What was largely missing, though, was a disciplined discussion of where AI should actually be applied, why those areas made sense, and what tradeoffs were being accepted in the process. There was plenty of motion, but very little clarity about direction. That imbalance isn't surprising given the pressures senior leaders face, but ignoring it materially increases the risk of failure.
MIT research suggests that roughly 95% of enterprise AI initiatives fail to deliver meaningful value. While the size of that number may be what grabs attention, the important takeaway is how predictable the reasons for failure tend to be. These initiatives rarely fail because the models are incapable or the technology is immature (it is, in the grand scheme of things, but that's not the issue). They fail because AI is applied too broadly, without sufficient discrimination, and before organizations understand where it meaningfully improves system behavior. The failure isn't technological; it's an allocation error.

This pattern persists because AI behaves differently from earlier waves of enterprise technology in ways that materially affect how it should be deployed.
The first constraint is speed. AI capabilities are improving far faster than organizations can change how they operate. What looked like a meaningful advantage a few months ago is now often part of the default feature set across multiple tools. Organizations, meanwhile, move at human speed. People need to be trained. Processes need to be redesigned. Incentives need to shift. Norms need to settle. When companies attempt large-scale AI transformations, they are often trying to synchronize two systems that operate on different clock cycles. The result is not sustained acceleration, but organizational strain, as structures are asked to absorb change faster than they can meaningfully adapt.
If an initiative is too big and too broad, by the time the organization adapts to new processes, the capabilities it built around may no longer be cutting edge. And if it was implemented and structured in a way that didn't preserve future optionality, the organization can find itself facing painful lock-in later.
The second constraint is economic. Much of today's AI ecosystem is operating under economic and financial conditions that obscure its long-term cost structure. Major model providers like OpenAI and Anthropic are sustaining losses measured in billions per quarter. AI-focused startups for both B2B and consumer markets are prioritizing growth over profitability, often operating significantly in the red even with what's now effectively subsidized AI compute. Infrastructure providers are racing to build capacity, often by taking on astronomical levels of debt. While there have been moderate improvements in model efficiency and hardware performance per watt, there has been little evidence of a durable reduction in the true cost of inference. Rising energy costs, competition for specialized labor, and physical resource constraints all point in the same direction.
Unless there is a transformative breakthrough in compute costs, such as silicon photonics panning out or cheap, clean energy becoming ubiquitous, AI compute will become more expensive over time, not less. Structural decisions made when costs are artificially low have a long history of aging badly once those economics begin to reflect market reality.
The third constraint is differentiation. With few exceptions, AI capabilities are now broadly accessible. Unless an organization is building and maintaining its own models, the tools being deployed internally are largely the same ones available to competitors. By definition, many uses of AI therefore compress variation rather than create it. That compression is not inherently bad. It can be valuable when applied to areas with real friction in time, effort, or cost, so long as it doesn't meaningfully affect the unique advantages of the company's internal process or offering. But it becomes strategically dangerous when applied to areas of differentiation that matter to how a business competes.
Time and cost savings are of limited value if they quietly erode the qualities that made your organization distinctive in the first place. Regression toward the industry mean rarely announces itself in real time, but it is often decisive in retrospect.
Taken together, these constraints explain why broad AI adoption so often disappoints. Organizations attempt to apply a rapidly evolving, artificially inexpensive, and widely available capability across large portions of their operations, locking in new processes and behaviors before the consequences are well understood. Local improvements may be real, but when they are made without regard for system dynamics, they tend to shift pressure rather than relieve it, increasing fragility elsewhere. The outcome is activity without proportional leverage.
If broad AI transformation is an unstable starting point, the question becomes where attention should be directed instead.
In complex systems, overall performance is governed by constraints, not averages. Bottlenecks are the points where work accumulates, decisions slow, and small inefficiencies cascade into larger problems. Improvements made away from those constraints can feel productive and even look impressive, but they rarely change how the system performs as a whole. The most impactful way to bring in new AI-based capabilities is to focus on your organization’s bottlenecks. Doing so also forces discipline; the problem is bounded, inputs and outputs are clearer, impact is measurable, and when something goes wrong, the consequences are more likely to remain contained.
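The arithmetic behind this claim is worth making explicit. A toy sketch (all stage names and capacities are hypothetical) of why improvements away from the constraint feel productive but leave system output unchanged:

```python
def throughput(stage_capacities):
    """Units per day a sequential pipeline can deliver.

    Overall flow is capped by the slowest stage, regardless of
    how much capacity the other stages have.
    """
    return min(stage_capacities)

# Hypothetical 3-stage process: intake, review, delivery.
baseline = [120, 40, 90]            # review (40/day) is the bottleneck
print(throughput(baseline))         # 40

# Applying AI to the non-bottleneck intake stage: locally impressive,
# no change to what the system as a whole produces.
faster_intake = [200, 40, 90]
print(throughput(faster_intake))    # still 40

# Applying it to the bottleneck itself moves the entire system.
faster_review = [120, 70, 90]
print(throughput(faster_review))    # 70
```

The numbers are made up, but the structure is general: only interventions at the binding constraint change system-level output, which is why a bottleneck-first approach concentrates leverage.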
More importantly, a bottleneck-focused approach changes the nature of the decision being made. Instead of asking where AI can be applied, you ask where the system is actually constrained, and whether AI is an appropriate intervention at that point. That shift matters. It moves AI adoption from a race to deploy into a series of deliberate choices about how your organization behaves under pressure. That is how capabilities are built sustainably, and how your business retains the ability to compete over time and win the long game.
AI will certainly play a central role in business going forward. It is not, however, analogous to earlier digitization efforts. It moves too quickly, its long-term economics remain uncertain, and applied without restraint it can undermine the very advantages organizations rely on to compete. The challenge facing senior leaders is not whether to adopt AI, but how to do so in ways that build meaningful capability now without locking their organizations into decisions that are difficult to reverse later.
Rather than attempting to overhaul everything at once, leaders would be better served by starting with their bottlenecks, and only then deciding whether AI belongs there.
In the next part, I will introduce a framework for evaluating that decision, including when AI makes sense at a bottleneck and when it clearly does not.
Note: This article covered implementing AI in business processes with interdepartmental impact (or broader). It did not cover using AI for coding support, nor individual employees using AI in ways they feel make them more efficient. Both can be encouraged without the same level of consideration for the bottleneck dynamics outlined in this article (though the subsidized-cost risk still applies, of course).