The Major Failure of Business AI: It's Not the Technology, It's Human Behavior and Billing

Companies are buying AI like traditional software, then acting surprised when it neither changes how work gets done nor captures revenue.

Andrés Molina · March 8, 2026 · 6 min

Over the past year, the public conversation surrounding artificial intelligence has been filled with demonstrations, promises, and corporate purchases. On an industrial scale, investments have been made in models, licenses, infrastructure, and pilots. However, the relevant symptom in a boardroom is not how many tests were executed, but how many margin points appeared at the end of the quarter.

An MIT study, reported by TheStreet, documents an unsettling number that doesn't align with the narrative of euphoria: 95% of organizations saw no measurable return on their AI investments, despite aggregate spending of $30 billion to $40 billion on enterprise AI initiatives. This isn't an issue of computing power or "model maturity." Instead, it's largely a problem of human adoption and of internal systems not designed for the real economics of AI consumption.

Based on my work analyzing consumer behavior and adoption friction, I read this story as an autopsy of two classic failures: the first occurs on the employee's desk, where AI is reduced to a “better search engine”; the second occurs in the back office, where even when use exists, the company does not know how to measure or bill for it accurately. In both cases, the error is the same: designing for a human and an accounting system that do not exist.

When AI Hits Real Work, It Collides with Incentives, Habits, and Fear of Failure

Oseas Ramirez, CEO of Axialent, expressed it with a phrase that should be printed on every transformation plan: “AI is adopted by people, not servers. If people do not change how they work, the technology simply stays there.” That statement isn’t philosophy; it’s applied economics. If behavior does not change, the technological asset becomes a sunk cost.

The pattern described by the research cited by TheStreet is consistent with what I observe in adoption: the majority of employees use AI as a slightly smarter search engine, not as a redesign of workflow. That nuance destroys returns. A “better search engine” saves minutes; a redesigned workflow changes cycle times, reduces rework, standardizes decisions, and makes activities that once depended on internal heroes scalable.

The clash occurs because organizations attempt to deploy AI with the same old script: buy the tool, install it, train, declare victory. But adoption does not fail due to lack of training; it fails due to cognitive friction and perceived risks. Employees do not “reject AI” out of ideology: they avoid it when the mental cost of using it outweighs the immediate benefit or when the incentive system penalizes experimentation.

Behaviorally, there is a push — the frustration with repetitive tasks and pressure for productivity — and also a magnetism — the promise of speed and better answers. The problem is that anxiety and habit often win. Anxiety, because delegating judgment to a probabilistic system exposes the user to visible errors. Habit, because the status quo already has known routes for surviving internal politics: “doing it the same way” rarely jeopardizes one's career; trying something new and failing can.

The critical piece here is that many hierarchies and incentives were designed before AI existed. If a sales team receives AI-generated forecasts that clash with quotas or internal narratives, the data isn’t “discussed”; it’s ignored. Not out of malice, but for preservation: the human optimizes for safety within the system. If the model threatens the tacit agreement of how merit and blame are assigned, the model loses.

This is why companies that achieve results are not typically those with the most sophisticated model, but those that restructure work around the model. AI is not an “add-on”; it’s a redesign of the psychological contract of work: who decides, who validates, who signs, who takes on risks. Without that redesign, the tool is used for small tasks, the ROI evaporates, and the organization learns the wrong lesson: that AI “doesn't work,” when in reality, what doesn't work is the adoption system.

The ROI Breaks Down for a Trivial Reason: Buying Shine, Underestimating Friction

The numbers from the cited study are a blow to the triumphant narrative: 95% with no measurable return after $30–40 billion invested. When such a gap appears, the explanation is often less glamorous than technology. The answer lies in how companies allocate budget and attention.

In practice, many organizations enthusiastically fund what is visible: licenses, infrastructure, pilots with spectacular demos. That "shines" in a presentation. What doesn't receive the same budgetary love is what actually moves behavior: process redesign, incentive changes, governance of use, protection against reasonable errors, and time to iterate under real conditions.

Here we see a frequent corporate bias: transformation is treated as an IT project, not as an operational rewrite. The consequence is predictable: usage remains superficial. Employees open the tool to draft an email, summarize a document, or search for information. These are actions that don’t jeopardize professional identity or challenge hierarchies. AI becomes cosmetic productivity.

Another detail that aggravates the problem is organizational fragility in the face of failure. The report notes that when experiments fail, as they often do, many companies lack the institutional capability to persist and iterate. From a behavioral perspective, this is key: if the user's first experience occurs in a punitive environment, adoption dies. A bad initial interaction creates an internal heuristic: "this causes problems." From there, each micro-friction confirms the decision to revert to habit.

The final result is insidious for C-level executives: "AI deployed" is reported, but there is no return. Implementation is celebrated while change is punished. And so the cycle repeats: more spending on tools, more frustration, more cynicism. The cost is not only financial; it is also reputational inside the organization. Each failed initiative reduces political capital for the next.

Even with Adoption, Many Companies Lose Money by Not Being Able to Bill for Usage

The second part of the story is quieter and, for a CFO, more dangerous: even when AI is used, many companies are ill-equipped to bill for it. Erez Agmon, CEO of Vayu, summarized it succinctly: “Most SaaS billing systems were designed with predictable subscriptions in mind. AI leads to erratic consumption.”

The heart of the problem is structural. Traditional software was sold by seats, licenses, or flat subscriptions. In contrast, AI is consumed in variable units: processed tokens, API calls, model executions. That consumption is not only variable; it’s intermittent, with peaks and valleys that are hard to predict. Expecting an old billing system to capture this without losses is like using a cash register to measure electricity.

TheStreet describes a concrete case that illustrates revenue leakage: a CFO discovered that his system only recorded usage on the billing cycle day. If a client upgraded mid-month and downgraded before billing day, the peak disappeared. The CFO himself bluntly stated: “I only bill for what was at the billing cycle date. I missed the peak. I lost that money.”
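The gap between those two accounting views can be sketched in a few lines. This is a minimal illustration, not the CFO's actual system: the event structure, the per-token rate, and the usage numbers below are invented assumptions chosen only to show how a billing-day snapshot misses a mid-month peak that continuous metering would capture.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UsageEvent:
    day: date
    tokens: int  # tokens consumed on that day

# Hypothetical scenario: the client upgrades mid-month, burns through
# a spike of usage, then downgrades before the billing date.
events = [
    UsageEvent(date(2026, 3, 5), 10_000),
    UsageEvent(date(2026, 3, 15), 900_000),  # the mid-month peak
    UsageEvent(date(2026, 3, 28), 12_000),   # quiet again on billing day
]

PRICE_PER_1K_TOKENS = 0.02  # illustrative rate, not a real price list

def snapshot_bill(events, billing_day):
    """Legacy approach: only usage recorded on the billing date counts."""
    tokens = sum(e.tokens for e in events if e.day == billing_day)
    return tokens / 1_000 * PRICE_PER_1K_TOKENS

def metered_bill(events):
    """Consumption approach: every recorded event enters the invoice."""
    tokens = sum(e.tokens for e in events)
    return tokens / 1_000 * PRICE_PER_1K_TOKENS

leak = metered_bill(events) - snapshot_bill(events, date(2026, 3, 28))
print(f"Revenue missed by the snapshot: ${leak:.2f}")
```

The point of the sketch is structural, not numerical: any system that samples state on one day instead of metering events continuously will lose every peak that rises and falls between samples.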

This example exposes a broader pattern: the economics of AI punish companies that do not measure accurately. Tracking gaps appear, manual reconciliations with spreadsheets emerge, and invoices are manually assembled. All of that works when there are few clients and the volume is low; it collapses when the product scales.

Revenue leakage is not a one-time event; it is a slow drip. And in a consumption model, a drip compounds. Not only does the company leave money on the table; it also goes blind on pricing decisions. If real usage is not captured, the management team ends up managing an illusion: believing the product is worth X while customer behavior indicates Y.

Additionally, from a client psychology standpoint, this is a trust bomb. A billing system that doesn’t understand consumption produces two symmetrical risks: underbilling and gifting value, or overbilling and activating conflict. In both cases, the business relationship erodes. AI promises precision; an erratic invoice communicates disorder.

The Transformation That Pays: Redesign Human Decisions and the Financial Muscle That Monetizes Them

The news leaves a harsh lesson: business AI is trapped between two worlds. Above, a discourse of innovation. Below, human habits and inherited financial systems.

To escape that trap, the strategy does not start with the model, but with the behavior that one wants to see in production. Companies that will capture value will not be those with the most pilots, but those that enact three disciplined moves.

First, translate AI into concrete decisions, with explicit accountability. If the output of AI does not change who decides, when they decide, and with what validation standard, usage will remain in small tasks. Real adoption occurs when the operational flow incorporates the tool as part of the “default path,” and when the cost of ignoring it becomes greater than the cost of using it.

Second, rebuild incentives so that employees do not have to choose between personal performance and adoption. When the system rewards maintaining the status quo, habit is rational. The company must create conditions where experimenting is safe and where reasonable error is not an individual liability but a controlled learning cost.

Third, modernize billing for the world of variable consumption. If the product is charged by use, accounting must view usage with enough granularity and in real-time to avoid missing peaks. Without that foundation, even successful adoption becomes growth that isn’t billed.

The synthesis for C-Level executives is uncomfortable but actionable: the return on AI is not unlocked by increasing computing power, but by reducing human and financial friction. Technology can shine, but business only profits when the organization stops betting all its capital on that shine and instead invests, with discipline, in removing the fears and frictions that prevent user adoption and that hinder the company from capturing value in billing.
