The Race for AI Has Shifted from Software to Electricity and Concrete
For years, the narrative surrounding AI was framed as a battle of models: who trained better, who launched first, who had the talent. That phase is becoming obsolete. The figures emerging for 2026 are rewriting the competitive map in a way reminiscent of heavy industries: Meta, Microsoft, Alphabet, Amazon, and Oracle are poised to commit between $660 billion and $690 billion in capex for AI infrastructure—almost double that of 2025, according to TechCrunch. Simultaneously, Jensen Huang projects an even larger order of magnitude: $3 to $4 trillion in total AI infrastructure investment by the end of the decade.
The unsettling detail for many executive teams is that this leap is not explained by some ethereal “bet on the future.” It's driven by present-day friction: data center capacity, GPU availability, and, above all, energy. Microsoft, for instance, cites a figure that serves as a market thermometer: $80 billion in backlogged orders for Azure due to power restrictions. The bottleneck is no longer in the product roadmap but in the electric grid and building capacity.
The New Balance Sheet of AI: Massive Capex and Unmet Demand
The first structural change is both accounting and strategic. What is being financed is not just “computing,” but industrial capacity: land, substations, energy contracts, cooling, and data centers designed for AI workloads. TechCrunch details how the major players are moving with figures that were previously associated with public infrastructure cycles.
Investment guidelines for 2026 outline the scale of the pivot: Amazon projects $200 billion (up from $131 billion in 2025), Alphabet between $175 billion and $185 billion (up from $91 billion), Meta between $115 billion and $135 billion (up from $71 billion), Microsoft heading towards $120 billion or more, and Oracle aiming for $50 billion, a leap of 136% over 2025. Together, these numbers form the aggregate range of $660 billion to $690 billion.
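As a quick sanity check, the per-company guidance figures above can be summed to confirm the $660-690 billion aggregate range (a minimal sketch; figures are the ones cited in the text, in billions of USD, with point estimates treated as degenerate ranges):

```python
# 2026 capex guidance as cited above, in billions of USD.
# Where guidance is a range, (low, high) is used; single figures repeat.
guidance_2026 = {
    "Amazon":    (200, 200),
    "Alphabet":  (175, 185),
    "Meta":      (115, 135),
    "Microsoft": (120, 120),  # "heading towards $120 billion or more"
    "Oracle":    (50, 50),
}

low_total = sum(low for low, _ in guidance_2026.values())
high_total = sum(high for _, high in guidance_2026.values())
print(f"Aggregate range: ${low_total}B - ${high_total}B")  # $660B - $690B
```

The low and high bounds land exactly on the $660 billion and $690 billion endpoints quoted in the text.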
Behind the aggregate lies an operational message: hyperscalers are accepting that, for a period, AI is managed like an industry where the winner is the one who can convert liquidity into usable physical capacity ahead of others. In this context, “time to market” is measured in enabled megawatts, not in sprints.
This shift carries inevitable financial implications. Capex becomes a lever for positioning but also a source of pressure: if AI monetization does not keep pace, the asset remains idle, depreciating, and competing for energy with other usages. For now, the market seems to validate the scarcity thesis: Microsoft's signal about power backlogs serves as evidence that demand exceeds available supply.
Data Centers as Products: The Customer Buys Certainty, Not “Models”
I am interested in looking at this race through the lens of consumer behavior because the “customer” of this infrastructure is not just the end user of a chatbot. The relevant customer is the one who pays: companies that need to integrate AI into operations, customer service, programming, marketing, and analytics; and who today are “contracting” a very specific outcome: computational certainty.
In 2024 or 2025, many AI business discussions were resolved with demos and promises of productivity. By 2026, the differential is shifting to something more prosaic: guaranteed availability. When a provider accumulates orders without being able to fulfill them (Azure's backlog), the corporate customer learns a pragmatic lesson: the risk is no longer just whether the model works, but whether there is capacity to run it when it is needed.
Here emerges a less glamorous but more decisive innovation: transforming infrastructure into an explicit value proposition. Projects like Meta's Hyperion (a 2,250-acre site in Louisiana, around $10 billion, scalable to 5 GW, with plans reportedly linked to a nuclear plant) are not an engineering whim. They are an attempt to package the scarcest resource as a “product”: energy plus computing.
And the Stargate case takes that logic to the extreme. The joint venture of OpenAI, SoftBank, Oracle, and MGX, announced with support from the Trump administration, aims for $500 billion by 2029, with an initial deployment of $100 billion and planning for 7 GW across five locations in Texas, New Mexico, and Ohio (as of September 2025), in addition to more than $400 billion committed in the first three years. This doesn't seem like an incremental cloud expansion; it looks like the construction of a new industrial layer.
In terms of corporate consumption, the pattern is clear: companies are paying for operational continuity. As AI becomes a component of critical processes, interruptions due to capacity shortfalls become intolerable. The purchase shifts from “smart software” to “reliable industrial service.”
The Fight for the Supply Chain: Nvidia, GPU Agreements, and Alliances that Fix Dependency
The other dimension of power is not the data center itself but the supply chain that makes it useful. TechCrunch compiles agreements that, by scale, seem closer to commodity contracts than to tech alliances.
OpenAI, for instance, appears linked to a $100 billion GPU agreement with Nvidia, in addition to a stock-for-GPUs scheme with AMD. Nvidia, in turn, is reported to have mirrored a similar structure with xAI. At the same time, it's noted that Microsoft has invested almost $14 billion in OpenAI since 2019, starting with a $1 billion deal that included Azure exclusivity (later relaxed to a multicloud approach with a “first right of refusal”). Meanwhile, Amazon has invested $8 billion in Anthropic, adapting hardware to its needs.
Financially, this is read as an effort to reduce volatility on three fronts:
1. Secure supply: without a fixed GPU contract, a buyer is subject to queues and spot prices.
2. Ensure demand: financing or integrating with a relevant lab guarantees workloads that fill the capex.
3. Convert infrastructure into lock-in: not necessarily through exclusivity clauses but through operational switching costs.
The important nuance is that negotiating power is shifting. When there is scarcity, suppliers of inputs (GPUs, energy, building capacity) capture more value. The cloud competes but also depends. That is why Huang's comment on energy bottlenecks is so significant: the hardest limit is not the algorithm; it is access to electrical power.
This reordering also explains Oracle’s atypical growth narrative: its goal of $50 billion in capex and $523 billion in remaining performance obligations suggest a repositioning to capture demand for large-scale infrastructure, bolstered by its role in Stargate.
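As a quick illustration of the scale of Oracle's leap, the 2025 base implied by a 136% increase to $50 billion can be backed out (a minimal sketch; the implied 2025 figure is my derivation, not a number stated above):

```python
# Oracle's cited 2026 capex target and year-over-year growth.
target_2026 = 50.0  # billions of USD
growth = 1.36       # a 136% increase over 2025

# Implied 2025 base: target = base * (1 + growth)
implied_2025 = target_2026 / (1 + growth)
print(round(implied_2025, 1))  # ≈ 21.2 (billions of USD)
```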
The Silent Risk: Oversized Infrastructure and Deteriorated User Experience
When an industry enters “build first, monetize later” mode, the risk is not always technological failure. Often, it’s a disconnect from the real work of the customer.
Here lies a central tension: the aggregate spending of $660 billion to $690 billion coexists with a pointed fact from the briefing: pure-play AI companies show rapid revenue growth but still account for only a fraction of total infrastructure spending. This imbalance doesn't imply that the investment is irrational; it suggests that the value-capture model is still consolidating.
Along the way, two operational dangers emerge: infrastructure oversized relative to monetizable demand, and a user experience that deteriorates when capacity constraints degrade service.
The market signal is that the big players are betting that “AI will consume all available capacity,” as the cited Futurum Group analysis summarizes regarding the leap from roughly $380 billion in 2025 to $660-690 billion in 2026. If this hypothesis holds true, the capex is justified. If it is only partially true, the winner will be the one who built with greater contractual and energy flexibility.
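The implied year-over-year growth in that leap is easy to check (a minimal sketch using the approximate figures cited above):

```python
# Year-over-year growth implied by the cited aggregate capex figures,
# in billions of USD.
base_2025 = 380
range_2026 = (660, 690)

growth_pct = [round((v / base_2025 - 1) * 100) for v in range_2026]
print(growth_pct)  # [74, 82] -> roughly 74% to 82% growth
```

That range sits just below a full doubling, consistent with the “almost double” characterization earlier in the piece.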
That's why the upcoming public discussion (a meeting at the White House in March 2026 with Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI, according to the briefing) has economic implications: enabling energy, permits, and construction defines future market share just as much as the best model does.
The Strategic Direction is Already Set: AI Will Be Sold as Guaranteed Capacity
The story of 2026 illustrates that the decisive “product” has shifted. AI will continue to compete on model quality, yes, but economic power is accumulating in the hands of those who control the physical bottleneck: data centers, GPUs, and electricity.
For a CEO or CFO, the practical implication is that the conversation about AI is no longer just a software discussion but evolves into a discussion of cost structures, supplier dependencies, and operational risk. In the short term, scale favors those who can absorb massive capex. In the medium term, the competitive space will open for proposals that deliver sufficient AI at a lower cost with fewer infrastructure requirements, especially where the client doesn’t need maximum performance.
The corporate consumer behavior pattern revealed by this race is striking: companies are not contracting “AI” as a concept; they are contracting continuity and certainty to transform processes without their infrastructure failing at critical moments.