The most important AI shift is showing up on balance sheets before it shows up in product demos.
BOND’s May 2025 AI report points to the largest listed U.S. tech firms pushing capital expenditure sharply higher in 2024. That matters for a simple reason. Software used to be admired for growing without much physical weight. AI is making that harder to say with a straight face.
What changed is not only model capability. It is the cost of staying in the race. The leading firms now need more than engineers and distribution. They need accelerators, land, networking gear, cooling systems, construction capacity, and long-duration power access. In plain language, capital intensity means more of each revenue dollar has to go back into physical assets.
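The arithmetic behind "capital intensity" is simple enough to sketch: capital expenditure as a share of revenue. A minimal illustration, with hypothetical figures that are not drawn from any company's filings:

```python
def capital_intensity(capex: float, revenue: float) -> float:
    """Share of each revenue dollar recycled into physical assets."""
    return capex / revenue

# Hypothetical illustration only: a firm earning $200B that spends
# $20B on capex versus the same firm spending $60B mid-buildout.
before = capital_intensity(capex=20e9, revenue=200e9)  # 0.10
after = capital_intensity(capex=60e9, revenue=200e9)   # 0.30

print(f"Before buildout: {before:.0%} of revenue goes to capex")
print(f"After buildout:  {after:.0%} of revenue goes to capex")
```

The ratio itself is not new; what is new is the direction it is moving for firms that used to be admired for keeping it low.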
That is why this matters beyond a quarterly spending figure. AI is pushing major tech firms toward a different corporate form. They still sell software, cloud services, and advertising. But they are starting to behave more like owners and coordinators of infrastructure. That changes margins, control, and who gets to capture value when demand surges.
The real shift is capital intensity
The easy version of the story is that AI needs expensive chips. True enough. The better version is that AI is dragging the whole stack into view.
For years, the software dream was simple: high margins, low physical burden, and scale that did not require building much beyond server fleets and office space. AI has not ended that model, but it has complicated it. The firms with the strongest AI ambitions now look more exposed to the old industrial facts of life: procurement delays, power limits, construction schedules, and the awkward reality that not every bottleneck can be solved with code.
| Old software model | AI infrastructure model |
|---|---|
| Scale came mostly from code, distribution, and sales efficiency | Scale increasingly depends on chips, data centers, and power access |
| Physical assets mattered, but often sat in the background | Long-lived assets move closer to the center of the business |
| Margins were shaped mainly by talent and customer acquisition | Margins are shaped by compute pricing, utilization, and infrastructure financing |
| Growth felt relatively asset-light | Growth carries more industrial weight and coordination risk |
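The utilization point in the table's right column can be made concrete. A rough sketch, with invented numbers, of how fixed infrastructure cost per utilized GPU-hour moves with utilization:

```python
def cost_per_gpu_hour(annual_fixed_cost: float, gpus: int,
                      utilization: float) -> float:
    """Effective cost of one utilized GPU-hour when fixed costs dominate.

    annual_fixed_cost: depreciation, power contracts, facilities
    (illustrative bundle, not a real cost model).
    utilization: fraction of the year each GPU does billable work.
    """
    hours_per_year = 8760
    utilized_hours = gpus * hours_per_year * utilization
    return annual_fixed_cost / utilized_hours

# Hypothetical: $100M/year in fixed costs spread across 10,000 GPUs.
for u in (0.9, 0.6, 0.3):
    cost = cost_per_gpu_hour(100e6, 10_000, u)
    print(f"utilization {u:.0%}: ${cost:.2f} per GPU-hour")
```

When the cost base is mostly fixed, halving utilization roughly doubles the cost of every hour actually sold. That is the industrial logic now sitting underneath software margins.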
The balance sheet is starting to tell the truth before the product copy does.
Microsoft’s investor materials show how much money is now flowing into AI-related infrastructure and other long-lived assets as the company expands cloud and AI capacity. Amazon, Alphabet, and Meta are on similar paths, though each has a different mix of internal chips, cloud exposure, and revenue structure. NVIDIA remains central because it supplies so much of the compute foundation, but the deeper point is broader than one supplier. The big firms are trying to secure the stack itself.
If software margins start depending on substations and cooling loops, is this still a software business in the old sense? Or is it becoming something else?
Chips are only one layer of the story
It is tempting to reduce the entire AI buildout to GPUs. That misses the shape of the constraint.
- Accelerators are scarce and expensive.
- Data-center capacity takes time to build and fit out.
- Electricity and grid access can slow everything down even after the hardware arrives.
By physical bottlenecks I mean the scarce things the model cannot think without. Not abstract scarcity. Literal scarcity. Transformers. Switchgear. Cooling equipment. Enough available power in the right place. The more AI demand rises, the less sensible it becomes to talk as if software and infrastructure are separate conversations.
That is also why custom silicon matters. Google has TPUs. Amazon has Trainium and Inferentia. Microsoft has Maia. These projects are not vanity exercises. They are attempts to reduce dependence on a single external layer and regain some control over performance, cost, and scheduling. Put more simply, firms are trying to own more of the parts that can slow them down.
| Constraint layer | Why it matters | Awkward reality |
|---|---|---|
| Chips | They set the pace of training and inference | Supply concentration can turn one vendor relationship into a strategic dependency |
| Data centers | They turn compute plans into usable capacity | Buildings, fit-out, networking, and cooling do not move at software speed |
| Power | It determines whether the site can actually run at scale | The hardware can arrive before the electricity does |
This is where the old asset-light mythology runs into fixed-asset reality. It is not a dramatic line. It is just what happens when model ambition meets the physical world.
Data centers turn software ambition into physical constraint
The data-center buildout is the part of the story that makes the shift impossible to ignore. Dell’Oro estimated global data-center capital expenditure at $455 billion in 2024. That is not a background number. It is a sign that AI has moved from being a feature race to being a facilities race as well.
Sometimes a single example says more than a page of abstractions. xAI says it built Colossus in 122 days. Whether or not that pace becomes common is almost beside the point. The fact that companies are willing to move that fast tells you how strategic these sites have become.
There is a second layer here that matters for ownership. Once data centers become scarce strategic assets rather than generic hosting space, the firms that can finance them, fill them, and keep them fed gain a different kind of advantage. Not only a product advantage. A coordination advantage. They can make long commitments that smaller firms cannot. They can tolerate lower near-term free cash flow if they believe the capacity will matter later.
That does not mean the largest firms automatically win every round. Overbuild can become a problem. Utilization can disappoint. Costs can arrive faster than monetization. But the structure of the game still changes. The firms with balance-sheet depth and the ability to sign long contracts are playing with a broader set of tools.
This is no longer only a software scaling story.
Anyone trying to understand this shift could do worse than reading Vaclav Smil’s How the World Really Works. Not for prediction. For proportion. It helps to remember that physical systems do not care much about digital impatience.
Power becomes the quiet gatekeeper
Power is the least glamorous part of the AI buildout, which is exactly why it matters. The impressive model gets the attention. The boring contract and the available capacity decide whether the model can run cheaply and at scale.
The IEA says data centers accounted for roughly 1.5% of world electricity consumption in 2024, with demand rising at a pace that has turned energy planning into a live AI question. Put more simply: the machine-learning story now leans on the electrical system more than many people expected a few years ago.
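The proportion is easy to sanity-check. Assuming global electricity consumption on the order of 30,000 TWh in 2024 (a round working figure, not a number from the report above):

```python
# Rough proportion check, not a forecast. Assumes global electricity
# consumption of ~30,000 TWh in 2024 (an order-of-magnitude figure).
global_twh = 30_000
data_center_share = 0.015  # the ~1.5% cited above

data_center_twh = global_twh * data_center_share
print(f"~{data_center_twh:.0f} TWh/year for data centers")
```

That is an amount of electricity on the scale of a large industrialized country, which is why grid access has become a boardroom topic.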
That creates a quiet sorting effect. The winners are not only the firms with the best models. They are also the firms that can secure enough electricity, place infrastructure in workable locations, and absorb the waiting time that comes with industrial buildout. A model can be copied in spirit by competitors. A signed power arrangement in the right place is harder to reproduce quickly.
This is one reason the topic sits naturally on CV3. The issue is not just innovation. It is value capture. When AI depends on scarce physical systems, value tends to collect around whoever owns, controls, or can reliably finance those systems. That logic runs through more than one piece on CV3, from The Foundations of Wealth to Why Data Pipelines Are the New Oil Rigs of AI. The common thread is not hype. It is the persistence of bottlenecks.
The more unsettling question sits a little further out. If leading tech firms become more infrastructure-like, they also become more exposed to the economics of infrastructure: heavier upfront spend, longer payback periods, and a stronger link between operational discipline and strategic position. That is not necessarily bad. It just means the old assumptions about what a software company is may age faster than expected.
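The payback-period contrast can be sketched in a few lines. Hypothetical numbers only, deliberately ignoring discounting and ramp-up:

```python
def simple_payback_years(upfront: float, annual_net_cash_flow: float) -> float:
    """Years of steady cash flow needed to recover the initial outlay.

    A crude measure: no discounting, no ramp-up, no utilization risk.
    """
    return upfront / annual_net_cash_flow

# Hypothetical comparison: a $10B infrastructure site versus a
# software product with $0.5B upfront, each generating $1.5B/year.
print(f"Infrastructure site: {simple_payback_years(10e9, 1.5e9):.1f} years")
print(f"Software product:    {simple_payback_years(0.5e9, 1.5e9):.1f} years")
```

The point is not the specific numbers, which are invented. It is that infrastructure economics lock capital up for years before it answers back, and that changes what kind of mistakes a firm can afford.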
That is why AI should not be read only as a model story. It is also a story about ownership, dependency, and the return of physical constraint. Speed still matters. But speed without capacity ends up being a slogan. And capacity, increasingly, has to be built.
Why is AI becoming so capital-intensive?
Because frontier AI now depends on expensive chips, specialized facilities, networking, cooling, and power access. More revenue has to be recycled into physical assets instead of staying inside the older, lighter software model.
Are chips still the main bottleneck?
Chips matter, but they are only one layer. Even with enough accelerators, firms still need usable data-center capacity, cooling equipment, and enough electricity in the right place. The hardware can arrive before the wider system is ready for it.
Why do data centers change software economics?
Because they pull software companies deeper into the world of long-lived assets, financing decisions, construction timelines, and utilization risk. That changes how margins are earned and defended.
Does this favor only the biggest firms?
Not automatically, but balance-sheet depth helps. Large firms can sign longer contracts, absorb delays, and keep building through uneven monetization. Smaller firms may still win in parts of the stack, though usually with less room for error.
What is the core shift to watch from here?
Watch whether leading tech firms keep looking more like owners of scarce systems and less like pure software abstractions. If that continues, the real AI moat may sit as much in infrastructure control as in model quality.
