"Behind every AI model is a physical supply chain — silicon, copper, concrete, and an enormous amount of electricity."
7 min read · April 2026
Good Morning, Good Evening, and Good Night — wherever you're reading this. This month, we go inside the buildings where the AI revolution actually happens.
When people talk about artificial intelligence, they talk about models, algorithms, and software. What they rarely talk about is the physical infrastructure that makes it all possible — the warehouses full of GPUs, the transformers converting megawatts of power, the cooling systems preventing billions of dollars in silicon from melting. Data centers are the factories of the AI era, and their supply chain is becoming one of the most complex, capital-intensive, and strategically important in the world.
April 2026 finds the data center construction market in a state of unprecedented expansion. Hyperscalers — Amazon Web Services, Google, Microsoft Azure, and Meta — are collectively investing over $200 billion in data center infrastructure this year alone. That figure was considered unthinkable three years ago. Today, it's not enough. Demand for AI compute is outstripping even the most aggressive build-out plans, and the supply chains feeding these projects are straining under the weight of orders that have no historical precedent.
The scale of investment is difficult to overstate. Microsoft alone has committed over $60 billion to data center construction in 2026 — more than the CHIPS Act's entire $52.7 billion semiconductor program. Google is building campuses across the U.S. Southeast, drawn by cheaper power and favorable tax incentives. AWS is expanding in Oregon, Virginia, and Ohio. Meta is constructing what it calls the largest AI training cluster ever built, purpose-designed for next-generation foundation models.
These aren't incremental expansions of existing facilities. They're entirely new campuses — often spanning hundreds of acres, requiring their own electrical substations, and consuming more power than small cities. A single modern AI training cluster can draw 100+ megawatts continuously. For context, that's enough electricity to power roughly 80,000 homes.
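The "80,000 homes" figure is easy to sanity-check. A minimal sketch, assuming an average U.S. household draws roughly 1.25 kW continuously (about 10,800 kWh per year — an illustrative average, not a precise one):

```python
# Back-of-envelope check of the "100 MW ~= 80,000 homes" comparison.
CLUSTER_MW = 100        # continuous draw of one modern AI training cluster
AVG_HOME_KW = 1.25      # assumed average continuous household draw (illustrative)

homes_powered = (CLUSTER_MW * 1000) / AVG_HOME_KW
print(f"{homes_powered:,.0f} homes")  # → 80,000 homes
```

The exact number moves with the assumed household average, but the order of magnitude holds: one cluster, one small city's worth of electricity.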
The construction bottleneck: Even with unlimited capital, you can't build data centers faster than the supply chain can deliver components. The three most critical constraints right now are: (1) power transformers, (2) GPU availability, and (3) skilled electrical labor. Any one of these can add 6-12 months to a project timeline — and all three are currently in short supply.
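One reason these three constraints are so punishing is that they sit on parallel procurement tracks: the project slips by the longest delay, not the sum of them. A toy critical-path sketch, with illustrative midpoints drawn from the 6-12 month range above:

```python
# Toy critical-path view: parallel procurement delays combine as a max,
# not a sum. Delay values are illustrative picks from the 6-12 month range.
delays_months = {
    "power transformers": 12,
    "GPU allocation": 9,
    "skilled electrical labor": 6,
}

slip = max(delays_months.values())
print(f"Schedule slip on parallel tracks: {slip} months")  # → 12 months
```

In practice delays also interact (labor scheduled around late transformers, for example), so the max is a floor, not a ceiling.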
The single biggest constraint on data center growth isn't silicon — it's electricity. Data centers are projected to consume 8-10% of total U.S. electricity by 2028, up from approximately 4% in 2024. That growth rate is staggering, and the electrical grid was not designed to accommodate it.
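The implied growth rate behind those percentages is worth making explicit. A minimal sketch using the article's own figures (roughly 4% in 2024, a midpoint of 9% for the 8-10% projection in 2028):

```python
# Implied compound annual growth in data centers' share of U.S. electricity,
# using the article's figures. The 9% midpoint is an illustrative choice.
share_2024 = 0.04
share_2028 = 0.09   # midpoint of the 8-10% projection
years = 4

cagr = (share_2028 / share_2024) ** (1 / years) - 1
print(f"Implied annual growth in share: {cagr:.1%}")
```

An electricity share compounding at over 20% per year, against a grid whose total capacity grows in low single digits, is the core of the mismatch.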
The problem manifests at every level of the power supply chain. Generation capacity is insufficient in many regions — utilities are scrambling to add power plants, and the permitting and construction timeline for new generation is measured in years, not months. Transmission infrastructure — the high-voltage lines that move electricity from generators to consumers — is even more constrained, with some grid interconnection queues stretching to 2030.
If there's a single component that symbolizes the data center supply chain bottleneck, it's the large power transformer. These massive devices — some weighing over 400 tons — step voltage down from transmission levels to distribution levels. They're essential for every data center, every new industrial facility, and every grid expansion project.
"We have a 24 to 36 month wait for large power transformers. There are only a handful of manufacturers globally that can build them. This is the most critical bottleneck in the entire data center supply chain, and there's no quick fix."
— Power infrastructure industry assessment, April 2026

Global transformer manufacturing capacity is concentrated in a small number of facilities in the U.S., Europe, South Korea, and China. Demand from data centers is competing with demand from grid modernization, renewable energy integration, and post-disaster replacement. The lead time has stretched from 12 months pre-2023 to 24-36 months today, and some hyperscalers are reportedly placing orders for transformers before they've even secured building permits — speculative procurement driven by scarcity.
The power demands of AI are driving a genuine renaissance in nuclear energy — specifically, Small Modular Reactors (SMRs) designed to provide dedicated, carbon-free power to data center campuses. Microsoft has signed agreements to explore SMR deployment for its Azure infrastructure. Google and Amazon have made similar investments. The appeal is obvious: nuclear provides reliable baseload power (data centers need 99.999% uptime), produces zero carbon emissions (satisfying ESG commitments), and can be sited near the data centers themselves (reducing transmission losses and grid dependency).
The nuclear timeline reality: SMRs are promising but not imminent. The first commercial SMR deployments are expected in 2028-2030 at the earliest. NRC licensing, construction, and commissioning timelines mean that nuclear won't solve today's power crunch — but it's the most credible long-term answer to the question of how AI gets the electricity it needs without breaking the grid or the climate.
AI chips run hot. NVIDIA's H200 and B200 GPUs — the workhorses of AI training — generate several times more heat per chip than the general-purpose server CPUs that data centers were traditionally designed to cool. The thermal density of modern AI racks is pushing traditional air cooling to its physical limits.
Liquid cooling is rapidly becoming the standard for new AI-focused data center construction. Direct-to-chip liquid cooling systems circulate coolant directly over GPU packages, removing heat far more efficiently than air-based systems. Immersion cooling — where entire servers are submerged in dielectric fluid — is being deployed for the highest-density installations.
This shift has its own supply chain implications. Liquid cooling systems require specialized components — precision-machined cold plates, leak-proof fluid couplings, heat exchangers, and non-conductive coolant fluids — that a year ago had niche suppliers and limited production capacity. Demand has outstripped supply for virtually every component in the liquid cooling stack. Companies like Vertiv, CoolIT Systems, and GRC are scaling manufacturing as fast as possible, but the ramp takes time.
NVIDIA's data center revenue is running at a pace exceeding $120 billion annually — a number that would have seemed absurd even two years ago. The demand for AI training compute is growing exponentially, driven by foundation model development (each new generation requires roughly 10x the compute of the previous one), enterprise AI deployment, and the emerging field of AI agents that require continuous inference compute.
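The "roughly 10x per generation" rule of thumb compounds brutally fast. A minimal sketch, starting from a notional baseline of one compute unit (the multiplier and the five-generation horizon are illustrative assumptions):

```python
# Compute demand under the article's ~10x-per-generation rule of thumb.
# Baseline of 1 unit and a 5-generation horizon are illustrative.
GROWTH_PER_GEN = 10  # assumed compute multiplier per model generation

demand = [GROWTH_PER_GEN ** gen for gen in range(5)]
for gen, units in enumerate(demand):
    print(f"generation {gen}: {units:,} compute units")
```

Four generations out, the requirement is 10,000x the baseline — which is why fab capacity booked today is effectively a bet on model roadmaps years ahead.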
TSMC, which manufactures the vast majority of NVIDIA's chips, is running its advanced nodes at full capacity. The company's Arizona fab is helping at the margins, but the bulk of leading-edge production remains in Taiwan — a geographic concentration of risk that keeps supply chain strategists up at night. A disruption to TSMC's Taiwan operations wouldn't just slow down AI development — it would effectively halt it. There is no near-term alternative for leading-edge AI chip fabrication at scale.
The data center boom is reshaping commercial real estate markets in ways that few predicted. Data center REITs — Equinix, Digital Realty, QTS — have dramatically outperformed the broader real estate market. Land prices near major electrical substations and power plants have skyrocketed as developers compete for sites with sufficient power access.
Northern Virginia remains the world's largest data center market, but capacity and power constraints are pushing development to new geographies. Central Ohio, the Carolinas, the Texas grid (ERCOT), and parts of the Pacific Northwest are all seeing major data center development. The common thread: available power, favorable climate for cooling efficiency, and local government incentives.
The community tension: Data centers bring tax revenue and construction jobs, but they consume enormous amounts of power and water while creating relatively few permanent jobs compared to other industrial developments. Several communities have pushed back against proposed data center projects, citing grid strain and water usage concerns. This community opposition is becoming a meaningful variable in site selection — one that can add months to project timelines and millions to development costs.
Step back and look at the full picture. Building AI infrastructure requires:

- leading-edge GPUs, fabricated almost entirely at TSMC in Taiwan;
- large power transformers with 24-36 month lead times;
- grid generation and transmission capacity that takes years to permit and build;
- liquid cooling components from a small pool of specialized suppliers;
- skilled electrical labor that is already in short supply;
- sites with sufficient power access, water, and community acceptance.
Every one of these supply chains is under strain. The companies that can secure supply — through long-term contracts, vertical integration, or strategic partnerships — will build faster. Those that rely on spot procurement will wait in line. In the AI infrastructure race, supply chain execution is competitive advantage.
"AI is a software revolution powered by a hardware supply chain. The companies that understand both will define the next era. The ones that only understand software will be waiting for someone else to build the infrastructure they need."
— Daivik Suresh, April 2026
Supply Chain + Business Analytics Enthusiast · April 2026

Not financial advice. All opinions are personal. Investing involves risk, including potential loss of principal.