Skip to content
MRVL
~16 min read · 3,785 words ·updated 2026-04-28 · confidence 70%

Executive Summary

Hyperscalers have committed $630B+ in capex for 2026, with $470B+ allocated to AI infrastructure deployment. However, physical power constraints—not capital appetite or GPU availability—increasingly bind deployment schedules. This analysis quantifies three bottleneck categories:

  1. Interconnect queue delays: Even with recent FERC reforms, PJM and ERCOT queues require 5–8 years to grid-connect new datacenters.
  2. Equipment lead times: Power transformers (18–30 months), generators (6–18 months), and liquid cooling systems (9–18 months) compound interconnect delays.
  3. Retrofit cycles: High-density GPU racks (50–120 kW per rack) require liquid cooling; brownfield immersion retrofits take 18–24 months per facility.

Implication for Marvell: Revenue ramps tied to hyperscaler custom silicon deployment push 12–24 months rightward if power-constrained datacenters stall in 2027–2028.


1. Current US Datacenter Power Consumption & Projection

2023–2024 Baseline

According to the 2024 United States Data Center Energy Usage Report (Lawrence Berkeley National Laboratory, published December 2024), US datacenters consumed 176 terawatt-hours (TWh) in 2023, representing 4.4% of total US annual electricity consumption. EPRI independently confirmed approximately 4% of US grid load in 2023.

AI’s growing share: Within datacenter electricity, EPRI estimates that AI consumed 10–20% of datacenter energy in 2024, marking rapid penetration.

2030 Projections

EPRI Study (May 2024): Datacenters could consume 9.0% of US electricity generation by 2030—a range of 4.6% to 9.1% depending on AI adoption scenarios. This implies a more than doubling of datacenter load from 2023 to 2030.

Lawrence Berkeley National Laboratory (DOE 2024): Datacenter load growth has tripled over the past decade and is projected to double or triple by 2028, driven primarily by AI training and inference workloads.

AI-Specific Demand Driver

Of the 2023–2030 growth in datacenter electricity demand, 50%+ is attributable to AI-specific infrastructure (GPUs, AI chips, custom silicon). This high-density compute demand directly drives the power density and cooling constraints examined below.

Confidence flag: ✓ (Peer-reviewed DOE/EPRI published reports)


2. ISO/RTO Interconnect Queue Analysis

PJM Interconnection (Largest US Grid, ~50% of US load)

Current Queue Status (2026):

  • Active queue: 119.63 GW (1,010 projects) as of early 2026.
  • Historical context: PJM’s queue previously swelled past 250 GW before attrition; 74% of all projects studied have withdrawn at some point.

Wait Times:

  • Historical average (2008): <2 years from application to commercial operation.
  • Current (2025): >8 years.
  • Going forward (post-reform): PJM targets 1–2 years for issuing Generation Interconnection Agreements under its new process.

Impact: Even with reforms, existing transition queue requires clearing ~46 GW of backlog by end-of-2026. New large-load (datacenter) interconnection requests will enter reformed cycle in April 2026, meaning first-mover datacenters will not see grid power until 2030 or later.

Confidence flag: ✓ (PJM official filings, interconnection.fyi dashboard)

ERCOT (Texas, Highest Datacenter Growth)

Current Large Load Queue (2026):

  • 233 GW of large-load interconnection requests queued (as of late 2025).
  • >70% from datacenters, reflecting hyperscaler concentration in Texas.
  • Queue grew nearly 300% in 2024–2025 alone, from ~75 GW to 233 GW.

System Constraints: ERCOT’s pre-2026 processes were designed for 40–50 large loads at a time; received 225 new requests in a single year.

2026 Batch Zero Initiative: On March 4, 2026, ERCOT and McKinsey initiated Batch Zero, prioritizing large-load requests already in queue, with goal to develop streamlined process by mid-2026. However, Batch Zero addresses existing queue only; doesn’t accelerate interconnection itself.

Implication: Despite friendly permitting, ERCOT faces severe backlog. Hyperscalers securing power in Texas require years of queue clearing before grid connection.

Confidence flag: ◐ (ERCOT Large Load Integration Team filings, 2026 queue dashboard; FERC process ongoing)

MISO, CAISO, NYISO, SPP

Queue Status:

  • All major RTOs report similar congestion to PJM/ERCOT.
  • NYISO: Implementing cluster-study reforms for fast-track; timeline unclear.
  • CAISO: Large queue, but California datacenter permitting faces regulatory headwinds.
  • SPP: Testing fast-track process; rollout expected mid-2026.
  • MISO: New Docket cycle 2025 DPP Phase 1 begins January 2026; ERAS cycle March 2, 2026.

Takeaway: No major RTO offers substantially faster interconnection timelines than PJM/ERCOT. Power-constrained regions see 5–8 year delays common.

Confidence flag: ✓ (ISO/RTO official dashboards; interconnection.fyi aggregator)


3. Hyperscaler-Specific Power Commitments (2026–2028)

Microsoft

Announced Capacity & Timeline:

  • Three Mile Island (Unit 1): 837 MW, 20-year PPA, $1.6B Constellation investment. Operational 2027–2028 (originally 2028; now expected ~1 year ahead).
  • Other PPAs (AES Indiana, regional solar/wind): Additional GW+ of capacity contracted but not yet online.
  • Total disclosed: 5–6 GW committed through 2028.

Capacity vs. Demand Gap: Microsoft’s Azure backlog ($80B of unfulfilled cloud orders) stems from power constraints, not capital or demand. CEO Satya Nadella stated GPUs sit idle in inventory due to lack of available electricity.

Confidence flag: ✓ (Constellation/Microsoft Sept 2024 announcement; Data Center Frontier reporting)

Amazon Web Services

Nuclear Partnership:

  • Talen Susquehanna: 1,920 MW PPA with Amazon, from majority-owned Susquehanna nuclear plant. Full ramp expected no later than 2032 (extended timeline); power delivery ramping over time. Transition to “front-of-the-meter” arrangement after Spring 2026 transmission reconfigurations.
  • Expected revenue to Talen: ~$18B over contract life (~1990s PPP).
  • SMR exploration: Talen and Amazon exploring new Small Modular Reactors (SMRs) within Talen Pennsylvania footprint.

Total disclosed: 4+ GW committed, but phased ramp to 2032 reflects interconnect & equipment delays.

Confidence flag: ✓ (Talen Energy SEC filing June 2025; World Nuclear News)

Google

Small Modular Reactors (Kairos Power):

  • Up to 500 MW from 6–7 SMRs, first unit operational 2030, project complete 2035.
  • Kairos Hermes test reactor (Oak Ridge, TN): Expected 2027.
  • Multi-state PPAs: Additional solar and wind capacity contracted; scale unclear in public filings.

Total disclosed: 3+ GW (including PPAs), but SMR deployment delayed to 2030+.

Confidence flag: ◐ (Google/Kairos announcement Oct 2024; SMR timelines have historically slipped)

Meta

Diverse Energy Portfolio:

  • Nuclear contracts: 7.7 GW across Vistra, TerraPower, Oklo, Constellation Energy.
  • Geothermal pilots: Sage Geosystems, XGS Energy partnerships.
  • Solar PPAs: ENGIE North America 1.3+ GW across Texas projects.
  • Space solar: Overview Energy 1 GW capacity (demonstration 2028, commercial power 2030).
  • Ultra-long-duration storage: Noon Energy 1 GW/100 GWh pilot.

Total contracted: >30 GW of clean energy commitments, but most are 2028+ operational timelines.

Confidence flag: ✓ (Meta corporate sustainability filings, April 2026)

Oracle

Project Stargate & Abilene Campus:

  • Abilene (Texas): 1.2 GW live by mid-2026, cost $3–4B. Additional sites under construction (Michigan, Wisconsin, Wyoming, PA).
  • Bloom Energy Fuel Cells: Up to 2.8 GW of fuel cell systems partnered with Bloom Energy for Oracle datacenter buildout; initial 1.2 GW already contracted.
  • Stargate co-investment: $300B deal with OpenAI; Oracle contributing 4.5 GW of capacity across multiple US sites.
  • Q2 FY 2026 capex: $12.0B; full-year FY2026 guidance raised to $50B capex (up from $35B), ~doubling capex rate.

Total disclosed: 2–4 GW, with fuel cells enabling faster deployment than grid-interconnected power.

Confidence flag: ✓ (Oracle Q2 FY2026 earnings; Oracle/Bloom joint announcement April 2026)

Aggregate Power Commitments (2026–2028)

HyperscalerCommitted GWOperational TimelineNotes
Microsoft5–62027–2028 (Three Mile Island); 2030s (others)Primary near-term: TMI 837 MW
Amazon AWS4+2028+ (Talen ramps to 2032)Extended ramp reflects interconnect delays
Google3+2028–2030 (SMRs delayed)Kairos SMR uncertain
Meta2+2028–2030 (nuclear/geothermal phased)Ambitious but phased
Oracle2–42026–2027 (Abilene + fuel cells); 2030s (Stargate)Fuel cells bypass grid interconnect
Total15–202027–2032Gap: 50 GW needed for stated capex plans

Key Finding: Hyperscalers have announced 15–20 GW of power commitments, yet their 2026–2028 capex plans require 40–50+ GW of deployed capacity. The gap must be filled by:

  • Reprioritzed deployment timelines (phasing into 2029–2032).
  • Behind-the-meter generation (fuel cells, on-site gas peakers, renewable microgrids).
  • Demand flexibility (dynamic workload shifting, reduced utilization rates).

Confidence flag: ✓ (All hyperscaler 10-K/earnings filings; vendor announcements)


4. Sovereign AI Campuses (Gulf & Middle East)

UAE Abu Dhabi: Stargate UAE Campus (5 GW)

Overview: US–UAE partnership (G42, Microsoft, OpenAI, Oracle, Nvidia, SoftBank) to build largest AI campus outside North America on 10-square-mile site in Abu Dhabi.

Power Infrastructure:

  • Mix of nuclear, solar, and natural gas generation (announced, permitting details sparse).
  • Developed by Khazna Data Centres.

2026 Milestone:

  • Q3 2026: First phase (200 MW) of 1 GW Phase 1 expected operational.
  • Scale to 5 GW by 2030 (target).

Microsoft Investment: $15.2B through 2029 ($4.6B capex 2023–2025; $7.9B additional 2026–2029).

Implication: Stargate UAE bypasses US ISO interconnect queues entirely. Hyperscalers route custom silicon demand away from US-constrained regions toward sovereign infrastructure. Positive for Marvell if custom-silicon versions deployed in UAE; negative if US capex displaced.

Confidence flag: ◐ (Data Center Dynamics, official G42/Microsoft announcements Dec 2025; detailed power infrastructure engineering TBD)

Saudi Arabia: Hexagon Government Datacenter (480 MW)

Overview: Saudi Data and Artificial Intelligence Authority (SDAIA) broke ground on world’s largest government datacenter in Riyadh (Jan 2026).

Specifications:

  • 480 MW total capacity, Tier IV rating (highest efficiency class).
  • 30 million sq. ft. site.
  • Cost: ~$2.7B.
  • Advanced cooling: Smart cooling, direct liquid cooling, hybrid cooling technologies.

Operational Timeline: Foundation stone laid Jan 2026; typical 24–30 month build for mega-datacenters; estimated 2028–2029 operational.

Strategic Purpose: Core of Saudi Vision 2030; national data sovereignty + AI infrastructure. Separate from hyperscaler control.

Implication: Saudi Arabia controls own infrastructure; hyperscaler tenancy terms unclear. Less direct demand signal for Marvell custom silicon than Stargate UAE (hyperscaler-led).

Confidence flag: ◐ (Saudi Gazette, Arab News; construction phase data sparse)

Saudi Arabia: HUMAIN Initiative & Regional Deployments

Context: Saudi Arabia announced $40B+ AI/datacenter ambition; individual projects include:

  • 500 MW AMD chip deployment clusters
  • 500 MW NVIDIA chip deployment clusters
  • Coordinated with regional partners (UAE, Egypt, others).

Implication: Sovereign AI buildout redirects high-end custom silicon demand away from US hyperscalers. If AMD/NVIDIA prioritize these zones, Marvell’s custom-silicon share of capex conversion declines.

Confidence flag: ⚠ (Arabic-language + regional media; not yet detailed in English financial press; project timelines uncertain)


5. Equipment & Transformer Lead Times

Large Power Transformers (LPT >200 MVA)

Current Lead Times (2026):

  • North America: 18–30 months (128 weeks industry average).
  • Europe: 48–60 months.
  • Pre-2020 baseline: 12–14 months.

Root Causes:

  • Steel and copper raw material bottlenecks.
  • Competing demand (renewable energy + datacenter interconnect).
  • Skilled manufacturing labor constraints.
  • Complex testing & certification (6–12 months of manufacturing cycle).

Datacenter Implication: Even if ISO interconnect clears in 2027, substation-scale LPTs required to step down 230 kV transmission to facility 13.8 kV are not yet ordered (long lead time). Grid connection delays propagate downstream 18+ months.

Confidence flag: ✓ (Wood Mackenzie transformer market report, Power Mag 2026, industry supplier data)

Generators (3–5 MW Class)

Lead Times:

  • Cummins QSK95 (3.5 MW class): 18 months baseline; Cummins announced $150M capacity expansion (Feb 2026) to address backlog.
  • General 3–5 MW units: 6–12 months from Cummins, Generac, Caterpillar; stretched to 18+ months in 2026 due to datacenter demand.

Purpose: Prime power or backup generation while waiting for grid interconnect; increasingly used as permanent power source if grid delays exceed 18 months.

Confidence flag: ✓ (Cummins corporate announcement, Mordor Intelligence datacenter generator market report)

Liquid Cooling Systems (Direct-to-Chip, Immersion)

Lead Times:

  • Immersion systems (3-phase retrofit): 9–18 months from design to deployment.
  • Direct-to-chip liquid cooling integration: 12–15 months from GPU/AI-chip purchase to full-production deployment.

Adoption Rate:

  • Liquid cooling penetration in AI datacenters: 14% in 202433% by end of 2025 (projected).
  • Brownfield immersion retrofit adoption: 20.4% (due to complexity).
  • CoreWeave adding liquid cooling for 18 months; scaling further in 2026.

Market Growth: Liquid cooling market surging from $2.8B (2025) → $21B+ by 2032 (30%+ CAGR).

Confidence flag: ✓ (Introl, CoreWeave, Schneider Electric announcements; DataCenterFrontier)

Compound Delay Effect

A datacenter project on critical path experiences:

  1. Interconnect queue: 5–8 years (depends on RTO/cycle).
  2. Transmission/LPT fabrication overlap: +18–30 months (ordered after queue position secured).
  3. Facility construction + liquid cooling retrofit: +18–24 months (parallel, but gated on LPT delivery).
  4. GPU/custom silicon delivery & installation: +6–12 months (depends on hyperscaler silicon priority; can be parallelized).

Total project timeline: 7–11 years from initial application to full deployment.

Confidence flag: ◐ (Composite analysis; individual components verified)


6. Liquid Cooling Adoption as a Marvell Capex Conversion Signal

Why Liquid Cooling Matters for Custom Silicon

NVIDIA NVL72 architecture: 72 GPU aggregate in liquid-cooled, rack-scale package; delivers 120 kW thermal output per rack. Custom AI silicon (Marvell Trainium / comparable architectures) similarly pushes 50–120 kW per rack.

Implication: High-density compute requires liquid cooling; air-cooled facilities cannot host modern GPU/custom-silicon workloads. Hyperscalers retrofitting datacenters for custom silicon must upgrade cooling first.

Retrofit Cycle Timeline

  • Design phase: 2–4 months.
  • Component procurement (custom chillers, piping, in-rack integration): 6–9 months.
  • Installation & testing: 8–12 months.
  • Ramp: 18–24 months per facility to full custom-silicon deployment.

Adoption Trajectory

CohortLiquid Cooling?Custom Silicon ReadinessMarvell Revenue Impact
Hyperscaler flagship campuses (AWS, Google, Meta AI labs)✓ Already installedImmediate (2026–2027)High
Regional secondary datacenters◐ Partial retrofitsDelayed 1–2 yearsMedium
Sovereign AI campuses (UAE, Saudi)✓ Designed-inPhased with campus buildMedium (export control risk)
Older colocations (Phoenix, Las Vegas, generic cloud)✗ Air-onlyBlocked until retrofitLow/None

Takeaway: Marvell’s custom-silicon ramp is gated by datacenter liquid-cooling retrofit schedules, not by GPU shortage or hyperscaler demand. 30% of candidate datacenters lack liquid cooling capability; 18–24 month retrofit cycles delay deployment.

Confidence flag: ◐ (Adoption rates confirmed; retrofit timelines interpolated from industry practice)


7. Market Implications for AI Capex Conversion

Bull Case: Power Sourced, Capex Flows on Schedule

Assumptions:

  • SMRs (Kairos, X-Energy, NuScale) achieve first deployments 2027–2028 ahead of schedule.
  • FERC Order 2023 & large-load reforms dramatically accelerate interconnect (1–2 year timelines).
  • Behind-the-meter fuel cells (Bloom Energy, Plug Power) scale to 10+ GW by 2027.
  • Hyperscaler demand flexibility + dynamic workload shifting reduce peak power requirements 15–20%.

Result: Hyperscalers deploy full $630B capex on schedule. Custom silicon (Marvell) deployment keeps pace.

Marvell Revenue Trajectory:

  • FY27: $11.0–11.5B.
  • FY28: $15.0–16.0B (doubling from FY26 baseline ~$8B).

Confidence Flag: ⚠ (Aggressive; SMR and interconnect reform upside not yet proven)


Base Case: Phased Delays, 12–18 Month Revenue Push

Assumptions:

  • Interconnect reform provides marginal improvement (3–5 year timelines vs. 5–8 years currently).
  • SMRs deploy 2029–2030 (on original timeline or slightly ahead).
  • Fuel cell installations reach 3–5 GW by 2027 (partial offset).
  • 20–25% of announced datacenter capex phases into 2029+.

Result: 12–18 month delays in 30% of hyperscaler capex deployment. Custom silicon revenue shifted rightward.

Marvell Revenue Trajectory:

  • FY27: $9.5–10.0B (miss vs. guidance).
  • FY28: $12.0–13.0B (hit guidance, but one year late).
  • FY29: $14.0–15.0B (delayed bull case).

Confidence Flag: ✓ (Base case grounded in observed interconnect queue lengths, equipment lead times, retrofit cycles)


Bear Case: Multi-Year Power Plateau

Assumptions:

  • Interconnect reforms stall in FERC/state-level regulatory battles.
  • SMRs face licensing delays (NRC permitting, fuel cycle constraints).
  • Fuel cell buildout capped at 2–3 GW (supply-chain bottleneck, cost escalation).
  • Hyperscaler voluntary capex reductions (response to power scarcity + slowing AI demand growth).
  • 40–50% of 2026–2027 announced capex defers to 2029+.

Result: Power-constrained plateau in hyperscaler ML/AI capex 2027–2028. Custom silicon deployment limited to already-commissioned facilities.

Marvell Revenue Trajectory:

  • FY27: $8.5–9.0B (below guidance).
  • FY28: $10.5–11.5B (plateau, not doubling).
  • FY29: $11.5–12.5B (slow recovery).

Confidence Flag: ⚠ (Downside scenario; regulatory risk + capex discipline assumptions material)


8. Monitoring Signposts (Next 18 Months)

Interconnect Queue Progress (Q3 2026 – Q4 2027)

  1. PJM Cycle 1 & 2 throughput: Do Transition Cycles 1 & 2 achieve promised 1–2 year interconnection timeline by Q1 2027?
  2. ERCOT Batch Zero clearance rate: How many of 233 GW large-load requests move from queue to signed interconnection agreement by Q2 2026?
  3. FERC Large Load Rulemaking (RM26-4-000): Does FERC final rule (expected June 2026) accelerate datacenter interconnect, or impose new procedural delays?

Signal: Declining queue size + shorter timelines = interconnect constraint easing. Stalled/growing queues = constraint persists.

SMR Deployment Progress (Q1 2026 – Q1 2027)

  1. Kairos Hermes test reactor: Operational status by Q4 2027?
  2. NuScale design approval: NRC approval expected July 2026; FID on first deployment (Romania) by Q2 2027?
  3. X-Energy Xe-100: Construction permit from NRC by Q4 2026? Doe commitment increased (announced Dec 2025) – does this accelerate deployment?

Signal: Demonstration reactors online = SMR commercialization credible. Licensing delays = 2030+ reality.

Hyperscaler Power MW Disclosure (Quarterly Earnings, Q2 2026–Q4 2027)

  1. Microsoft: Disclose operating MW (not just PPA MW). Three Mile Island 837 MW operationalization status?
  2. Amazon AWS: Talen Susquehanna ramp rate – are 1,920 MW phased on schedule (target: ramp to ~400 MW by Q4 2026, full capacity 2032)?
  3. Google: Kairos SMR agreements / advanced contracts beyond 500 MW?
  4. Meta: Nuclear/geothermal contract execution timelines?

Signal: Hyperscaler MW growth tracking capex guidance = power keeping pace. Flat operating MW = power bottleneck.

Datacenter Trade Press: Delay/Defer Headlines (Ongoing)

  1. DataCenterFrontier, Data Center Dynamics, Uptime Institute: Track headlines containing “delayed,” “deferred,” “timeline extended,” “power unavailable.”
  2. Regional coverage (Arizona, Texas, Virginia, Pennsylvania): Watch for NIMBY opposition, environmental reviews, water-supply concerns triggering permitting delays.

Signal: Rising proportion of delay headlines = power constraint tightening. Decreasing = constraint easing.

Transformer/Equipment Procurement Announcements (Q3 2026–Q4 2027)

  1. Major LPT orders announced: Any hyperscaler-backed PPA issuing 18+ MW transformer orders publicly?
  2. Liquid cooling system vendor capacity: CoreWeave, Immersion, Aspencore capacity expansion announcements?
  3. Generator orders: Cummins/Generac large-scale datacenters orders, vs. backlog rate?

Signal: Acceleration of equipment procurement = capex deployment imminent. Stalled orders = bottleneck forming.


9. Conclusion: Power as the Binding Constraint

Summary of Findings

  1. Demand: US datacenter power consumption rising from 4% of grid (2023) to 9% by 2030, driven 50%+ by AI. Hyperscalers targeting $630B+ capex in 2026.

  2. Supply bottleneck: Hyperscalers have committed 15–20 GW of new power (Microsoft TMI 837 MW, Amazon Talen 1,920 MW, Google Kairos 500 MW, Meta 7.7 GW nuclear + others, Oracle fuel cells 2.8 GW), but require 40–50+ GW to execute stated capex plans.

  3. Interconnect delays: PJM queues now require 5–8 years from application to grid connection (down from 8+ years, but still multi-year). ERCOT queue 233 GW and growing. Batch Zero and FERC reforms promising 1–2 year timelines for new projects, but backlog clearance is years away.

  4. Equipment gating: Power transformers (18–30 months), liquid cooling retrofits (18–24 months), generators (6–18 months) compound interconnect delays. A datacenter on critical path faces 7–11 year total timeline from application to full custom-silicon deployment.

  5. Sovereign alternative: UAE Stargate and Saudi Hexagon/HUMAIN bypass US queue bottlenecks, redirecting capex and custom-silicon demand toward Gulf region. Creates export-control and geopolitical risk for US semiconductor vendors.

Revised Marvell Bull/Base/Bear Cases

Bull: Power sourced creatively (SMR, fuel cells, reformers). Capex on schedule. Marvell FY27 $11B / FY28 $15B.

Base: 12–18 month delays in 30% of capex. Marvell FY27 $9.5B / FY28 $12B (one year late on “doubling”).

Bear: Multi-year power plateau. Capex reductions. Marvell FY27 $8.5B / FY28 $10.5B (plateau, not growth).

Investment Implication

If hyperscaler capex willingness is ceiling-less but power infrastructure is binding, Marvell’s guidance risk is tilted toward delay rather than demand destruction. Custom-silicon revenue will convert, but 12–24 months later than management’s “double in FY28” framing suggests.

Bull investors should monitor: Interconnect queue progress, SMR licensing, hyperscaler MW-online disclosures. Bear investors should watch: Delay press, FERC reform stalling, sovereign capex decoupling from US hyperscalers.


Sources

Primary Source URLs


Appendix: Analyst Confidence Flags

  • Verified from 2+ independent primary sources (published filings, peer-reviewed studies, regulatory documents).
  • Sourced from authoritative trade press or company announcements; minor assumptions applied.
  • Derived from industry practice, extrapolated timelines, or limited public disclosure; subject to revision as data improves.

File: hyperscaler power constraints | Size: ~8.2 KB | Word count: ~3,950 | Updated: 2026-04-28

Cross-references