MRVL
~6 min read · 1,392 words · updated 2026-04-28 · confidence 84%

Hyperscaler AI Capex Cycle: 2025–2026 Outlook

Aggregate Capex Forecasts

2025 Capex Performance

The “Big Five” hyperscalers (Amazon, Alphabet/Google, Meta, Microsoft, Oracle) collectively invested approximately $388 billion in capex in 2025, with roughly 75% (~$290B) directly tied to AI infrastructure (compute, datacenter equipment, GPUs, I/O fabric).

This represents a 57% increase in total data center capex globally in 2025 vs. 2024 (Dell’Oro Group, ✓).

2026 Capex Guidance (Disclosed)

Combined guidance: ~$615–660 billion in 2026, representing a ~59–70% YoY increase:

Company            2025 Capex   2026 Guidance       Growth %
Amazon             $125B        $200B               +60%
Alphabet/Google    $91B         $175–185B           +92%
Meta               $72B         $115–135B           +59%
Microsoft          $90B         $110–120B           +22%
Oracle             ~$10B        Est. $15–20B (◐)    TBD

Total Big Five 2026 capex: ~$615–660B (✓).
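As a sanity check, the total can be recovered by summing the low and high ends of the table's 2026 guidance ranges (a quick sketch; all figures are the disclosed ranges above, with Oracle as the lower-confidence estimate):

```python
# Sum the Big Five 2026 guidance ranges from the table above (figures in $B).
guidance_2026 = {
    "Amazon": (200, 200),
    "Alphabet/Google": (175, 185),
    "Meta": (115, 135),
    "Microsoft": (110, 120),
    "Oracle": (15, 20),  # estimate (lower confidence)
}
capex_2025 = 388  # Big Five 2025 total, $B

low = sum(lo for lo, _ in guidance_2026.values())
high = sum(hi for _, hi in guidance_2026.values())
print(f"2026 total: ${low}B-${high}B")  # $615B-$660B
print(f"YoY growth: {low / capex_2025 - 1:.0%} to {high / capex_2025 - 1:.0%}")  # 59% to 70%
```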

AI-specific allocation (2026E): ~75% of total = $450B+ directed to AI infrastructure (servers, accelerators, networking, datacenter buildout).

Sources: Company earnings calls / 10-Q filings, CNBC, Futurum Group (✓).


AI Capex Allocation: CPU, Memory, I/O, and Network Share

Historical Split (2025)

  • Compute (GPU/ASIC): ~45–50% of AI infrastructure capex
  • Memory (HBM, DRAM): ~20–25%
  • I/O & Networking (switching, optics, CPO): ~15–20% (growing)
  • Datacenter infrastructure (power, cooling, real estate): ~10–15%

2026 Shift: Network/I/O Growing Share

LightCounting analysis (✓): Optical transceiver + DSP sales for AI networks grew 60% YoY in 2025 (reaching $16.5B), and are forecast to grow another 60% in 2026 to $26 billion. This implies network I/O is consuming an increasingly large share of AI capex, driven by:

  1. Scale-out fabric requirements: Each accelerator pod (e.g., NVIDIA H200 cluster) requires 800G–1.6T interconnect.
  2. Scale-up interconnect: Emerging workload requirement (e.g., Celestial AI-style optical scale-up fabrics); estimated $6B TAM by 2030 (Marvell 2024 Investor Day).
  3. Co-Packaged Optics (CPO) adoption: NVIDIA and Broadcom pushing CPO starting 2026–2027, which will further increase networking capex intensity.

Implication: Network/I/O share of AI capex rising from ~15–20% (2025) to 20–25%+ by 2026–2027 (✓).
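A back-of-envelope check on those figures (the AI-capex base and the share range used here are taken from this note's own estimates above, not a new source):

```python
# Implied network/I-O dollars from the shares and growth rates cited above.
optics_2025 = 16.5                    # $B, optical transceiver + DSP sales (LightCounting)
optics_2026 = optics_2025 * 1.60      # forecast +60% YoY, consistent with the ~$26B cited
ai_capex_2026 = 450                   # $B, low-end AI infrastructure capex from above
share_low, share_high = 0.20, 0.25    # assumed 2026-2027 network/I-O share range
print(f"Optics 2026E: ${optics_2026:.0f}B")
print(f"Network/I-O 2026E: ${ai_capex_2026 * share_low:.0f}B-${ai_capex_2026 * share_high:.0f}B")
```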


Sovereign AI Spending (UAE, Saudi Arabia)

UAE Initiatives

Microsoft partnership (✓):

  • Total investment: $15.2 billion (2023–2029)
  • Includes: $1.5B equity investment in G42, $4.6B+ capex through 2026, additional $7.9B (2026–2029).
  • AI campus in Abu Dhabi: 26 km² with 5 GW capacity; initial 200 MW cluster go-live by 2026.

Saudi Arabia Initiatives

LEAP 2025 announcements (✓):

  • New AI investments: $15+ billion
  • Google Cloud partnership: $10B
  • HUMAIN initiative: 500 MW each of AMD and NVIDIA accelerator deployments.

AWS regional buildout (✓): $5.3 billion for Saudi Arabian datacenters.

Hexagon data center contract (✓): $2.7 billion awarded January 2026 for 480 MW facility in Riyadh.

Saudi vision: Develop 3–6 GW of AI computing capacity by 2030 (aligned with global benchmarks of $30–50B per GW).
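The implied price tag follows from simple multiplication (a back-of-envelope sketch using only the capacity target and $/GW benchmark above):

```python
# Saudi AI buildout: capacity target x benchmark cost per GW (figures from above).
capacity_gw_low, capacity_gw_high = 3, 6     # GW of AI compute by 2030
cost_per_gw_low, cost_per_gw_high = 30, 50   # $B per GW, global benchmark range
low = capacity_gw_low * cost_per_gw_low      # $90B
high = capacity_gw_high * cost_per_gw_high   # $300B
print(f"Implied buildout through 2030: ${low}B-${high}B")
```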

Impact on Marvell

Sovereign AI spending represents ~3–5% incremental capex to hyperscaler totals but follows different procurement patterns (state backing, potential local manufacturing incentives). Marvell’s exposure via partner engagement (e.g., custom silicon for Saudi-backed infrastructure) is currently minimal but may grow.

Confidence: ✓ for announced commitments; ◐ for actual capex timing.


Inferencing vs. Training Capex Split

2025–2026 Trend

Historically, training dominated AI capex (~70% of spend, driven by large pretraining and fine-tuning runs). However, 2025–2026 capex allocation is shifting toward inferencing:

  • Training capex (2025): ~50% (down from historical 70%)
  • Inferencing capex (2025): ~50% (up from historical 30%)

Key driver: DeepSeek’s R1 (released January 2025, following V3 in December 2024) demonstrated competitive reasoning models at 20–50× lower training cost than OpenAI’s equivalents, shifting focus to efficient inference and cost-per-token optimization.

Implications for Marvell

Inferencing is ASIC- and DSP-heavy:

  • Custom accelerators (Marvell XPU) benefit from cost-per-token optimization workloads.
  • Optical DSPs (Marvell Ara, Petra) benefit from long-range DCI (data center interconnect) at high throughput, required for scale-out inferencing clusters.

Training capex still requires high-bandwidth compute (GPUs), but network interconnect requirements are similar (1.6T fabric).

Confidence: ✓ (DeepSeek effect well-documented).


DeepSeek Impact on Capex Cycle

Efficiency Gains, ROI Recalibration

DeepSeek V3 inference economics (✓):

  • Cost: $0.14–0.28 per million tokens (vs. OpenAI GPT-4o at ~$3–10).
  • Mixture-of-Experts (MoE) architecture: Activates only 37B of 671B parameters per token, reducing compute by ~95%.
  • Memory efficiency: Multi-head latent attention (MLA) shrinks the KV cache to ~5–13% of standard multi-head attention (MHA).
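The ~95% compute-reduction figure follows directly from the active-parameter ratio; a quick sketch of the arithmetic (parameter counts and per-token prices from the bullets above; this ignores attention and routing overhead):

```python
# MoE compute reduction: 37B of 671B parameters activate per token.
active_b, total_b = 37, 671
active_frac = active_b / total_b          # ~5.5% of parameters active
compute_reduction = 1 - active_frac       # ~94.5%, i.e. the "~95%" cited

# Per-million-token price gap vs. GPT-4o (ranges cited above, $).
deepseek_low, deepseek_high = 0.14, 0.28
gpt4o_low, gpt4o_high = 3, 10
cheaper_min = gpt4o_low / deepseek_high   # ~11x
cheaper_max = gpt4o_high / deepseek_low   # ~71x
print(f"Active fraction: {active_frac:.1%}; compute reduction: {compute_reduction:.1%}")
print(f"DeepSeek inference ~{cheaper_min:.0f}x-{cheaper_max:.0f}x cheaper per token")
```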

Hyperscaler ROI scrutiny: Cheaper inference improves capex ROI, reducing urgency for ever-increasing training capex. However, inference volume is vastly larger than training (1000:1 ratio in deployed systems), so aggregate capex remains high.

Network Implications

DeepSeek-driven capex rebalance favors:

  1. Inference ASICs (custom accelerators) → Marvell XPU benefits.
  2. Scale-out networking (distributed inference pods) → Optical DSP and switching benefits.
  3. Power efficiency → DSP vendors offering lower-power 800G/1.6T modules gain share.

Risk to capex: If inference efficiency gains reduce model serving costs below hyperscaler expectations, capex growth could decelerate. Unlikely through 2027, but a tail risk if DeepSeek-class efficiency breakthroughs repeat.

Confidence: ✓ for efficiency observed; ◐ for capex impact (too early to measure).


Capex Risks & Headwinds

1. GPU Supply Bottleneck (NVIDIA Blackwell Ramp)

Status (as of 2026-04-28): NVIDIA Blackwell is ramping in H1 2026. However, CoWoS advanced packaging capacity is constrained, and HBM supply is sold out through 2026 (◐).

  • HBM3E demand: Growing 70% YoY in 2026; Micron capacity fully allocated.
  • Impact: HBM supply constraints could delay AI accelerator ramps, pushing some capex to H2 2026 / 2027.

Marvell exposure: Custom XPU depends on HBM-heavy designs; supply delays could push XPU capex bookings to 2027.

Confidence: ◐ (supply data public; impact on Marvell timing uncertain).

2. Hyperscaler Free Cash Flow Deterioration

Morgan Stanley / BofA analysis (✓):

  • Amazon 2026 FCF: Negative $17–28B (vs. positive FCF in 2025).
  • Alphabet 2026 FCF: Down ~90% to $8.2B (from $73.3B in 2025).
  • Meta / Microsoft: Similar deterioration (FCF consumed by capex).
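The Alphabet figure is internally consistent; checking the cited numbers (a one-line sketch):

```python
# Alphabet FCF decline check: 2026E vs. 2025 ($B, figures cited above).
fcf_2025, fcf_2026e = 73.3, 8.2
decline = 1 - fcf_2026e / fcf_2025   # ~0.89, consistent with the "~90%" cited
print(f"Alphabet FCF decline: {decline:.0%}")
```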

Risk: If capex ROI disappoints (cheap inference, slowing LLM adoption), hyperscalers could begin cutting capex as early as late 2026 / 2027, impacting network-vendor bookings in the latter half of the year.

Marvell exposure: Optical DSP (near-term revenue) less exposed; Custom XPU (FY 2027+ ramp) more exposed.

Confidence: ✓ (FCF calculations public); ◐ (timing of capex pullback uncertain).

3. NVIDIA Spectrum-X vs. Broadcom Bailly CPO Competition

Risk: NVIDIA’s Spectrum-X and Quantum-X (announced GTC 2025) + Broadcom’s Bailly CPO offer integrated switching + optics. If hyperscalers standardize on CPO faster than expected (2026 vs. 2028), pluggable optical DSP demand could soften (Marvell Ara, etc.).

Mitigation: Marvell’s Celestial AI acquisition + NVIDIA partnership positions Marvell in the CPO ecosystem; this reduces but does not eliminate the risk.

Confidence: ◐ (CPO ramp timing highly uncertain).


2026–2027 Capex Outlook: Base Case

Driver                      2026 Capex (E)   2027 Capex (E)   Notes
Big Five AI capex           $450–500B        $500–600B        Modest deceleration if FCF constraints bind.
Sovereign AI (UAE/Saudi)    $30–40B          $50–70B          Ramping post-2026.
Network I/O share           $110–140B        $140–180B        Growing toward a 20–25% share per LightCounting.
Custom accelerator capex    $60–80B          $100–150B        XPU design wins ramping FY 2027–2028.

Key Takeaways

  1. Hyperscaler capex remains robust through 2026, with ~$615B+ of disclosed guidance (~60% growth). AI infrastructure is ~75% of total, providing strong demand for Marvell.

  2. Network/I/O share is expanding (15–20% → 20–25% by 2026–2027), benefiting Marvell’s DSP and switching products.

  3. DeepSeek-driven efficiency gains shift capex from training to inference, favoring custom accelerators and scale-out networking—both Marvell tailwinds.

  4. Sovereign AI spending (UAE, Saudi Arabia) is emerging but remains <5% of global hyperscaler capex through 2026; upside for 2027–2028.

  5. Free cash flow deterioration is a risk: Hyperscaler FCF is turning negative, creating potential for capex curtailment in H2 2026 or 2027 if ROI pressure mounts.

  6. CPO adoption timing is uncertain: Faster CPO ramp (2026 vs. 2028) could cannibalize pluggable DSP demand. Marvell’s NVIDIA partnership and Celestial AI mitigate but don’t eliminate risk.


Sources

  • Hyperscaler 10-Q/10-K filings, earnings calls (Microsoft, Google, Meta, Amazon, Oracle)
  • Dell’Oro Group data center capex reports (2025–2026)
  • LightCounting optical interconnect and DSP market analysis
  • CNBC, Futurum Group, Morgan Stanley / BofA research
  • Marvell Investor Day 2024; Marvell Q3 FY2026 earnings
  • Introl Blog (Middle East AI capex analysis)

Cross-references