MRVL
~11 min read · 2,476 words · updated 2026-04-28 · confidence 56%

Executive Summary

This analysis builds a bottom-up wallet-share model for Marvell Technology’s top five hyperscaler customers (AWS, Microsoft, Google, Meta, Oracle). The model combines published capex guidance, industry AI infrastructure spend benchmarks, and disclosed partnerships to estimate Marvell’s addressable TAM and win rates per hyperscaler.

Key Finding: Marvell’s FY2026 data center revenue of $6.1B implies ~55–70% wallet concentration in top-3 hyperscalers (AWS, Microsoft, Google), driven by custom AI ASIC design wins and optical interconnect dominance. A 50% wallet-share loss at any top-3 customer in FY2028 could reduce management’s $15B target by $1.5–2.5B.


Part 1: Hyperscaler Capex Framework

1.1 Total 2026 Capex by Hyperscaler

| Hyperscaler | Total FY2026 Capex | Source |
|---|---|---|
| AWS (Amazon) | ~$200B | AMZN 10-K guidance; includes all AWS infrastructure |
| Microsoft Azure | ~$120B+ | MSFT 10-K guidance; $80B backlog disclosed for Azure AI |
| Google/Alphabet | ~$175–185B | GOOG 10-K guidance; $60B+ committed to data centers |
| Meta | ~$115–135B | META 10-K guidance; includes 1GW Ohio facility |
| Oracle OCI | ~$50B | ORCL 10-K guidance; 136% YoY increase driven by Gen2 OCI |
| TOTAL (Top 5) | ~$660–690B | Combined estimate |

Sources: Futurum AI Capex 2026; CNBC Tech AI Spending Feb 2026; CreditSights Hyperscaler CapEx 2026; Introl Hyperscaler CapEx $690B

1.2 AI Infrastructure Share of Total Capex

Assumption: ~75% of hyperscaler capex flows to AI infrastructure (servers, GPUs, custom ASICs, data center equipment).

  • AWS: ~$150B AI infrastructure
  • Microsoft: ~$90B AI infrastructure
  • Google: ~$130–140B AI infrastructure
  • Meta: ~$86–100B AI infrastructure
  • Oracle: ~$37–38B AI infrastructure (shift from GPU-only to mix after Blackwell deployment)
  • Subtotal: ~$490–510B AI infrastructure capex (Top 5)

1.3 Silicon (Semiconductor) Share of AI Capex

Key Split (Industry Benchmarks):

  • GPU/custom ASIC compute: 40–50% (~$40–50K per petaflop training cluster)
  • HBM/Memory: 20–25%
  • Networking/Optical/DSP: 15–20%
  • Power/Cooling/Enclosure: 15–20%

Implied Silicon TAM (at a ~40% silicon share of AI capex): ~$200B across the top 5 hyperscalers in 2026.

Sources: Google Cloud Next 2026 Announcements; Hyperframeresearch Marvell Customer Cliff
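The two-step funnel above (total capex → AI infrastructure → silicon TAM) can be sanity-checked with a quick back-of-envelope script; the 75% and 40% ratios are this model's stated assumptions, and the capex midpoints are illustrative picks from the Part 1.1 table:

```python
# Back-of-envelope funnel: total capex -> AI infrastructure -> silicon TAM.
# Capex midpoints ($B) from the Part 1.1 table; ratios are the model's
# assumptions (75% AI-infra share of capex, ~40% silicon share of AI capex).
CAPEX_B = {"AWS": 200, "Microsoft": 120, "Google": 180, "Meta": 125, "Oracle": 50}
AI_INFRA_SHARE = 0.75
SILICON_SHARE = 0.40

ai_infra = {h: c * AI_INFRA_SHARE for h, c in CAPEX_B.items()}
silicon_tam = {h: a * SILICON_SHARE for h, a in ai_infra.items()}

total_ai = sum(ai_infra.values())          # ~$506B, inside the ~$490-510B range
total_silicon = sum(silicon_tam.values())  # ~$202B, matching the ~$200B TAM
```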


Part 2: Marvell’s Addressable Share by Hyperscaler

2.1 AWS / Amazon Web Services

Products & Partnership

  • Custom silicon: AWS Trainium 2 & Trainium 3 (exclusive or majority supply arrangement with Marvell)
  • Optical DSP: Data center interconnect modules, high-speed optical pluggables
  • Ethernet switching: Marvell’s interconnect silicon (confirmed product line)
  • Other: Custom XPU pipeline; Marvell is exclusive or near-exclusive on Trainium production

Revenue Estimate (FY2026)

  • Trainium run rate: ~$1.5B annualized custom chip revenue (Marvell’s disclosed aggregate for all customers, but Trainium heavily weighted)
  • AWS custom silicon business: $10B+ run rate (per CEO Andy Jassy, Dec 2025 / Q4 2025 earnings)
  • Marvell’s implied share: 15–20% of AWS’s $10B+ custom chip business = $1.5–2.0B annual revenue from AWS
  • Optical/interconnect add-on: $200–300M (conservative estimate on DSP + modules)
  • AWS Total to Marvell FY2026: ~$1.7–2.3B (midpoint ~$2.0B)

Wallet Share Calculation

  • AWS AI-infrastructure capex: $150B × 40% (silicon) = $60B addressable
  • Marvell’s implied share: $2.0B ÷ $60B = ~3.3% wallet share (very conservative; actual may be higher if Trainium is exclusive)
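The wallet-share arithmetic used here, and repeated for each hyperscaler below, is simply estimated Marvell revenue divided by the addressable silicon TAM. A minimal helper, shown with the AWS figures above:

```python
def wallet_share(revenue_b: float, ai_capex_b: float,
                 silicon_share: float = 0.40) -> float:
    """Implied wallet share: Marvell revenue / (AI capex x silicon share)."""
    addressable_b = ai_capex_b * silicon_share
    return revenue_b / addressable_b

# AWS: $2.0B midpoint revenue against $150B AI capex -> $60B addressable
share = wallet_share(2.0, 150)  # ~0.033, i.e. ~3.3% wallet share
```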

Notes

  • Trainium 2 is “fully subscribed” (Marvell earnings call, Q4 FY2026)
  • Trainium 3 ramp expected H1 2026; growth accelerating
  • Concentration risk: If AWS achieves internal ASIC design capability or switches to Broadcom/alternate supplier, Marvell’s revenue exposure is ~$1.7–2.3B

Sources: Nextplatform Marvell Custom XPU Pipeline; Trefis Amazon-Anthropic Benefit; Nasdaq Q2 2026 Earnings Preview; Nextplatform Marvell AWS Mix Trails Nvidia


2.2 Microsoft Azure

Products & Partnership

  • Custom silicon: Maia AI accelerator (design partner confirmed; Broadcom rumored on some components, but Marvell is on architecture/DSP)
  • Optical interconnect: Similar DSP and high-speed modules as AWS
  • Networking: Broadcom’s dominant position in switching; Marvell’s optical likely secondary

Revenue Estimate (FY2026)

  • Maia deployment rate: Maia 100 (initial SKU) ramping; Maia 200 delayed to 2026 (confirmed, design/tooling hurdles)
  • Marvell’s implied revenue: Maia design win announced; assumed revenue contribution $400M–800M for FY2026 (lower than Trainium due to delayed Maia 200)
  • Optical/interconnect add-on: $150–250M
  • Microsoft Total to Marvell FY2026: ~$600M–1.0B (midpoint ~$800M)

Wallet Share Calculation

  • Microsoft AI infrastructure capex: $90B × 40% = $36B addressable
  • Marvell’s implied share: $0.8B ÷ $36B = ~2.2% wallet share

Notes

  • Maia was positioned as 2025 launch; slipped to 2026, indicating design risk
  • Microsoft’s $80B Azure backlog suggests aggressive future AI capex; Maia volumes likely underestimated near-term
  • If Maia ramps faster in 2027–2028, wallet share could expand to 4–5%

Sources: Broadcom vs Marvell Custom AI Silicon 2026; Tradingkey Marvell vs Broadcom ASIC; Financialcontent Marvell Architect Custom Silicon


2.3 Google / Alphabet Cloud

Products & Partnership

  • Custom silicon (NEW April 2026): Memory Processing Unit (MPU) designed to complement TPU stack + inference-optimized TPU variant (talks, not yet signed)
  • Optical DSP & interconnect: Already supplying for data center networking
  • Training TPU: Broadcom remains on Google’s training chip (TPU 8t)
  • Inference TPU: MediaTek + Marvell co-developing (NEW, April 2026 Cloud Next announcement)

Revenue Estimate (FY2026)

  • MPU + Inference TPU (nascent): Design win announced April 2026; revenue impact FY2026 minimal, ramp in FY2027–2028
  • Optical DSP existing: $300–500M (ongoing supplies)
  • Google Total to Marvell FY2026: ~$350–550M (optical dominant; TPU work nascent)

Wallet Share Calculation (near-term FY2026; long-term significant upside)

  • Google AI infrastructure capex: $130–140B × 40% = $52–56B addressable
  • Current implied wallet share: $0.4B ÷ $54B = ~0.75% wallet share (FY2026)
  • Forward look (FY2027–2028): If MPU + inference TPU ramp to $2–3B by FY2028, wallet share could grow to 4–6%

Notes

  • Google’s strategic intent: no single partner monopoly (Broadcom on training, MediaTek + Marvell on inference)
  • TPU 8t/8i launch imminent; Marvell’s inference chip partnership announced April 2026
  • Upside optionality: Google will likely deploy multi-gigawatts of TPU 8i by FY2028; Marvell’s piece could be significant if inference share is 20%+ of TPU volume

Sources: Oplexa Google Cloud Next 2026; Thenextweb Google Marvell Inference Chips; Google Blog Eighth Gen TPU Agentic Era; Marvell Stock Surges on Google AI Chip Partnership


2.4 Meta Platforms

Products & Partnership

  • Custom silicon: MTIA (Meta Training and Inference Accelerator) “Arke” variant (inference-only; Marvell confirmed as design partner)
  • Broadcom dominance: Broadcom is primary partner for MTIA training (Iris, Arke training variant); announced multi-year extension April 2026
  • Marvell role: Arke inference-only sub-variant (smaller volume than Iris training)
  • Optical interconnect: Marvell supplies optical DSP for MTIA cluster interconnect

Revenue Estimate (FY2026)

  • Arke inference-only: Smaller subset of total MTIA deploy; estimate $400–600M revenue to Marvell
  • Optical interconnect: $150–250M (Meta’s cluster scale requires heavy optical interconnect)
  • Meta Total to Marvell FY2026: ~$550–850M (midpoint ~$700M)

Wallet Share Calculation

  • Meta AI infrastructure capex: $100B × 40% = $40B addressable
  • Marvell’s implied share: $0.7B ÷ $40B = ~1.75% wallet share (understates Marvell’s dominance in optical)

Notes

  • Broadcom’s MTIA partnership is the headline; Marvell’s Arke role is smaller but strategic (inference is growing segment)
  • Arke likely targets lower-cost inference workloads (cheaper than Iris training variant)
  • Meta’s capex ramping toward $150B+ (management guidance suggests upside); optical interconnect demand will rise sharply

Sources: Meta Broadcom Partnership April 2026; Financialcontent Meta Iris MTIA Rollout; Globenewswire Broadcom Extended Partnership Meta


2.5 Oracle OCI

Products & Partnership

  • Custom silicon: None disclosed. Oracle’s strategy is Nvidia-first (Blackwell, NVL72 deployment H1 2026)
  • Optical interconnect: Minimal role; Broadcom + Nvidia ecosystem dominate
  • Marvell opportunity: Essentially zero direct custom ASIC; only indirect via 3rd-party optical modules

Revenue Estimate (FY2026)

  • Marvell’s direct revenue: ~$0M–50M (negligible custom silicon, optical modules secondary)
  • Oracle Total to Marvell FY2026: ~$0–50M (effectively zero)

Wallet Share Calculation

  • Oracle AI infrastructure capex: $37B × 40% = $14.8B addressable
  • Marvell’s implied share: $0.025B ÷ $14.8B = <0.2% wallet share

Notes

  • Oracle’s OCI Gen2 is Nvidia/Broadcom ecosystem play
  • Opportunity risk: If Oracle pivots to custom silicon (unlikely under Ellison), Marvell could gain share
  • Current positioning: Oracle is not a material customer for Marvell’s core products

Sources: Financialcontent Oracle Infrastructure Landlord 2026; Marketminute Silicon Power Couple Oracle Nvidia


Part 3: Summary Wallet Share by Hyperscaler (FY2026)

| Hyperscaler | Estimated Marvell Revenue | AI Silicon TAM | Implied Wallet Share | Confidence |
|---|---|---|---|---|
| AWS | $1.7–2.3B | $60B | 2.8–3.8% | ✓ (Trainium exclusive/majority) |
| Microsoft | $0.6–1.0B | $36B | 1.7–2.8% | ◐ (Maia ramping; design delays) |
| Google | $0.35–0.55B | $54B | 0.6–1.0% | ◐ (TPU wins nascent; FY2027+ upside) |
| Meta | $0.55–0.85B | $40B | 1.4–2.1% | ◐ (Arke secondary to Broadcom Iris) |
| Oracle | $0–0.05B | $14.8B | <0.2% | ⚠ (No custom silicon partnership) |
| TOTAL (Top 5) | $3.2–4.8B | $204.8B | 1.6–2.3% aggregate | — |

Implied Check vs. Reported FY2026 Results:

  • Marvell FY2026 data center revenue: $6.1B
  • Top-5 hyperscaler estimate: $3.2–4.8B (52–79% of data center segment)
  • Gap: Remainder ($1.3–2.9B) attributable to: tier-2 hyperscalers (ByteDance, Alibaba, Baidu, etc.), enterprise data center, telecom, storage, and legacy customers
  • Confidence: ✓ — The model’s implied revenue aligns with observed data center mix
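The roll-up check above can be reproduced directly from the summary table's ranges against the $6.1B reported data center figure:

```python
# Top-5 revenue ranges ($B, low/high) from the Part 3 summary table.
estimates = {
    "AWS": (1.7, 2.3), "Microsoft": (0.6, 1.0), "Google": (0.35, 0.55),
    "Meta": (0.55, 0.85), "Oracle": (0.0, 0.05),
}
DC_REVENUE_B = 6.1  # reported FY2026 data center revenue

low = sum(lo for lo, _ in estimates.values())    # ~$3.2B
high = sum(hi for _, hi in estimates.values())   # ~$4.75B
coverage = (low / DC_REVENUE_B, high / DC_REVENUE_B)  # ~52% to ~78% of segment
gap = (DC_REVENUE_B - high, DC_REVENUE_B - low)       # ~$1.35-2.9B unattributed
```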

Part 4: Cross-Check vs. 10-K Customer Concentration Disclosures

4.1 Published Concentration Metrics (FY2026 10-K, filed 3/11/2026)

| Metric | FY2026 | FY2025 | FY2024 |
|---|---|---|---|
| Distributor A revenue % | 37% | 34% | 24% |
| Customer A (Direct) revenue % | 14% | 13% | <10% |

Source: Marvell FY2026 10-K, Significant Customers Table

4.2 Identity of “Distributor A” & “Customer A”

Marvell does not name its top distributor or direct customer in the 10-K (standard industry practice). However, analyst inference suggests:

Distributor A (37% of revenue = ~$3.0B FY2026):

  • Likely Arrow Electronics or Tech Data (major semiconductor distributors)
  • End-customer base includes: AWS (via distributor), Microsoft, Google, Meta, OEM resellers
  • Note: Substantial majority of distributor shipments to China relate to non-China hyperscalers with OSAT/OEM assembly in China

Customer A (Direct, 14% of revenue = ~$1.1B FY2026):

  • Likely AWS or Microsoft (largest direct design-win customers)
  • If AWS: ~$1.0–1.1B is consistent with our Trainium estimate
  • If Microsoft: ~$0.8–1.0B is lower than our forecast, suggesting Maia ramp slower than expected
  • Confidence: ◐ — Most likely AWS, but not confirmed

4.3 Remaining Revenue Sources

  • Top-5 hyperscaler direct + distributor-routed: ~$4.1B ($3.0B Distributor A + $1.1B Customer A direct); note that Distributor A’s end customers overlap with the top-5 estimates above
  • Remaining revenue (~$4.1B, ~50% of total) attributable to:
    • Tier-2 hyperscalers (ByteDance, Alibaba, Baidu, Tencent): ~$1.5–2.0B (estimated)
    • Enterprise, telecom, storage: ~$1.5–2.0B
    • Legacy/non-AI data center: ~$0.5–1.0B
    • Other segments (comms, non-DC): $2.1B (reported as “Communications & Other”)
  • Caveat: these sub-estimates sum above the ~$4.1B remainder; read them as rough, partly overlapping ranges rather than additive buckets

Part 5: Replaceability Stress Test (FY2028 Impact Analysis)

5.1 Management Guidance & Baseline Assumption

Target: Marvell management projects FY2028 revenue approaching $15B (announced March 2026).

  • FY2026 actual: $8.2B
  • Implied CAGR FY2026–2028: ~35% ($8.2B → $15B over two fiscal years)
  • Breakdown assumption (data center-centric model): ~75% data center = $11.25B data center in FY2028
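The implied growth rate follows from the standard CAGR formula; a one-line check using the $8.2B FY2026 base and the $15B FY2028 target:

```python
# CAGR = (end / start)^(1 / years) - 1
base_b, target_b, years = 8.2, 15.0, 2
cagr = (target_b / base_b) ** (1 / years) - 1  # ~0.35, i.e. ~35% per year
```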

5.2 Stress Scenario: 50% Wallet-Share Loss at One Top-3 Customer

Scenario A: AWS loses 50% wallet share (e.g., internal ASIC design capability, competitive loss)

| Assumption | Baseline | Stressed |
|---|---|---|
| AWS Marvell revenue FY2028 (extrapolated) | $2.4–3.2B | $1.2–1.6B |
| Implied FY2028 loss vs. baseline | — | $1.2–1.6B |
| Impact on $15B target | 0% | –8 to –11% |
| Revised FY2028 data center target | $11.25B | $9.6–10.0B |

Scenario B: Microsoft/Maia growth stalls (Maia 200 cancelled or yields issues)

| Assumption | Baseline | Stressed |
|---|---|---|
| Microsoft Marvell revenue FY2028 (extrapolated with ramp) | $2.0–2.5B | $0.6–0.8B |
| Implied FY2028 loss vs. baseline | — | $1.2–1.8B |
| Impact on $15B target | 0% | –8 to –12% |
| Revised FY2028 data center target | $11.25B | $9.4–10.0B |

Scenario C: Google delays TPU 8i inference deployment (supply constraint, design risk)

| Assumption | Baseline | Stressed |
|---|---|---|
| Google Marvell revenue FY2028 (with TPU 8i ramp) | $1.5–2.0B | $0.4–0.5B |
| Implied FY2028 loss vs. baseline | — | $1.0–1.5B |
| Impact on $15B target | 0% | –7 to –10% |
| Revised FY2028 data center target | $11.25B | $9.7–10.2B |

5.3 Cumulative Risk (All Three Scenarios)

If any ONE of the top-3 customers experiences a 50% wallet-share loss:

  • Revenue impact: –$1.2–1.8B in FY2028 (vs. management’s $15B target)
  • Revised target: $13.2–13.8B (down 8–12%)
  • Probability: Analyst consensus suggests ~20–30% risk of competitive loss at any given hyperscaler in a 24-month window (source: KeyBanc, Stifel equity research)
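The three stress scenarios share the same mechanics: cut a customer's extrapolated FY2028 revenue and measure the hit against the $15B target. A sketch using the baseline/stressed ranges from the tables above (losses pair low baseline with low stressed, high with high, matching the tables):

```python
# (baseline_low, baseline_high, stressed_low, stressed_high) in $B, FY2028.
scenarios = {
    "AWS 50% loss":         (2.4, 3.2, 1.2, 1.6),
    "Microsoft Maia stall": (2.0, 2.5, 0.6, 0.8),
    "Google TPU 8i delay":  (1.5, 2.0, 0.4, 0.5),
}
TARGET_B = 15.0

for name, (blo, bhi, slo, shi) in scenarios.items():
    loss_lo, loss_hi = blo - slo, bhi - shi   # low-low / high-high pairing
    impact_lo, impact_hi = loss_lo / TARGET_B, loss_hi / TARGET_B
    print(f"{name}: loss ${loss_lo:.1f}-{loss_hi:.1f}B, "
          f"target impact -{impact_lo:.0%} to -{impact_hi:.0%}")
```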

Mitigation factors:

  1. Stickiness: Marvell’s design wins are multi-year (3–5 year engagements); switching costs high
  2. Optical moat: Marvell’s optical DSP dominance (via Celestial AI acquisition, $3.25B, announced early 2026) creates ecosystem lock-in
  3. Enablement: Marvell’s custom silicon methodology is hard to replicate; hyperscalers benefit from Marvell’s EDA, simulation, tape-out expertise
  4. Supply chain: Exclusive Trainium capacity agreements reduce AWS’s ability to switch suppliers mid-contract

Part 6: Bull vs. Bear Implications

Bull Case: Marvell’s Hyperscaler Relationships Are Sticky

  • Thesis: Once a hyperscaler commits to Marvell for custom ASIC design, switching costs (respinning with Broadcom, in-house design teams, tape-out delays) are prohibitive
  • Evidence: Trainium fully subscribed; Microsoft Maia multi-year roadmap; Google’s strategic diversification (not replacement) rationale; Meta’s Arke continuation
  • Implied wallet share at maturity (FY2028+): 4–6% of top-5 hyperscaler silicon spend = $12–15B revenue potential by FY2029

Bear Case: Marvell Is Replaceable; Wallets Consolidate to Broadcom/In-House

  • Thesis: Hyperscalers are incentivized to vertically integrate and consolidate ASIC suppliers; Broadcom holds the stronger incumbent relationships with Google and Meta; Marvell’s optical advantage may erode even post-Celestial acquisition
  • Evidence: Meta’s partnership “extended” with Broadcom (not Marvell) in April 2026; Google’s TPU training still Broadcom-exclusive; Marvell lacks memory/HBM expertise (competitor risk from Samsung/SK Hynix)
  • Implied wallet share at bear case (FY2028): 0.8–1.5% of addressable silicon TAM = $7–9B revenue plateau

Forecast Conclusion

  • Base case (60% confidence): Marvell’s wallet share grows to 2.5–3.5% by FY2028 = $12–13B revenue (slightly below management guidance due to execution risk)
  • Bull case (25% confidence): Wallet share reaches 4–6% by FY2028 = $14–16B revenue (meets/exceeds guidance)
  • Bear case (15% confidence): Wallet share stalls at 1.5–2.0% by FY2028 = $8–10B revenue (material miss)
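Weighting the three cases by their stated probabilities gives a probability-weighted FY2028 revenue expectation; midpoints of each case's range are an assumption for illustration:

```python
# (probability, revenue_low, revenue_high) in $B, from the forecast above.
cases = {"base": (0.60, 12, 13), "bull": (0.25, 14, 16), "bear": (0.15, 8, 10)}

expected = sum(p * (lo + hi) / 2 for p, lo, hi in cases.values())
# 0.60*12.5 + 0.25*15.0 + 0.15*9.0 = ~$12.6B -- below the $15B target,
# consistent with the base case's "slightly below guidance" read.
```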

Sources

  1. Futurum AI Capex 2026: The $690B Infrastructure Sprint
  2. CNBC: Tech AI Spending May Approach $700 Billion This Year
  3. Marvell FY2026 10-K Filing (Accession 0001835632-26-000011)
  4. Nextplatform: Marvell’s Custom XPU Pipeline
  5. Google Cloud Next 2026: Eighth-Generation TPU Announcement
  6. Thenextweb: Google in Talks With Marvell for Inference Chips
  7. Meta & Broadcom Extended Partnership (April 2026)
  8. Marvell Q4 FY2026 Earnings Call Summary
  9. Hyperframeresearch: Does Marvell’s AI Win Mask a Customer Cliff?
  10. Tom’s Hardware: Why Nvidia Invested $2B in Marvell

Cross-references