Executive Summary
Marvell Technology’s revenue is heavily concentrated among a small cohort of hyperscale cloud providers and optical module OEMs. As of fiscal year 2026 (ended January 31, 2026), one distributor represented 37% of net revenue and one direct customer represented 14% of revenue. The top ten customers (distributor + direct) accounted for 81% of total revenue in fiscal 2025, indicating extreme concentration risk and dependency on a handful of players—primarily Amazon Web Services, Microsoft, and Google.
The company’s strategic value resides in deep co-design partnerships with these hyperscalers to build custom AI accelerators (Trainium, Maia, potential TPU roles) and provision optical DSPs to module makers serving cloud data centers.
Primary Hyperscaler Customers
Amazon Web Services (AWS)
Custom AI Silicon Partnership: Trainium & Inferentia
Marvell is a co-design partner for Amazon’s Trainium and Inferentia custom AI accelerators. In December 2024, Marvell and AWS extended their strategic relationship through a five-year, multi-generational agreement covering a broad range of data center semiconductors, including custom AI products, optical digital signal processors, and other networking components. ✓
Revenue Concentration
AWS is believed to be the end customer behind the 37% (distributor) and 14% (direct) revenue figures disclosed in the 10-K, though Marvell does not publicly name its largest customers in regulatory filings. Industry analysis points to AWS’s Trainium/Inferentia programs as the primary driver of Marvell’s custom ASIC business, with analysts projecting the custom silicon segment could reach ~$1 billion in annual revenue within two years. ◐
Trainium Momentum
Trainium2 experienced Marvell’s “fastest-ever ramp-up in demand,” with Trainium3 expected to have fully committed supply by mid-2026, and strong pre-order interest in Trainium4. Marvell has secured 3nm wafer and advanced packaging capacity with expected production starts in calendar 2026. ✓
Sources: Marvell Newsroom, Motley Fool, Next Platform
Microsoft
Maia AI Accelerator Co-Design
Marvell is the primary co-design partner for Microsoft’s Maia AI accelerator family. The company won a design engagement for Maia 1 and Maia 2, with Maia 3 slated for 2026 release. Reports indicate Maia 300 (the next-generation follow-on) is explicitly co-designed with Marvell and expected to utilize HBM4 memory and advance toward 1.4nm nodes. ✓
Strategic Role
Marvell’s “co-design” model enables Microsoft to bypass the cost of general-purpose GPUs by tailoring custom silicon to specific AI workloads. The design win “continues to progress well,” and Marvell is engaged with Microsoft on follow-on AI XPU programs beyond Maia. ✓
Revenue Contribution
Microsoft is a material direct customer within Marvell’s data center segment (73% of FY 2026 revenue), though exact revenue percentages are not publicly disclosed. The Maia programs represent a growing revenue stream as hyperscalers roll out inference and specialized workloads. ◐
Sources: X/Wall St Engine, Next Platform, Klover.ai
Google
TPU & Custom AI Chip Discussions
Google is in talks with Marvell Technology to develop new AI inference chips to supplement its Broadcom-led TPU supply chain. Reports from April 20, 2026 indicate Google is negotiating with Marvell to build a memory processing unit (MPU) to work alongside its TPUs, as well as a new TPU variant dedicated to inference, with design finalization expected by 2027. ◐
Multi-Supplier Strategy
Google is assembling a four-partner chip supply chain (Broadcom, MediaTek, Marvell, and others) to diversify its TPU sourcing and negotiate competitive pricing, mirroring automotive supply strategies. Marvell’s role is not yet contractually finalized; discussions remain in progress. ◐
Market Implication
If Marvell wins the Google TPU engagement, it would represent the company’s third major hyperscaler custom-silicon win (after AWS Trainium and Microsoft Maia), significantly raising Marvell’s addressable market in the custom ASIC segment. Bloomberg projects Marvell could capture 20–25% of the $118 billion custom ASIC market by the early 2030s. ◐
Sources: CNBC, The Next Web, Yahoo Finance
Meta
MTIA Custom Silicon Collaboration
Meta’s flagship MTIA (Meta Training and Inference Accelerator) was designed primarily with Broadcom, but Marvell collaborates on specialized variants. Meta has introduced “Arke,” an inference-only chip developed in collaboration with Marvell. ✓
Hardware-Software Co-Design Model
Meta maintains full control of its AI stack—from the PyTorch software framework to silicon design. Marvell’s role in Arke and other MTIA iterations lets Meta implement workload-specific optimizations that are not achievable on off-the-shelf merchant hardware. ✓
Growth Trajectory
Meta is expanding its custom silicon capacity aggressively, with engineering teams discussing second-generation MTIA at industry conferences. Marvell’s role in Arke suggests deepening engagement for inference-specific workloads. ✓
Sources: Meta Engineering Blog, Meta AI Infrastructure Expansion, FinancialContent
Optical Module OEM Customers
Marvell provides merchant PAM4 and coherent digital signal processors (DSPs) to a broad ecosystem of optical module manufacturers, who integrate Marvell silicon into pluggable transceivers sold to hyperscalers and carriers.
Primary OEM Partners
Lumentum
Lumentum integrates Marvell’s coherent DSPs (e.g., Orion) into its 800G and 1.6T coherent pluggable modules. Lumentum, Coherent, and other vendors recently demonstrated interoperability of their Marvell Orion DSP-based modules over a 520 km link, underscoring Marvell’s strong position in the coherent ecosystem. ✓
Sources: Marvell Optical DSP Solutions
Innolight
Innolight is among the largest suppliers of high-speed datacom optical modules (800G and above). Innolight sources PAM4 and coherent DSPs from Marvell as a preferred merchant supplier, drawn by Marvell’s technology independence (Marvell does not compete with module makers by selling integrated lasers). ✓
Sources: Optical Module Market Deep Dive
Coherent Corporation (formerly II-VI)
Coherent (formerly II-VI) manufactures high-speed optical modules and integrates Marvell’s Orion coherent DSP into its pluggable offerings. Coherent is transitioning some low-cost 800G multimode designs from Marvell to Broadcom or MaxLinear DSPs to optimize cost, but remains a material user of Marvell’s portfolio. ✓
Accelink, Eoptolink
Accelink and Eoptolink are emerging optical module suppliers building 800G and 1.6T modules with Marvell DSP technology. These suppliers favor Marvell as their merchant DSP provider because Marvell does not compete with its module-maker customers. ✓
Carrier Tier-1 Customers
Marvell’s optical DSPs are integrated into carrier-grade coherent and long-haul optical transmission systems used by major telecom operators.
Typical Tier-1 Carrier Wins
- Coherent ZR/ZR+ modules with Marvell DSPs deployed in carrier metro and long-haul networks
- Coherent-lite (1.6T PAM4) intra-data-center interconnect solutions for carrier data centers
- DCI (data center interconnect) optical modules using Marvell Nova 2 and Electra 2 DSPs
While Marvell’s 10-K does not name specific carrier customers, the company’s optical DSP products are standard in carrier-grade coherent systems from OEMs like Lumentum, Coherent, and Accelink. ◐
Revenue Concentration Summary
| Metric | Fiscal 2026 Value |
|---|---|
| Top distributor (% of revenue) | 37% ✓ |
| Top direct customer (% of revenue) | 14% ✓ |
| Top 10 customers (% of revenue) | 81% (FY 2025 baseline) ✓ |
| Data center segment (% of total revenue) | 73% ✓ |
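The table above can be cross-checked with a rough back-of-the-envelope calculation. This sketch mixes the FY 2025 top-ten figure with FY 2026 customer percentages, and the even split across the remaining top-ten customers is an assumption for illustration only, not a disclosed fact:

```python
# Rough concentration arithmetic from the disclosed figures above.
# NOTE: the 81% top-10 figure is the FY 2025 baseline; combining it with
# FY 2026 percentages is an approximation for illustration only.

top_distributor = 0.37   # FY 2026, largest distributor
top_direct = 0.14        # FY 2026, largest direct customer
top_ten = 0.81           # FY 2025 baseline, top 10 customers combined

# Share attributable to the two disclosed concentrations
top_two = top_distributor + top_direct           # ~51% of revenue

# Residual spread across the remaining eight top-10 customers
residual = top_ten - top_two                     # ~30% of revenue
avg_remaining = residual / 8                     # ~3.8% each, if split evenly

print(f"Top two disclosed customers: {top_two:.0%}")
print(f"Remaining eight top-10 customers combined: {residual:.0%}")
print(f"Average per remaining top-10 customer: {avg_remaining:.1%}")
```

Even under this crude split, every remaining top-ten customer would average only a few percent of revenue, which is why the two named concentrations dominate the risk disclosure.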
Risk Disclosure: Marvell’s sales are concentrated in a few large hyperscale customers. Loss of, or significant revenue reduction from, any top customer (or significant market share decline among these customers) poses material downside risk to consolidated revenue. This concentration is typical for advanced semiconductor suppliers in the custom ASIC space, but creates dependency risk. ✓
Sources: Marvell 10-K (March 11, 2026), Marvell Q3 FY 2026 Earnings
Conclusion
Marvell’s customer ecosystem is dominated by three hyperscaler custom-silicon engagements (AWS Trainium, Microsoft Maia, emerging Google TPU role) and deep optical DSP relationships with a broad but competitive ecosystem of module OEMs (Lumentum, Innolight, Coherent, Accelink, Eoptolink). This creates a “barbell” revenue structure: ultra-concentrated hyperscaler dependency offset by diversified optical module OEM revenue across carriers and cloud operators. The concentration risk is material and disclosed; Marvell mitigates it by maintaining co-design depth with each hyperscaler, creating high switching costs and embedded engineering teams.