On Broadcom's December 11, 2025 Q4 earnings call, CEO Hock Tan resolved a mystery that had puzzled the industry for months: the unnamed customer behind a $10 billion AI chip order is the AI startup Anthropic. More striking still, Anthropic not only placed the $10 billion order in Q3 but added another $11 billion order in Q4, for a total of $21 billion across two quarters. The orders cover racks of Google’s latest Tensor Processing Unit (TPU), Ironwood, with plans to deploy up to 1 million TPUs and bring Anthropic more than 1 gigawatt (GW) of AI computing capacity in 2026.
Mystery Order Truth Unveiled
Broadcom Earnings Call Revelation
On December 11, 2025, Broadcom reported FY2025 Q4 results (quarter ended November 2, 2025). The numbers were strong, but the stock subsequently fell 11%, stoking market concerns about an AI bubble.
But during the call, CEO Hock Tan disclosed something more consequential: the identity of the mystery customer.
Hock Tan’s Statement
“We received a $10 billion TPU order from one customer in the third quarter; that customer is Anthropic.”
“Anthropic placed another $11 billion order in the fourth quarter, scheduled for delivery in late 2026.”
The news surprised the industry: Anthropic is well known, but for a startup to place an order of this size is extraordinary.
Order Details
Order Amounts
- Q3 Order (August-October 2025): $10 billion
- Q4 Order (November 2025): $11 billion
- Total: $21 billion
One of the largest orders ever placed by a single customer with a single supplier in the semiconductor industry.
Product Content
The order covers the latest version of Google’s Tensor Processing Unit (TPU):
- Ironwood racks (Google’s latest TPU generation; detailed rack specifications not yet published)
- Custom-designed, optimized specifically for AI training and inference
- Designed with Broadcom and sold under Google’s brand; fabrication is contracted to foundries (see Broadcom’s business model below)
Delivery Timeline
- Q3 order: deliveries begin in late 2025 and continue through H1 2026
- Q4 order: Scheduled late 2026 delivery
- Overall deployment: Continues throughout 2026
Staggering Scale
1 Million TPUs
According to industry analysis, the $21 billion order translates into:
- Approximately 1 million Google TPUs
- Organized into thousands of Ironwood racks
- One of the world’s largest AI training clusters
Over 1 GW of Computing Capacity
1 gigawatt (GW) equals 1 billion watts. This AI cluster is expected to draw more than 1 GW of power (rough math after the list below):
- Comparable to the output of a mid-sized nuclear power plant
- Enough to supply roughly 500,000 US households
- Underscoring the sheer scale of AI’s energy demands
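A quick sanity check of these figures. The per-chip power and per-household draw below are assumed values for illustration, not published specifications:

```python
# Back-of-envelope check of the "1 million TPUs, >1 GW" figures.
# The per-chip and per-household numbers are assumptions for illustration;
# Google has not published Ironwood rack-level power specs.

NUM_TPUS = 1_000_000
ORDER_TOTAL_USD = 21e9

# Assumed all-in power per deployed TPU (chip + cooling + networking overhead).
WATTS_PER_TPU_ALL_IN = 1_000          # hypothetical ~1 kW

# Assumed draw per US household (a conservative, peak-leaning figure).
KW_PER_HOUSEHOLD = 2.0

total_power_gw = NUM_TPUS * WATTS_PER_TPU_ALL_IN / 1e9
implied_usd_per_tpu = ORDER_TOTAL_USD / NUM_TPUS
households = total_power_gw * 1e6 / KW_PER_HOUSEHOLD

print(f"Cluster power:   ~{total_power_gw:.1f} GW")
print(f"Implied $/TPU:   ~${implied_usd_per_tpu:,.0f} (incl. racks and networking)")
print(f"Households:      ~{households:,.0f}")
```

With an average household draw closer to 1.2 kW, the same gigawatt would cover roughly 800,000 homes, so the household comparison is quite sensitive to the assumption used.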
Anthropic’s AI Chip Strategy
Multi-Cloud Multi-Chip Strategy
Anthropic is not going all-in on Google TPUs; it employs a multi-cloud, multi-chip strategy:
Google TPU
- Orders Google TPUs through Broadcom
- Primarily for large-scale model training
- The $21B order is the centerpiece of this investment
Amazon Trainium
- Uses Amazon’s proprietary Trainium AI chips
- Accessed via AWS cloud services
- Used for training and inference workloads
Nvidia GPU
- Traditionally most widely used AI training hardware
- Anthropic also uses Nvidia H100, H200 GPUs
- Diversifies risk and avoids dependence on a single vendor
Strategy Advantages
- Risk reduction: not constrained by any single hardware supplier
- Cost optimization: choose the most economical hardware for each task
- Technical flexibility: leverage the strengths of different chip architectures
- Negotiating power: competition among suppliers strengthens Anthropic’s hand (see the scheduling sketch below)
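As a purely illustrative sketch (not a description of Anthropic’s actual systems), a multi-cloud, multi-chip setup can be thought of as a scheduler that routes each workload to the cheapest hardware pool that fits it. The pool names, prices, and workload types below are hypothetical:

```python
# Illustrative sketch of a multi-cloud, multi-chip scheduler.
# Pool names, prices, and workload categories are hypothetical examples.
from dataclasses import dataclass


@dataclass
class HardwarePool:
    name: str             # e.g. a TPU, Trainium, or Nvidia GPU pool
    vendor: str
    cost_per_hour: float  # hypothetical $/accelerator-hour
    good_for: set         # workload types this pool handles well


POOLS = [
    HardwarePool("gcp-tpu-ironwood", "google", 3.2, {"pretraining", "inference"}),
    HardwarePool("aws-trainium", "amazon", 2.8, {"pretraining", "finetuning"}),
    HardwarePool("nvidia-h200", "nvidia", 4.5,
                 {"pretraining", "finetuning", "inference", "research"}),
]


def pick_pool(workload: str) -> HardwarePool:
    """Choose the cheapest pool that supports the workload type."""
    candidates = [p for p in POOLS if workload in p.good_for]
    if not candidates:
        raise ValueError(f"no hardware pool supports workload {workload!r}")
    return min(candidates, key=lambda p: p.cost_per_hour)


if __name__ == "__main__":
    for job in ["pretraining", "inference", "research"]:
        print(job, "->", pick_pool(job).name)
```

The key property is that most workload types have at least two viable pools, which is what delivers the supply resilience and bargaining leverage listed above.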
Why Choose TPU?
Google TPU Advantages
Compared with Nvidia GPUs, Google TPUs offer several distinct advantages:
Purpose-Built for AI
- TPUs are dedicated AI accelerators (ASICs), not general-purpose GPUs
- Optimized for matrix operations, the core computation in AI workloads
- Theoretically better performance per watt than GPUs
Price Competitiveness
- TPUs are typically cheaper than comparable Nvidia GPUs
- The cost advantage grows with purchase scale
- A $21B TPU build-out might cost only 70-80% of an equivalent Nvidia solution (rough arithmetic below)
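The rough arithmetic behind the 70-80% figure, taking the midpoint as an assumption:

```python
# If $21B of TPUs buys compute that a GPU build-out would deliver at
# ~75% of the GPU price (assumed midpoint of the 70-80% estimate above):
TPU_SPEND_USD = 21e9
TPU_TO_GPU_COST_RATIO = 0.75

gpu_equivalent_usd = TPU_SPEND_USD / TPU_TO_GPU_COST_RATIO
savings_usd = gpu_equivalent_usd - TPU_SPEND_USD

print(f"Nvidia-equivalent cost: ~${gpu_equivalent_usd / 1e9:.0f}B")
print(f"Implied savings:        ~${savings_usd / 1e9:.0f}B")
```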
Supply Stability
- Nvidia’s high-end H100 and H200 GPUs remain in short supply
- Wait times run 6-12 months
- Google TPUs, built on Broadcom’s dedicated production pipeline, offer more stable supply
Google Collaboration
Anthropic has a close relationship with Google:
- Google is Anthropic’s major investor (invested over $2 billion)
- Uses Google Cloud Platform (GCP) infrastructure
- Technical collaboration, jointly advancing AI research
Anthropic’s AI Ambitions
Next-Generation Claude Models
The $21B investment hints at the scale of Anthropic’s plans:
Next Claude Generation
Anthropic may launch its next-generation Claude models in 2026, with expected features:
- Parameter scale: possibly reaching multiple trillions (the current Claude Opus 4.5 is estimated in the hundreds of billions)
- Capability leap: surpassing GPT-5.2 in reasoning, programming, and scientific tasks
- Enhanced multimodality: dramatically improved image, video, and audio understanding
Commercial Expansion
Claude usage grew rapidly in 2025:
- Growing enterprise adoption of Claude for Work
- Surging API call volume
- Driving demand for far more inference capacity
Competing with OpenAI and Google
The AI arms race continues to heat up:
- OpenAI has Microsoft Azure backing
- Google has proprietary TPUs and data centers
- Anthropic must ensure its computing resources keep pace
Broadcom’s AI Chip Business
$73 Billion Backlog
Broadcom AI Product Orders
Hock Tan revealed during the earnings call that:
- Broadcom currently has $73 billion AI product order backlog
- Expected to deliver within next six quarters (18 months)
- An average of roughly $12 billion in deliveries per quarter (quick run-rate calculation below)
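The per-quarter figure is simply the backlog spread evenly over the stated delivery window:

```python
# Simple run-rate: $73B backlog spread over six quarters.
backlog_usd = 73e9
quarters = 6
print(f"~${backlog_usd / quarters / 1e9:.1f}B of AI product deliveries per quarter")
```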
Customer Composition
Broadcom’s AI chip customers include:
- Anthropic: $21 billion (two quarters)
- Google: TPU orders for its own use
- Meta: Custom AI chips
- Other tech giants: Amazon, Microsoft (presumed)
Broadcom Business Model
ASIC Design and Manufacturing
Broadcom is not a fab; instead it:
- Designs AI-specific chips (ASICs)
- Contracts TSMC and other foundries for manufacturing
- Provides integrated solutions (chips + networking + systems)
Customization Services
- Designs custom chips based on customer needs
- Google’s TPU and Meta’s MTIA are among its co-design engagements (Amazon’s Trainium is designed in-house by Amazon’s Annapurna Labs)
- Long-term collaborative relationships create strong customer lock-in
High-Margin Business
- AI chip design carries high gross margins (estimated at 60-70%)
- Order sizes are large, with single customers reaching tens of billions of dollars
- Driving growth in Broadcom’s stock price and market capitalization
Industry Impact
AI Computing Arms Race
Computing Costs Soaring
AI model training costs are growing exponentially:
- GPT-4 training cost estimate: $100 million
- GPT-5 training cost estimate: $500M-$1B
- Next-gen models may require billions in computing costs
Anthropic’s $21B investment partly reflects this trend.
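A rough way to see why costs scale this way is the common approximation that training compute is about 6 × parameters × training tokens. Every specific number below (parameter counts, token counts, the blended dollars-per-FLOP rate) is an illustrative assumption, not a disclosed figure:

```python
# Back-of-envelope training-cost scaling using the common C ≈ 6·N·D
# FLOPs approximation. All specific numbers are illustrative assumptions.

def training_cost_usd(params, tokens, usd_per_flop=5e-18):
    """Estimate training cost from model size and token count.

    usd_per_flop is a hypothetical blended rate covering hardware,
    power, and utilization; real rates vary widely.
    """
    flops = 6 * params * tokens
    return flops * usd_per_flop

# Hypothetical model generations: each step grows parameters and data.
generations = {
    "hundreds of billions of params": (3e11, 1e13),   # 300B params, 10T tokens
    "trillions of params":            (2e12, 4e13),   # 2T params, 40T tokens
    "tens of trillions of params":    (1e13, 1e14),   # 10T params, 100T tokens
}

for label, (n, d) in generations.items():
    print(f"{label:35s} ~${training_cost_usd(n, d) / 1e9:.1f}B")
```

Real costs also depend on hardware utilization, chip pricing, and how many experimental runs precede the final one, which is why published estimates vary so widely.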
Who Can Afford It?
Only well-funded companies can stay in the race:
- Tech giants: Google, Microsoft, and Meta own their infrastructure
- Well-funded startups: Anthropic (Google- and Amazon-backed), OpenAI (Microsoft-backed)
- Small and mid-sized companies: increasingly hard-pressed to compete; many will rely on open-source models or APIs
Chip Supply Chain
Nvidia vs. Google TPU vs. Amazon Trainium
The AI chip market is settling into a multi-vendor landscape:
Nvidia Advantages
- Technical leadership and a mature CUDA ecosystem
- Highly versatile, suited to a wide range of AI tasks
- Still the largest market share (estimated at 70-80%)
Google TPU Advantages
- Cost-effective and well suited to ultra-large-scale deployment
- Google Cloud integration, easy to use
- Broadcom co-design and dedicated production capacity give supply-chain control
Trend
- Major customers are adopting multi-vendor strategies
- Nvidia’s dominance is being challenged
- The share of custom ASICs is rising
Future Outlook
2026 AI Cluster Online
Deployment Schedule
Anthropic’s 1-million-TPU cluster:
- 2026 Q1: First Ironwood racks online
- Throughout 2026: Continuous deployment
- Late 2026: second order delivered, roughly doubling the cluster
Training Plans
Expected uses:
- Claude 4 series model training
- AI safety research experiments
- Long-term AI alignment research
- Commercial inference service expansion
Industry Trends
Computing Demand Continues Exploding
- Model parameter scale: Hundreds of billions → trillions → tens of trillions
- Training data: TB → PB
- Computing clusters: Thousands of cards → millions of processors
Energy and Sustainability
1 GW-scale AI clusters are fueling sustainability discussions (rough energy math after the list below):
- Data center carbon footprints
- Renewable energy usage
- Computing efficiency optimization
- Balancing growth with social responsibility
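A rough sense of what 1 GW means in annual energy and emissions terms; the capacity factor and grid carbon intensity below are assumptions:

```python
# Annual energy and grid-average emissions for a 1 GW cluster.
# Capacity factor and carbon intensity are illustrative assumptions.
POWER_GW = 1.0
CAPACITY_FACTOR = 0.9                 # assumed fraction of the year near full load
HOURS_PER_YEAR = 8760

energy_twh = POWER_GW * CAPACITY_FACTOR * HOURS_PER_YEAR / 1000   # GWh -> TWh
print(f"Annual energy: ~{energy_twh:.1f} TWh")

# The US grid averages roughly 0.4 kg CO2 per kWh; near zero if fully
# matched with renewable or nuclear power.
KG_CO2_PER_KWH = 0.4
emissions_mt = energy_twh * 1e9 * KG_CO2_PER_KWH / 1e9   # kWh -> kg CO2 -> Mt
print(f"Grid-average emissions: ~{emissions_mt:.1f} Mt CO2 per year")
```

This is why renewable-energy matching and efficiency gains dominate these discussions: the same cluster on a carbon-free supply has a dramatically smaller footprint.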
Conclusion
Broadcom’s revelation that Anthropic is the $21B customer marks the AI industry’s entry into a new phase: no longer laboratory research, but industrial-scale competition demanding millions of processors, gigawatts of power, and tens of billions of dollars in investment.
Anthropic’s Bet
The $21B investment demonstrates Anthropic’s confidence and ambition about AI’s future, and a massive commitment to its “safe, controllable AI” philosophy.
Industry Landscape
This order reshapes the AI chip market:
- Google TPUs become a genuine competitor to Nvidia GPUs
- Broadcom consolidates its position as an AI infrastructure supplier
- Multi-cloud, multi-chip strategies go mainstream
In 2026, when Anthropic’s AI super-cluster comes online, we will begin to see how this bet plays out. And it all started with Hock Tan’s revelation on Broadcom’s December 11, 2025 earnings call.