Nvidia's Unspoken Problem: 40% of Revenue Comes From Companies Developing Their Own AI Chips
Briefly

"Google's TPUs power Bard and Search. Amazon's Trainium chips offer AWS customers cheaper alternatives. Meta's MTIA handles inference workloads. Microsoft's Maia chip is deploying across Azure. These are production infrastructure. Nvidia's response? Custom chips 'complement' rather than replace GPUs. But here's the math that should terrify investors: inference represents 80% of long-term AI compute. Training is 20%. If hyperscalers build inference chips in-house, Nvidia loses access to 80% of the addressable market."
"The market is noticing what enterprises already know: MI300X chips deliver competitive performance at 20-30% lower cost. Microsoft Azure now offers MI300X instances. Oracle Cloud partnered with AMD for infrastructure. Meta deployed MI300X for inference. OpenAI reportedly tested AMD chips to diversify away from Nvidia dependency. The CUDA moat argument assumes 2020 dynamics still apply. PyTorch support improved dramatically. AMD's ROCm software stack closed gaps. OpenAI's Triton compiler abstracts hardware differences. For inference workloads, where performance-per-dollar matters more than raw speed, AMD wins business."
Nvidia built a $4.6 trillion empire selling GPUs and infrastructure for AI training and inference. Its largest customers — Microsoft, Meta, Amazon, and Alphabet — account for roughly 40–50% of revenue and are deploying custom AI chips in production. Inference workloads are projected to constitute about 80% of long-term AI compute, leaving GPUs vulnerable if hyperscalers internalize inference hardware. AMD's MI300X delivers competitive performance at 20–30% lower cost, with cloud providers and large enterprises adopting AMD instances and improving tooling closing the software gap. Together, customer chip development and AMD competition threaten Nvidia's pricing power and addressable market.
Read at 24/7 Wall St.