What to know
- Oracle will install 50,000 of AMD’s Instinct MI450 AI chips in its cloud infrastructure, starting in the third quarter of 2026.
- This marks the biggest deployment of AMD GPUs by any hyperscaler, signaling growing competition against Nvidia’s dominance in AI processing.
- Oracle partnered with AMD for an AI supercluster, focusing on maximizing scale, energy efficiency, and open-source software compatibility.
- The move reflects rising demand for diversified GPU supply as AI workloads surge across enterprise applications.
Oracle’s announcement sets a new benchmark for cloud and AI infrastructure. Oracle will incorporate AMD’s latest Instinct MI450 GPUs into a major supercluster, with rollout scheduled to begin in the third quarter of 2026, making it the first major public AI deployment built exclusively on AMD hardware. Oracle’s investment follows similar commitments by OpenAI and other AI leaders, signaling a trend toward broadening chip supply chains and reducing dependency on Nvidia, which currently powers over 90 percent of large-scale AI GPUs for cloud workloads.
The AMD MI450 chips deliver up to 432 GB of HBM4 memory and 20 TB/s of memory bandwidth per GPU, positioning them as direct competitors to Nvidia’s next-generation accelerators. Oracle’s supercluster will use AMD’s Helios rack architecture, next-generation EPYC CPUs, and Pensando networking, aiming for extreme scalability and energy efficiency. AMD’s open-source ROCm software stack will enable developers to build AI applications without relying on Nvidia’s proprietary CUDA ecosystem.
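Taking the per-GPU figures cited above at face value, a quick back-of-the-envelope sketch of the deployment's aggregate capacity (decimal units assumed; this is illustrative arithmetic, not an Oracle or AMD specification):

```python
# Aggregate figures for the planned deployment, using only the numbers
# cited in this article: 50,000 GPUs, each with up to 432 GB of HBM4
# and 20 TB/s of memory bandwidth. Decimal (SI) unit conversions.
NUM_GPUS = 50_000
HBM_PER_GPU_GB = 432      # GB of HBM4 per MI450 (per the article)
BW_PER_GPU_TBS = 20       # TB/s memory bandwidth per MI450 (per the article)

total_hbm_pb = NUM_GPUS * HBM_PER_GPU_GB / 1_000_000  # GB -> PB
total_bw_pbs = NUM_GPUS * BW_PER_GPU_TBS / 1_000      # TB/s -> PB/s

print(f"Aggregate HBM4 capacity: {total_hbm_pb:.1f} PB")       # 21.6 PB
print(f"Aggregate memory bandwidth: {total_bw_pbs:.0f} PB/s")  # 1000 PB/s
```

At these cited figures, the full supercluster would hold roughly 21.6 PB of HBM4 with an aggregate memory bandwidth on the order of an exabyte per second.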

Industry analysts view Oracle’s move as a challenge to Nvidia’s market dominance and a sign of accelerating innovation in AI compute hardware. Despite broader market fluctuations, AMD’s chip announcement drove a noticeable uptick in its stock, while Nvidia’s shares saw a decline. As AI model training and inference tasks grow in scale and complexity, cloud providers are racing to diversify partnerships and offer customers more open, flexible infrastructure options.
For enterprises and researchers, Oracle’s AMD-powered infrastructure promises improved performance for large models, cost-efficient scaling, and reduced reliance on a single vendor. This ecosystem evolution is expected to drive competitive pricing, hardware innovation, and new opportunities to optimize workloads for cutting-edge AI applications.