What to know
- Meta agreed to a multiyear pact to buy millions of Nvidia AI chips — both current and future models.
- The deal includes GPUs for training and CPUs for inference — a first at this scale for Meta.
- Financial terms weren’t disclosed but analysts say it’s likely worth billions.
- Meta continues in-house chip efforts but has faced delays.
You’re likely hearing a lot about the Meta–Nvidia pact because it highlights both how critical AI hardware has become for big tech and how expensive that hardware is. In a multiyear strategic agreement announced this week, Meta Platforms — the company behind Facebook, WhatsApp, and Instagram — committed to buying millions of Nvidia’s latest AI chips to power a major expansion of its artificial intelligence data centers.
Meta’s order covers both graphics processing units (GPUs) — Nvidia’s Blackwell chips and the next-gen Rubin models — and central processing units (CPUs) like Nvidia’s Grace and upcoming Vera chips. GPUs are the workhorses of AI training, while CPUs are increasingly important for running trained AI models (known as inference) and supporting general data center tasks.
This is noteworthy because Meta will be the first major Big Tech buyer to deploy Nvidia’s standalone CPUs at scale, not just GPUs — a sign the industry is shifting focus toward inference workloads and full-stack data center designs.
Meta isn’t just buying hardware — it’s locking in supply at a time when demand for top-tier AI chips far outpaces production. Nvidia’s most advanced chips are often back-ordered, so securing a large allocation helps Meta sustain its AI ambitions without delays.
Analysts estimate the pact is worth billions of dollars, though neither company confirmed exact figures. The deal comes as Meta plans to substantially increase its AI infrastructure spending in 2026 — potentially to more than $100 billion — and invest heavily in new data centers through 2028.
Meta has been developing its own AI accelerators and processors, but that program has faced technical challenges and rollout delays. The expanded Nvidia deal suggests Meta still relies on external partners for the bulk of its compute needs — at least for now.
It also underscores the intense competition for chip supply among major cloud and AI players like OpenAI, Microsoft, Google, and Amazon — each racing to build larger, faster AI infrastructure.
For Nvidia, this deal reinforces its dominant position in the AI chip market, even as rivals develop alternative architectures and custom silicon. It also marks a push into new categories like standalone CPUs for AI inference, putting it in more direct competition with traditional CPU makers such as Intel and AMD.
Meta’s new agreement with Nvidia is about securing the hardware foundation for its future AI products — a huge investment in compute power that reflects how central chips are to the next generation of AI services.