Microsoft says it already has the infrastructure OpenAI is rushing to build

On October 9, 2025, Microsoft CEO Satya Nadella posted a video showcasing the company’s first large-scale AI system (sometimes dubbed an AI “factory,” a term Nvidia uses). He called it the “first of many” deployments that will work across Microsoft Azure’s global data centers to support OpenAI’s workloads.

Each of these systems consists of more than 4,600 Nvidia GB300 racks equipped with the new Blackwell Ultra GPU, interconnected using Nvidia’s InfiniBand networking technology. Notably, Nvidia announced its acquisition of Mellanox (a leader in high-speed networking) in 2019, with the deal closing in 2020, giving it control over this critical component.

Microsoft plans to deploy “hundreds of thousands” of Blackwell Ultra GPUs in these clusters around the world. The scale is striking — Microsoft claims it already operates more than 300 data centers across 34 countries, positioning itself to meet the demands of “frontier” AI models with potentially hundreds of trillions of parameters.

This announcement comes as OpenAI is making major commitments to build its own data center capacity, having secured substantial deals with both Nvidia and AMD. Estimates suggest OpenAI has committed roughly $1 trillion so far toward its infrastructure expansion.

By making this move, Microsoft is signaling that it’s not lagging behind — it already has the global backbone in place to support next-generation AI workloads. More details on this expansion are expected later this month, when Microsoft CTO Kevin Scott is scheduled to speak at TechCrunch Disrupt (October 27–29).

