CoreWeave Deploys NVIDIA GB200 NVL72 Systems to Power Next-Gen AI Workloads

By Neelima N M
2025-04-21

Cloud infrastructure provider CoreWeave has become one of the first to offer access to NVIDIA's GB200 NVL72 systems at scale, unlocking a new level of computing power for customers training and deploying cutting-edge AI models, according to NVIDIA's blog post. Industry players including Cohere, IBM, and Mistral AI are already leveraging the platform to accelerate AI development, citing dramatic performance gains.

CoreWeave, known for its rapid adoption of next-gen NVIDIA technologies, is now making NVIDIA Grace Blackwell Superchips available across its infrastructure. The deployment is centered on the GB200 NVL72, a rack-scale system featuring 72 Blackwell GPUs and 36 Grace CPUs tightly connected with NVIDIA NVLink for unified, high-throughput computing, making it well suited to agentic AI and large-scale reasoning models.

Cohere Sees 3x Training Performance

Enterprise AI platform Cohere is using CoreWeave's Grace Blackwell systems to enhance its model development and inference capabilities. The company reported up to a 3x improvement in training speed for its 100-billion-parameter models, even before optimizing specifically for Blackwell.

Autumn Moulder, VP of Engineering at Cohere, highlighted how the GB200 NVL72 architecture supports “incredible performance efficiency across our stack,” including real-time inference improvements powered by FP4 precision and unified memory.

IBM Trains Granite Models on Blackwell

IBM is scaling its Granite model family, a line of enterprise-focused open-source AI models, across thousands of Blackwell GPUs hosted by CoreWeave. The collaboration combines NVIDIA GB200 systems with IBM's Storage Scale platform to enhance training throughput and cost efficiency.

Sriram Raghavan, VP of AI at IBM Research, said, “We are excited to see the acceleration that NVIDIA GB200 NVL72 can bring to training our Granite family of models.”

He added, “This collaboration with CoreWeave will augment IBM’s capabilities to help build advanced, high-performance and cost-efficient models for powering enterprise and agentic AI applications with IBM watsonx.”


Mistral AI Doubles Training Speed

French open-source model developer Mistral AI is among the first customers to receive more than 1,000 Blackwell GPUs from CoreWeave. Reporting a 2x performance improvement for dense model training even without custom optimizations, Mistral is using the hardware to fast-track development of next-generation open-source models such as Mistral Large.

CoreWeave's deployment is part of a broader scale-up, with plans to offer more than 110,000 GPUs connected via NVIDIA Quantum-2 InfiniBand, positioning it as one of the most advanced AI cloud platforms available. Customers can now spin up instances built for training trillion-parameter models, high-throughput inference, and real-time agentic AI applications.
