NVIDIA Unveils NVLink Fusion to Power Scalable, Custom AI Infrastructure

NVIDIA has introduced NVLink Fusion™, a new silicon technology designed to help industries build semi-custom AI infrastructure.
The technology allows companies to integrate custom XPU silicon with NVIDIA’s NVLink™ connectivity, the most advanced and widely adopted computing fabric, creating a more flexible and scalable foundation for AI deployments.
Partnerships with major industry players such as MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys, and Cadence mark a pivotal step toward delivering next-generation AI infrastructure optimized for demanding workloads such as AI model training and agentic AI inference.
The Power of NVLink Fusion
NVLink Fusion gives cloud providers and hyperscalers the ability to scale AI factories by integrating custom silicon with NVIDIA’s rack-scale systems and end-to-end networking platform. It delivers up to 800 Gb/s of throughput, supporting the rapid growth of AI infrastructure while maintaining high performance and reliability.
Hyperscalers can use the new technology to expand their AI infrastructure efficiently, scaling to millions of GPUs while delivering the data center performance that emerging AI workloads demand.
This fusion of proprietary XPUs with NVIDIA’s cutting-edge networking technology creates a unified, efficient system for AI-driven applications.
Strategic Collaboration with Industry Leaders
The launch of NVLink Fusion is backed by leading tech partners driving AI scalability and performance. MediaTek contributes ASIC design and high-speed interconnects, while Marvell offers custom silicon for next-gen trillion-parameter AI models. Alchip provides advanced manufacturing and design support for broad deployment.
Astera Labs enhances server efficiency with low-latency, high-bandwidth interconnects. Synopsys and Cadence contribute vital chip design and chiplet infrastructure, helping scale AI factories across cloud and on-premises environments.
Boosting AI Performance with Fujitsu and Qualcomm
NVLink Fusion also supports integration with CPUs from Fujitsu and Qualcomm, boosting AI performance. Fujitsu’s 2nm Arm-based MONAKA CPUs and Qualcomm’s custom processors, combined with NVIDIA’s AI platform, enable scalable, energy-efficient computing for advanced AI workloads.
NVIDIA’s fifth-generation NVLink platform, featured in GB200 and GB300 NVL72 rack systems, delivers 1.8 TB/s of per-GPU bandwidth, 14x that of PCIe Gen5, accelerating AI workload deployment for hyperscalers.
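As a rough sanity check of that comparison (a back-of-envelope sketch, not from the announcement: it assumes a PCIe Gen5 x16 link at about 64 GB/s per direction, roughly 128 GB/s bidirectional, and treats the 1.8 TB/s NVLink figure as total per-GPU bandwidth):

```python
# Back-of-envelope check of the "14x PCIe Gen5" bandwidth comparison.
# Assumptions (not from the article): PCIe Gen5 x16 ~ 64 GB/s per direction,
# i.e. ~128 GB/s bidirectional; the NVLink figure is total per-GPU bandwidth.

nvlink_gen5_gb_s = 1800        # fifth-gen NVLink: 1.8 TB/s per GPU, in GB/s
pcie_gen5_x16_gb_s = 128       # PCIe Gen5 x16, bidirectional, in GB/s (assumed)

ratio = nvlink_gen5_gb_s / pcie_gen5_x16_gb_s
print(f"NVLink vs PCIe Gen5 x16: ~{ratio:.1f}x")   # prints ~14.1x
```

Under those assumptions the ratio works out to roughly 14, consistent with the figure NVIDIA cites.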
NVIDIA also launched Mission Control™, a unified software platform that automates AI data center management, streamlining deployment, validation, and orchestration of advanced AI workloads.