AMD Introduces Next-Generation AI Infrastructure with "Helios" and Strategic Partnerships

By Neelima N M
2025-06-13
AMD unveils "Helios," a high-performance AI rack platform powered by Instinct MI400 GPUs and UALink for next-gen AI workloads and distributed inference.

AMD is taking the next leap in AI infrastructure by unveiling “Helios,” its new AI rack platform designed for high-performance workloads, including large-scale AI training and distributed inference.

This marks a significant advancement in AMD’s commitment to AI solutions, aiming to meet the growing demand for high-performance computing (HPC) as organizations across sectors deploy agentic AI.

The Power of "Helios" and AMD’s AI Vision

Helios integrates AMD’s latest technologies, including the next-gen Instinct™ MI400 Series GPUs, 6th Gen EPYC™ “Venice” CPUs, and AMD Pensando “Vulcano” AI NICs.

This infrastructure is designed to deliver the compute density, memory bandwidth, and scale-out bandwidth needed for the most demanding AI workloads. The system can provide up to 40 petaflops of FP4 performance and 432 GB of HBM4 memory, making it ideal for training massive models and running distributed inference at scale.
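The headline figures lend themselves to a quick back-of-the-envelope calculation. The sketch below is not an AMD-published number: it assumes the 40 petaflops of FP4 and 432 GB of HBM4 are per-GPU figures and that one Helios rack carries the 72 GPUs cited for its UALink domain below, then derives rack-level totals.

```python
# Back-of-the-envelope rack math (illustrative only).
# Assumption: the 40 PFLOPS FP4 and 432 GB HBM4 figures are per GPU,
# and one Helios rack carries the 72 GPUs cited for its UALink domain.
GPUS_PER_RACK = 72
FP4_PFLOPS_PER_GPU = 40      # peak FP4 compute per GPU (assumed)
HBM4_GB_PER_GPU = 432        # HBM4 capacity per GPU (assumed)

rack_fp4_exaflops = GPUS_PER_RACK * FP4_PFLOPS_PER_GPU / 1_000
rack_hbm4_tb = GPUS_PER_RACK * HBM4_GB_PER_GPU / 1_000

print(f"Rack-level FP4 compute: ~{rack_fp4_exaflops:.2f} exaflops")
print(f"Rack-level HBM4 capacity: ~{rack_hbm4_tb:.1f} TB")
```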

One of the key features of the Helios platform is its seamless integration of UALink™, an open standard that allows flexible scaling across 72 GPUs in a single rack.

UALink enables direct communication between CPUs, GPUs, and scale-out NICs, so all components operate as a unified system and avoid the performance bottlenecks often seen in AI clusters.
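To make the "distributed inference at scale" claim concrete, here is a minimal sizing sketch. It is not AMD code: the model size, FP4 weight encoding, and overhead factor are hypothetical assumptions, and the only figure taken from the article is the 432 GB of memory per GPU. The point is simply that memory capacity within a single 72-GPU scale-up domain determines how a large model gets sharded.

```python
import math

# Minimal sizing sketch (hypothetical numbers throughout).
# Estimates how many GPUs are needed just to hold a model's weights,
# which is the first constraint when sharding for distributed inference.
def gpus_for_weights(params_billions: float,
                     bytes_per_param: float = 0.5,   # FP4 weights = 0.5 byte/param
                     overhead: float = 1.3,          # KV cache, activations, buffers (rough guess)
                     hbm_gb_per_gpu: float = 432.0) -> int:
    weight_gb = params_billions * bytes_per_param    # billions of params x bytes/param = GB
    total_gb = weight_gb * overhead
    return math.ceil(total_gb / hbm_gb_per_gpu)

# Example: a hypothetical 2-trillion-parameter model served in FP4
# fits on a few GPUs, leaving the rest of a 72-GPU rack for replicas
# or longer-context serving.
print(gpus_for_weights(2000))
```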

AMD’s Four Guiding Principles for AI Infrastructure

AMD’s AI infrastructure is built on four key pillars: high-performance Instinct MI350 GPUs with up to 3.58x better FP6 performance; enterprise-grade 5th Gen EPYC™ CPUs for seamless integration; low-latency Pensando Pollara NICs for advanced networking; and support for open standards such as UALink and OCP to ensure flexible, interoperable system design.

Strategic Partnerships: Oracle Leads the Charge

Oracle Cloud Infrastructure (OCI) is among the first to adopt AMD’s Instinct MI355X-powered rack-scale solution, underscoring Oracle’s commitment to providing scalable and reliable AI infrastructure. With OCI’s broad adoption of AMD-powered bare metal instances, Oracle is poised to offer robust AI solutions that cater to generative and agentic AI applications at enterprise scale.

Mahesh Thiagarajan, Executive Vice President of Oracle Cloud Infrastructure, said, “Oracle Cloud Infrastructure continues to benefit from its strategic collaboration with AMD. We will be one of the first to provide the MI355X rack-scale infrastructure using the combined power of EPYC, Instinct, and Pensando.”

He added, “In addition, Oracle relies extensively on AMD technology, both internally for its own workloads and externally for customer-facing applications. We plan to continue to have deep engagement across multiple AMD product generations, and we maintain strong confidence in the AMD roadmap and their consistent ability to deliver to expectations.”

Also read: AMD Acquires AI Startup Brium to Challenge Nvidia’s Market Grip

“Helios” Redefines AI Rack Solutions

The introduction of “Helios” isn’t just an upgrade; it redefines what’s possible with AI infrastructure. Built to accommodate the evolving needs of AI workloads, from training frontier models to executing distributed inference, “Helios” is designed for rapid deployment, enabling businesses to accelerate their AI innovation with cutting-edge performance and flexible integration.

Related Topics

AI Infrastructure, AI Infrastructure Scaling
