
Mirantis Unveils AI Reference Architecture for Enterprises

By Neelima N M
2025-06-18
Mirantis launches AI Factory Reference Architecture, enabling rapid, scalable AI workload deployment across cloud, edge, and hybrid environments.

Mirantis, a company specializing in Kubernetes-native AI infrastructure, has unveiled what it calls the industry's first comprehensive reference architecture designed to support AI workloads.

The Mirantis AI Factory Reference Architecture, built on the company’s k0rdent AI platform, provides a secure, composable, scalable, and sovereign infrastructure for building, operating, and optimizing AI and machine learning (ML) at scale.

This solution is targeted at enterprises and service providers looking to efficiently deploy and manage large-scale AI workloads across various environments.

Key Features of the Mirantis AI Factory Reference Architecture

The Mirantis AI Factory Reference Architecture enables the rapid deployment and management of AI workloads in a variety of environments, including dedicated servers, hybrid or multi-cloud, and edge locations.

The architecture leverages Kubernetes and supports various AI workload types, such as training, fine-tuning, and inference. It is built to address complex issues related to high-performance computing, including remote direct memory access (RDMA) networking, GPU allocation and slicing, performance tuning, and Kubernetes scaling.
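For readers unfamiliar with how GPU allocation is typically expressed on Kubernetes, the sketch below uses the official Kubernetes Python client to request a single GPU for an inference pod. It illustrates the general pattern the article refers to; the image, namespace, and resource names are assumptions for this example and are not Mirantis's published templates.

```python
# Illustrative only: how a GPU request is commonly expressed on Kubernetes.
# The image, namespace, and resource names below are assumptions for this sketch,
# not Mirantis k0rdent AI APIs.
from kubernetes import client, config

def create_inference_pod():
    config.load_kube_config()  # uses the local kubeconfig

    container = client.V1Container(
        name="inference",
        image="ghcr.io/example/llm-inference:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            limits={
                # Whole-GPU request; with NVIDIA MIG-based slicing enabled,
                # a fractional resource name such as "nvidia.com/mig-1g.5gb"
                # would be requested instead.
                "nvidia.com/gpu": "1",
            }
        ),
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="llm-inference", namespace="ai-workloads"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="ai-workloads", body=pod)

if __name__ == "__main__":
    create_inference_pod()
```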

One of the key highlights is the ability for AI workloads to be deployed within days of hardware installation. This is made possible by using k0rdent AI’s templated, declarative model for rapid provisioning, which significantly accelerates the AI development lifecycle.

Additionally, it enables faster prototyping and iteration of AI models and services, shortening the path from development to production.

Comprehensive Integration and Flexibility

The reference architecture features a curated catalog of AI/ML tool integrations for observability, CI/CD, security, and more, built on open standards and customizable for diverse needs. Its composable design uses reusable templates for compute, storage, GPU, and networking, offering flexibility for AI workloads.
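To make the idea of composable, reusable templates concrete, the sketch below models compute, GPU, storage, and networking as independent building blocks that are assembled into a single declarative workload description. All class and field names here are hypothetical illustrations of the pattern, not k0rdent AI's actual schema.

```python
# Hypothetical sketch of the composable-template idea: independent building
# blocks for compute, GPU, storage, and networking are combined into one
# declarative workload description. Names and fields are invented for
# illustration and are not k0rdent AI's actual schema.
from dataclasses import dataclass, asdict

@dataclass
class ComputeTemplate:
    cpu_cores: int
    memory_gib: int

@dataclass
class GPUTemplate:
    model: str                  # e.g. "nvidia-h100" or "amd-mi300x"
    count: int
    slicing: str | None = None  # e.g. "mig-1g.5gb" when GPU slicing is used

@dataclass
class StorageTemplate:
    storage_class: str
    size_gib: int

@dataclass
class NetworkTemplate:
    rdma_enabled: bool = False

@dataclass
class AIWorkloadSpec:
    name: str
    compute: ComputeTemplate
    gpu: GPUTemplate
    storage: StorageTemplate
    network: NetworkTemplate

    def to_manifest(self) -> dict:
        """Render the composed templates as a single declarative document."""
        return {"apiVersion": "example.io/v1", "kind": "AIWorkload", "spec": asdict(self)}

# The same GPU and network templates can be reused across training and inference specs.
training = AIWorkloadSpec(
    name="fine-tune-llm",
    compute=ComputeTemplate(cpu_cores=32, memory_gib=256),
    gpu=GPUTemplate(model="nvidia-h100", count=8),
    storage=StorageTemplate(storage_class="fast-nvme", size_gib=2000),
    network=NetworkTemplate(rdma_enabled=True),
)
print(training.to_manifest())
```

The point of the pattern is that each block can be swapped independently, which is how reusable templates for compute, storage, GPU, and networking give the flexibility the architecture describes.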

Mirantis tackles AI infrastructure challenges with secure, compliant solutions for data sovereignty, multi-tenancy, and GPU resource management. Its architecture supports fine-tuning and seamless operation across diverse hardware, ensuring scalable, high-performance AI workloads.


Supporting AI Innovation with Advanced Technologies

Mirantis AI Factory supports NVIDIA, AMD, and Intel accelerators, giving customers flexible hardware choices for performance and scalability. It also provides capabilities for AI data management and for protecting sensitive intellectual property.

Shaun O’Meara, Chief Technology Officer at Mirantis, said, “We’ve built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads.”

He added, “This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don’t want to learn infrastructure.”

The introduction of the Mirantis AI Factory Reference Architecture is a major step forward in transforming how AI and ML workloads are deployed and managed at scale. It provides a robust foundation for building intelligent, sustainable networks capable of supporting the next generation of AI innovations, particularly in cloud, edge, and hybrid environments.
