Mistral Medium 3 Surpasses Llama 4 Maverick in Performance and Cost-Efficiency

By Rishabh Srihari
2025-05-09

Mistral AI has introduced Mistral Medium 3, a new artificial intelligence model designed for enterprise-scale deployments. The company positions the model as a high-performing yet cost-efficient option, capable of matching or surpassing more expensive alternatives in core benchmarks.

According to Mistral's internal benchmark data, Medium 3 outperforms Meta's Llama 4 Maverick as well as models such as DeepSeek v3 and Cohere Command R+, while offering significantly lower usage costs.

Specialised Capabilities with Broader Accessibility

Optimised for coding, STEM applications, and multimodal understanding, Mistral Medium 3 delivers inference speeds of up to 150 tokens per second and supports a 128k-token context window. This makes it suitable for demanding use cases such as complex document analysis, technical content generation, and scientific problem-solving. Despite being a mid-sized model, it reportedly handles tasks with greater accuracy and speed than many larger offerings on the market.

Also read: Mistral AI Unveils Le Chat for Enterprises Amid Rapid Revenue Growth

The company emphasises the model's flexibility in deployment: it can operate in cloud, hybrid, or on-premises environments and runs effectively on as few as four GPUs. This positions it as a practical option for businesses with varying levels of compute infrastructure, while also supporting continuous pretraining and integration with enterprise systems.

Market Availability and Roadmap

Mistral Medium 3 is currently accessible via Mistral’s proprietary platform and Amazon SageMaker. Support for additional platforms, including Azure AI, Google Cloud, IBM WatsonX, and NVIDIA NIM, is expected in the near future. Early adopters from finance, energy, and healthcare sectors are already leveraging the model for use cases ranging from automated customer engagement to domain-specific data analysis.
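For teams evaluating the API route described above, the snippet below is a minimal sketch of what a chat completion against Medium 3 might look like using Mistral's official `mistralai` Python client. The model identifier used here (`mistral-medium-latest`) and the example prompt are assumptions for illustration, not details taken from the announcement; the name exposed on your account or on Amazon SageMaker may differ.

```python
# Minimal sketch: calling Mistral Medium 3 via Mistral's chat completions API
# with the official `mistralai` Python client (pip install mistralai).
# "mistral-medium-latest" is an assumed model identifier; check the model list
# available on your platform before using it.

import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-medium-latest",  # assumed identifier for Mistral Medium 3
    messages=[
        {
            "role": "user",
            "content": "Summarise the key risk clauses in the attached supplier contract.",
        }
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```

For deployments on Amazon SageMaker or other cloud platforms, the invocation mechanics differ, but the same chat-style request and response pattern applies.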

This release follows closely on the heels of Mistral Small 3.1, the company's open-source model, and builds on its foundation with improved text comprehension and enhanced multimodal abilities.

Looking Ahead

In line with its push to broaden accessibility and drive open innovation, Mistral has confirmed that a larger open-weight model is in development. The Medium 3 launch signals the company’s ongoing effort to offer enterprises flexible, high-performance models that reduce cost without compromising capability.

Related Topics

Large Language Models (LLMs)