Sarvam AI Launches 24B-Parameter Language Model for Indian Languages

Sarvam AI, a Bengaluru-based artificial intelligence startup, has launched Sarvam-M, a 24-billion-parameter large language model (LLM) tailored explicitly for Indian languages, capable of tackling reasoning tasks such as math and programming. Built on Mistral Small, a compact, open-source model, Sarvam-M combines supervised fine-tuning with reinforcement learning guided by measurable outcomes, such as accurately solving math problems.
The model has been optimized for real-time responsiveness and higher accuracy. According to Sarvam AI, Sarvam-M outperforms models of similar size in Indian language processing, while also excelling in math and code generation tasks.
Benchmark Results Show Promising Gains
Sarvam-M demonstrated strong gains over its base model across benchmarks: a 20% improvement on Indian-language tasks, 21.6% on math, and 17.6% on programming. On tasks that combine Indian languages with reasoning, such as a romanized Indian-language version of the GSM-8K benchmark, the model achieved an 86% improvement.
The model performs comparably to larger models such as Llama 3.3 70B and Gemma 3 27B, and outperforms Llama-4 Scout on most evaluations. However, Sarvam-M scores roughly one percentage point below baseline models on English-heavy benchmarks such as MMLU.
Pushing Toward India’s Sovereign AI Vision
Designed for versatility, Sarvam-M supports a wide range of use cases, including conversational agents, translation tools, and educational platforms. It is publicly available via Hugging Face and can be accessed through Sarvam AI’s playground and APIs.
The launch follows Sarvam AI’s recent selection by the Indian government to develop the nation’s sovereign large language model under the India AI Mission, reinforcing its role in building a robust domestic AI ecosystem. Sarvam-M is the first in a planned series of models that aim to advance India’s leadership in AI innovation.