Multiverse Raises $215M to Cut AI Costs with Quantum Tech

Spanish startup Multiverse Computing has raised €189 million (approximately $215 million) in a Series B funding round on the strength of its compression technology, CompactifAI. The quantum-computing-inspired technique promises to shrink large language models (LLMs) by up to 95% without sacrificing performance.
Disrupting AI Efficiency with CompactifAI
CompactifAI offers a transformative solution for reducing the size of popular open-source LLMs, including models like Llama 4 Scout, Llama 3.3 70B, and Mistral Small 3.1. These compressed models are 4x to 12x faster than their non-compressed counterparts, resulting in a 50% to 80% reduction in inference costs. For example, Multiverse's compressed Llama 4 Scout Slim runs at a cost of just 10 cents per million tokens, compared to 14 cents for the standard version. This reduction in size and energy consumption opens up the possibility of running LLMs on devices like PCs, smartphones, drones, and even Raspberry Pi computers.
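As a rough illustration of the pricing quoted above, the snippet below compares the article's per-million-token rates for the standard and compressed Llama 4 Scout at a sample workload. The `inference_cost` helper is purely illustrative, not part of any Multiverse API, and the figures are taken directly from the article:

```python
def inference_cost(tokens: int, usd_per_million: float) -> float:
    """Cost in USD for a token count at a per-million-token rate."""
    return tokens / 1_000_000 * usd_per_million

# Article's quoted rates: 14 cents/M tokens (standard Llama 4 Scout)
# vs. 10 cents/M tokens (compressed "Slim" variant).
standard = inference_cost(1_000_000_000, 0.14)  # 1B tokens, standard model
slim = inference_cost(1_000_000_000, 0.10)      # 1B tokens, Slim model
savings_pct = (standard - slim) / standard * 100
print(f"standard=${standard:.2f}  slim=${slim:.2f}  savings={savings_pct:.1f}%")
# → standard=$140.00  slim=$100.00  savings=28.6%
```

Note that the per-token price gap alone is smaller than the 50% to 80% inference-cost reduction cited above, which also reflects the speed and hardware gains of the smaller models.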
A Quantum Leap for AI with Strong Backing
Multiverse’s technology is rooted in tensor networks, computational tools that mimic quantum computations on standard classical hardware. Co-founded by CTO Román Orús, a pioneer in tensor network research, and CEO Enrique Lizaso Olmos, the company holds 160 patents and serves 100 global clients, including major companies such as Iberdrola, Bosch, and the Bank of Canada. The Series B round was led by Bullhound Capital with participation from investors including HP Tech Ventures and Santander Climate VC, bringing Multiverse’s total funding to approximately $250 million.
With this new funding, Multiverse aims to scale its AI solutions, promising to significantly lower costs for enterprises looking to deploy AI at scale, while contributing to energy efficiency across industries.