Multiverse Computing Raises $215M to Scale Ground-Breaking Technology that Compresses LLMs by up to 95%

SAN SEBASTIAN, Spain, June 12, 2025 (GLOBE NEWSWIRE) -- Multiverse Computing, the global leader in quantum-inspired AI model compression, has developed CompactifAI, a compression technology capable of reducing the size of large language models (LLMs) by up to 95% while maintaining model performance. Having spent 2024 developing the technology and rolling it out to initial customers, the company today announces a €189 million ($215 million) investment round.

The Series B round is led by Bullhound Capital with support from a range of international and strategic investors, including HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba and Capital Riesgo de Euskadi - Grupo SPRI. The investment will accelerate widespread adoption of CompactifAI by addressing the massive costs that prohibit the rollout of LLMs, targeting the $106 billion AI inference market.

LLMs typically run on specialized, cloud-based infrastructure that drives up data center costs. Traditional compression techniques—quantization and pruning—aim to address these challenges, but the resulting models significantly underperform the original LLMs. With the development of CompactifAI, Multiverse discovered a new approach. CompactifAI models are highly compressed versions of leading open-source LLMs that retain the original accuracy, run 4x-12x faster and yield a 50%-80% reduction in inference costs. These compressed, affordable, energy-efficient models can run in the cloud, in private data centers or—in the case of ultra-compressed LLMs—directly on devices such as PCs, phones, cars, drones and even a Raspberry Pi.

"The prevailing wisdom is that shrinking LLMs comes at a cost. Multiverse is changing that," said Enrique Lizaso Olmos, Founder and CEO of Multiverse Computing. "What started as a breakthrough in model compression quickly proved transformative—unlocking new efficiencies in AI deployment and earning rapid adoption for its ability to radically reduce the hardware requirements for running AI models. With a unique syndicate of expert and strategic global ...