VSORA, an innovative startup offering high-performance AI chips for Generative AI (GenAI) and autonomous driving, and its partners CEA-Grenoble and Valeo were named winners of the Embedded AI Call for Projects, part of France 2030.
The project, called SHAPE AI (Scalable & High-Performance Accelerator Processor at Edge for Artificial Intelligence), was selected because it addresses the emerging business segment of AI inference co-processors that combine low latency, low power consumption and high computing capacity for autonomous driving and other applications. Emmanuel Macron, President of France, announced the eight France 2030 Embedded AI winners during his keynote championing AI at the start of the VivaTech Conference on May 21 in Paris.
France 2030 is a national investment plan endowed with $36.89 billion (€34 billion) with 10 objectives to understand better, live better and produce better in France by 2030. Part 2 of the program will invest in innovative technologies such as embedded AI architectures, a category that will receive $43.267 million (€40 million). VSORA and its partners will receive $7.05 million (€6.5 million) from the French government.
VSORA has developed a unique scalable and unified processor architecture to enhance the performance and user experience of a broad spectrum of computing applications ranging from GenAI and autonomous driving to edge AI. The CEA-Grenoble research center focuses on the development of innovative solutions in the fields of energy, health, information and communication. Valeo is a global automotive supplier based in France.
The France 2030 Embedded AI win follows the 2023 announcement that VSORA had received $13.18 million (€12.0 million) from the European Innovation Council (EIC) Accelerator Program. That funding supports the development of its hardware acceleration solutions for AI inference, advanced signal processing and complex algorithms.
VSORA’s Jotunn™ is a chiplet-based scalable solution that delivers a performance jump for GenAI inference by accelerating a broad variety of advanced AI and compute-intensive algorithms. The Jotunn8 chip provides up to 6.4 PetaFLOPS with reduced power consumption compared to existing solutions. It achieves a compute efficiency of more than 50% on large language models (LLMs) with massive parameter counts, such as GPT-3.5 (175 billion parameters) or GPT-4 (1.8 trillion), compared with the roughly 2-4% efficiency typical of current industry solutions.
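To see why utilization efficiency matters as much as peak throughput, a back-of-envelope calculation helps: effective throughput is simply peak compute multiplied by the efficiency fraction. The sketch below uses the figures quoted above (6.4 PFLOPS peak, >50% efficiency versus a typical 2-4%); the calculation itself is our illustration, not a VSORA benchmark.

```python
# Illustrative only: effective throughput = peak compute x utilization
# efficiency. Figures are taken from the article's quoted numbers.

def effective_pflops(peak_pflops: float, efficiency: float) -> float:
    """Effective sustained throughput for a given peak (PFLOPS) and
    utilization efficiency (fraction between 0 and 1)."""
    return peak_pflops * efficiency

# Jotunn8 as quoted: 6.4 PFLOPS peak at >50% efficiency on LLMs.
jotunn8 = effective_pflops(6.4, 0.50)    # 3.2 PFLOPS sustained

# A hypothetical competitor at the same nominal peak but the quoted
# typical 2-4% efficiency (midpoint ~3% assumed here).
typical = effective_pflops(6.4, 0.03)    # ~0.19 PFLOPS sustained

print(f"Jotunn8: {jotunn8:.2f} PFLOPS, typical: {typical:.3f} PFLOPS")
```

On these assumptions the efficiency gap alone accounts for a roughly 15x difference in sustained LLM throughput at identical peak ratings.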
The Tyr™ family comprises PetaFLOPS-class computational companion chips that accelerate Level 3 through Level 5 autonomous driving and advanced driver-assistance systems (ADAS) platforms, as well as low-cost Generative AI applications. Each chip delivers between 800 trillion and 3,200 trillion operations per second while consuming as little as 10 watts.