Fractile Licenses Andes Technology’s RISC-V Vector Processor

Andes Technology, a supplier of high-efficiency, low-power 32/64-bit RISC-V processor cores and Founding Premier member of RISC-V International, announced a partnership with Fractile, the company building the chips and systems needed to reach the next frontier of AI performance. Fractile is developing AI inference accelerators based on in-memory compute and aims to run frontier AI models – large language, vision, and audio models – two orders of magnitude faster than existing hardware, at a tenfold reduction in cost.

Large language models and other foundation models have become the driving force behind the skyrocketing scale of data center AI compute requirements. From ChatGPT to the open-source Llama model series, LLMs and other foundation models are finding widespread application. Model inference – the process of serving these trained models – is becoming the dominant portion of compute costs, exceeding the cost of model training. Fractile has licensed the powerful Andes AX45MPV RISC-V vector processor, combined with ACE (Andes Automated Custom Extension™) and the Andes Domain Library, and plans to incorporate the vector processing unit into its first-generation data center AI inference accelerator.

Fractile uses novel circuits to execute 99.99% of the operations needed to run model inference in on-chip memory. This removes the need to shuttle model parameters to and from processor chips, instead baking computational operations directly into memory. This architecture drives both much higher energy efficiency (TOPS/W) and dramatically improved latency on inference tasks (tokens per second per user in an LLM context, for instance). The company has been betting on inference scaling – leveraging more inference-time compute to improve AI performance – as the next frontier of AI scaling. The AI world seems to agree, with OpenAI recently releasing its latest LLM, o1, which requires orders of magnitude more inference compute than previous LLMs. Fractile’s hardware and software stack is built to take models that can still require many seconds to produce an answer on current hardware and make their responses instantaneous.
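As a rough, back-of-envelope illustration of why shuttling parameters on and off chip caps per-user latency (the model size and bandwidth figures below are illustrative assumptions, not Fractile measurements): in autoregressive decoding, each generated token requires reading roughly the full set of model weights, so per-user throughput is bounded by memory bandwidth divided by model size.

```c
/* Illustrative sketch only, not Fractile's code: an upper bound on LLM
 * decode throughput when weights live in off-chip memory.  All numbers
 * are assumptions chosen for the arithmetic. */
#include <stdio.h>

int main(void) {
    double params       = 70e9;    /* assumed 70B-parameter model              */
    double bytes_per_w  = 2.0;     /* FP16/BF16 weights                        */
    double dram_bw_gbs  = 3350.0;  /* assumed HBM bandwidth of a current GPU   */

    double bytes_per_token = params * bytes_per_w;                /* weights read per token */
    double tokens_per_sec  = dram_bw_gbs * 1e9 / bytes_per_token; /* bandwidth-bound limit   */

    printf("Upper bound with off-chip weights: ~%.0f tokens/s per user\n",
           tokens_per_sec);
    /* Keeping the weights inside the memory that performs the compute removes
     * this off-chip traffic from the critical path, which is the bottleneck
     * described above. */
    return 0;
}
```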

As part of the collaboration, Fractile will integrate Andes Technology’s high-performance RISC-V vector processor with its own groundbreaking in-memory computing architecture via ACE. Fractile’s architecture leverages the strengths of both companies, aiming to deliver an exceptionally fast and cost-effective AI inference system that overcomes the limitations of conventional computing methods – blasting through the memory bottleneck.
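One common way such a division of labor is arranged in AI accelerators (an assumption for illustration, not a statement of Fractile’s design) is that dense matrix multiplies stay in the in-memory compute fabric, while data-dependent, element-wise operations such as softmax run on the software-programmable vector core, where an ordinary C loop can be auto-vectorized for the RISC-V Vector extension (e.g. with -march=rv64gcv).

```c
/* Generic sketch of a kernel a software-programmable vector core might run
 * alongside in-memory matrix multiplies; not Andes or Fractile code. */
#include <math.h>
#include <stddef.h>

void softmax(float *x, size_t n) {
    float max = x[0];
    for (size_t i = 1; i < n; i++)       /* reduction: find max for numerical stability */
        if (x[i] > max) max = x[i];

    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {     /* element-wise exponentiation */
        x[i] = expf(x[i] - max);
        sum += x[i];
    }

    float inv = 1.0f / sum;
    for (size_t i = 0; i < n; i++)       /* element-wise normalization */
        x[i] *= inv;
}
```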

Dr. Charlie Su, President and CTO of Andes Technology, expressed his enthusiasm for the partnership: “AX45MPV, with strong compute capabilities, high memory bandwidth and the flexible ACE tool, has been chosen by innovative AI companies large and small since its debut in 2023. Andes RISC-V vector processors have enabled many AI SoCs to break free from architectural limitations and achieve new levels of performance and efficiency. We are confident that the synergy between Fractile’s in-memory computing technologies and Andes’ award-winning RISC-V vector processing will lead to yet another success.”

Dr. Walter Goodwin, CEO and founder of Fractile, added: “The limitations of existing hardware present the biggest barrier to AI performance and adoption. Andes Technology has unmatched technical and commercial leadership on RISC-V vector processors and is a natural partner for us as we build Fractile’s accelerator systems. Building hardware for AI acceleration is intrinsically hard – the world’s leading models can change overnight, while chips take time to bring to market. Software-programmable vector processors like Andes’ are a key part of staying robust to these changes. We’re delighted to announce this collaboration as Fractile furthers its mission to supercharge inference.”
