GLOBALFOUNDRIES and SiFive to Deliver Next Level of High Bandwidth Memory on 12LP Platform for AI Applications

GLOBALFOUNDRIES® (GF®) and SiFive, Inc. announced today at GLOBALFOUNDRIES Technology Conference (GTC) in Taiwan that they are working to extend high DRAM performance levels with High Bandwidth Memory (HBM2E) on GF’s recently announced 12LP+ FinFET solution, with 2.5D packaging design services to enable fast time-to-market for Artificial Intelligence (AI) applications.

To achieve the capacity and bandwidth required for data-intensive AI training applications, system designers are challenged with squeezing more bandwidth into a smaller area while maintaining a reasonable power profile. SiFive’s customizable high bandwidth memory interface on GF’s 12LP platform and 12LP+ solution will enable easy integration of high bandwidth memory into a single System-on-Chip (SoC) solution to deliver fast, power-efficient data processing for AI applications in the computing and wired infrastructure markets.

As a part of the collaboration, designers will also have access to SiFive’s RISC-V IP portfolio and DesignShare IP ecosystem, which will leverage GF’s 12LP+ Design Technology Co-Optimization (DTCO), enabling them to significantly increase silicon specialization, improve design efficiency and deliver differentiated SoC solutions quickly and cost-effectively. 

“Extending SiFive’s reference IP platform, with HBM2E, on GF’s best-in-class performance 12LP+ solution delivers new levels of performance and integration for next generation SoCs and accelerators,” said Mohit Gupta, vice president and general manager, IP Business Unit at SiFive. “Deployment of highly optimized silicon requires highly customizable capabilities in order to realize the much-needed higher TOPS per milliwatt with low latency performance required for AI, while balancing the needs for low power and smaller area footprints.”

“At GF, we continue our commitment to providing differentiated FinFET-specific application solutions and IP that allow our clients to develop performance-enhanced products for AI applications,” said Ted Letavic, CTO of Computing and Wired Infrastructure at GF. “Together, with GF’s most advanced FinFET platform and SiFive’s unique design methodology, we will develop a unique high performance edge computing solution which empowers designers to take full advantage of the data deluge.”

GF’s 12LP+, an innovative new solution for AI training and inference applications, offers designers a high-speed, low-power 0.5Vmin SRAM bitcell that supports the fast, power-efficient shuttling of data between processors and memory. Moreover, a new interposer for 2.5D packages facilitates the integration of high-bandwidth memory with processors for fast, power-efficient data processing.

SiFive’s HBM2E interface and custom IP solution on GF’s 12LP and 12LP+ are now under development at GF’s Fab 8 in Malta, New York. Clients can start optimizing their chip designs to develop differentiated solutions for high performance compute and edge AI applications in 1H 2020.