JOE O’HARE, Everspin Technologies
As embedded systems grow more sophisticated with artificial intelligence (AI) and edge processing, memory technology has become increasingly important in overcoming performance and efficiency challenges. Spin-transfer-torque magnetoresistive random access memory (STT-MRAM) continues to stand out by meeting critical demands for high endurance, non-volatility, and low latency, particularly in applications at the far edge. This year, the focus has been on advancing STT-MRAM for emerging applications in the industrial internet of things (IIoT), as well as in neuromorphic and in-memory computing to accelerate AI inferencing.
Expanding upon the foundation set in prior years, we are leveraging new standards like Compute Express Link (CXL) to broaden MRAM's role in compute for AI and machine learning. As an industry, we see high-bandwidth memory rising in data centers. Yet these technologies face limitations in energy-intensive applications, opening the door for persistent-memory approaches that reduce power by minimizing data movement between memory and storage. Everspin's CXL-enabled MRAM concept is poised to address this by providing fast, persistent, scalable memory solutions that support inference engines in autonomous systems and real-time edge processing.
This evolution also means STT-MRAM’s role as a NOR flash enhancement is broadening. For both embedded and stand-alone markets, STT-MRAM provides reliability and endurance that traditional NOR flash cannot match, making it increasingly valuable in environments where data integrity and rapid updates are essential, such as IIoT and automotive. Field-programmable gate array (FPGA)-based systems will benefit from the ability to perform over-the-air updates as frequently as needed.
Looking ahead, we’re preparing for STT-MRAM’s role in neuromorphic and in-memory computing, utilizing an innovative MRAM design approach for low-latency, low-power data storage. As these systems evolve, MRAM will be instrumental in enabling real-time learning and adaptation in AI models, enhancing capabilities across industries like aerospace, medical monitoring, and smart cities.
Embedded systems with multi-core CPUs and real-time compute demands require more software code, delivered at faster rates, to enable execute-in-place. External non-volatile memory standards, such as low-power double data rate (LPDDRx) non-volatile memory (NVM), are in development to provide extremely fast data access, supplying code and data to the CPU with orders-of-magnitude improvement over conventional interfaces such as expanded serial peripheral interface (XSPI).
In 2025, the memory industry stands at a transformative juncture, committed to pushing the boundaries of what memory technology can achieve and providing solutions for the next generation of intelligent technologies.