NEO Semiconductor, a developer of innovative technologies for 3D NAND flash and DRAM memory, today announced that its ground-breaking technology, 3D X-DRAM, captured the top prize at Flash Memory Summit 2023, winning the “Best of Show” award for the Most Innovative Memory Technology. This category addresses innovations that will change the way high-performance memory is used in products and raise the bar to new levels of performance, availability, endurance and scalability.
“DRAM technology’s ability to scale has slowed dramatically in recent years, yet the demands of high-performance computing workloads such as AI, machine learning and big data analytics create an insatiable appetite for significantly more DRAM than is possible today,” said Jay Kramer, Chairman of the Awards Program and President of Network Storage Advisors Inc. “We are proud to recognize NEO Semiconductor’s 3D X-DRAM solution as the world’s first 3D NAND-like DRAM cell array, which reduces the number of chips required for a DRAM product and can increase memory capacity by up to 800%.”
“3D X-DRAM™ will revolutionize future computer memory systems and enable a whole new world of products and applications that until now weren’t even possible. DRAM will play a key role in the current booming AI era, as high-density, high-speed memory is required to run AI algorithms over huge amounts of data, as in applications like ChatGPT,” said Andy Hsu, Founder and CEO of NEO Semiconductor. “We are honored and thankful to accept this prestigious award, and I applaud the entire NEO Semiconductor team for the hard work and dedication it has taken to make this ground-breaking technology a reality.”
3D X-DRAM™ is the first 3D DRAM built on today’s 3D NAND flash process, requiring no new process development. This greatly reduces risk and saves a huge amount of development time and cost. Using the current 230-layer process, it can increase DRAM density by 8 times, reaching 128 Gb per die, and density can be increased further by stacking more layers. This provides a huge amount of local memory for the CPU and addresses the memory bottleneck in today’s AI systems.
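To make the scaling claim concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the 8x figure is relative to a 16 Gb planar DRAM die (implied by 128 Gb / 8) and that density scales roughly linearly with layer count; neither assumption comes from NEO Semiconductor, and the function and layer counts below are illustrative only.

    # Back-of-the-envelope sketch of the density-scaling claim.
    # Assumptions (not from the announcement): the 8x gain is measured
    # against a 16 Gb planar die (128 Gb / 8), and density scales
    # roughly linearly with the number of stacked layers.

    BASELINE_DENSITY_GBIT = 16   # implied planar DRAM die density, in gigabits
    LAYERS_TODAY = 230           # layer count cited for today's 3D NAND process

    def projected_density_gbit(layers: int) -> float:
        """Project die density for a given layer count, scaling linearly
        from the 230-layer / 128 Gb data point in the announcement."""
        density_at_230 = BASELINE_DENSITY_GBIT * 8   # 128 Gb
        return density_at_230 * layers / LAYERS_TODAY

    if __name__ == "__main__":
        for layers in (230, 300, 500):   # hypothetical future layer counts
            print(f"{layers} layers -> ~{projected_density_gbit(layers):.0f} Gb per die")

Running the sketch reproduces the 128 Gb figure at 230 layers and shows how capacity would grow under the stated assumption that stacking more layers raises density proportionally.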