In a keynote talk at The ConFab earlier this year, Samsung executive Yoon Woo (Y.W.) Lee predicted that optical interconnects would soon be required. “Exascale computing will require optical interconnection to communicate between the CPU and memory chip,” he said. This appears to be moving closer to reality with last week’s demonstration by Fujitsu and Intel of the world’s first Optical PCI Express (OPCIe)-based server.
Intel demonstrated its 50Gbps silicon photonics link in 2010, and it has now been put into practice. In a blog post, Intel’s Victor Krutul said that Fujitsu took two standard Primergy RX200 servers and added an Intel Silicon Photonics module to each, along with an Intel-designed FPGA. The FPGA performed the signal conditioning needed to make PCI Express “optical friendly.” Using Intel Silicon Photonics, the companies were able to send the PCI Express protocol optically through an MXC connector to an expansion box containing several solid-state disks (SSDs) and Xeon Phi co-processors.
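For a sense of scale, here is a back-of-envelope comparison of PCIe 3.0 lane throughput against that 2010 photonics link (four wavelengths at 12.5 Gb/s each). This is my own illustrative sketch, not from Intel or Fujitsu; the demo’s actual lane configuration was not disclosed, so the x8 link width is an assumption.

```python
# Back-of-envelope bandwidth comparison (illustrative figures only).
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (per the PCIe spec).
pcie3_raw_gtps = 8.0
pcie3_encoding = 128.0 / 130.0
lanes = 8  # assumed x8 link; the demo's lane count wasn't stated

pcie3_payload_gbps = pcie3_raw_gtps * pcie3_encoding * lanes

# Intel's 2010 silicon photonics demo: 4 wavelengths x 12.5 Gb/s each.
optical_gbps = 4 * 12.5

print(f"PCIe 3.0 x{lanes}: {pcie3_payload_gbps:.1f} Gb/s payload")
print(f"2010 photonics link: {optical_gbps:.1f} Gb/s raw")
```

Run as written, this puts a PCIe 3.0 x8 link at roughly 63 Gb/s of payload bandwidth, in the same neighborhood as the 50 Gb/s optical link.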
It’s commonly known that silicon is not a good material for generating or detecting light (although silicon dioxide is quite good at channeling light). Optical interconnects will require III-V lasers to convert electrical signals into pulses of light and, on the receiving end, photodetectors, typically germanium-based, to convert that light back into electrical signals. Intel has demonstrated that it’s feasible to directly integrate photonics with silicon CMOS in an impressive prototype, but most solutions will require some type of advanced packaging, such as flip-chipped lasers.
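To make the electrical–optical–electrical chain concrete, here is a minimal numerical sketch of the receive side, where a photodetector with responsivity R (in A/W) converts received optical power P into photocurrent I = R·P. Every number here is an assumed ballpark, not measured device data.

```python
# Minimal sketch of the electrical-optical-electrical conversion chain.
# All numbers are illustrative assumptions, not measured device data.

def dbm_to_mw(dbm):
    """Convert optical power in dBm to milliwatts."""
    return 10 ** (dbm / 10.0)

laser_power_dbm = 0.0        # assumed III-V laser output: 0 dBm = 1 mW
link_loss_db = 6.0           # assumed total optical loss through the link
responsivity_a_per_w = 0.8   # ballpark for a germanium photodetector

received_mw = dbm_to_mw(laser_power_dbm - link_loss_db)
photocurrent_ma = responsivity_a_per_w * received_mw  # I = R * P

print(f"Received power: {received_mw:.3f} mW")
print(f"Photocurrent:   {photocurrent_ma:.3f} mA")
```

The resulting fraction of a milliamp of photocurrent is what the receiver electronics must then amplify back into a clean electrical signal.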
During a discussion about interconnects with Ludo Deferm, executive vice president at imec – the institute had recently released details about the benefits of manganese as a diffusion barrier, along with work on low-temperature low-k etch – I asked him about optical interconnects.
For intrachip applications – such as between a microprocessor and memory – Deferm said the answer will depend on the data rates required. “If you have 1000, 5000 parts to be connected over a distance of a couple of millimeters and you want to transfer a Gb of data, just copper lines can have some limitations,” he said. “If you have the space — and it takes space — you can do it. But you will do that where you need the high transmission rates. There is no need to change the intrachip interconnects with photonics. But the interchip, between the different chips, we are working on that because we are even now providing photonics technology to startup companies and other companies who want to design in it.”
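Deferm’s numbers invite a rough aggregate-bandwidth estimate. The sketch below is my own illustration, not imec’s analysis: the per-connection rate and the wavelength-division multiplexing (WDM) parameters are assumptions, chosen only to show why optics can pack many channels into few physical paths.

```python
# Rough aggregate-bandwidth estimate for Deferm's scenario
# (illustrative assumptions; per-connection rate is my own choice).

connections = 5000          # upper end of Deferm's 1000-5000 range
per_link_gbps = 1.0         # assumed ~1 Gb/s per connection

aggregate_gbps = connections * per_link_gbps

# Copper: one differential pair per connection -> two wires each.
copper_wires = connections * 2

# Optics: WDM packs several wavelength channels into one waveguide.
gbps_per_wavelength = 12.5  # assumed channel rate (as in Intel's 2010 demo)
wavelengths_per_guide = 8   # assumed WDM channel count
waveguides = aggregate_gbps / (gbps_per_wavelength * wavelengths_per_guide)

print(f"Aggregate:           {aggregate_gbps:.0f} Gb/s")
print(f"Copper wires needed: {copper_wires}")
print(f"Waveguides needed:   {waveguides:.0f}")
```

Under these assumptions, 10,000 copper wires collapse to about 50 waveguides, which is the space argument Deferm is making, though he notes the optical structures themselves take space too.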
Part of the challenge is that various optical components are required – waveguides, detectors, modulators and polarizers, for example – and those are not available at standard foundries. Imec offers a design kit and a component library, and IME in Singapore has similar capabilities.
Deferm said these components are not so easily integrated. “Most of the problems are related to losses. Optical kinds of interconnects have the advantage – they’re good at data speed – but you have to be careful because you can create losses because you need waveguide structures. Not because they have to be so small in dimension, but they have to be controlled very well over the edges: it is light and light scatters. You also have losses because of coupling,” he said. On a silicon substrate, a high-frequency signal in the Gb/s range also couples into the substrate, which creates losses as well.
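The loss mechanisms Deferm lists are conventionally tallied as a link budget in dB. The sketch below itemizes them and checks the received power against a receiver sensitivity; all values are assumed for illustration, not imec or Intel data.

```python
# Illustrative optical link loss budget (all values assumed).
losses_db = {
    "fiber/chip coupling (in)":  2.0,   # the coupling loss Deferm mentions
    "waveguide scattering":      1.0,   # edge-roughness scattering
    "substrate coupling":        0.5,   # leakage into the silicon substrate
    "fiber/chip coupling (out)": 2.0,
}

laser_power_dbm = 0.0        # assumed 1 mW laser output
rx_sensitivity_dbm = -14.0   # assumed receiver sensitivity at ~12.5 Gb/s

total_loss_db = sum(losses_db.values())
received_dbm = laser_power_dbm - total_loss_db
margin_db = received_dbm - rx_sensitivity_dbm

for name, loss in losses_db.items():
    print(f"{name:28s} {loss:4.1f} dB")
print(f"Total loss: {total_loss_db:.1f} dB, margin: {margin_db:.1f} dB")
```

With these assumed numbers the link closes with several dB to spare, but the exercise shows how quickly a few poorly controlled waveguide edges or coupling interfaces can eat the margin.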
Good business opportunities in interchip optical interconnects will come soon, once the cost-effective packaging issues are solved.