…we reach the atomic limits of device scaling.
At ~4nm pitch we run out of room “at the bottom,” after patterning costs explode at 45nm pitch.
Richard Feynman, lead bongo player of physics, famously said, “There’s plenty of room at the bottom,” and in 1959—when the IC was invented—a semiconductor device was composed of billions of atoms, so it seemed that it would always be so. Today, however, we can see the atomic limits of miniaturization on the horizon, and we can start to imagine the smallest possible functioning electronic device.
Today’s leading edge ICs are made using “22nm node” fab technology, where the smallest lithographically defined structure—likely a transistor gate—is just 22nm across. However, the pitch between such transistors is ~120nm, because we are already dealing with the resolution limits of 193nm water-immersion lithography using off-axis illumination through phase-shift masks. Even if a “next-generation” lithography (NGL) technology were proven cost-effective in manufacturing—perhaps e-beam direct-write (EbDW) for guidelines combined with directed self-assembly (DSA) for feature fill and EUV for trim—we still must control individual atoms.
We may have confidence in shrinking to 62nm pitch for a 4x increase in density. We may even be optimistic that we can shrink further to a 41nm pitch for a ~10x increase in density…but that is nearing the atomic limits of variability. There are many hypothesized nanoscale devices which could succeed silicon CMOS in ICs, but one commonality of all such devices is that they will have to be electrically connected. Therefore, we can simplify our consideration of the atomic limits of device scaling by focusing on the smallest possible interconnect.
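The density arithmetic above follows from pitch alone, since areal density scales as the inverse square of the pitch. A quick sketch (pitch values from this post; `density_gain` is just an illustrative helper):

```python
# Areal density scales as the inverse square of the device pitch.
def density_gain(old_pitch_nm: float, new_pitch_nm: float) -> float:
    return (old_pitch_nm / new_pitch_nm) ** 2

# Today's ~124nm pitch vs. the shrinks considered above:
print(density_gain(124, 62))   # 4.0  -> the "4x increase in density"
print(density_gain(124, 41))   # ~9.1 -> the "~10x increase in density"
```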
So what is the smallest possible electrical interconnect? So far, the best candidate is a Single-Walled Carbon NanoTube (SWCNT) doped with metals to be conducting. The minimum diameter of a SWCNT happens to be 0.4nm, but that was found only inside another CNT; the minimum repeatable diameter for a stand-alone SWCNT is ~1nm. So if we need three contacts to a device, then the smallest device we can build with atoms would be a 3nm diameter quantum dot. As shown in the figure at right, if we examine a plan-view of such a device we can just fit three 1nm diameter contacts within the area.
Our magical device will have to be electrically isolated, so some manner of dielectric will be needed, with some minimal number of atoms. Atomic Layer Deposition (ALD) of alumina has been proven in very tight geometries, and 3 atomic layers of alumina take up ~1nm, so we can assume that spacing between devices. A rectangular array would then result in ~16nm2 as the smallest possible 3-terminal device that can be built on the surface of planet Earth.
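Assuming the numbers above (1nm SWCNT contacts, a 3nm dot, ~1nm of ALD alumina between devices), the ~16nm2 figure is simple geometry—a back-of-envelope sketch:

```python
# Back-of-envelope geometry for the smallest hypothetical 3-terminal device,
# using the dimensions argued above (all values in nm).
device_d = 3.0   # quantum dot just large enough for three 1nm SWCNT contacts
spacer   = 1.0   # ~3 atomic layers of ALD alumina isolating adjacent devices

pitch = device_d + spacer   # 4nm pitch in a rectangular array
area  = pitch ** 2          # ~16 nm^2 per device
print(pitch, area)          # 4.0 16.0
```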
Note that a SWCNT of ~1 nm diameter theoretically could carry ~25 microAmps across an estimated 5kOhm internal resistance [ECS Transactions, 3 (2) 441-448 (2006)]. I will leave it to someone with a stronger device physics background to comment as to the suitability of such contacts for useful circuitry. However, from a manufacturing perspective, to ensure electrical contacts to billions of nanoscale devices we generally use redundant structures, and doubling the number of SWCNT contacts to a 3-terminal device would call for ~8 nm pitch.
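Taking the cited figures at face value, Ohm’s law gives a rough feel for what such a contact implies (a sketch, not a device model):

```python
# Rough electrical figures for a ~1nm SWCNT interconnect, using the
# current and resistance cited above.
I = 25e-6   # A: ~25 microAmps maximum current
R = 5e3     # ohm: estimated internal resistance

V = I * R        # voltage dropped across the tube at maximum current
P = I ** 2 * R   # power dissipated in the tube
print(V, P)      # 0.125 V and ~3.1 microWatts
```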
However, before we reach the 4-8nm pitch theoretical limits of device scaling, we will reach relative economic limits of scaling just one device feature such as a transistor gate. Recall that there are just 22 silicon atoms (assuming silicon crystal lattice spacing of ~0.3nm) across a ~7nm line, and every atom counts in controlling device parameters. Imec’s Aaron Thean recently provided an excellent overview of scaled finFET technologies, and though the work does not look at packing density we can draw some general trends. If we assume 41nm pitch and double fins with 20nm gate length then each device would use ~1,600 nm2.
Where are we now? Let us consider traditional 6-transistor (6T) SRAM cells built using “22nm node” logic process flows to have minimal area of ~100,000 nm2 or ~16,000 nm2 per transistor. At IEDM2013 (9.1), TSMC announced a “16nm node” 6T SRAM with ~70,000 nm2 area or ~10,000 nm2 per transistor.
IBM recently announced that six parallel 30nm-long SWCNTs spaced 8nm apart will be developed as transistors for ICs by the year 2020. Such an array would use up ~1440 nm2 of area. Again, this is at best another 10x in density compared to today’s “22nm node” ICs.
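The per-transistor areas quoted in the last few paragraphs can be cross-checked in a few lines (all numbers from this post; the division by 6 assumes the cell area is shared evenly across the six transistors):

```python
# Per-transistor area estimates cited above, in nm^2.
sram_22nm  = 100_000 / 6    # ~16,667: "22nm node" 6T SRAM
sram_16nm  = 70_000 / 6     # ~11,667: TSMC "16nm node" 6T SRAM (IEDM 2013)
finfet_7nm = 41 * 41        # ~1,681: 41nm pitch, double fins, 20nm gate
cnt_array  = (6 * 8) * 30   # 1,440: six SWCNTs at 8nm pitch, 30nm long

print(sram_22nm / cnt_array)   # ~11.6 -> "at best another 10x in density"
```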
Imec held another Technology Forum at SEMICON/West this year, in which Wilfried Vandervorst presented an overview of innovations in metrology needed to continue shrinking device dimensions. His work with Scanning Spreading Resistance Microscopy (SSRM) is extraordinary, showing the ability to resolve 1-2nm conductivity variations in memory cell material. Working with Resistive RAM (ReRAM) material using a 2nm diameter probe tip as the top contact, researchers were able to show switching of the material only underneath the contact…thus proving that a stable ReRAM cell can be made at that diameter. If we use cross-bar architectures of that material we’d be at a 4nm pitch for memory—coincidentally the same pitch needed for the densest array of 3-terminal logic components.
IC SCALING LIMITATION | Pitch / “Node” | Transistor nm2 | Scale from 22nm
193nm lithography double-patterning | 124nm / “22nm” | 16,000 | 1
Atomic variability (economics) | 41nm / “7nm” | 1,600 | 10
Perfect atoms (physics) | 4nm | 16 | 1,000
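The “Scale from 22nm” column is simply the ratio of per-transistor areas—a quick check:

```python
# Rows of the table above: (limitation, pitch / "node", transistor area in nm^2)
rows = [
    ('193nm lithography double-patterning', '124nm / "22nm"', 16_000),
    ('Atomic variability (economics)',      '41nm / "7nm"',    1_600),
    ('Perfect atoms (physics)',             '4nm',                16),
]

base = rows[0][2]
for name, pitch, area in rows:
    print(f'{name}: {base // area}x')   # 1x, 10x, 1000x
```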
The refreshing aspect of this interconnect analysis is that it just doesn’t matter what magical switch you imagine replacing CMOS. No matter whether you imagine quantum-dots or molecular memories as circuit elements, you have to somehow connect them together.
Note also that moving to 3D IC designs does not fundamentally change the economic limits of scaling, nor does it alter the interconnect challenge. 3D ICs will certainly allow for a greater number of devices to be packed into a given volume, so mobile applications will likely continue to pull for 3D integration. However, the cost/transistor is limited by 2D process technologies that have evolved over 60 years to provide maximum efficiency. Stacking IC layers will allow for faster and smaller devices, though generally only at greater cost.
Atoms don’t scale.
Past posts in the blog series:
Moore’s Law is Dead – (Part 1) What defines the end, and
Moore’s Law is Dead – (Part 2) When we reach economic limits.
The final post in this blog series (but not the blog) will discuss:
Moore’s Law is Dead – (Part 4) Why we say long live “Moore’s Law”!
—E.K.
I would like to correct one statement: “However, the cost/transistor is limited by 2D process technologies”
Monolithic 3D does change the cost dramatically. 3D NAND is leveraging this, as multiple layers (32 in the recent release by Samsung) are processed together, providing 32 layers of memory for the cost of one!!!
This was the key to what Toshiba called BiCS.
Please visit our site to read about many other cost-reduction opportunities opened up by monolithic 3D.
Hello Zvi,
Well, the 3D NAND cell certainly challenges this direct assumption. However, the costs for depositions and etches at least somewhat scale with the number of layers in the stack, so it is not correct to say that this approach provides 32 layers for the cost of one. Also note that this approach has so far relied upon relaxed design rules to allow for etch yield.
How commercially scalable is the 3D NAND cell? I’ve heard rumors that etch issues will limit the number of layers to 64 (2x of today), while lithography needing extreme multi-patterning will limit the X-Y scaling to ~20nm half-pitch from today’s 40nm (4x of today), so odds are it will go 8x more dense before the end.
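For what it’s worth, the 8x estimate follows directly from those rumored limits (rumors, as noted—not confirmed roadmap numbers):

```python
# Rumored 3D NAND scaling headroom, per the reply above.
layer_gain = 64 / 32           # 2x: etch rumored to cap stacks at 64 layers
litho_gain = (40 / 20) ** 2    # 4x: areal gain from 40nm to 20nm half-pitch

print(layer_gain * litho_gain) # 8.0 -> "8x more dense before the end"
```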
I certainly agree, however, that monolithic 3D (“M3D” according to Leti) provides essential capabilities for the next 50 years of IC fabrication.
Interestingly the size of a potassium channel in nerve cells is also 3.1 nm in diameter with an internal conduction channel of about 1nm. So much the same as the CNTs. If evolution hasn’t managed anything better then maybe this is another guide to the practical scaling limit, though Nature has managed to solve the 3D integration cost issue 🙂
Mark Twain would laugh and probably say something like …
“News of Moore’s Law’s death is greatly exaggerated.”
Is the bottom really defined by solid state devices centered around single atomic layers? Perhaps for solid state devices as we imagine them now, but in a world where dark matter makes up most of the universe, and lacking a good theory of quantum gravity (and inertia), there is hope that the obituary du jour will simply require a retraction for at least another day.
The bottom may drop out (once again) to subatomic electronics using for example quarks and gluons?
And then there is that pesky spooky-action-at-a-distance. Could entanglement and other well known physics phenomena be harnessed for the energy, information and propulsion?
The real answer is that we don’t know where the bottom is. However, we do know where there are safe places to stand on while we figure out where the bottom really is.
Hello David,
Sorry, but sub-atomic reality disallows for the formation of static structures with which to build ICs. Atoms are the smallest stable forms of matter on planet earth with which we can build anything. Atoms are, in fact, the bottom…so no matter (pun intended) what hypothetical mechanism you’d like to use in a switch (you can invoke dark-matter, n-dimensional hyper-strings, spooky-action-at-a-distance, etc.) you’re limited to building it out of atoms. Atoms don’t scale.
Structures do not have to be stable to be used. Qubits can be usefully built and broken down, and spintronics can work with individual electrons. We shall see if these technologies mature into fab-able form.
In the only world I’ve ever known, structures have to be stable to be used. While a lab may have shown a spintronic device working with an individual electron, said spintronic device must be built out of stable physical atoms of finite size.
I guess this analysis represents what has been (perhaps distantly) foreseen for a long time; people just kept arguing about what is practical (gate thicknesses, lithography, etc.). The time has to come when scaling size stops, and these structures become a commodity which may connect to something else.
The cost of CMOS at this level is really not a big impediment: a whole lot of things could be done just by building out infrastructure at a constant cost. Energy is the limit: we cannot turn all of our intellectual activity over to computers with present energy/computation. So for progress of the economy which depends on computers, energy should be the focus.
Without ruling out some grand innovation in digital computation, it seems to me (as it has seemed for decades) that this is where neural network-type computing has to be taken seriously. Our brains work on 50W, and while they can’t factor primes, they still compete with Watson which I believe takes a few MW.
IEEE Spectrum has a great new article about silicon being used in quantum computing. Note that the smallest stable quantum-mumble shown in silicon is reportedly ~20nm, and that does not include read/write structures. Theoretically, 3nm diameter quantum dots could be connected using 3 SWCNTs…but to the best of my knowledge no one has shown anything similar in practice, and I know of no one currently exploring such work.