The new approach comes with a snappy name: chiplets. You can think of them as something like high-tech Lego blocks. Instead of carving new processors from silicon as single chips, semiconductor companies assemble them from multiple smaller pieces of silicon—known as chiplets.
It turns out that when you shrink a vacuum transistor to absolutely tiny dimensions, you can recover some of the benefits of a vacuum tube while dodging the drawbacks that characterized its use. According to a report in IEEE Spectrum, vacuum transistors can draw electrons across the gap between source and drain without needing a physical channel connecting them. Make the vacuum region small enough, and reduce the voltage sufficiently, and the field emission effect lets the transistor fire electrons across the gap without those electrons carrying enough energy to ionize the helium that fills the nominal “vacuum” inside the device. The researchers report building a working transistor operating at 460 GHz — well into the so-called “terahertz gap,” which sits between microwaves and infrared. The “gap” refers to the fact that we have few devices capable of generating this frequency and only a handful of experimental applications for this energy band.
Kahng says chipmakers may face a more immediate struggle with wiring in just a few years as they attempt to push feature sizes down past the 10-nm generation. Each copper wire requires a sheath of barrier material to prevent the metal from leaching into its surroundings, as well as insulation to keep it from interacting with neighboring wires. To work effectively, this sheath must be fairly thick. That thickness limits how closely wires can be packed together and forces the copper cores themselves to shrink instead, dramatically driving up resistance and delay and sharply lowering performance. Although researchers are exploring alternative materials, it’s unclear, Kahng says, whether they will be ready in time to keep pace with Moore’s Law.
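A rough way to see why shrinking the copper core hurts: the resistance of a uniform wire is R = ρL/A, so halving both the width and the height of the cross-section quadruples resistance over the same length. The sketch below uses the textbook bulk resistivity of copper and illustrative dimensions; real nanoscale wires fare even worse because surface and grain-boundary scattering raise the effective resistivity.

```python
# Illustrative only: resistance of a uniform rectangular wire, R = rho * L / A.
RHO_COPPER = 1.68e-8  # bulk copper resistivity in ohm-metres (room temperature)

def wire_resistance(length_m, width_m, height_m, rho=RHO_COPPER):
    """Resistance of a rectangular wire; ignores the barrier sheath and
    the extra scattering that afflicts real nanoscale interconnects."""
    return rho * length_m / (width_m * height_m)

# Same 10-micron run, with the copper cross-section halved in each dimension
# (dimensions are hypothetical, chosen only to show the scaling):
r1 = wire_resistance(10e-6, 40e-9, 80e-9)
r2 = wire_resistance(10e-6, 20e-9, 40e-9)
print(round(r2 / r1))  # quartering the area quadruples the resistance
```

Since signal delay on a wire grows with its resistance, this quadratic penalty is what the text means by resistance “dramatically driving up” delays as the sheath forces the copper to shrink.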
Amdahl’s law models the speedup a parallelized implementation of an algorithm can achieve relative to the serial implementation, under the assumption that the problem size stays the same when parallelized. For example, if for a given problem size a parallelized implementation can run 12% of the algorithm’s operations arbitrarily quickly while the remaining 88% of the operations are not parallelizable, Amdahl’s law states that the maximum speedup of the parallelized version is 1/(1 − 0.12) ≈ 1.136 times as fast as the non-parallelized implementation.
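The example above can be checked numerically. A minimal sketch of Amdahl’s law for a finite number of workers n is S(n) = 1/((1 − p) + p/n), where p is the parallelizable fraction; as n grows, S(n) approaches the 1/(1 − p) ceiling the text quotes (the function name below is our own):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup with parallel fraction p spread over n workers.
    The serial fraction (1 - p) always runs at full cost."""
    return 1.0 / ((1.0 - p) + p / n)

# With only 12% of the work parallelizable, adding workers barely helps:
for n in (2, 8, 1024):
    print(n, round(amdahl_speedup(0.12, n), 3))

# Limiting speedup as n grows without bound:
print(round(1 / (1 - 0.12), 3))  # 1.136
```

Even with 1,024 workers the speedup stays pinned just under 1.136×, which is the point of the law: the serial fraction, not the hardware, sets the ceiling.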