For 50 years, the industry has been scaling chip technology following Mooreʼs Law, with scaling targets that addressed cost, area, power, and performance, more or less at the same pace. The systems built from those chips followed suit, upgrading features and performance as new chip generations became available.
But in recent years, the explosive growth of data traffic has led to a demand for processing power beyond what is possible with traditional transistor scaling. Additionally, the Internet of Things is growing into a veritable system of systems, demanding highly specialized functionality for, for example, low-power sensing, security, or high-performance computing.
An Steegen talks about the pipeline of materials, device architectures and advanced techniques lined up for a number of new technology generations. And Diederik Verkest and Ingrid Verbauwhede look at higher system functions and explore how we may create optimized technology to implement these as efficiently as possible.
Technologies to extend semiconductor scaling
The explosive growth of data traffic fuels the demand for ever more processing power and storage capacity. Mooreʼs Law continues to be necessary, but innovations beyond it are needed to help manage device power, performance, area and cost. An Steegen reveals some of the secrets of semiconductor scaling – a pipeline full of materials, device architectures and advanced techniques that promise to further extend it.
The end of happy scaling?
The data traffic explosion, fueled by the Internet of Things, social media and server applications, has created a continuous need for advanced semiconductor technologies. Servers, mobile devices, IoT devices... they all drive the requirements for processing and storage. An Steegen: “At the same time, this trend is also creating more diversification. IoT devices, for example, will need low-power signal acquisition and processing, and embedded non-volatile memory technologies. For mobile and server applications, on the contrary, further dimensional scaling, continued transistor architecture innovations and memory hierarchy diversification are among the key priorities.” But will we be able to continue traditional semiconductor scaling, as initiated by Gordon Moore more than 50 years ago? An Steegen: “For a long time, we lived in the happy scaling era, where every technology node shrank the transistor and doubled the number of transistors per area, at the same cost. But for the last 10 to 12 years, we have not been following that happy scaling path. The number of transistors still doubles, but device scaling gives us diminishing returns. Weʼve seen such periods of ‘dark siliconʼ before, but, fortunately, weʼve always managed to get out of them. Again, the technology box will provide new features to help manage power, performance and area, node by node, as we move to the next generation.”
The technology box for dimensional scaling
On the dimensional scaling side, extreme ultraviolet lithography (EUVL) is considered an important enabler for continuing Mooreʼs Law. An Steegen: “Ideally, we would need it at the 10nm node, where we start replacing single exposures with multiple exposures. More realistically, it will hopefully be ready to lower the costs of the 7nm technology. At imec, we have already shown that EUVL is capable of printing 7nm logic dimensions with one single exposure.” Still, issues need to be resolved, related to, for example, line-edge roughness. An Steegen: “At the same time, to enhance dimensional scaling, we increasingly make use of scaling boosters, such as the self-aligned gate contact or the buried power rail. These tricks allow the standard cell height to be reduced from 9 to 6 tracks, leading to a higher bit density and a large die cost reduction – a nice example of design-technology co-optimization.”
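The density gain from such track-height scaling can be estimated with simple geometry. A minimal sketch with illustrative numbers (the underlying metal and poly pitches are assumptions held fixed, not figures from the text):

```python
# Standard-cell area is roughly the cell height (a number of metal
# tracks times the metal pitch) multiplied by the cell width (a number
# of contacted poly pitches). Shrinking the height from 9 tracks to
# 6 tracks at fixed pitches therefore shrinks each cell's area by 9/6:
tracks_before, tracks_after = 9, 6
density_gain = tracks_before / tracks_after  # cells per unit area
print(f"{density_gain:.2f}x more standard cells per unit area")
```

This back-of-the-envelope factor of 1.5x is why scaling boosters pay off even without touching the lithographic pitch itself.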
Improving power/performance in the front-end of line
FinFET technology has been the workhorse device for the 14 and 10nm technology nodes. But for 7-5nm, An Steegen foresees challenges. “At these nodes, FinFET technology canʼt meet the 20% performance scaling and 40% power gain anymore. Going beyond 7nm will require horizontal gate-all-around nanowires, which promise better electrostatic control. In such a configuration, the drive current per footprint can be maximized by vertically stacking multiple horizontal nanowires. In 2016, at IEDM, we demonstrated for the first time the CMOS integration of vertically stacked gate-all-around Si nanowire MOSFETs. Vertical nanowires, although requiring a more disruptive process flow, could be a next step. Or junction-less gate-all-around nanowire FETs, which, as shown at the 2016 VLSI conference, appear to be an attractive option for advanced logic, low-power circuits and analog/RF applications.” Further down the road, from the 2.5nm node onwards, fin and nanowire devices are expected to run out of steam. An Steegen: “Sooner or later, we will need to find the next switch. Promising approaches are tunnel-FETs, which can provide a 3x drive current improvement, and spin-wave majority gates.” Spin-wave majority gates with micrometer-sized dimensions have already been reported. But to be CMOS-competitive, they must be scaled to handle waves with nanometer-sized wavelengths. An Steegen: “In 2016, imec proposed a method to scale these spin-wave devices to nanometer dimensions, opening routes towards spin-wave majority gates that promise to outperform CMOS-based logic in terms of power and area.”
Extending or replacing Cu in the back-end-of-line
Looking ahead, it might well be the interconnect that threatens further device scaling. Therefore, the back-end-of-line (BEOL), and the struggle to keep scaling it, needs attention as well. “We look at ways to extend the life of Cu, for example with liners of ruthenium (Ru) or cobalt (Co). In the longer term, we will probably need alternative metals, such as Co for local interconnects or vias,” says An Steegen.
The future memory hierarchy
Besides a central processing unit, memory to store all the data and instructions is the other key element of the classical Von Neumann computer architecture. The ever-increasing performance of computation platforms and the consumerʼs hunger for storing and exchanging ever more data drive the need to keep scaling memory technologies. On top of this scaling trend, todayʼs memory hierarchy is challenged by the need for new types of memory. An Steegen: “STT-MRAM, for example, is an emerging memory concept that has the potential to become the first embedded non-volatile memory technology on advanced logic nodes for advanced applications. It is also an attractive technology for future high-density standalone applications. It promises non-volatility, high speed, low-voltage switching and nearly unlimited read/write endurance. But its scalability towards higher densities has always been challenging. Recently, we have been able to demonstrate a high-performance perpendicular magnetic tunnel junction device as small as 8nm, combined with a manufacturable solution for a highly scalable STT-MRAM array.” The future memory landscape also requires a new type of memory to fill the gap between DRAM and solid-state memories: the storage class memory. This memory type should allow massive amounts of data to be accessed at very low latency. Imec is working on both MRAM and resistive RAM (RRAM) approaches for it.
Beyond classical scaling – towards system-technology co-optimization...
A challenge for traditional Von Neumann computing is to increase the data transfer bandwidth between the processing chip and the memory. And this is where 3D approaches enter the scene. An Steegen: “With advanced CMOS scaling, new opportunities for 3D chip integration arise. For example, it becomes possible to realize different partitions of a system-on-chip (SoC) circuit and heterogeneously stack these partitions with high interconnect densities. For the smallest partitions, chips are no longer stacked as individual dies, but as full wafers bonded together.” Increased bandwidth is also enabled by optical I/O. In this context, imec continues its efforts to realize building blocks (e.g. optical modulators, Ge photodetectors) with a 50Gb/s channel data rate for its Si photonics platform.
Mooreʼs Law will continue, but not only through the conventional routes of scaling. An Steegen: “We have moved from pure technology optimization (involving novel materials and device architectures) to design-technology co-optimization (e.g. the use of scaling boosters to reduce cell height). And we are already thinking ahead about a next phase, system-technology co-optimization. And to keep computing power improving, we are exploring ways beyond the classical Von Neumann model, such as neuromorphic computing, a brain-inspired computer concept, and quantum computing, which exploits the laws of quantum physics. There are plenty of creative ideas that will allow the industry to further extend semiconductor scaling...”
An Steegen is imecʼs Executive Vice President Semiconductor Technology & Systems. In that role, she heads the research hubʼs efforts to define and enable next-generation ICT technology and to feed the industry roadmaps. Dr. Steegen is a recognized leader in semiconductor R&D and an acclaimed thought leader and speaker at the industryʼs prominent conferences and events. An Steegen joined imec in 2010 as senior VP responsible for imecʼs CORE CMOS programs in logic and memory devices, processing, lithography, design, and optical & 3D interconnects. Before that, she was a director at IBM Semiconductor R&D in Fishkill, New York, responsible for bulk CMOS technology development. While at IBM, Dr. Steegen was also host executive of IBMʼs logic International Semiconductor Development Alliance and responsible for establishing collaborative partnerships in innovation and manufacturing. Dr. An Steegen holds a Ph.D. in Material Science and Electrical Engineering, which she obtained in 2000 at the KU Leuven (Belgium) while doing research at imec. She has published more than 30 technical papers and holds numerous patents in the field of semiconductor development.
Optimizing technology for IoT systems – adding fingerprints and brains
The IoT is fast becoming a multilevel system of systems spanning the globe. But to realize the growth path that is forecast, weʼll need optimized and specialized hardware, capable of, among other things, sensing at ultralow power, guaranteeing a systemʼs security during its full lifetime, and learning from huge amounts of data. Imecʼs Diederik Verkest and Ingrid Verbauwhede talk about the next step: how technology can be further optimized to solve specific system and application demands. As examples, Ingrid proposes hardware-entangled security and Diederik explains imecʼs efforts in neuromorphic processing.
A heterogeneous chip future (Moore on steroids)
“Until recently,” says Diederik Verkest, “we concentrated almost all of our scaling effort on the smallest unit of a chip, the transistor, whatever that unit was used for afterwards. Next, to stay on the course predicted by Mooreʼs Law, we co-optimized technology with lower-level design units such as memory cells. Now weʼre working our way up in the system hierarchy, always looking at how we can optimize technology to better implement a function. So naturally, we also arrive at the key functions needed for the future IoT, such as fail-safe security. And we are also eyeing specialized processors, for example for neuromorphic computing: complete subsystems that tackle specific, hard problems.”
Verkest adds that he is excited about imecʼs recent merger: “This is a great opportunity for both sides. My new colleagues are application experts in domains such as bio-informatics or security. They can help us screen technology and direct us to the solutions that are best fit to solve the hard problems in their domains. And conversely, as application experts, they will learn about all the hardware opportunities that we are considering. What I see happening going forward is a much more intimate, structured interaction between hardware and application R&D, greatly speeding up this system/technology co-optimization.”
Secure chips with unclonable fingerprints
Electronics are already embedded in many objects in our environment. Think of your carʼs keys, security cameras, smart watches, or even implanted pacemakers. “This makes security considerably more complex than it used to be,” says Ingrid Verbauwhede. “Existing cryptographic algorithms demand a lot of compute power, so they run mainly on high-end platforms. But most microchips in the IoT are small, lightweight, low-power and have limited functionality. So traditional cryptography doesnʼt fit well. Our ambition is to make chips that are inherently more secure through the way they are designed and processed.”
In 2016, Ingrid Verbauwhede and her team received a prestigious European ERC research grant for their Cathedral project. “This grant is at once a recognition of what we have been doing and a great support going forward. Support that will allow us to independently look for the best solutions.”
Ingridʼs team is exploring various approaches: “In the past, R&D looked at dedicated design methods for, for example, low-power chips. We now want to do the same to better secure chips. Chips, for example, that donʼt leak information while they are computing, making them more resistant to side-channel attacks. Another of our focus points is implementing future-proof cryptography: algorithms that will protect a system throughout its long lifetime, even if it is attacked by future quantum computers.”
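The idea of computations that donʼt betray their data has a well-known software analogue. As an illustration only, not COSICʼs design, consider a comparison routine whose running time is independent of where two secrets differ, so a timing side channel learns nothing about the matching prefix:

```python
def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare two secrets without an early exit.

    A naive comparison returns at the first mismatching byte, so its
    timing reveals how long the matching prefix is. Here every byte
    pair is always inspected and the differences are OR-ed together,
    so the loop's duration does not depend on the data.
    """
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

In production Python one would call the standard libraryʼs `hmac.compare_digest`, which implements the same principle; on silicon, the analogous discipline is data-independent control flow and power draw.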
Asked about her plans for 2017, Ingrid Verbauwhede points to the direct access her team now has to technology processing at imecʼs fabs: “One of the characteristics of todayʼs chip scaling is process variation: each chip is slightly different from all the others. From a reliability perspective, that is a nuisance. It requires engineers to take extra measures so that computations remain predictable. But there is also an upside that we want to exploit: the variations are like a fingerprint, a way to uniquely identify each chip without expensive calculations. It is what we call a physically unclonable function (PUF). And if you tie that function to the software running on the processor, you have another layer of security which is well-suited for IoT devices.”
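The PUF principle can be sketched in a few lines. The model below is purely illustrative (an SRAM-style PUF; all names and numbers are assumptions, not imecʼs implementation): each chipʼs process variation fixes a preferred power-up value per memory cell, noisy readouts are cleaned up by majority voting, and the stable pattern is hashed into a per-chip identifier, with no key ever stored on the device.

```python
import hashlib
import random

N_CELLS = 256  # hypothetical number of SRAM cells read at power-up

def power_up(chip_seed, noise=0.02):
    """One noisy readout of a chip's uninitialized SRAM cells.

    Process variation (modelled here by chip_seed) fixes each cell's
    preferred 0/1 value; a small fraction of cells flip randomly on
    any given power-up.
    """
    rng = random.Random(chip_seed)
    bias = [rng.randint(0, 1) for _ in range(N_CELLS)]
    return [b ^ (random.random() < noise) for b in bias]

def fingerprint(chip_seed, reads=21):
    """Majority-vote several readouts, then hash the stable pattern."""
    votes = [0] * N_CELLS
    for _ in range(reads):
        for i, bit in enumerate(power_up(chip_seed)):
            votes[i] += bit
    stable = bytes(int(2 * v > reads) for v in votes)
    return hashlib.sha256(stable).hexdigest()
```

Real PUF designs replace the majority vote with a fuzzy extractor and proper error correction, but the idea is the same: the identity is grown by the process, not programmed in.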
Smart chips with brain power
Our brains are formidable computing wonders, using only a fraction of the power that traditional computers need to obtain comparable results. Therefore, engineers are eager to mimic the brain on a chip, to speed up deep learning on massive amounts of data or enable low-power image recognition.
“But to do so,” says Diederik Verkest, “we have to replicate the brainʼs architecture: a tight interconnection of an enormous number of relatively primitive processing nodes (the neurons) through their connections (the synapses). That is usually done with some type of crossbar architecture, wires laid out in a matrix (or cube), so that each input line connects with all outgoing lines. At each crossing of two lines, there is a switch that implements a synapse. The synapses contain the intelligence of the system: the ability to hold data, process it and learn from experience. So they should be made programmable and self-adaptable.
“Work on this emerging domain started at imec some two years ago, partly embedded in the EC Horizon2020 project NeuRAM3. In 2016, we selected an architecture and screened options to implement the self-adapting synapses. We are convinced that our concept is uniquely suited to tackle the problem, so weʼve taken out a patent and are now building a proof of concept. In 2017, we will tape out a first chip and package it into a neuromorphic computing system that we can test against neuromorphic application simulators with growing numbers of neurons.”
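The crossbar Verkest describes computes its weighted sums “in place”: driving all input lines at once yields, on each output line, the sum of the inputs weighted by the conductance at every crossing. A minimal numerical sketch (illustrative only; imecʼs actual architecture and synapse device are not detailed here):

```python
import numpy as np

class Crossbar:
    """Toy crossbar: rows are input lines, columns are output lines."""

    def __init__(self, n_in, n_out, seed=0):
        # Each crossing holds a programmable conductance (the 'synapse').
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, size=(n_in, n_out))

    def forward(self, x):
        # Every input line meets every output line, so driving the rows
        # with vector x performs one matrix product in a single step.
        return x @ self.w

    def adapt(self, x, y, rate=0.05):
        # Hebbian-style self-adaptation: strengthen a crossing when its
        # input and output are active together; conductances stay bounded.
        self.w = np.clip(self.w + rate * np.outer(x, y), 0.0, 1.0)
```

The appeal of the physical version is that the matrix product costs one analog read instead of `n_in * n_out` digital multiply-accumulates, which is where the power advantage over Von Neumann machines comes from.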
These brain-on-chip devices may not be exact copies of our brain circuits, but nature teaches us that it is physically possible to build much better computers than we do today. Computers that we need to make sense of the enormous amounts of data that the IoT will generate, but also for the intelligent sensors and robots of the connected world: small, low-power, long-lasting devices that have to stand their ground amid an ever-growing stream of data, continuously adapt to their environment, and even learn and become smarter over their lifetime.
Diederik Verkest is director of imecʼs INSITE program. After earning a Ph.D. in micro-electronics engineering from the KU Leuven, Diederik joined imec in 1994, where he has been responsible, among other things, for hardware/software co-design. In 2009, he started imecʼs INSITE program, focusing on co-optimization of design and process technology for sub-14nm nodes. The program offers the fab-less design community insights into advanced process technologies and provides a platform for foundries and fab-less companies to discuss directions for next-generation technologies.
Diederik Verkest has published over 150 articles in international journals and at international conferences. Over the years, he has been involved in numerous technical conferences; in 2003, he was general chair of DATE, the Design, Automation, and Test in Europe conference. Verkest is a Golden Core member of the IEEE Computer Society.
Ingrid Verbauwhede is a professor at the KU Leuven (Belgium) in the imec-COSIC research unit, where she leads the embedded systems and hardware group. She is also an adjunct professor at the electrical engineering department of UCLA, Los Angeles (USA).
Ingrid Verbauwhede is an IEEE fellow and a member of IACR, and she was elected a member of the Royal Academy of Belgium for Science and the Arts in 2011. Her main interest is the design of, and design methods for, secure embedded circuits and systems. She has published around 70 papers in international journals and 260 papers at international conferences. She is also an inventor on 12 issued patents.