Well into the nineties, many people were still convinced mobile telephony was nonsense. It goes to show how bad we are at predicting our own needs. And here we are now, looking ahead to the era after the smartphone. Jan Rabaey, distinguished professor at the University of California, Berkeley, takes us to a world of amorphous, disintegrated mobile devices and elaborates on the consequences for system and semiconductor technology design. He also describes how we will increasingly be able to build our own technological devices. Will everyone be an engineer in 2035?
The disintegrating smartphone
Anyone attempting to formulate forward-looking projections on the role of IT in society should base them on the ubiquity of technology. Just look at today’s abundance of sensors and cameras integrated into new cars: it gives a glimpse of how our entire environment will become injected with sensors and actuators. We, as human beings, will increasingly be connected as well. The era of the smartphone has introduced a number of changes that are simply irreversible.
However, the smartphone itself is far from an optimal device. The microphone is not close to your mouth, but next to your jaw. The wireless signal from the antenna suffers interference from our heads, and the antenna puts a radiation source right next to our brains. To operate your smartphone, you must take it away from your ear, so you cannot simultaneously hold a conversation. Not surprisingly, many people no longer hold their smartphone to their ear; they hold it in front of them, with the caller on speaker.
It is therefore not unthinkable that the smartphone will eventually disintegrate. Where it is now an all-inclusive device, its functions will increasingly split up: a smartwatch for part of the user interface, an earpiece for the sound, a microphone embedded in a dental crown, glasses or contact lenses for the images, and the antenna woven into the textile of your jacket or backpack. Like a well-orchestrated symphony, these individual devices will work together to give you an optimal user experience.
And not just the devices around your body. You may also incorporate devices of nearby people and objects, depending on your specific application needs. Too little computing power for hi-res streaming? The processor of the person next to you in the subway turns up a notch to serve you as well. Bad connection over a long distance? The antenna on a nearby building fetches the data from the cloud and relays it to you via a local connection.
The conductor of this orchestra will be an artificial intelligence engine that has learned your preferences and can interpret them in relation to your ever-changing context, with the option for you to overrule it whenever you know best. In other words: you stay in the driver’s seat.
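The orchestration described above can be sketched as a toy scheduler: a coordinator scores the devices around you for a given task and picks the most suitable one, while the user keeps the right to overrule that choice. All device names, fields and the scoring formula below are hypothetical illustrations of the idea, not any real protocol.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capability: float   # relative compute/bandwidth for the task (illustrative)
    latency_ms: float   # round-trip latency to reach this device
    battery: float      # 0.0 (empty) .. 1.0 (full)

def score(dev: Device) -> float:
    # Toy utility: favor capable, low-latency, well-charged devices.
    return dev.capability * dev.battery / (1.0 + dev.latency_ms / 10.0)

def pick_device(devices, user_override=None):
    """Pick the best device for a task; the user stays in the driver's seat."""
    if user_override is not None:
        return next(d for d in devices if d.name == user_override)
    return max(devices, key=score)

nearby = [
    Device("smartwatch", capability=1.0, latency_ms=5, battery=0.8),
    Device("neighbor-phone", capability=4.0, latency_ms=15, battery=0.9),
    Device("rooftop-antenna", capability=8.0, latency_ms=40, battery=1.0),
]

print(pick_device(nearby).name)                               # coordinator's choice
print(pick_device(nearby, user_override="smartwatch").name)   # user overrules
```

In this sketch the coordinator selects the rooftop antenna for a bandwidth-heavy task, but a single argument lets the user force a different device, mirroring the "you can overrule the AI" principle.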
It is quite obvious that this scenario, if it materializes, will dramatically influence our daily lives. What is less well known is the radical change it will require behind the scenes.
A new era for engineers and designers
With that, we enter a world unknown to the average smartphone user: that of chip technology and system design. But before we look ahead to the implications of this scenario for that domain, let’s first go back in time.
Historically, semiconductor technology development has been driven by Moore’s and Dennard’s laws. It ran along a rather predictable roadmap based on the scaling of transistors and the associated increase in computing power per chip. At some point, however, this trajectory ran into roadblocks. The most important ones: the energy per function no longer scaled appropriately, and there are limits to how much heat can be dissipated per unit area. The result is a battery that runs empty even faster, or unacceptable temperatures in both mobile and non-mobile electronics.
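That roadblock can be made concrete with a back-of-the-envelope calculation using the simplified dynamic-power model P = C·V²·f. Under classic Dennard scaling, capacitance and voltage shrink and frequency rises with each generation, so power density stays constant; once supply voltage stops scaling, power density grows roughly with the square of the shrink factor. The numbers below are illustrative units, not measured data.

```python
def power_density(C, V, f, area):
    """Dynamic power density of a switching circuit: P/A = C * V^2 * f / A."""
    return C * V**2 * f / area

# Baseline generation (arbitrary illustrative units).
C, V, f, area = 1.0, 1.0, 1.0, 1.0
base = power_density(C, V, f, area)

k = 1.4  # linear shrink per generation (~0.7x dimensions)

# Classic Dennard scaling: C/k, V/k, f*k, area/k^2 -> density unchanged.
dennard = power_density(C / k, V / k, f * k, area / k**2)

# Post-Dennard reality: voltage no longer scales -> density grows ~k^2.
post = power_density(C / k, V, f * k, area / k**2)

print(dennard / base)  # ≈ 1.0
print(post / base)     # ≈ k**2 ≈ 1.96
```

In other words, once voltage scaling stalled, each new generation packed roughly twice the heat into the same area, which is exactly the "heat per unit area" wall described above.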
Around the turn of the century, we therefore diversified from a single chip technology roadmap to separate roadmaps: for memory and for logic, or for low-power and for high-performance devices. Engineers targeting low power had to select technologies and materials for low leakage currents, while those targeting high performance focused on parameters such as compatibility with high voltages or extreme temperatures. As a result, chips, memories and other IT building blocks have become increasingly specialized. Where you have the 'general purpose processor' on one side of the spectrum (a computer chip that can be programmed for just about anything), we have for quite some time been evolving toward more dedicated processors: chips optimized for image processing, for example, which can be up to a factor of a thousand more energy-efficient at carrying out the same task. This is already very much in line with the vision of the disintegrating smartphone, where each device is assigned a highly specialized task.
At the same time, some of the fundamentals of computing are changing. In the original paradigm, inspired by people like Turing and Von Neumann, computing was synonymous with 'processing' (calculating) and was performed using sequential algorithms. To oversimplify: a computer was something into which you inserted a number and which could very efficiently calculate a new number according to a predefined set of formulas.
Today, computing is no longer only about calculating. It’s also about interaction.
And it is no longer the algorithms but the data that are leading. In other words, a computer is no longer something you 'insert a number' into, but a complex system with multiple possibilities for input and output. Its intelligence no longer builds on a series of linear formulas, but on a self-learning system that, optimized through interaction with the environment, generates output in view of the desired experience.
This entire evolution makes it even more difficult to define semiconductor roadmaps and to assess what would be a useful and cost-efficient next step in technology development. That is why close collaboration between system and technology design becomes increasingly vital, and why, from the earliest stage of development, it must be clear for which application a certain technology will be used: IoT, cloud, self-learning systems and so on.
"Meet in the middle" is what imec fellow Hugo De Man called it a few decades ago. The concept is certainly not new, but it is becoming increasingly relevant. For organizations such as imec and the semiconductor industry, capturing and enabling such a methodology is an ongoing challenge that affects the entire internal organization. Teams that used to operate separately must suddenly work closely together, and a team that could once cover a whole technology domain now has to split up in order to specialize further.
Will we all be engineers?
This entire development may offer opportunities for all of us, as relatively advanced technological knowledge will continue to democratize.
Take the example of photo and video. Advanced cameras, image editing software and online distribution channels are now available in everyone’s smartphone, whereas in the past it took a lot of effort to make home videos and photos. A somewhat smaller group of technology enthusiasts is now experimenting with sensors, 3D printers and all sorts of DIY technology and software that until recently was expensive and scarce, but is now easily available as open source or via any online electronics platform. And an even smaller group of tech fanatics is exploring the possibilities of blockchain and all kinds of other emerging trends.
It is like an inverted pyramid: at the top you have a large number of users with access to some degree of technology. The further down you go, the smaller the group of people and the more complex the technology they have access to. But as technology democratizes, the toolbox available in every layer of the pyramid changes. It is therefore predictable that those who now make digital videos and photos will soon discover DIY electronics and 3D printers, and that the enthusiasts who now explore 3D printers and program in augmented and virtual reality will by then be working with more advanced and integrated applications and systems. In other words: in 2035 we will all have become somewhat of an engineer...
And the engineers themselves? They will remain dispersed throughout the entire pyramid to feed it with their knowledge and expertise: assisting end users in the implementation of proven technologies, while also developing new building blocks for emerging technologies and applications. Building blocks that will increasingly become a hybrid of the biological and electronic worlds. Because that is one of the most exciting developments in the field. When it comes to logic and computing power, nature has several tricks up its sleeve that we as engineers and technicians can only dream of. Think of the power of the human and animal brain, the cooperation within a flock of birds or an army of ants, the communication between whales or dolphins... None of this can be realized with electronics and physics alone. This world, and the whole range of possibilities associated with it, will only open up for us once we are able to incorporate chemical signals and signal exchange into our technological building blocks and systems. Enough to look forward to for at least another 35 years.
How is imec contributing to this future?
Since its foundation, imec has been one of the forerunners in technology and system development for semiconductors and chips, as illustrated by the large number of press releases about new developments in this area. Imec has also built a leading technology portfolio in bioelectronics and the life sciences. In 2018, imec and its partners presented an organ-on-chip for drug screening, a neural probe to measure brain activity and a breakthrough in the integration of electronics in contact lenses. Imec is also increasingly focusing on the development of artificial intelligence, sensor networks and communication technology for the Internet of Things.
Want to know more?
- Want to read more about future developments in semiconductor technology? Browse further in this magazine and read 'In 2035, quantum processors with a few thousand qubits will run first small applications.'
- 'Technologies for tomorrow’s world: more sensors, more quality of action, and improved insights': a July 2018 imec magazine article.
- ‘Francky Catthoor on computer architectures’: a January 2017 imec magazine article.
This article is part of a special edition of imec magazine. To celebrate imec's 35th anniversary, we try to envisage how technology will have transformed our society in 2035.
Jan Rabaey holds the Donald O. Pederson Distinguished Professorship at the University of California at Berkeley. Before joining the faculty at UC Berkeley, he was a research manager at imec from 1985 until 1987. He is a founding director of the Berkeley Wireless Research Center (BWRC) and the Berkeley Ubiquitous SwarmLab, and has served as the Electrical Engineering Division Chair at Berkeley twice.
Jan Rabaey has made high-impact contributions to a number of fields, including advanced wireless systems, low power integrated circuits, mobile devices, sensor networks, and ubiquitous computing. His current interests include the conception of the next-generation distributed systems, as well as the exploration of the interaction between the cyber and the biological world.
He is the recipient of major awards, amongst which the IEEE Mac Van Valkenburg Award, the European Design Automation Association (EDAA) Lifetime Achievement award, the Semiconductor Industry Association (SIA) University Researcher Award, and the SRC Aristotle Award. He is an IEEE Fellow, a member of the Royal Flemish Academy of Sciences and Arts of Belgium, and has received honorary doctorates from Lund (Sweden), Antwerp (Belgium) and Tampere (Finland). He has been involved in a broad variety of start-up ventures, including Cortera Neurotechnologies, of which he is a co-founder.
9 January 2019