Machine learning is transforming our world by increasingly influencing many aspects of our lives. Some of the most powerful machine learning algorithms, called deep learning algorithms or deep neural networks, demonstrate state-of-the-art performance on a range of tasks. Although impressive, this high performance comes at a cost: Deep learning algorithms require billions of computations during inference and even more during training, they can be hard to train, they demand high power and compute performance, and they require large memories to store the trained weights of the networks. This makes them hard to deploy in mobile devices at the edge.
To enable energy-efficient yet high-performance inference for neural networks, imec is working on dedicated memory and logic devices that enable in-memory computation for convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. In this approach, the trained weights are stored in the (possibly non-volatile) memory and the ubiquitous multiply-accumulate operation is performed in the memory in an analog fashion. The analog sums of products are then converted back to digital signals by analog-to-digital converters (ADCs) in the memory periphery.
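To give a feel for the role the ADC plays in this scheme, the following is a minimal behavioral sketch (not an imec design): an ideal analog sum of products is formed for one memory column, then quantized by an ADC of a given bit width. The function name, parameters, and the full-scale assumption are all illustrative.

```python
import numpy as np

def analog_mac_with_adc(inputs, weights, adc_bits=6, full_scale=None):
    """Toy model of one in-memory compute column.

    The stored weights multiply the input activations and the products
    are summed in the analog domain; the resulting analog value is then
    digitized by an ADC with `adc_bits` of resolution. All names and
    parameters here are illustrative assumptions, not a real interface.
    """
    analog_sum = float(np.dot(inputs, weights))  # ideal analog sum of products
    if full_scale is None:
        # assume the ADC input range covers the worst-case sum magnitude
        full_scale = float(np.sum(np.abs(weights)) * np.max(np.abs(inputs)))
    levels = 2 ** adc_bits
    lsb = 2.0 * full_scale / (levels - 1)  # ADC step size over a bipolar range
    # quantize: round to the nearest code and clip to the ADC's code range
    code = int(np.clip(round(analog_sum / lsb), -(levels // 2), levels // 2 - 1))
    return code * lsb  # digitized MAC result

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 64)  # input activations
w = rng.uniform(-1, 1, 64)  # stored weights
print(analog_mac_with_adc(x, w, adc_bits=6))
```

A model like this makes the trade-off the project targets visible: fewer ADC bits cost accuracy (larger quantization error per column) but save area and energy in the periphery.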
This project’s goal is to design the ADCs, starting from the
specifications for analog in-memory compute for neural networks. Since the ADCs
will be placed in an array in the periphery of the memory, there will be quite a
few physical constraints to match them to the memory arrays. The work will cover circuit
design, layout, and simulation to quantify the ADC trade-offs in performance, area,
energy, and precision for deep learning applications.
Type of project: Internship
Duration: 3 months
Required degree: Master of Engineering Science
Required background: Electrotechnics/Electrical Engineering
Allowance only for students from a non-Belgian university