Enablement of Training Hardware for Deep Learning

Leuven - PhD | More than two weeks ago

Paving the way towards self-learning intelligent chips

Apply

Machine learning is changing our world, influencing ever more aspects of daily life. The most powerful machine learning algorithms, known as deep learning algorithms or deep neural networks, require vast amounts of computation and memory during training. As a result, training a neural network costs time, memory, and energy. In this PhD, we will tackle these problems by exploring how accelerator architectures can be made efficient for neural network training.
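
To give a feel for the scale involved, the sketch below applies a common rule of thumb: a dense forward pass costs roughly 2 FLOPs per parameter per sample, and the backward pass roughly doubles that again. The model size, batch size, and step count are illustrative assumptions, not figures from this project.

# Back-of-envelope training cost (all numbers are illustrative assumptions).
# Heuristic: forward pass ~2 FLOPs per parameter per sample; the backward
# pass adds ~2x the forward cost, so one training step is ~6 * params * batch.
params = 25e6          # hypothetical network with 25 million parameters
batch_size = 256
steps = 450_000        # assumed number of training steps

training_flops = 6 * params * batch_size * steps
inference_flops = 2 * params             # one sample, forward pass only

print(f"training:  ~{training_flops:.1e} FLOPs")              # ~1.7e+16
print(f"inference: ~{inference_flops:.1e} FLOPs per sample")  # ~5.0e+07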

Most of today's neural network accelerators focus on efficient inference, i.e. making classifications based on data collected from sensors or other inputs. However, a neural network must be trained before it can infer, and training is typically performed on GPU-based hardware. GPUs, however, are large and consume a great deal of energy; dedicated, efficient accelerators for neural network training are still largely lacking.
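
As a minimal illustration of why training is heavier than inference, the NumPy sketch below contrasts a forward-only inference pass with a single training step of a tiny two-layer network. The dimensions, loss, and plain SGD update are illustrative assumptions, not part of this project; the point is that training must keep activations around for the backward pass and adds roughly twice the forward compute, plus a weight update.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 256))          # batch of inputs (assumed sizes)
y = rng.standard_normal((64, 10))           # targets
W1 = rng.standard_normal((256, 128)) * 0.01
W2 = rng.standard_normal((128, 10)) * 0.01

def infer(x):
    # Inference: forward pass only; intermediates can be discarded at once.
    return np.maximum(x @ W1, 0) @ W2

def train_step(x, y, lr=1e-2):
    # Training: the forward pass must KEEP its activations for backprop,
    # then run a backward pass (~2x the forward compute) and a weight update.
    global W1, W2
    h_pre = x @ W1                  # kept in memory for the backward pass
    h = np.maximum(h_pre, 0)        # kept in memory for the backward pass
    out = h @ W2
    d_out = 2 * (out - y) / len(x)  # gradient of a mean-squared-error loss
    dW2 = h.T @ d_out
    d_h = d_out @ W2.T
    d_h[h_pre <= 0] = 0             # ReLU backward
    dW1 = x.T @ d_h
    W1 -= lr * dW1                  # weight updates: extra memory traffic
    W2 -= lr * dW2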

In this PhD, we will explore ways to add training functionality to accelerator architectures by co-optimizing smarter, hardware-friendly algorithms with the underlying hardware, leveraging imec's expertise in novel memory and logic technologies and in 3D integration technology.
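
One concrete example of the kind of hardware-friendly training algorithm such co-optimization could target is low-precision arithmetic. The sketch below (a generic illustration, not imec's approach) quantizes a gradient tensor to int8 before the weight update, shrinking the datapath width and memory traffic a training accelerator has to support.

import numpy as np

def quantize_int8(t):
    # Uniform symmetric int8 quantization, a common hardware-friendly format.
    scale = np.max(np.abs(t)) / 127.0 + 1e-12   # avoid division by zero
    q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

# Illustrative use: an 8-bit approximation of a float32 gradient tensor.
grad = np.random.default_rng(1).standard_normal((128, 10)).astype(np.float32)
q, s = quantize_int8(grad)
grad_int8 = dequantize_int8(q, s)   # used in place of the exact gradient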

Required background: electrical engineering, computer science, or equivalent

Type of work: 80% modeling/simulation, 20% literature

Supervisor: Rudy Lauwereins

Daily advisor: Stefan Cosemans

The reference code for this position is 1812-43. Mention this reference code on your application form.
