Enablement of Training Hardware for Deep Learning

Leuven - PhD

Paving the way towards self-learning intelligent chips

Machine learning is changing our world, influencing ever more aspects of our lives. Some of the most powerful machine learning algorithms, known as deep learning algorithms or deep neural networks, require vast amounts of computation and memory during training. As a result, training a neural network takes considerable time, space and energy. In this PhD, we will tackle these problems by exploring how accelerator architectures can be made efficient for neural network training.

Most of today's neural network accelerators focus on efficient inference, i.e. making classifications based on data collected from sensors or other inputs. However, a neural network must be trained before it can infer. Training is typically performed on GPU-based hardware, but GPUs are large and consume a great deal of energy. Efficient accelerators for neural network training are still largely lacking.

In this PhD, we would like to explore ways to add training functionality to accelerator architectures by co-optimizing smarter, hardware-friendly algorithms, while leveraging imec's expertise in novel memory and logic technologies and in 3D integration technology.
Required background: electrical engineering, computer science or equivalent

Type of work: 80% modeling/simulation, 20% literature

Supervisor: Rudy Lauwereins

Daily advisor: Stefan Cosemans

The reference code for this position is 1812-43. Mention this reference code on your application form.
