About the course
Machine learning is rapidly gaining importance. Thanks to advances in computational power, more and more machine learning and deep learning applications are becoming reality and are increasingly deployed on embedded, resource-constrained devices. This puts stringent requirements on electronic and integrated system design. This course focuses on the hardware-efficient implementation of machine learning, and more specifically of deep neural networks. It will become clear that a truly efficient design must be optimized across the complete algorithm / architecture / circuit design space. The course covers all of these aspects in depth.
In this 3-day program the participant will learn about:
- Deep learning concepts and algorithm-driven efficiency enhancement techniques
- Digital processor and datapath architectures for neural network execution
- Exploiting mixed-signal processing for machine learning
- Exploiting in-memory computations and emerging memory devices for machine learning
- Cross-layer dataflow and scheduling optimizations spanning the algorithm, architecture, and circuit levels
Who should attend?
Engineers, IC designers, and engineering managers who are interested in the hardware implementation of machine learning and deep learning systems should attend this course. It is also of great interest to people who work on the software implementation of machine learning and artificial intelligence algorithms and want to understand the implications of algorithmic choices within a complete embedded system.
Lecturers
Prof. Marian Verhelst, KU Leuven, Belgium
Prof. Boris Murmann, Stanford University, USA