Reinforcement Learning for Compiling Neural Networks to In-Memory Neural Network Accelerator

Several neural network accelerators have emerged in recent years. Many of these accelerators expend significant energy fetching operands from the various levels of the memory hierarchy. Optimizing the mapping of an arbitrary neural network onto a given accelerator requires significant designer effort, and reusing the same mapping across multiple accelerators without sacrificing performance is non-trivial. The goal of this project is to build a generic accelerator model that captures the essential features of various accelerators. This model will serve as the environment for training a Reinforcement Learning framework that efficiently compiles an arbitrary neural network (primarily Convolutional Neural Networks) onto the accelerator to achieve high energy efficiency.

Skills:
Mandatory: Python, experience with PyTorch or another deep neural network framework, fundamentals of Computer Architecture
Optional: Familiarity with Reinforcement Learning algorithms, Graph Convolutional Networks
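To make the setup concrete, the mapping problem can be framed as an RL environment: the agent chooses mapping decisions (here, a tile size for scheduling a conv layer's output channels), and the accelerator model returns an energy cost as negative reward. The sketch below is purely illustrative; the class name, cost constants, and buffer capacity are assumptions, not the project's actual accelerator model.

```python
class ToyAcceleratorEnv:
    """Toy accelerator model: map one conv layer by choosing a tile size.

    State:  number of output channels still to schedule.
    Action: index into TILE_SIZES (channels mapped per step).
    Reward: negative energy cost; operands that spill past the on-chip
            buffer are assumed to pay a much higher off-chip fetch cost.
    All constants below are illustrative assumptions.
    """
    TILE_SIZES = [1, 4, 16, 64]
    DRAM_COST = 100.0    # assumed energy per off-chip operand fetch
    BUFFER_COST = 1.0    # assumed energy per on-chip buffer access
    BUFFER_CAP = 16      # assumed channels that fit in the on-chip buffer

    def __init__(self, out_channels=64):
        self.out_channels = out_channels
        self.reset()

    def reset(self):
        self.remaining = self.out_channels
        return self.remaining

    def step(self, action):
        tile = min(self.TILE_SIZES[action], self.remaining)
        if tile > self.BUFFER_CAP:
            # Tile spills to DRAM: every operand pays the off-chip cost.
            energy = tile * self.DRAM_COST
        else:
            # Tile fits on chip, but each tile pays a fixed scheduling overhead.
            energy = tile * self.BUFFER_COST + 10.0
        self.remaining -= tile
        done = self.remaining == 0
        return self.remaining, -energy, done


def rollout(env, policy):
    """Run one episode under a fixed policy; return total reward."""
    state, total, done = env.reset(), 0.0, False
    while not done:
        state, reward, done = env.step(policy(state))
        total += reward
    return total


env = ToyAcceleratorEnv()
greedy = rollout(env, lambda s: 3)    # always pick tile 64: spills to DRAM
buffered = rollout(env, lambda s: 2)  # tile 16 fits the assumed buffer
```

Even with two fixed policies, the environment already exposes the trade-off an RL agent would learn to navigate: the buffer-sized tiling costs far less energy than the greedy one in this toy model. A real environment would expose many more mapping knobs (loop ordering, spatial unrolling, operand placement) and a calibrated energy model.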

Supervising scientist(s): For further information or to apply, please contact Debjyoti Bhattacharjee and Nathan Laubeuf.


