Reinforcement Learning for Compiling Neural Networks to an In-Memory Neural Network Accelerator

Posted more than two weeks ago
Several neural network accelerators have emerged in recent years. Many of these accelerators expend significant energy fetching operands from the various levels of the memory hierarchy, and considerable designer effort is required to optimize the mapping of an arbitrary neural network onto a given accelerator. Furthermore, it is non-trivial to reuse the same mapping across multiple accelerators without sacrificing performance. The goal of this project is to build a generic accelerator model that captures the essential features of various accelerators. This model will serve as the environment for training a Reinforcement Learning framework that efficiently compiles an arbitrary neural network (primarily Convolutional Neural Networks) onto the accelerator to achieve high energy efficiency.

Skills:
Mandatory: Python, experience with PyTorch or another deep neural network framework, fundamentals of Computer Architecture
Optional: Familiarity with Reinforcement Learning algorithms, Graph Convolutional Networks
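To make the idea concrete, the accelerator model described above can be wrapped as a standard RL environment: the agent picks one mapping decision per layer (here, a tile size), and the reward is the negative of an estimated energy cost dominated by operand fetches. The sketch below is purely illustrative — the `MappingEnv` class, its tile-size action space, and the toy energy formula are all hypothetical placeholders, not the project's actual accelerator model.

```python
import random


class MappingEnv:
    """Toy RL environment: choose a tile size for each layer of a network.

    The reward is the negative of a crude energy estimate, in which
    off-chip refetches dominate once a tile overflows the on-chip buffer.
    All constants are illustrative, not a real accelerator model.
    """

    def __init__(self, layer_macs, buffer_size=4096, seed=0):
        self.layer_macs = layer_macs          # MACs per layer (hypothetical workload)
        self.buffer_size = buffer_size        # on-chip buffer capacity (elements)
        self.tile_choices = [16, 32, 64, 128] # action space: candidate tile sizes
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.layer = 0
        self.total_energy = 0.0
        return self._state()

    def _state(self):
        # Observation: (current layer index, its MAC count), 0 when done.
        macs = self.layer_macs[self.layer] if self.layer < len(self.layer_macs) else 0
        return (self.layer, macs)

    def step(self, action):
        tile = self.tile_choices[action]
        macs = self.layer_macs[self.layer]
        # Toy energy model: each buffer overflow forces an extra pass of fetches.
        refetches = max(1, (tile * tile) // self.buffer_size + 1)
        energy = macs * 0.1 + refetches * macs * 0.5 / tile
        self.total_energy += energy
        self.layer += 1
        done = self.layer == len(self.layer_macs)
        return self._state(), -energy, done
```

A random or learned policy interacts with it in the usual reset/step loop; swapping in a trained agent (e.g. a policy-gradient method over the tile choices) would be the project's actual contribution.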

Supervising scientist(s): For further information or to apply, please contact Debjyoti Bhattacharjee and Nathan Laubeuf.


