
Digital Compute-in-Memory and non-volatile memory benchmarking

Leuven

Evaluate the ML training process in compute-in-memory architectures based on non-volatile memories

ML techniques such as deep neural networks (DNNs) have achieved important breakthroughs in a myriad of application domains. DNN training has traditionally been carried out in software on general-purpose compute platforms, while considerable research effort has been spent on accelerating inference on-chip for (near) real-time outcomes, optimising energy and accuracy. There is a need to look at optimised training procedures that reduce the energy footprint with minimal accuracy trade-off. Minimising data movement between the compute and memory blocks (the non-von Neumann trajectory) has proven highly effective for energy optimisation, especially in the accelerated-inference landscape for ML applications. This has primarily been achieved through compute-near/in-memory (CnM/CiM) techniques. Devices based on standard as well as novel/emerging technologies have been the main contributors to the CiM/CnM paradigm, helping to optimise the core matrix-vector multiplication (MVM) operation. Both digital multiply-accumulate circuits and Kirchhoff's-law-based analogue-domain processing have been explored to avoid costly fetches to an external memory.
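
As a minimal illustration of the MVM operation that CiM arrays accelerate, the Python sketch below (plain NumPy; the function name and conductance range are hypothetical choices, not project specifics) mimics an idealised analogue crossbar: weights are mapped to conductances, inputs are applied as word-line voltages, and each bit-line current is the Kirchhoff's-law sum of voltage-conductance products, i.e. one multiply-accumulate column.

    import numpy as np

    def crossbar_mvm(weights, inputs, g_min=1e-7, g_max=1e-6):
        # Map weights linearly onto conductances in [g_min, g_max] Siemens
        # (idealised single-level cells, no variability or noise).
        w_min, w_max = weights.min(), weights.max()
        g = g_min + (weights - w_min) * (g_max - g_min) / (w_max - w_min)
        # Kirchhoff's current law: each bit-line current is the sum over
        # rows of word-line voltage x cell conductance, i.e. one MAC column.
        return inputs @ g

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 64))   # weight matrix (rows x columns)
    x = rng.standard_normal(128)         # input activations applied as voltages
    print(crossbar_mvm(W, x).shape)      # (64,) bit-line currents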

Dense non-volatile memories (NVM) with large resistance (MOhm range) and narrow parameter distributions are promising candidates; however, the write penalties typical of the standard STT variant of MRAM technology could be a bottleneck for their adoption. This is mitigated by emerging MRAM write concepts: spin-orbit torque (SOT) and voltage-controlled magnetic anisotropy (VCMA). In addition, design solutions have been proposed to create multi-level MTJ cells and are currently being prototyped for further demonstration. This project will explore design-technology co-optimisation (DTCO) of ML hardware based on in-house dense, large-resistance (MOhm-range) NVM: STT/SOT/VGSOT-MRAM technology. The primary target is to optimise system performance for ML training with respect to a dedicated application space. This will help close the bottom-up loop connecting device characteristics to system power/performance metrics, enabling system-technology co-optimisation (STCO) for CiM-centric ML applications.
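
As a hedged sketch of the bottom-up modelling this project targets, the snippet below estimates the energy of a single training step on one NVM tile from assumed per-cell read and write energies; all numbers and names are illustrative placeholders, not imec MRAM data.

    # Placeholder device-level numbers (illustrative assumptions only).
    E_READ_PER_CELL_J  = 1e-14   # energy per cell accessed during an MVM read
    E_WRITE_PER_CELL_J = 1e-12   # energy per cell programmed during a weight update

    def array_energy_per_step(rows, cols, update_fraction=1.0):
        # One training step on a rows x cols NVM tile: a full-array read for
        # the forward MVM plus writes for the fraction of weights updated.
        read_energy = rows * cols * E_READ_PER_CELL_J
        write_energy = rows * cols * update_fraction * E_WRITE_PER_CELL_J
        return read_energy + write_energy

    # Example: 256 x 256 tile with 10% of the weights rewritten per step.
    print(f"{array_energy_per_step(256, 256, 0.1) * 1e9:.2f} nJ per training step")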

Required background: Electrical engineering with a CMOS design background, understanding of SRAM, Python programming, C/C++

Type of work: 20% literature, 80% modelling

Daily advisors: Fernando Garcia Redondo, Dwaipayan Biswas



Type of project: Combination of internship and thesis

Duration: 6-9 months

Required degree: Master of Engineering Technology, Master of Science, Master of Engineering Science

Required background: Computer Science, Electrotechnics/Electrical Engineering, Physics

Supervising scientist(s): For further information or to apply, please contact Fernando Garcia Redondo (Fernando.GarciaRedondo@imec.be) and Dwaipayan Biswas (Dwaipayan.Biswas@imec.be)

Imec allowance will be provided for students studying at a non-Belgian university.

