As a consequence of our increasing reliance on information, for both personal and professional use, ever more data is being generated, processed, moved, stored, and retained in multiple copies for longer periods of time. At the same time, aggressive technology scaling has exposed the limitations of current memory and data-storage technologies such as SRAM, DRAM, and Flash. Several new Non-Volatile Memory (NVM) technologies are therefore being investigated to satisfy the need for continuously higher storage capacity and system performance, lower power consumption, smaller form factors, lower system cost, and long data-retention capability. Resistive RAM (RRAM), Phase-Change RAM (PC-RAM), and Magneto-Resistive RAM (MRAM) are among the more mature NVM technologies, while Spin-Orbit-Torque MRAM, Domain-Wall Memory, and IGZO DRAM are more recent NVMs with interesting characteristics. However, naively replacing the whole memory hierarchy with NVM is not a good idea: each technology comes with inherent flaws, such as limited write endurance or high access latency. To mitigate this problem, the memory system architecture of the system-on-chip should combine several memory regions with different characteristics (see Figure 1), connected through the appropriate interface technologies.
In this PhD thesis, the different technologies will first be compared and evaluated from a system perspective for the targeted platforms and application domains. Memory designs and memory interfaces should be considered together, as each contributes significantly to performance, power, and area. This will also involve laying the groundwork for an efficient translation between the link hardware abstraction level (RTL) and the system abstraction level (e.g. gem5). After that, promising technology-architecture combinations can be optimized by inferring crucial design parameters with machine learning techniques such as deep learning. The candidate will also study how machine learning can help solve the data-allocation problem in such complex memory systems. This PhD covers multiple abstraction layers and requires close interaction with circuit experts, system architecture experts, machine learning experts, and software experts.
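To make the data-allocation problem concrete, the sketch below shows a deliberately simplified greedy heuristic that places data objects into heterogeneous memory regions by write intensity, so that write-endurance-limited NVM regions receive mostly read-dominated data. All region names, cost numbers, and the heuristic itself are illustrative placeholders, not part of the thesis description; the actual work would replace such hand-written rules with learned allocation policies.

```python
# Illustrative sketch only: greedy allocation of data objects to
# heterogeneous memory regions. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    capacity: int        # bytes
    write_cost: float    # relative energy/wear cost per write
    used: int = 0

@dataclass
class DataObject:
    name: str
    size: int            # bytes
    writes: int          # estimated write count (e.g. from profiling)

def allocate(objects, regions):
    """Place write-heavy objects in low-write-cost regions first."""
    placement = {}
    # Consider the most write-intensive objects first ...
    for obj in sorted(objects, key=lambda o: o.writes, reverse=True):
        # ... and put each into the cheapest-to-write region with room left.
        for reg in sorted(regions, key=lambda r: r.write_cost):
            if reg.used + obj.size <= reg.capacity:
                reg.used += obj.size
                placement[obj.name] = reg.name
                break
        else:
            raise MemoryError(f"no region can hold {obj.name}")
    return placement

regions = [Region("SRAM", 1024, write_cost=1.0),
           Region("RRAM", 8192, write_cost=20.0)]
objects = [DataObject("stack", 512, writes=100_000),
           DataObject("weights", 4096, writes=10),
           DataObject("log", 768, writes=5_000)]

# The write-heavy stack lands in SRAM; the read-mostly weights and the
# overflowing log end up in the endurance-limited RRAM region.
print(allocate(objects, regions))
```

Even this toy version shows why the problem is hard: the greedy order interacts with capacity constraints, and realistic systems add latency, bandwidth, and retention dimensions, which motivates the machine-learning approaches studied in this PhD.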
Figure 1: Illustration of emerging non-volatile memories and different application domain targets
Required background: Electrical Engineering or Computer Science engineer with a background in memory architecture and an interest in performance analysis and machine learning.
Type of work: 30% architecture design, 30% system evaluation, 30% software, 10% literature
Supervisor: Francky Catthoor
Daily advisor: Timon Evenblij
The reference code for this position is 2020-028. Mention this reference code on your application form.