PhD - Leuven
Over the last decade, computing has moved from a homogeneous to a heterogeneous paradigm, in which the best features of different types of functional cores (CPU, GPU) and accelerators can be intelligently combined to achieve greater computational gains. Heterogeneous computing aims to match the requirements of each application to the strengths of the different compute sub-systems and to achieve load balancing by avoiding idle time across the processing units. The significance of heterogeneous computing is evident from the fact that a large fraction of TOP500 and Green500 supercomputers now use both CPUs and GPUs. More tightly integrated heterogeneous system architectures are also becoming the norm in domains ranging from mobile to advanced driver-assistance systems (ADAS). This trend, coupled with the surge of data-dominated applications, has exacerbated the “Memory Wall” problem, wherein moving data from memory to a processing element incurs a higher cost than the computation itself.
To effectively tackle the resulting intense data-traffic management demands on the memory subsystem, comprehensive solutions are required that encompass both interconnect and memory sub-system optimizations for such heterogeneous fabrics. This means envisioning and designing large, shared, power-gated memory instances that are tightly coupled with a distributed NoC-like network to transfer data in the most efficient manner to the functionally different compute units. For this, micro-architectural modifications of the interconnect topology, I/O sub-system, memory controller and memory banks/arrays will be required. An ideal design methodology would capture the fundamental memory-streaming requirements of the SoC and provide the necessary capabilities for optimal Quality of Service (QoS), while ensuring the best use of available memory bandwidth and a reduced impact of tail latency.
The aim of this Ph.D. is three-fold: 1) to evaluate and assess the performance challenges of off-chip/on-chip memory-centric heterogeneous fabrics for present and future systems, 2) to propose micro-architectural and design innovations that yield the best Power, Performance and Area (PPA) metrics for such heterogeneous fabrics, and 3) to analyze workload-aware configurations of the proposed fabrics based on optimization of the micro-architectural and design parameters.
Required background: Master’s degree in Electrical Engineering or Computer Engineering
Type of work: 10% literature study, 40% digital design, 25% analog & memory design, 25% computer architecture
Supervisor: Dragomir Milojevic
Daily advisor: Leandro M. Giacomini Rocha, Dwaipayan Biswas, Mohit Gupta
The reference code for this position is 2023-034. Mention this reference code on your application form.