
Leveraging Compilers for AI Application-Technology Co-Optimization and Data Movement Mitigation

PhD - Leuven

Taking a unified system view to better leverage emerging technologies and adapt modern applications.

Modern systems targeting AI-based workloads are highly heterogeneous. They often integrate CPUs, NPUs, GPUs, DSPs, and other specialized logic alongside a complex and ever-evolving memory hierarchy, both on- and off-chip, in order to run multi-modal workloads. These workloads may draw on any subset of the available hardware resources, exchanging inputs and outputs between them. For example, a single multimodal workload tasked with generating an edited image may use a DSP for initial image processing, an NPU for reasoning about the desired edits, a GPU for image modification, and a CPU for task orchestration. Such complex workloads need careful orchestration and load balancing to ensure competitive performance without exceeding power and thermal constraints.
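To make the orchestration problem concrete, the sketch below shows, in Python, how such a multimodal image-editing pipeline might be mapped onto heterogeneous devices under a power budget. The device names, stage costs, power budget, and the greedy fallback policy are all assumptions made purely for illustration; they do not describe any specific imec system or scheduler.

```python
# Hypothetical sketch: mapping the stages of a multimodal image-editing
# workload onto heterogeneous devices. All names and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    preferred_device: str
    est_ms: float   # assumed per-stage latency estimate
    est_mj: float   # assumed per-stage energy estimate (millijoules)

# One possible decomposition of the example workload described above.
PIPELINE = [
    Stage("preprocess_image",  "dsp", est_ms=2.0, est_mj=5.0),
    Stage("reason_about_edit", "npu", est_ms=8.0, est_mj=40.0),
    Stage("apply_edit",        "gpu", est_ms=6.0, est_mj=60.0),
    Stage("orchestrate_io",    "cpu", est_ms=1.0, est_mj=2.0),
]

def schedule(pipeline, power_budget_mw=5000.0):
    """Greedy orchestration: run each stage on its preferred device unless the
    running average power would exceed the (assumed) budget, in which case the
    stage falls back to the CPU with assumed fallback costs."""
    total_ms, total_mj, plan = 0.0, 0.0, []
    for stage in pipeline:
        ms, mj, device = stage.est_ms, stage.est_mj, stage.preferred_device
        avg_power_mw = (total_mj + mj) / (total_ms + ms) * 1000.0
        if avg_power_mw > power_budget_mw:
            device, ms, mj = "cpu", ms * 4.0, mj * 0.5
        total_ms += ms
        total_mj += mj
        plan.append((stage.name, device, ms, mj))
    return plan, total_ms, total_mj

if __name__ == "__main__":
    plan, ms, mj = schedule(PIPELINE)
    for name, device, stage_ms, stage_mj in plan:
        print(f"{name:18s} -> {device:3s}  {stage_ms:5.1f} ms  {stage_mj:5.1f} mJ")
    print(f"total: {ms:.1f} ms, {mj:.1f} mJ, avg power {mj / ms * 1000:.0f} mW")
```

A real orchestrator would of course be driven by measured or modelled costs rather than fixed estimates; closing that gap is exactly what the system-level modelling work in this thesis targets.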

The emergence of MLIR's highly modular framework for heterogeneous compilation, combined with the regularity of AI data access patterns and the standardized ONNX representation of neural networks, has broadly unified the software stack for deploying these multi-modal AI applications on heterogeneous systems. However, as imec continues its work on ground-breaking memory technologies and advanced logic nodes, it is increasingly apparent that understanding the true cost of data movement, as a function not only of the underlying technology but also of tiling, scheduling, orchestration, and packaging, is crucial to the scalability of future AI applications. The goal of this doctoral thesis topic is therefore to lay the groundwork for a data movement-based approach to building technology-aware software stacks.
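As a flavour of the kind of first-order data-movement analysis such a technology-aware stack needs, the sketch below estimates the off-chip traffic and access energy of a tiled matrix multiplication as a function of tile size and on-chip buffer capacity. The matrix sizes, buffer capacity, bytes per element, and per-byte DRAM energy are illustrative assumptions, and the traffic formula is a deliberately simple textbook-style estimate, not an imec model.

```python
# Illustrative first-order model: off-chip bytes moved by a tiled matrix
# multiplication C = A @ B, assuming each C tile stays resident on-chip while
# it accumulates over K. All sizes and energy numbers are assumptions.

def tiled_matmul_traffic(M, N, K, tm, tn, bytes_per_elem=2):
    """Estimate DRAM traffic for (tm x tn) output tiles of C."""
    a_bytes = M * K * (N // tn) * bytes_per_elem   # A re-streamed once per column of C tiles
    b_bytes = K * N * (M // tm) * bytes_per_elem   # B re-streamed once per row of C tiles
    c_bytes = 2 * M * N * bytes_per_elem           # C read and written once
    return a_bytes + b_bytes + c_bytes

def buffer_footprint(tm, tn, tk, bytes_per_elem=2):
    """On-chip bytes needed to hold one tile of A, B, and C simultaneously."""
    return (tm * tk + tk * tn + tm * tn) * bytes_per_elem

if __name__ == "__main__":
    M = N = K = 1024
    BUFFER_LIMIT = 256 * 1024        # assumed 256 KiB on-chip buffer
    PJ_PER_BYTE_DRAM = 20.0          # assumed off-chip access energy per byte

    for t in (32, 64, 128, 256):     # sweep square tiles tm = tn = tk = t
        if buffer_footprint(t, t, t) > BUFFER_LIMIT:
            print(f"tile {t:3d}: does not fit in the on-chip buffer")
            continue
        traffic = tiled_matmul_traffic(M, N, K, t, t)
        energy_uj = traffic * PJ_PER_BYTE_DRAM / 1e6
        print(f"tile {t:3d}: {traffic / 1e6:7.1f} MB moved, ~{energy_uj:7.1f} uJ in DRAM accesses")
```

In the thesis, simple estimates of this kind would be replaced or calibrated with models of emerging logic, memories, and packaging provided by the technology teams.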

For this thesis, your work will span neural network applications; middleware such as compilers, drivers, and operating systems; and system-level and cycle-accurate modelling. Your approach will be demonstrated through research studies of emerging imec technologies. Key responsibilities include the following:

  • Researching AI load balancing strategies in modern and upcoming heterogeneous systems.
  • Working and coordinating with technology teams to incorporate models of emerging logic, memories, and packaging.
  • Analysing data movement using profiling and system-level modelling tools to propose orchestration methodologies and subsequently evaluating these methodologies using abstract implementations.

 

This role is ideal for someone who is deeply interested in hardware-software-technology codesign, computer architecture, compilers, and working in an interdisciplinary environment that values innovation, creativity, and real-world impact.

Profile: You are analytical and detail-oriented, with a strong interest in technology, system-level modelling and hardware-software codesign. You are adept at or have a keen interest in programming, software frameworks, technology modelling, and performance evaluation tools.

Background: You have or are currently pursuing a degree in computer engineering, computer science, or electrical engineering. Knowledge of compiler frameworks, MLIR, machine learning frameworks, and system-level modelling tools is an advantage.



Required background: Engineering Technology, Computer Engineering, Electrical Engineering, or equivalent

Type of work: 20% literature, 30% experimental, 50% engineering

Supervisor: Frank Piessens

Daily advisor: Joshua Klein

The reference code for this position is 2026-013. Mention this reference code on your application form.
