
Soft- and hardware co-optimization for sustainable AI

Leuven

Develop eco-design principles for artificial intelligence and machine learning systems by co-optimizing the soft- and hardware to minimize environmental impact across the entire life-cycle.

Context

In recent years, Artificial Intelligence (AI) has seen extremely rapid adoption following the success of Machine Learning (ML). Deep neural networks in particular revolutionized computer vision and natural language processing, with convolutional neural networks and large language models, respectively. However, these deep neural networks require massive amounts of data to train their billions of parameters, and the strong growth of such resource-intensive models amplifies the environmental pressures threatening life on Earth as we know it. The design of sustainable AI systems has thus emerged as a crucial research question.


This problem is mainly being addressed from two angles. First, the machine learning community is pursuing software improvements, aiming to design ML algorithms that are less compute-intensive, typically by using fewer parameters (e.g., sparse neural networks, knowledge distillation), fewer datapoints (e.g., coresets, compressive learning), and lower precision (e.g., weight quantization). Second, the hardware community is developing a broad family of hardware accelerators specifically designed to run existing ML tasks efficiently (e.g., Neural Processing Units, or NPUs). There is growing interest in going beyond these separate approaches towards a holistic one in which algorithms and hardware are co-designed (as argued in "The Hardware Lottery" essay by Sara Hooker). By opening up the design space, the aim is to let more radical solutions emerge that bring substantial performance gains (e.g., neuromorphic computing, hyperdimensional computing).
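As a minimal illustration of the "lower precision" lever mentioned above, the sketch below applies symmetric post-training int8 quantization to a weight tensor. It is a toy NumPy example for intuition only (the function names and the per-tensor scaling scheme are our own simplifications), not a method prescribed by this project.

    import numpy as np

    def quantize_weights_int8(w):
        # Symmetric post-training quantization: one scale for the whole tensor.
        scale = np.max(np.abs(w)) / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    # Toy example: quantize a random weight matrix and inspect the error.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)
    q, scale = quantize_weights_int8(w)
    w_hat = dequantize(q, scale)
    print("memory reduction: 4x (float32 -> int8)")
    print("mean absolute quantization error:", np.mean(np.abs(w - w_hat)))

Per-tensor symmetric scaling is the simplest possible scheme; practical quantization toolchains typically use per-channel scales and calibration data, but the memory and energy savings follow the same principle.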


However, these optimizations so far mainly target improvements during algorithm execution (e.g., an NPU with reduced power consumption and latency during training and/or inference). Although such improvements can achieve sustainability gains in one targeted area (e.g., by reducing the power consumption of an ML model, the NPU reduces the emissions during the use-phase of that model), it remains unclear whether they benefit the environment when the whole life-cycle of the ML model is considered (e.g., the potentially higher embodied emissions or shorter lifetime of a specialized NPU compared with an off-the-shelf general-purpose graphics processing unit, or GPU).
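To make this trade-off concrete, the sketch below amortizes embodied emissions over a device's lifetime and adds use-phase emissions per inference. The Accelerator class, its fields, and all numerical values are hypothetical placeholders chosen purely for illustration; a real study would rely on full LCA data, allocation rules, and uncertainty analysis.

    from dataclasses import dataclass

    @dataclass
    class Accelerator:
        name: str
        embodied_kgco2e: float        # manufacturing ("embodied") emissions of the device
        lifetime_inferences: float    # total inferences served before the device is retired
        energy_per_inference_kwh: float

    def lifecycle_co2e_per_inference(acc, grid_kgco2e_per_kwh):
        # Amortize embodied emissions over the lifetime and add use-phase emissions.
        embodied_share = acc.embodied_kgco2e / acc.lifetime_inferences
        use_phase = acc.energy_per_inference_kwh * grid_kgco2e_per_kwh
        return embodied_share + use_phase

    # All numbers below are illustrative placeholders, not measured data.
    gpu = Accelerator("general-purpose GPU", embodied_kgco2e=150.0,
                      lifetime_inferences=5e9, energy_per_inference_kwh=2e-4)
    npu = Accelerator("specialized NPU", embodied_kgco2e=180.0,
                      lifetime_inferences=2e9, energy_per_inference_kwh=5e-5)

    for acc in (gpu, npu):
        total = lifecycle_co2e_per_inference(acc, grid_kgco2e_per_kwh=0.25)
        print(f"{acc.name}: {total:.2e} kgCO2e per inference")

Depending on the placeholder values, a more energy-efficient but shorter-lived or more emission-intensive accelerator can end up worse over the whole life-cycle, which is precisely the kind of question this thesis sets out to answer.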


High-level objective

This PhD thesis will study the eco-design of machine learning systems while co-optimizing the soft- and hardware. A doubly holistic approach will thus be followed: the design space encompasses both software and hardware, and the target objective encompasses the whole life-cycle of the ML system.


This will be tackled from an interdisciplinary perspective, involving knowledge from machine learning and algorithm design, hardware design (System Technology Co-Optimization, or STCO), and life-cycle analysis.


The PhD topic is intentionally very open, but could, for example, address the following research questions:

  • How can we model the whole life-cycle of a machine learning system, including (i) both hardware and software, and (ii) both model training and inference?
  • What is a fair unit of comparison in the case of a machine learning system when targeting sustainable AI?
  • How good are existing sustainable AI efforts (e.g., NPU architectures, neuromorphic computing) at taking the whole life-cycle into account?
  • What is the environmental impact of datacenters and how can we allocate part of their impact to one specific ML model?
  • Which hardware and/or software technology decisions minimize environmental impact?
  • How can we make these choices robust given the numerous underlying uncertainties?


Candidate profile

The candidate holds an engineering master’s degree (computer science, electrical engineering, applied mathematics) and has good knowledge of at least one of the following areas:

  • Machine Learning (e.g., deep learning, sustainable ML, hardware-aware ML, ...)
  • Hardware design (integrated circuit design, STCO, NPUs and other AI accelerators, ...)
  • Sustainability (LCA methodology/software/databases, footprint of ICT, ...)

As the research project is interdisciplinary, the candidate moreover has a strong interest in, and the ability to develop expertise in, the remaining areas. Good communication skills, collaboration abilities, and a solid project management record will also be taken into account.


Research environment

The research will be carried out within imec's IRIS team, which has expertise in sustainability research for semiconductor manufacturing and systems; strong collaborations are envisioned with other pathfinding teams for the System Technology Co-Optimization aspects.


Required background: Engineering (Computer Science, Electrical Engineering, Applied Mathematics) with a background in at least one of: (i) Machine Learning, (ii) Hardware design, (iii) Sustainability.

Type of work: 80% modeling/simulation, 20% literature

Supervisor: David Bol

Daily advisor: Vincent Schellekens

The reference code for this position is 2025-174. Mention this reference code on your application form.

