Event based AI sensor fusion representations

PhD - Antwerpen

How to create a compactly formalised output representation of multiple sensorial modes in a low-power, low-latency and privacy-preserving way?

The objective of this PhD topic is to research energy-efficient compute hardware together with closely coupled software methods and tools. AI will become increasingly embedded in edge hardware and matching software. This has the well-known benefits of reducing power consumption, latency and privacy concerns in real-world applications (traffic regulation, autonomous driving, drones, robotics, …). Such edge AI applications require custom hardware and software paradigms, which are the focus of this PhD.

In this PhD, you will work together with imec’s hardware teams to create edge AI algorithms that can perform sensor fusion based on appropriate low-power, low-latency algorithms, such as spiking neural networks (SNNs). SNN research has advanced considerably in recent years, yet the technique still has multiple limitations. Mainstream SNN architectures remain underdeveloped compared to non-spiking DNN techniques such as CNNs. In addition, SNNs are currently not well suited to online learning, as no learning algorithm is yet available that works as well as backpropagation does for DNNs. Finally, the size of the networks that SNN implementations can support is still limited. Addressing these limitations is essential to advance the state of the art in edge AI.
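The spiking behaviour referred to above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic building block of most SNNs. This is a simplified sketch with illustrative parameter values (`threshold`, `decay`), not one of imec's implementations:

```python
def lif_neuron(input_current, threshold=1.0, decay=0.9, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    The membrane potential leaks by a factor `decay` each time step,
    integrates the incoming current, and emits a spike (1) whenever it
    crosses `threshold`, after which it is reset.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = decay * v + i        # leaky integration of the input current
        if v >= threshold:       # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset          # hard reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 charges the membrane until it spikes periodically.
print(lif_neuron([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because information is carried by sparse binary spikes rather than dense activations, such neurons only cause compute when an event occurs, which is the origin of the low-power, low-latency promise of SNNs.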

You will focus on ways to make the output of the sensor fusion algorithms (which combine the input sensor modalities) more fit for purpose and more compact in its representation. Instead of producing scattered point clouds in 3D space from the various sensor readings, the aim is to create compactly formalised representations of the objects in a scene. These representations should contain the object type, 3D coordinates and motion vector of every relevant object in the scene. The challenge is to do this in a way that is power-efficient, low-latency and respectful of privacy, so it can operate on the edge in a real-world environment. The main research questions in this project are therefore:

  • What are the predominant architectures and techniques for data fusion being used in different industries? 
  • Which AI techniques are used in these various architectures? 
  • How can neuromorphic algorithms be helpful in reducing the power budget and the latency of such data fusion algorithms? 
  • How can neuromorphic algorithms make optimal use of the latest generations of neuromorphic AI accelerator chips?    
  • What is the most appropriate algorithmic paradigm for low-power low-latency edge AI sensor fusion? 
  • How to create a compactly formalised output representation of multiple sensorial modes in a low-power, low-latency and privacy-preserving way?
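As an illustration of the kind of compactly formalised output the project targets, the scene-level representation described above (object type, 3D coordinates and motion vector per object) could be sketched as a small typed structure. All names and fields here are hypothetical, not a specification:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SceneObject:
    """One detected object in the fused scene representation (illustrative)."""
    object_type: str                      # e.g. "pedestrian", "vehicle"
    position: Tuple[float, float, float]  # 3D coordinates in metres
    velocity: Tuple[float, float, float]  # motion vector in m/s


@dataclass
class SceneRepresentation:
    """Compact per-frame output of the fusion stage: a short list of
    objects instead of raw point clouds from the individual sensors."""
    timestamp: float
    objects: List[SceneObject]


scene = SceneRepresentation(
    timestamp=0.04,
    objects=[SceneObject("pedestrian", (2.1, 0.0, 5.3), (0.4, 0.0, -0.1))],
)
print(len(scene.objects))  # → 1
```

A few such records per frame are orders of magnitude smaller than the underlying point clouds, which is what makes the representation attractive for low-bandwidth, privacy-preserving edge deployment.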

The application domains for this research can be diverse, but we foresee initial applications in the automotive sector, traffic management, robotics and life sciences. In this project, you will collaborate with imec teams working on the development of new sensing technology, like radar, sodar and lidar. In addition, you will closely collaborate with teams working on new AI accelerator hardware, like neuromorphic chips.  

Way of working

You will work with teams from various parts of imec, working in a highly applied way towards contributions related to imec’s nascent edge AI sensor fusion program. You will also contribute to the definition of the research roadmap and will get the opportunity to support junior researchers. The focus of your research will be on addressing the above research questions through the creation and evaluation of real-world demonstrators with industrial clients in either the automotive, traffic management, robotics, or life sciences domains.  

Relevant papers 

The following papers are indicative of the intended research scope: 

  • [Vogginger2022] Vogginger B, Kreutz F, López-Randulfe J, Liu C, Dietrich R, Gonzalez HA, Scholz D, Reeb N, Auge D, Hille J, Arsalan M, Mirus F, Grassmann C, Knoll A and Mayr C (2022) Automotive Radar Processing With Spiking Neural Networks: Concepts and Challenges. Front. Neurosci. 16:851774. doi: 10.3389/fnins.2022.851774 
  • [Knobloch2022] Neuromorphic AI - An Automotive Application View of Event Based Processing, K. Knobloch, P. Gerhards, Infineon Development Center Automotive Electronics & AI 2022-06-29  
  • [Cordone2022] Cordone, L., Miramond, B., & Thiérion, P. (2022). Object Detection with Spiking Neural Networks on Automotive Event Data. 2022 International Joint Conference on Neural Networks (IJCNN), 1-8. 
  • [Kim2020] Kim, Seijoon, Seongsik Park, Byunggook Na and Sungroh Yoon. “Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection.” AAAI (2020). 
  • [Xiang2022] Xiang, S.; Jiang, S.; Liu, X.; Zhang, T.; Yu, L. Spiking VGG7: Deep Convolutional Spiking Neural Network with Direct Training for Object Recognition. Electronics 2022, 11, 2097. https://doi.org/10.3390/electronics11132097
  • [Safa2021] Safa, Ali; Corradi, Federico; Keuninckx, Lars; Ocket, Ilja; Bourdoux, Andre; Catthoor, Francky; Gielen, Georges (2021). Improving the Accuracy of Spiking Neural Networks for Radar Gesture Recognition Through Preprocessing. IEEE Transactions on Neural Networks and Learning Systems, pp. 1-13. doi: 10.1109/TNNLS.2021.3109958
  • [Tsang2021] Tsang, I.J.; Corradi, F.; Sifalakis, M.; Van Leekwijck, W.; Latré, S. Radar-Based Hand Gesture Recognition Using Spiking Neural Networks. Electronics 2021, 10, 1405. https://doi.org/10.3390/electronics 1012
  • [Stuijt2021] Stuijt, J., et al. μBrain: An Event-Driven and Fully Synthesizable Architecture for Spiking Neural Networks. Front. Neurosci., 19 May 2021. https://doi.org/10.3389/fnins.2021.664208
  • [Schuman2022] Schuman, C.D., Kulkarni, S.R., Parsa, M. et al. Opportunities for neuromorphic computing algorithms and applications. Nat Comput Sci 2, 10–19 (2022). https://doi.org/10.1038/s43588-021-00184-y 
  • [Tavanaei2019] Tavanaei, Ghodrati, Kheradpisheh, Masquelier, Maida, Deep learning in spiking neural networks, Neural Networks, Volume 111, 2019, PP 47-63, https://doi.org/10.1016/j.neunet.2018.12.002. 


Required background: computer science, engineering, machine learning


Type of work: 60% programming, 20% literature review, 10% requirements gathering

Supervisor: Steven Latré

Co-supervisor: Tanguy Coenen

Daily advisor: Tanguy Coenen

The reference code for this position is 2023-102. Mention this reference code on your application form.

