PhD - Antwerpen
How to create a compactly formalised output representation of multiple sensor modalities in a low-power, low-latency and privacy-preserving way?
The objective of this PhD topic is to research energy-efficient compute hardware together with closely coupled software methods and tools. AI will become increasingly embedded in edge hardware and matching software. This has the well-known benefits of reducing power consumption, latency and privacy concerns in real-world applications (traffic regulation, autonomous driving, drones, robotics, …). Such edge AI applications require custom hardware and software paradigms, which are the focus of this PhD.
In this PhD, you will work together with imec’s hardware teams to create edge AI algorithms that can perform sensor fusion using appropriate low-power, low-latency techniques, such as spiking neural networks (SNNs). SNN research has grown considerably in recent years, yet the technique still has several limitations. Mainstream SNN architectures remain underdeveloped compared to non-spiking DNN techniques such as CNNs. In addition, SNNs are currently not well suited to online learning, as no learning algorithm is yet available that works as well as backpropagation does for DNNs. Finally, the size of the neural network that an SNN can support is still limited. Addressing these limitations, towards an approach that can run AI appropriately on the edge, is key to advancing the state of the art in this domain.
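To make the spiking paradigm concrete, the sketch below simulates a single feed-forward layer of leaky integrate-and-fire (LIF) neurons, one of the simplest SNN building blocks. The parameter values and the plain NumPy formulation are illustrative assumptions only, not the architecture or hardware model targeted in this PhD.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking layer in NumPy.
# Illustrative only: decay and threshold values are assumptions, not a
# specific SNN architecture or neuromorphic hardware model from this project.
import numpy as np

def lif_layer(input_spikes, weights, decay=0.9, threshold=1.0):
    """Run a feed-forward LIF layer over a spike train.

    input_spikes: (T, n_in) binary array of input spikes over T time steps
    weights:      (n_in, n_out) synaptic weight matrix
    Returns a (T, n_out) binary array of output spikes.
    """
    n_out = weights.shape[1]
    v = np.zeros(n_out)                     # membrane potentials
    out = np.zeros((input_spikes.shape[0], n_out))
    for t, s in enumerate(input_spikes):
        v = decay * v + s @ weights         # leak, then integrate weighted input
        fired = v >= threshold              # neurons crossing the threshold spike
        out[t] = fired
        v[fired] = 0.0                      # reset membrane potential after a spike
    return out

# Example: 100 time steps, 16 input channels, 8 output neurons
rng = np.random.default_rng(0)
spikes = (rng.random((100, 16)) < 0.1).astype(float)
w = rng.normal(0.0, 0.5, size=(16, 8))
print(lif_layer(spikes, w).sum(axis=0))     # spike counts per output neuron
```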
You will focus on ways to make the output of the sensor fusion algorithms (which combine the input sensor modalities) more fit for purpose and more compact in its representation. Instead of obtaining scattered point-cloud representations in 3D space from the various sensor readings, the aim is to create compactly formalised representations of the objects in a scene. These representations should contain the object type, 3D coordinates and vector of every relevant object in the scene. The challenge will be to do this in a way that is power-efficient, low-latency and respectful of privacy, so that it can operate on the edge in a real-world environment. The main research question in this project is therefore the one posed above: how to create a compactly formalised output representation of multiple sensor modalities in a low-power, low-latency and privacy-preserving way?
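As a purely illustrative example of what such a compact representation could look like, the sketch below encodes each detected object as a small record with a type, 3D position and per-object vector, instead of a raw fused point cloud. The field names, the extra confidence field and the reading of the vector as estimated motion are assumptions for illustration, not a specification from the project.

```python
# Hedged sketch of one possible "compactly formalised" scene representation:
# a fixed, small record per detected object rather than a scattered point cloud.
# Field names and the motion-vector interpretation are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SceneObject:
    object_type: str                      # e.g. "pedestrian", "vehicle"
    position: Tuple[float, float, float]  # 3D coordinates in the scene frame (m)
    vector: Tuple[float, float, float]    # per-object vector, e.g. estimated motion (m/s)
    confidence: float                     # detector confidence in [0, 1]

def encode_scene(objects: List[SceneObject]) -> List[dict]:
    """Serialise the scene as a short list of records, e.g. for an edge-to-cloud link."""
    return [vars(o) for o in objects]

scene = [
    SceneObject("pedestrian", (2.1, 0.4, 0.0), (0.3, -0.1, 0.0), 0.92),
    SceneObject("vehicle", (15.7, -3.2, 0.0), (8.5, 0.0, 0.0), 0.88),
]
print(encode_scene(scene))
```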
The application domains for this research can be diverse, but we foresee initial applications in the automotive sector, traffic management, robotics and life sciences. In this project, you will collaborate with imec teams working on the development of new sensing technology, like radar, sodar and lidar. In addition, you will closely collaborate with teams working on new AI accelerator hardware, like neuromorphic chips.
Way of work
You will work with teams from various parts of imec, in a highly applied way, on contributions to imec’s nascent edge AI sensor fusion program. You will also contribute to the definition of the research roadmap and will get the opportunity to support junior researchers. The focus of your research will be on addressing the above research question through the creation and evaluation of real-world demonstrators with industrial clients in the automotive, traffic management, robotics or life sciences domains.
Relevant papers
The following papers are indicative of the intended research scope:
Required background: computer science, engineering, machine learning
Type of work: 60% programming, 20% literature review, 10% requirements gathering
Supervisor: Steven Latré
Co-supervisor: Tanguy Coenen
Daily advisor: Tanguy Coenen
The reference code for this position is 2023-102. Mention this reference code on your application form.