Frankenstein sensor: AI-enabled cross-modality data fusion for new mobility

Leuven

Do you want to help shape the next generation of power-efficient, safe mobility by developing AI that makes sense of low-level RaDAR, SoDAR and LiDAR data representations?

The automotive and mobility sector is evolving at an accelerated pace that shatters established compromises and concepts. Paradigm shifts are imposed by the emergence of Advanced Driver Assistance Systems (ADAS) with increasing levels of autonomy and by the evolution towards fully electrified personal mobility. Sensing plays a crucial role here.

Novel sensors are being introduced to complement existing cameras and RaDARs. LiDAR, SoDAR, polarimetric imaging, SWIR and thermal imaging are some of the physical modalities proposed to augment the data stream gleaned from the environment vehicles move through. Novel algorithms are being developed to handle and make sense of the scattered data sets generated by these multiple modalities, and novel compute architectures are proposed to implement these algorithms and sensor data flows efficiently and robustly.

However, a crucial issue hampering the realization of this new mobility paradigm's promises is the power consumption of current ADAS. Gathering information from multiple cameras, LiDARs, RaDARs, …, then manipulating, registering and reconciling the multiple point clouds emerging from these modalities, as well as interpreting them so as to understand the environment a vehicle has to move through, can cost as much as one third of the total battery energy available in the vehicle. Using these expert ADAS systems today thus cuts a car's range by a third, which is unacceptable from a user's point of view as well as an environmental one.
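To make the order of magnitude concrete, here is a back-of-the-envelope sketch in Python. All figures (pack capacity, drive power, ADAS power draw) are hypothetical round numbers chosen for illustration, not measurements from this project:

```python
# Back-of-the-envelope range impact of an always-on ADAS stack.
# All figures below are hypothetical round numbers for illustration only.

PACK_KWH = 75.0   # usable battery capacity
DRIVE_KW = 15.0   # propulsion power at a steady 100 km/h (i.e. 150 Wh/km)
ADAS_KW = 7.5     # sensing + compute draw of a hypothetical ADAS stack

speed_kmh = 100.0

range_without = PACK_KWH / DRIVE_KW * speed_kmh             # km
range_with = PACK_KWH / (DRIVE_KW + ADAS_KW) * speed_kmh    # km

loss = 1.0 - range_with / range_without
print(f"range without ADAS: {range_without:.0f} km")   # 500 km
print(f"range with ADAS:    {range_with:.0f} km")      # 333 km
print(f"relative loss:      {loss:.0%}")               # 33%, roughly one third
```

Under these assumed numbers, an ADAS stack drawing half the propulsion power removes a third of the range, matching the figure quoted above.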

Through this PhD, we propose to develop a novel complete-system approach combining RF, light and sound sensing in one fused platform, where data are combined at low level into a new representation that is properly interpreted by a dedicated AI architecture. Starting from imec's developments in frequency-modulated continuous-wave (FMCW) LiDAR and RaDAR as well as direct time-of-flight (dToF) SoDAR, the candidate will assess how overlapping or complementary fields of view and resolutions in the produced point clouds affect scene interpretation and decision-making. Models of these modalities will be integrated into traffic simulation environments to confirm their combined usability in digital twins of actual corner cases. Finally, we expect the key results of this work to be tested in controlled traffic situations at partner test sites with sensor-enhanced modified vehicles.
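As a rough illustration of what low-level fusion across these modalities can mean, the sketch below first recovers range from the two ranging principles named above (FMCW beat frequency, dToF round-trip time) and then registers per-sensor point clouds into a common vehicle frame before pooling them into one shared voxel grid. The function names, frame conventions and grid representation are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s
C_SOUND = 343.0    # speed of sound in air at ~20 degC, m/s

def fmcw_range(f_beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
    """Range from an FMCW beat frequency: R = c * f_beat * T_chirp / (2 * B)."""
    return C * f_beat_hz * chirp_s / (2.0 * bandwidth_hz)

def dtof_range(round_trip_s: float, wave_speed: float = C_SOUND) -> float:
    """Range from direct time of flight: R = v * t / 2 (sound speed for SoDAR)."""
    return wave_speed * round_trip_s / 2.0

def to_vehicle_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Rigidly transform an (N, 3) sensor-frame point cloud into the vehicle frame."""
    return points @ R.T + t

def fuse_to_grid(clouds: list[np.ndarray], cell: float = 0.5) -> set[tuple[int, int, int]]:
    """Naive low-level fusion: pool all clouds into one voxel occupancy set."""
    occupied = set()
    for cloud in clouds:
        for ix, iy, iz in np.floor(cloud / cell).astype(int):
            occupied.add((ix, iy, iz))
    return occupied

# Hypothetical example: a RaDAR return and a SoDAR echo from the same obstacle.
radar_pts = np.array([[fmcw_range(2.1e5, 1.0e-4, 1.0e9), 0.0, 0.0]])  # ~3.15 m
sodar_pts = np.array([[dtof_range(18.0e-3), 0.2, 0.0]])               # ~3.09 m
grid = fuse_to_grid([to_vehicle_frame(p, np.eye(3), np.zeros(3))
                     for p in (radar_pts, sodar_pts)])
print(grid)  # {(6, 0, 0)}: both modalities land in the same occupied cell
```

In this toy case the two modalities corroborate one occupied cell; the research question sketched above is how to do this at scale, with realistic fields of view, resolutions and noise, and with a learned representation instead of a hand-built grid.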

We are looking for a strong candidate to join our teams and develop a demonstrator for low-level sensor data fusion, decimation and interpretation in the automotive / urban mobility context. To that purpose, we offer access to our state-of-the-art RaDAR, SoDAR and LiDAR technology platforms, fusion-algorithm developments and system-architecture simulation framework. These form a strong basis for developing the low-power, robust sensing platform required for the future of mobility.


Required background: Physics, Data Sciences, Nano-engineering, Electrical Engineering

Type of work: 60% modeling/simulations, 30% experimental, 10% literature

Supervisor: Steven Latré

Co-supervisor: Xavier Rottenberg

Daily advisor: Xavier Rottenberg

The reference code for this position is 2024-148. Mention this reference code on your application form.
