Radar-Image depth completion for scene perception

Leuven

Bringing intelligence to a radar's observations
Depth estimation is a fundamental task for scene understanding in autonomous driving and robotics navigation. While learning-based supervised approaches for monocular depth have achieved good performance in outdoor scenarios [1], the LiDAR sensors provide the most accurate depth information. However, the depth maps generated by these devices are sparsely distributed compared to the ones obtained from RGB images. This sparsity significantly impacts the performance of LiDAR-based applications. Image-guided methods for predicting dense depth maps from sparse LiDAR data using lidar-camera fusion showed significant improvement in results compared to the conventional depth-only techniques [2, 3]. Recently, due to the common use of radar sensors in the automotive and robotics industry, researchers attempted to address the problem using radar and camera fusion [4]. Still, using radars for depth completion is not yet thoroughly explored due to the increased sparsity and noises in the measurements. However, radars are proven to be more robust in harsh environments and generally, they are more cost-effective and used in the automotive/robotics industry.

The aim of this thesis is to investigate existing radar-camera depth completion approaches. In addition, we will explore a self-supervised fusion approach for scene reconstruction under adverse environmental conditions.


Required qualifications:

  • Following an MSc in a field related to one or more of the following: Electrical Engineering, Computer Science, or Applied Computer Science.
  • Experience with image processing, signal processing, and computer vision. Some knowledge of radar concepts is a plus.
  • Experience with machine learning and statistics.
  • Strong programming skills (Python).
  • Interest in developing state-of-the-art Machine Learning methods and conducting experiments.
  • Ability to write scientific reports and communicate research results at conferences in English.



  • Master Thesis internship (6 months)
  • Preceded by an optional summer internship (1-3 months); the summer internship alone is not possible.


Responsible scientist(s):

Prof. Hichem Sahli (IMEC / VUB) <Hichem.Sahli@imec.be>

André Bourdoux (IMEC) <Andre.Bourdoux@imec.be>

Seyed Hamed Javadi (IMEC) <hamed.javadi@imec.be>



[1] S.F. Bhat, I. Alhashim, and P. Wonka, "AdaBins: Depth estimation using adaptive bins," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4009-4018.

[2] M.A.U. Khan, D. Nazir, A. Pagani, H. Mokayed, M. Liwicki, D. Stricker, and M.Z. Afzal, "A Comprehensive Survey of Depth Completion Approaches," Sensors, vol. 22, 6969, 2022.

[3] C. Fu, C. Mertz, and J.M. Dolan, "LIDAR and Monocular Camera Fusion: On-road Depth Completion for Autonomous Driving," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 273-278.

[4] S. Gasperini, P. Koch, V. Dallabetta, N. Navab, B. Busam, and F. Tombari, "R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes," 2021 International Conference on 3D Vision (3DV), London, United Kingdom, 2021, pp. 751-760.

Type of project: Combination of internship and thesis, Thesis

Required degree: Master of Engineering Science, Master of Science, Master of Engineering Technology

Required background: Electrotechnics/Electrical Engineering, Computer Science, Physics

Supervising scientist(s): For further information or for application, please contact: Hichem Sahli (Hichem.Sahli@imec.be) and Andre Bourdoux (Andre.Bourdoux@imec.be) and Seyed Hamed Javadi (hamed.javadi@imec.be)
