PhD - Gent
Increasingly, IoT devices use machine learning to perform complex tasks such as passenger counting or navigation of smart cars. With the rise of Agentic AI on the edge, the amount of sensitive data these systems process will increase significantly. To ensure users trust these devices enough to put them in their homes, robust multi-layered security measures are needed. While dedicated hardware is emerging to enable energy-efficient AI, such as HAILO and NVIDIA Jetson, the security of these technologies is not keeping up. Sensitive data and models are still stored unencrypted in memory, on devices that are physically accessible to adversaries.
This poses significant risks:
Intellectual property theft
Adversarial misuse such as repurposing a smart camera for surveillance
Countermeasure development, such as camouflage that tricks the ML models
Encryption at rest by itself does not provide enough protection: an adversary could still read the device's memory contents at runtime, or exploit software vulnerabilities to gain unauthorized access to the device software itself.
Confidential computing can protect data and algorithms in use, but the current generation of technologies is focused on high-end server GPUs and CPUs. These solutions typically rely on virtualization technologies such as hypervisors or secure enclaves, which can use up to 40% more energy than bare-metal execution. While this may be an acceptable trade-off in data centers, such overhead is inherently unsuitable for energy-constrained edge devices, where every watt matters.
While Arm TrustZone aims to provide trusted execution environments on more energy-efficient devices, it does not support GPU or TPU workloads and lacks native support for multithreading, making it unsuitable for machine learning on edge devices.
This research aims to bridge the gap by developing lightweight, energy-efficient techniques for confidential inference. More specifically, it investigates hardware-accelerated confidential machine learning without the typical overhead of virtualization and without the workload limitations of Arm TrustZone.
Current confidential AI solutions:
Focus on high-end server-grade hardware that is expensive and difficult to obtain.
Have an inherent power consumption overhead due to virtualization.
Or do not support ML acceleration using GPUs/TPUs/NPUs.
There is thus a need for a platform for
protecting AI models
during hardware-accelerated inference (TPU/NPU/GPU)
that is energy efficient
and flexible enough to adapt to different hardware security functionalities of different devices.
This PhD thesis will span the divide between software and hardware.
It will start by investigating low-level software measures to perform energy-efficient, hardware-accelerated confidential machine learning inference on COTS hardware such as HAILO and Jetson.
It will then use the knowledge gained from developing these methodologies to design, in collaboration with a hardware research group, adaptations to applicable hardware accelerators to further improve the security of the solution in an energy efficient way.
The end result is a flexible software solution that can adapt its security level to the energy-usage trade-off and to the hardware security functionalities available on both Arm and RISC-V CPUs.
Privacy-preserving AI in healthcare wearables.
Secure smart home and industrial IoT systems.
Protecting intelligence of Unmanned Vehicles (UVs).
Intellectual property protection for commercial AI models.
Privacy-preserving agentic AI on the edge.
Required background: Computer Science with a strong interest in hardware; Information Engineering Technology with a strong interest in hardware; Electrical Engineering with a strong interest in software; Electronics and ICT Engineering with a strong interest in software.
Type of work: 10% literature study, 50% prototyping, 40% experimental evaluation
Supervisor: Bruno Volckaert
Co-supervisor: Merlijn Sebrechts
Daily advisor: Merlijn Sebrechts, Tom Bergmans, Stefan Lefever
The reference code for this position is 2026-096. Mention this reference code on your application form.