As AI systems evolve toward agentic behavior, autonomously orchestrating workflows and making proactive decisions, understanding the dynamics of user trust, acceptance, and interaction becomes critical for responsible deployment. This PhD project investigates the behavioral and computational foundations of trust in agentic AI, with a focus on conversational interfaces as mediators of human-agent relationships.
Building on the synergies of the AAA project, the PhD will focus on the design, modeling, and validation of structured, in-the-wild experiments. It will explore how variations in agent behavior (e.g., autonomy, transparency, authenticity) influence user attitudes and behaviors, using multimodal data (telemetry, logs, surveys, diaries) and probabilistic modeling techniques (e.g., Dynamic Bayesian Networks). The research will contribute to the development of generalizable models of trust and acceptance, grounded in empirical data and validated across diverse contexts.
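To illustrate the kind of probabilistic modeling envisaged, the sketch below treats trust as a latent two-state variable (low/high) that evolves over time and is observed only indirectly, here through a hypothetical binary behavioral signal such as whether the user accepts an agent's proactive suggestion. This is a minimal two-timeslice Dynamic Bayesian Network with forward filtering; all state definitions and probabilities are invented for illustration, not taken from the project.

```python
import numpy as np

# P(trust_t | trust_{t-1}): trust is assumed to be "sticky" over time.
transition = np.array([[0.9, 0.1],    # from low trust
                       [0.2, 0.8]])   # from high trust

# P(observation | trust): higher trust -> more acceptance of suggestions.
# Columns: P(reject), P(accept); values are illustrative assumptions.
emission = np.array([[0.7, 0.3],      # low trust
                     [0.2, 0.8]])     # high trust

def filter_trust(observations, prior=np.array([0.5, 0.5])):
    """Forward filtering: P(trust_t | obs_1..t) for each time step t.

    observations: sequence of 0 (reject) / 1 (accept) behavioral signals.
    Returns an array of posterior distributions over (low, high) trust.
    """
    belief = prior
    beliefs = []
    for obs in observations:
        predicted = transition.T @ belief       # time update (prediction)
        belief = predicted * emission[:, obs]   # measurement update
        belief = belief / belief.sum()          # normalize to a distribution
        beliefs.append(belief)
    return np.array(beliefs)

# A user who repeatedly accepts the agent's suggestions: the inferred
# probability of high trust should rise over the interaction.
beliefs = filter_trust([1, 1, 1, 1])
print(beliefs[-1])  # posterior over (low, high) trust after four acceptances
```

In the project itself, the latent state and observation models would be estimated from the multimodal data streams (telemetry, logs, surveys, diaries) rather than fixed by hand, and richer state spaces could capture dimensions such as perceived autonomy or transparency.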
The work combines computer science and data-driven behavioral modeling, and is embedded within imec’s Connected Society roadmap and the VLAM project. It aims to deliver both theoretical insights into human-agent interaction and methodological innovations for early-stage evaluation of agentic AI systems, supporting imec’s mission to bridge deep-tech innovation with societal relevance. More specifically, this research will be conducted at imec-MICT, an interdisciplinary research group that explores the complex interplay between media, innovation, contemporary technologies, and society. The group brings together researchers from the social sciences, computer science, psychology, engineering, and design, fostering a transdisciplinary approach to technological development and societal impact.