When multiple operators observe large areas from a control room through numerous audio-visual cameras, emergencies generate large volumes of incoming information that must be monitored, interpreted and acted upon. The SenseCity project seeks to develop multimodal scene analysis algorithms, a scalable real-time monitoring platform and optimal workflows to help operators accomplish their tasks more effectively.
New possibilities lead to data overload
Thanks to recent technological advances, large areas can now easily be observed remotely from control rooms. This is especially useful in public domain monitoring as well as transport and industrial applications such as the surveillance of bridges, tunnels, oil refineries, chemical plants and ports. However, the data overload resulting from these monitoring possibilities puts additional strain on the human operators tasked with monitoring and analyzing the scene.
Lowering the burden
SenseCity seeks to lighten the burden by overcoming two main challenges. First, the project aims to reduce both the errors made and the time needed to detect an incident and alert the operator. Second, it aims to ensure that everyone involved in the process – from the police department to the fire brigade and medical services – receives a personalized information feed in case of emergencies.
Context-aware, multimodal and automated
Current solutions present all operators involved in a monitoring activity with an identical visualization of the available information. SenseCity seeks to construct a context-aware, multimodal monitoring platform that simultaneously analyzes video and audio data and can be used in real time by multiple parties on many devices. The project will also investigate machine learning techniques to automate part of the monitoring, data interpretation and information visualization process.
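As an illustration only (the function names, weights and thresholds below are hypothetical and not part of the SenseCity project), combining audio and video analysis often takes the form of late fusion, where each modality produces a normalized anomaly score and the scores are merged into a single alert decision:

```python
# Illustrative sketch, not the SenseCity implementation: late fusion of
# per-modality anomaly scores into one alert decision.

def fuse_scores(video_score: float, audio_score: float,
                w_video: float = 0.6, w_audio: float = 0.4) -> float:
    """Weighted late fusion of anomaly scores, each assumed in [0, 1]."""
    return w_video * video_score + w_audio * audio_score

def should_alert(video_score: float, audio_score: float,
                 threshold: float = 0.7) -> bool:
    """Alert when the fused score crosses a threshold, or when either
    modality alone is highly confident (e.g. a loud bang off-camera)."""
    fused = fuse_scores(video_score, audio_score)
    return fused >= threshold or max(video_score, audio_score) >= 0.9

# Example: loud bang detected (high audio score) with only moderate motion.
print(should_alert(video_score=0.5, audio_score=0.95))  # True
```

The per-modality escape clause in `should_alert` reflects why audio matters: an event can be acoustically obvious before it is visually confirmable.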
Many experts, a focused game plan
SenseCity relies on industrial and academic experts in control rooms, workflow optimization, context-aware graphical user interfaces, machine learning, distributed intelligence in Internet of Things (IoT) environments and smart cities, and more. The main innovation goals are as follows:
- Gain insights into how audio and video streams are monitored in practice, enabling an optimal workflow that enhances the operator’s performance while taking crisis conditions into account;
- Explore how audio processing can enhance and complement video scene analyses, helping operators detect crisis events more effectively;
- Investigate how to create a flexible, scalable, data-driven processing pipeline capable of supporting queryable, context-aware user interfaces.
Societal and industrial benefits
SenseCity will deliver a proof-of-concept for a platform that allows the detection of events and anomalies over a large set of devices in an urban context. It will be tested in the City of Things living lab and open new avenues for applications in public domain and industrial surveillance.
Monitoring and interpreting audio-visual streams to support operators in emergencies
SenseCity is an imec.icon research project funded by imec and Agentschap Innoveren & Ondernemen.
It started on 01.06.2019 and is set to run until 31.05.2021.
- imec – IDLab IBCN – UGent
- imec – IDLab MOSAIC – UAntwerpen
- imec – MICT – UGent
- Project lead: Stijn Rammeloo
- Research lead: Werner Van Leekwijck
- Proposal manager: Werner Van Leekwijck
- Innovation manager: Eric Moons