During sports events, athletes use wearables and sensors to track their performance. Today this data is used mainly for coaching, yet it could also enrich event coverage. New artificial intelligence (AI) applications in professional sports, built on sensor, wearable and video data, could therefore augment live sports reporting. However, no platform yet exists that turns insightful sports data into stories for live commentators, content editors or viewers. The DAIQUIRI project will develop AI algorithms that address current challenges in data overload, sensor-video matching, dynamic captioning and multi-modal storytelling. The outcome will be a sensor data platform and dashboard that supports media professionals in their live sports coverage and improves the audience's viewing experience.
There is a gap between traditional reporting and the real-time translation of sports sensor data into engaging stories, including the visualization of those data. The DAIQUIRI project will develop a scalable data workflow to support broadcasters in augmenting live sports reporting with Internet of Things (IoT) data generated by athletes and their equipment. The platform will enable content creators to bring professional stories about the athletes, team performance and in-competition circumstances to their audiences.
Composed of experts in sports event capturing, sensor data platforms, editorial tools and interactive user experiences, the DAIQUIRI consortium will pursue four key project outcomes:
The scalable data workflow developed by the consortium will be showcased via application in cyclocross and hockey. The sensor data and insights gathered by the platform will be used to create templates for real-time visualization and will then be fed into a content creation dashboard. Media professionals can then use this dashboard to rapidly and dynamically add data-driven insights to sports stories.
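The template step described above can be sketched in a few lines. This is only an illustrative example, not the project's actual API: the names `SensorInsight` and `fill_template`, and the caption template itself, are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SensorInsight:
    """One data-driven insight derived from athlete sensors (hypothetical structure)."""
    athlete: str
    value: float
    unit: str

def fill_template(insight: SensorInsight, template: str) -> str:
    """Render a real-time caption template with a sensor-derived insight."""
    return template.format(athlete=insight.athlete,
                           value=insight.value,
                           unit=insight.unit)

# An editor picks a template in the dashboard; the platform fills it live.
insight = SensorInsight(athlete="Rider 7", value=42.5, unit="km/h")
caption = fill_template(insight, "{athlete} hits {value} {unit} on the final lap")
print(caption)  # → Rider 7 hits 42.5 km/h on the final lap
```

In a sketch like this, the dashboard would offer a library of such templates per sport (cyclocross, hockey), and the platform would stream filled captions to the broadcast overlay.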
“The DAIQUIRI project will use AI algorithms to overcome current challenges associated with data overload, sensor-video matching, dynamic captioning and multi-modal stories. It will develop a sensor data platform and dashboard that enables media professionals to enrich live sports experiences.”