TailoredMedia - Tailored and Agile enrIchment and Linking fOR sEmantic Description of multiMedia
Artificial intelligence for automatic tagging of audio-visual content, enabling its efficient reuse for a variety of use cases.
Film, television and video have not only maintained their importance as leading means of communication through digitisation; the Internet and constant availability on mobile devices have expanded it further. To archive the enormous amount of video content as part of our cultural heritage and to enable its further use, this content must be described as well as possible.
Until now, this description has been done manually by trained documentalists, which takes an enormous amount of time and is therefore only feasible for a limited amount of content. Even with this high effort, it is not possible to describe content completely and in detail, for example, to record exactly when and where in the frame a politician appears. Existing additional information about a piece of content is usually spread across several sources that are structured differently (e.g. script, journalist's notes) or have different modalities (e.g. notes as text, audio recording of an interview). Even where content and additional information are already available digitally, they are currently processed largely independently of each other. As a result, information from different sources can be linked only partially, if at all, and the description often remains incomplete.
TailoredMedia - Tailored and Agile enrIchment and Linking fOR sEmantic Description of multiMedia - researches and develops methods for the automatic, AI-based analysis of audio-visual content. This automates semantic enrichment and, via an easily accessible user interface, enables use cases in media monitoring, journalism and archiving. The results of this automatic analysis feed into novel methods for fusing multimodal information, which store the knowledge thus obtained in a knowledge graph. In this way, the description of text and media content can be enriched or supplemented with semantic metadata.
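To make the fusion idea concrete: results from different analysis services can be merged by expressing each finding as a subject-predicate-object triple, the basic unit of a knowledge graph. The following minimal Python sketch is illustrative only; the identifiers (`ex:shot_12`, `ex:depicts`, the face-recognition and speech-to-text services) are hypothetical and do not reflect the project's actual data model:

```python
# Minimal illustrative triple store; NOT TailoredMedia's actual data model.
# All identifiers (ex:shot_12, ex:depicts, ...) are hypothetical examples.

class TripleStore:
    """Stores (subject, predicate, object) facts and answers pattern queries."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a variable in a SPARQL pattern.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]


# Fuse hypothetical results from two analysis services into one graph:
kg = TripleStore()
# from a (hypothetical) face-recognition service
kg.add("ex:shot_12", "ex:depicts", "ex:person_42")
kg.add("ex:person_42", "rdfs:label", "Jane Politician")
# from a (hypothetical) speech-to-text service
kg.add("ex:shot_12", "ex:mentions", "ex:topic_energy")

# "In which shots does person_42 appear?"
shots = [s for (s, _, _) in kg.query(p="ex:depicts", o="ex:person_42")]
```

Because both services write into the same graph, a single query can now combine what was previously locked in separate sources, which is exactly the linking step the manual workflow could not provide.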
The analysis tools created by TailoredMedia are designed as interoperable micro-services and thus enable the implementation of a wide variety of workflows. They can be used both on-site and in private or public cloud infrastructures. Their application is oriented towards the needs of the users and is thus intuitive, simple and efficient.
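Interoperability between such microservices typically rests on an agreed exchange format. The project description does not specify the actual APIs, so the sketch below only shows what a plausible JSON envelope for an analysis service's results could look like; the service name, field names and values are assumptions:

```python
import json

# Hypothetical response envelope of an analysis microservice; the service
# name and JSON fields are illustrative assumptions, not the project's API.

def build_analysis_response(video_id, detections):
    """Wrap raw detections in a JSON envelope another service can consume."""
    return json.dumps({
        "video": video_id,
        "service": "face-recognition",   # hypothetical service name
        "detections": [
            {
                "label": label,
                "start_s": start,        # appearance interval in seconds
                "end_s": end,
                "bbox": bbox,            # [x, y, width, height] in pixels
            }
            for (label, start, end, bbox) in detections
        ],
    })

# Example: one detected person visible between seconds 12.0 and 15.5
payload = build_analysis_response(
    "ex-video-001",
    [("person_42", 12.0, 15.5, [100, 50, 80, 80])],
)
result = json.loads(payload)
```

A plain JSON contract like this keeps services language-agnostic, so the same tool can run on-site or in a cloud deployment and still plug into different workflows.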
Project duration: 01.11.2020 - 31.10.2022
The consortium, led by JOANNEUM RESEARCH, is multidisciplinary in order to bring together the competences needed to achieve the project goals. With the Austrian Mediathek and the ORF, two organisations of different sizes from the media sector are involved: the Mediathek has a clear focus on archiving, while the ORF covers the entire media life cycle. Both partners can thus contribute a wide range of workflows, best practices and technical requirements. St. Pölten University of Applied Sciences will use these inputs as the starting point for a user-centred design process that involves a broader group of media industry experts in the design of an intuitive interface for editing and searching. The technical expertise in semantic technologies and audio-visual content analysis will be provided by Redlink and JOANNEUM RESEARCH respectively; both partners have many years of experience in R&D on these technologies and their application in the media industry. As an innovative SME, Redlink will incorporate the results into its product development pipelines. The project partners are well connected in the national and international communities, which ensures the exchange of knowledge during and after the project as well as the exploitation of the project results.
This project is funded by the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK) as part of the "ICT of the Future" programme.