
Presenting Novel Approaches for Media Clouds at IBC2013

Archives are often not considered that important, at least until the moment someone searches for something. One particularly difficult type of content is raw video footage. Can innovative media clouds help? Find out at IBC2013 in Amsterdam, September 12–17, 2013, and meet us at booth 14.380.

Reliable, scalable and searchable digital content archives are an increasingly needed backbone of many larger organizations, particularly in the media. This applies to all kinds of digital content, from text to video to audio. As partners in the VISION Cloud project, DW Innovation and the Italian broadcaster RAI have developed novel approaches to store, retrieve and use digital video content more effectively, building on technologies researched in the project.

For this work, the VISION Cloud project received the IBC2013 Special Award, recognizing the achievements of a research project in terms of practical usability.

Using "popularity data" to find video content better over time

The key concept we worked on in the VISION Cloud project is enriching files with metadata, but not manually. It is simply no longer feasible to expect video editors to annotate each snippet of captured material; it takes too much time. At the same time, the interactions with raw video that happen during searches have so far not been used at all. So, to make any content easier to find over time, we worked on a concept we call "popularity data".

How "popularity data" works

The idea is simple: each time an editor searches for usable or re-usable video content, he or she interacts with files: opening, watching, maybe doing something with a specific file, maybe doing nothing. Some videos are closed right after the first few seconds; some are watched entirely. Other video files might be put into a "lightbox" as a first filtered collection. Yet other files might be quickly annotated with just one word, because some footage in there is interesting. Still other files might be selected for further use, and so on. These interactions can be captured as simple events, as sketched below.
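As an illustration (not the project's actual code), the interaction "traces" described above could be modeled as simple, timestamped events. The event vocabulary and field names here are assumptions made for this sketch:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical set of interaction types matching the actions described
# above; the real demonstrator's event vocabulary may differ.
INTERACTION_TYPES = {
    "open",            # editor opened the file
    "close_early",     # closed after the first few seconds
    "watch_complete",  # watched entirely
    "lightbox",        # added to a first filtered collection
    "annotate",        # quick one-word annotation
    "select",          # selected for further use
}

@dataclass
class InteractionEvent:
    """One 'trace' an editor leaves while working with a video file."""
    file_id: str
    user_id: str
    interaction: str
    timestamp: datetime

    def __post_init__(self):
        if self.interaction not in INTERACTION_TYPES:
            raise ValueError(f"unknown interaction: {self.interaction}")

# Example: an editor watches a raw clip all the way through.
event = InteractionEvent(
    file_id="raw/2013/rushes_0042.mxf",   # illustrative file id
    user_id="editor_17",
    interaction="watch_complete",
    timestamp=datetime.now(timezone.utc),
)
```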

Grading video content based on user actions

This is the principle we use in the VISION Cloud media demonstrator: while no organization has the manpower to annotate every video file, especially raw video files, the interactions of users searching and working with a cloud-based video archive leave "traces" that can separate "good" from "unusable" content over time. By capturing these user actions and using the content-centric storage technology of VISION Cloud, more and more video files are enhanced by adding the actions to each file's metadata. This means that over time, the best files, the almost-best files and the files that are never used can be distinguished from each other, which step by step results in better search results, driven by a specifically developed algorithm.
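A minimal sketch of this per-file enrichment step, assuming a plain dictionary as a stand-in for the per-object metadata that content-centric storage keeps with each file (the "interactions" counter field is an assumption for this sketch):

```python
from collections import Counter

def enrich_metadata(metadata: dict, interaction: str) -> dict:
    """Append one captured user action to a file's metadata record."""
    counts = Counter(metadata.get("interactions", {}))
    counts[interaction] += 1
    metadata["interactions"] = dict(counts)
    return metadata

# Over time, each file accumulates a per-action tally:
record = {"file_id": "raw/2013/rushes_0042.mxf"}
for action in ["open", "close_early", "open", "watch_complete", "lightbox"]:
    enrich_metadata(record, action)
print(record["interactions"])
# {'open': 2, 'close_early': 1, 'watch_complete': 1, 'lightbox': 1}
```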

Shedding some light on digital archives

This is a proposed, partially novel use of usage data to enhance a cloud-based archive. While every webpage today counts visits and pageviews, such records have so far not been kept for digital archives. In the media use case of VISION Cloud, the software is used to grade raw video material over time, mainly by making use of the human eye: when editors search for usable raw material, their interactions are stored with each video file.

Depending on how often a file is viewed, marked as relevant, annotated or exported to a video editor, an algorithm helps to separate the not-so-great from the good and the great videos over time. This might sound simple, but in technical terms it is not.
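The grading algorithm itself is not published here. As a rough illustration only, a weighted sum over the per-file interaction tallies could separate files along the lines described above; the weights below are invented for this sketch:

```python
# Hypothetical weights: actions implying usefulness score positively,
# early abandonment scores negatively. These numbers are illustrative,
# not the demonstrator's actual algorithm.
WEIGHTS = {
    "open": 0.1,
    "close_early": -0.5,
    "watch_complete": 1.0,
    "lightbox": 1.5,
    "annotate": 2.0,
    "select": 3.0,   # e.g. exported to a video editor
}

def popularity_score(interactions: dict) -> float:
    """Collapse a file's per-action tally into one sortable grade."""
    return sum(WEIGHTS.get(a, 0.0) * n for a, n in interactions.items())

# Ranking search hits by this score surfaces the 'great' footage first.
files = {
    "clip_a": {"open": 12, "close_early": 10},                # rarely useful
    "clip_b": {"open": 8, "watch_complete": 5, "select": 2},  # often used
}
ranked = sorted(files, key=lambda f: popularity_score(files[f]), reverse=True)
print(ranked)  # ['clip_b', 'clip_a']
```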

If you are interested in getting in touch and seeing a demonstrator of this principle in action, you can find VISION Cloud at IBC2013 at booth 14.380.

More

VISION Cloud Website

Author
Ruben Bouwmeester