Distributed Video Production (DVP) project

EU ACTS Project (AC089), OFES 95.0493

Overview

   Video production is an inherently distributed process: resources are physically distributed over several sites, and broadcasters increasingly outsource specific production and post-production phases to specialized studios or emerging virtual studios. There is a growing demand for computing and communication technology that enables media producers to collaborate remotely on digital video and multimedia production, post-production, archiving, indexing, and retrieval.

   In the ACTS project "Distributed Video Production", leading European broadcasters and providers of computer, communication, and media technology have launched an innovation initiative to integrate and develop state-of-the-art technology, in order to explore new applications and to define new products and markets.

   In DVP, the starting point is the communication infrastructure for transferring studio-quality compressed digital video over broadband networks (ATM). Problems of transmission and processing delays, synchronization, and quality of service are being addressed. Built upon this infrastructure are four pilot applications which demonstrate that the technology is ready for commercial exploitation:

  • distributed virtual studio;
  • distributed telepresence;
  • distributed virtual reality;
  • distributed video editing, archival and retrieval.
     

Distributed Video Editing, Archival and Retrieval

   Within the DVP project, the CUI Computer Vision Group is a major actor in the Distributed Video Editing, Archival and Retrieval (DVER) application. The goal is to provide broadcasters with a complete solution for distributed video post-production, which integrates archival, retrieval, and editing functionalities. The Computer Vision Group has coordinated the design of the system architecture, which includes an archive server, an editing server, a catalog server, and a client station for the end user (cf. Figure 1).


[dver schematic]

Figure 1: DVER system architecture

   The archive server stores videos at both low and high bit-rates, and offers video streaming and file transfer services. The catalog server hosts a database where video clips' metadata are stored and indexed. The client station allows users to perform archival and retrieval operations, as well as video editing using existing material at low bit-rate. The editing list created by the user is then processed by the editing server and applied to the corresponding high bit-rate material, in order to produce the ready-to-broadcast final video.
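   The workflow above can be sketched as follows: a minimal, hypothetical model of how an edit list built on low bit-rate proxies might be conformed to the high bit-rate originals. All names and the frame-indexed representation are illustrative assumptions, not the DVER system's actual interfaces.

```python
# Illustrative sketch: conforming a proxy-based edit list to high bit-rate
# material. The shared clip identifiers and frame-accurate cut points are
# assumptions for this example, not the DVER system's actual API.
from dataclasses import dataclass

@dataclass
class EditEntry:
    clip_id: str      # identifier shared by the low and high bit-rate versions
    in_frame: int     # first frame of the segment (inclusive)
    out_frame: int    # last frame of the segment (exclusive)

def conform(edl, high_res_archive):
    """Replay each edit made on the low bit-rate proxies against the
    high bit-rate material, producing the final frame sequence."""
    timeline = []
    for entry in edl:
        source = high_res_archive[entry.clip_id]
        # Frame numbers are identical in both versions, so the cut
        # points transfer directly to the high bit-rate stream.
        timeline.append(source[entry.in_frame:entry.out_frame])
    return [frame for segment in timeline for frame in segment]

# Toy archive: clips are just lists of frame labels.
archive = {"clip_a": [f"a{i}" for i in range(10)],
           "clip_b": [f"b{i}" for i in range(10)]}
edl = [EditEntry("clip_a", 2, 5), EditEntry("clip_b", 0, 3)]
print(conform(edl, archive))  # → ['a2', 'a3', 'a4', 'b0', 'b1', 'b2']
```

   The key design point is that only clip identifiers and cut points travel from the client to the editing server; the heavy high bit-rate material never leaves the archive until the final render.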


[archival tool screenshot]

Figure 2: Main panel of the Archival Tool

Video Archival

   The catalog server automatically fetches the low bit-rate version of each new clip in the video archive and preprocesses it in order to extract metadata. First, a video clip is decomposed into smaller segments, by detecting the transitions between shots and by analyzing motion properties. For each shot, still images (keyframes) are extracted for display purposes, and to enable automatic image indexing using a wavelet approach. Camera and lens motion properties (pan, tilt, zoom, stationary) are then computed from the motion vectors. These preprocessing steps are performed directly on the low bit-rate stream (MPEG-1), without decompression.
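   The camera-motion step can be sketched as follows. This is a hedged illustration of classifying a field of MPEG motion vectors into the four categories named above; the thresholds, the mean-displacement test for pan/tilt, and the radial-divergence test for zoom are assumptions for this example, not the actual DVER algorithm.

```python
# Illustrative classifier: camera motion (pan, tilt, zoom, stationary) from
# a field of block motion vectors, as extracted from an MPEG-1 stream.
# Thresholds and the specific tests are assumptions, not the DVER method.

def classify_camera_motion(vectors, pan_thresh=1.0, zoom_thresh=0.5):
    """vectors: list of ((x, y), (dx, dy)) pairs — block position and
    motion vector. Returns one of "pan", "tilt", "zoom", "stationary"."""
    n = len(vectors)
    mean_dx = sum(dx for _, (dx, dy) in vectors) / n
    mean_dy = sum(dy for _, (dx, dy) in vectors) / n
    # A zoom makes vectors point away from (or toward) the image centre:
    # measure the mean radial component after removing the mean translation.
    cx = sum(x for (x, y), _ in vectors) / n
    cy = sum(y for (x, y), _ in vectors) / n
    radial = 0.0
    for (x, y), (dx, dy) in vectors:
        rx, ry = x - cx, y - cy
        norm = (rx * rx + ry * ry) ** 0.5 or 1.0
        radial += ((dx - mean_dx) * rx + (dy - mean_dy) * ry) / norm
    radial /= n
    if abs(radial) > zoom_thresh:
        return "zoom"
    if abs(mean_dx) > pan_thresh:
        return "pan"
    if abs(mean_dy) > pan_thresh:
        return "tilt"
    return "stationary"

# Synthetic pan: every block in a 4x4 grid moves uniformly to the left.
grid = [((x, y), (-3.0, 0.0)) for x in range(4) for y in range(4)]
print(classify_camera_motion(grid))  # → pan
```

   Working on the compressed-domain motion vectors, rather than on decoded frames, is what makes this kind of analysis cheap enough to run automatically on every new clip.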

   The archival tool (cf. Figure 2) allows the documentalist to visualize and edit the results of the clip preprocessing algorithms, and to enter additional textual annotations.
 

Video Retrieval

   Graphical user interfaces have been built to enable a journalist or a program director to retrieve video material from the archive, using the available metadata from the catalog server. Once the desired items are selected, it is possible to export them to the editing tool. The retrieval tool (cf. Figure 3) allows one to query the database using textual and visual information. Textual queries address specific fields entered during the archival process. Visual queries address metadata extracted during the preprocessing phase. The user specifies an example image, and defines the desired type of camera motion.


[retrieval tool screenshot]

Figure 3: Main panel of the Retrieval Tool
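   A combined textual and visual query can be sketched as a single relevance score per catalogued clip. This is a hypothetical illustration: the metadata fields, the L1 signature comparison standing in for the wavelet-based index, and the weighting scheme are all assumptions, not the actual DVER retrieval engine.

```python
# Illustrative scoring: combine a textual query, an example-image signature,
# and a camera-motion constraint into one relevance score per clip.
# Field names and the scoring scheme are assumptions for this sketch.

def score_clip(clip, text_terms, query_signature, camera_motion=None, alpha=0.5):
    """Return a combined relevance score in [0, 1] for one catalogued clip."""
    # Textual part: fraction of query terms found in the annotation fields.
    annotation = " ".join(clip["annotations"]).lower()
    text_score = sum(t.lower() in annotation for t in text_terms) / max(len(text_terms), 1)
    # Visual part: similarity of keyframe signatures (here, normalised
    # feature vectors compared by mean L1 distance).
    sig = clip["keyframe_signature"]
    dist = sum(abs(a - b) for a, b in zip(sig, query_signature)) / len(sig)
    visual_score = max(0.0, 1.0 - dist)
    # Camera-motion filter, using the metadata computed during preprocessing.
    if camera_motion is not None and clip["camera_motion"] != camera_motion:
        return 0.0
    return alpha * text_score + (1 - alpha) * visual_score

catalog = [
    {"annotations": ["alpine skiing", "world cup"],
     "keyframe_signature": [0.9, 0.1, 0.0], "camera_motion": "pan"},
    {"annotations": ["studio interview"],
     "keyframe_signature": [0.1, 0.8, 0.1], "camera_motion": "stationary"},
]
ranked = sorted(catalog,
                key=lambda c: score_clip(c, ["skiing"], [0.9, 0.1, 0.0], "pan"),
                reverse=True)
print(ranked[0]["annotations"])  # → ['alpine skiing', 'world cup']
```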




Computer Vision Group
University of Geneva
24 rue du Général-Dufour
1211 Geneva 4
Switzerland

Group director :
Thierry Pun

Designed by :
Lori Stefano Petrucci & David Squire


_______________________
Copyright © 1999 CUI