
Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/11622

Title: Video Manipulation using External Cues
Authors: DE DECKER, Bert
Advisors: Bekaert, Philippe
Issue Date: 2010
Abstract: When it comes to traditional 2D video editing, there are many video manipulation techniques to choose from, but all of them suffer from the limited amount of information that is present in the video itself. When more information about the scene is available, more powerful video manipulation methods become possible. In this dissertation, we examine what extra information about a scene might be useful and how this information can be used to develop powerful yet easy-to-use video manipulation techniques. We present a number of novel video manipulation methods that improve both how scene information is captured and how that information is used.

First, we show how a 2D video can be manipulated if the scene is captured using multiple video cameras. We present an interactive setup that calculates collisions between virtual objects and a real scene. It is a purely image-based approach, so no time is wasted computing an explicit 3D geometry for the real objects in the scene; all calculations are performed directly on the input images. We demonstrate our approach by building a setup where a human can interact with a rigid body simulation in real time.

Second, we investigate which manipulation techniques become possible if we track a number of points in the scene. For this purpose, we created two novel motion capture systems. Both are low-cost optical systems that use imperceptible electronic markers. The first is a camera-based location tracking system. A marker is attached to each point that needs to be tracked. A bright IR LED on the marker emits a specific light pattern that is captured by the cameras and decoded by a computer to locate and identify each marker. The second system projects a number of light patterns into the scene. Electronic markers attached to points in the scene decode these patterns to obtain their position and orientation. Each marker also senses the color and intensity of the ambient light.
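The camera-based tracking system described above identifies each marker by the temporal on/off light pattern its IR LED emits. A minimal sketch of that idea, assuming a simple binary codebook and thresholded per-frame brightness samples (all names, codes, and thresholds here are hypothetical, not the dissertation's exact scheme):

```python
# Hypothetical sketch: identify an LED marker from the on/off light
# pattern observed at one tracked image location over several frames.

# Assumed codebook: each marker blinks a unique cyclic binary sequence.
MARKER_CODES = {
    (1, 0, 1, 1, 0, 0, 1, 0): "marker_A",
    (1, 1, 0, 0, 1, 0, 1, 1): "marker_B",
}

def decode_marker(brightness, threshold=0.5):
    """Threshold per-frame brightness samples into bits, then look the
    code up under every cyclic shift, since observation may begin in
    the middle of a marker's blink cycle."""
    bits = tuple(1 if b > threshold else 0 for b in brightness)
    for shift in range(len(bits)):
        rotated = bits[shift:] + bits[:shift]
        if rotated in MARKER_CODES:
            return MARKER_CODES[rotated]
    return None  # no known marker matches this pattern
```

For example, the sample sequence `[0.9, 0.1, 0.8, 0.85, 0.2, 0.1, 0.9, 0.15]` thresholds to the code of `marker_A`. In a real system the codes would also need to be chosen so that no cyclic shift of one marker's code collides with another's.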
We show how this information can be used in many applications, such as augmented reality and motion capture.
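The second system lets each marker compute its own position by decoding a sequence of projected light patterns. A common realization of this idea in structured-light systems (an assumption here, not necessarily the dissertation's exact encoding) is temporal Gray coding, where each projected pattern contributes one bit and adjacent projector columns differ in exactly one bit:

```python
def gray_to_binary(gray_bits):
    """Convert a Gray-code bit sequence (MSB first) into the integer it
    encodes. A marker that senses bright/dark once per projected
    pattern collects one such bit sequence, which identifies the
    projector column (or row) illuminating it."""
    value = gray_bits[0]
    out_bits = [value]
    for g in gray_bits[1:]:
        value ^= g          # each binary bit is the XOR prefix of the Gray bits
        out_bits.append(value)
    n = 0
    for b in out_bits:
        n = (n << 1) | b
    return n

# A marker observing bright/dark/bright/dark across a 4-pattern
# sequence decodes projector column 12:
column = gray_to_binary([1, 0, 1, 0])  # → 12
```

Decoding both a column and a row sequence pins the marker to a single projector pixel; combining that with the projector's calibration would then yield a position in the scene.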
URI: http://hdl.handle.net/1942/11622
Category: T1
Type: Theses and Dissertations
Appears in Collections: PhD theses
Research publications

Files in This Item:

Description: PhD De Decker
Size: 6.73 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.