Content-Preserving Warps for 3D Video Stabilization
ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2009), Volume 28, Number 3, 2009
We describe a technique that transforms a video from a hand-held
video camera so that it appears as if it were taken with a directed
camera motion. Our method adjusts the video to appear as if it were
taken from nearby viewpoints, allowing 3D camera movements to
be simulated. By aiming only for perceptual plausibility, rather than
accurate reconstruction, we are able to develop algorithms that can
effectively recreate dynamic scenes from a single source video. Our
technique first recovers the original 3D camera motion and a sparse
set of 3D, static scene points using an off-the-shelf structure-from-motion
system. Then, a desired camera path is computed either
automatically (e.g., by fitting a linear or quadratic path) or interactively.
Finally, our technique performs a least-squares optimization
that computes a spatially-varying warp from each input video
frame into an output frame. The warp is computed to both follow the
sparse displacements suggested by the recovered 3D structure, and
avoid deforming the content in the video frame. Our experiments
on stabilizing challenging videos of dynamic scenes demonstrate
the effectiveness of our technique.
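
The abstract mentions that a desired camera path can be obtained automatically by fitting a linear or quadratic path to the recovered motion. Below is a minimal sketch of that idea, assuming the structure-from-motion step yields an array of per-frame camera centers; the name camera_centers and the per-axis polynomial fit are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def fit_smooth_path(camera_centers, degree=2):
    """Fit a degree-1 (linear) or degree-2 (quadratic) polynomial to each
    coordinate of the recovered camera centers over time and return the
    smoothed per-frame camera positions."""
    num_frames = camera_centers.shape[0]
    t = np.arange(num_frames)
    smoothed = np.empty((num_frames, 3), dtype=float)
    for axis in range(3):
        # Least-squares polynomial fit of this coordinate as a function of time.
        coeffs = np.polyfit(t, camera_centers[:, axis], degree)
        smoothed[:, axis] = np.polyval(coeffs, t)
    return smoothed

The smoothed positions would then define the new camera through which the reconstructed scene points are re-projected to obtain per-feature target positions in each output frame.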
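The spatially-varying warp is described as a least-squares problem that follows the sparse displacements suggested by the reconstructed points while avoiding deformation of the frame content. The sketch below sets up such a system over a regular grid mesh, assuming tracked feature positions in the input frame (features_in) and their desired positions under the new camera (features_out). The simple edge-rigidity term used here only stands in for the paper's similarity-transform content-preserving term, and all names and parameters are hypothetical.

import numpy as np

def compute_warp(frame_size, features_in, features_out,
                 grid=(16, 16), alpha=1.0):
    """Solve for output positions of a regular grid mesh so that tracked
    features move to their target positions while grid edges stay close
    to their original shape (a rough proxy for content preservation)."""
    w_img, h_img = frame_size
    gx, gy = grid                        # number of grid cells in x and y
    xs = np.linspace(0, w_img, gx + 1)
    ys = np.linspace(0, h_img, gy + 1)
    n_vx, n_vy = gx + 1, gy + 1
    n_vert = n_vx * n_vy

    def vid(ix, iy):                     # index of a vertex in the flattened grid
        return iy * n_vx + ix

    rows, b = [], []

    # Data term: the bilinear combination of a cell's four output vertices
    # should land on the feature's target position in the output frame.
    for (px, py), (qx, qy) in zip(features_in, features_out):
        ix = min(int(px / w_img * gx), gx - 1)
        iy = min(int(py / h_img * gy), gy - 1)
        u = (px - xs[ix]) / (xs[ix + 1] - xs[ix])
        v = (py - ys[iy]) / (ys[iy + 1] - ys[iy])
        wts = [((ix, iy), (1 - u) * (1 - v)), ((ix + 1, iy), u * (1 - v)),
               ((ix, iy + 1), (1 - u) * v), ((ix + 1, iy + 1), u * v)]
        for coord, target in ((0, qx), (1, qy)):
            row = np.zeros(2 * n_vert)
            for (jx, jy), w in wts:
                row[2 * vid(jx, jy) + coord] = w
            rows.append(row)
            b.append(target)

    # Rigidity term: output grid edges should match the input grid edges,
    # weighted by alpha (standing in for the paper's similarity constraint).
    for iy in range(n_vy):
        for ix in range(n_vx):
            for dx, dy in ((1, 0), (0, 1)):
                jx, jy = ix + dx, iy + dy
                if jx >= n_vx or jy >= n_vy:
                    continue
                edge = (xs[jx] - xs[ix], ys[jy] - ys[iy])
                for coord in (0, 1):
                    row = np.zeros(2 * n_vert)
                    row[2 * vid(jx, jy) + coord] = alpha
                    row[2 * vid(ix, iy) + coord] = -alpha
                    rows.append(row)
                    b.append(alpha * edge[coord])

    A = np.vstack(rows)
    x, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return x.reshape(n_vert, 2)          # output position of each grid vertex

The returned vertex positions define a per-cell warp that can be rasterized to render each output frame; the weight alpha trades off fidelity to the sparse displacements against distortion of the frame content.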
BibTeX reference
@Article{LGJA09,
  author  = "Liu, Feng and Gleicher, Michael and Jin, Hailin and Agarwala, Aseem",
  title   = "Content-Preserving Warps for 3D Video Stabilization",
  journal = "ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2009)",
  volume  = "28",
  number  = "3",
  year    = "2009",
  url     = "http://graphics.cs.wisc.edu/Papers/2009/LGJA09"
}