Warp Propagation for Video Resizing
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition — June 2010
This paper presents a video resizing approach that provides
both efficiency and temporal coherence. Prior approaches
either sacrifice temporal coherence (resulting in
jitter), or require expensive spatio-temporal optimization.
By assessing the requirements for video resizing, we observe
a fundamental tradeoff between temporal coherence
in the background and shape preservation for the moving
objects. Understanding this tradeoff enables us to devise
a novel approach that is efficient, because it warps each
frame independently, yet can avoid introducing jitter. Like
previous approaches, our method warps frames so that the
background is distorted similarly to prior frames while
avoiding distortion of the moving objects. However, our
approach introduces a motion history map that propagates
information about the moving objects between frames, allowing
for graceful tradeoffs between temporal coherence
in the background and shape preservation for the moving
objects. The approach handles scenes with significant
camera and object motion without introducing jitter, yet
warps each frame sequentially for efficiency. Experiments with a variety
of videos demonstrate that our approach can efficiently
produce high-quality video resizing results.
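The motion history map described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the decay constant, the max-based update rule, and the linear blending of warp weights are all assumptions made for the sketch.

```python
import numpy as np

def update_motion_history(prev_history, motion_mask, decay=0.9):
    """Propagate a per-pixel motion history map between frames.

    Hypothetical sketch: pixels covered by a moving object in the
    current frame receive full weight; elsewhere the previous history
    decays, so recently vacated background regions relax gradually
    back toward temporal coherence instead of snapping.
    """
    return np.maximum(motion_mask.astype(float), decay * prev_history)

def warp_weights(history):
    """Blend per-pixel warp objectives: where history is high, favor
    shape preservation for moving objects; where it is low, favor
    coherence with the warps of prior frames."""
    shape_w = history
    coherence_w = 1.0 - history
    return shape_w, coherence_w
```

Updating the map per frame keeps the method sequential (each frame warped independently given only the previous map), which is how the abstract's efficiency claim would be realized under these assumptions.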
BibTeX reference
@InProceedings{NLLG10,
  author    = "Niu, Yuzhen and Liu, Feng and Li, Xueqing and Gleicher, Michael",
  title     = "Warp Propagation for Video Resizing",
  booktitle = "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
  month     = "jun",
  year      = "2010",
  url       = "http://graphics.cs.wisc.edu/Papers/2010/NLLG10"
}