Discovering Panoramas in Web Videos
Proceedings of the 16th ACM International Conference on Multimedia, pages 329--338, October 2008
While methods for stitching panoramas have been successful given proper source images, providing these source images remains a burden. In this paper, we present a method to discover panoramic source images within widely available web videos. The challenge comes from the fact that many of these videos are not recorded with panorama stitching in mind. Our method aims to find segments within a video that can serve as panorama sources. Specifically, we determine a video segment to be a valid panorama source according to three criteria. First, its camera motion should cover a wide field of view of the scene. Second, its frames should be "mosaicable": the inter-frame motion should satisfy the underlying conditions for stitching a panorama. Third, its frames should have good image quality. Based on these criteria, we formulate discovering panoramas in a video as an optimization problem that seeks an optimal set of video segments as panorama sources. After discovering these panorama sources, we synthesize regular scene panoramas from them. When significant dynamics are detected in the sources, we fuse the dynamics into the scene panoramas to create activity synopses that convey the dynamics. Our experiment querying panoramas from YouTube confirms the feasibility of using web videos as panorama sources and demonstrates the effectiveness of our method.
BibTeX reference
@InProceedings{LHG08,
  author    = "Liu, Feng and Hu, Yu-hen and Gleicher, Michael",
  title     = "Discovering Panoramas in Web Videos",
  booktitle = "Proceedings of the 16th ACM International Conference on Multimedia",
  pages     = "329--338",
  month     = "oct",
  year      = "2008",
  publisher = "ACM",
  address   = "New York, NY, USA",
  doi       = "http://doi.acm.org/10.1145/1459359.1459404",
  url       = "http://graphics.cs.wisc.edu/Papers/2008/LHG08"
}