Task-Driven Evaluation of Aggregation in Time Series Visualization
Proceedings of the 2014 ACM Annual Conference on Human Factors in Computing Systems, pages 551--560, May 2014
Many visualization tasks require the viewer to make judgments
about aggregate properties of data. Recent work has
shown that viewers can perform such tasks effectively, for
example efficiently comparing the maxima or means over
ranges of data. However, this work also shows that such effectiveness
depends on the design of the display. In this paper,
we explore this relationship between aggregation task and visualization
design to provide guidance on matching tasks with
designs. We combine prior results from perceptual science
and graphical perception to suggest a set of design variables
that influence performance on various aggregate comparison
tasks. We describe how choices in these variables can lead
to designs that are matched to particular tasks. We use these
variables to assess a set of eight different designs, predicting
how they will support a set of six aggregate time series comparison
tasks. A crowd-sourced evaluation confirms these
predictions. These results not only provide evidence for how
the specific visualizations support particular tasks, but also suggest
using the identified design variables as a tool for designing
visualizations well suited to the tasks they must support.
BibTeX reference
@InProceedings{ACG14,
  author    = "Albers, Danielle and Correll, Michael and Gleicher, Michael",
  title     = "Task-Driven Evaluation of Aggregation in Time Series Visualization",
  booktitle = "Proceedings of the 2014 ACM annual conference on Human Factors in Computing Systems",
  pages     = "551--560",
  month     = "may",
  year      = "2014",
  publisher = "ACM",
  pmcid     = "4204486",
  ee        = "http://dl.acm.org/citation.cfm?id=2556288.2557200",
  doi       = "10.1145/2556288.2557200",
  url       = "http://graphics.cs.wisc.edu/Papers/2014/ACG14"
}