Visual Validation versus Visual Estimation: A Study on the Average Value in Scatterplots
IEEE VIS Short Papers — Oct 2023
We investigate the ability of individuals to visually validate statistical models in terms of their fit to the data. While visual model estimation has been studied extensively, visual model validation remains under-investigated. It is unknown how well people are able to visually validate models, and how their performance compares to visual and computational estimation. As a starting point, we conducted a study across two populations (crowdsourced and volunteer participants). Participants had to both visually estimate (i.e., draw) and visually validate (i.e., accept or reject) the frequently studied model of averages. Across both populations, the accuracy of the models that participants considered valid was lower than the accuracy of the models they estimated. We find that participants' validation and estimation were unbiased. Moreover, their natural critical point between accepting and rejecting a given mean value is close to the boundary of its 95% confidence interval, indicating that the visually perceived confidence interval corresponds to a common statistical standard. Our work contributes to the understanding of visual model validation and opens new research opportunities.
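To make the 95% confidence interval criterion concrete, the following is a minimal sketch (not the study's exact protocol) of how a candidate average line could be checked against the confidence interval of the sample mean of a scatterplot's y-values. The function name `validate_average` and the t-based interval are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def validate_average(y_values, candidate_mean, confidence=0.95):
    """Accept a candidate average if it lies inside the confidence
    interval of the sample mean (hypothetical acceptance criterion)."""
    y = np.asarray(y_values, dtype=float)
    n = y.size
    sample_mean = y.mean()
    # Standard error of the mean and t-based confidence interval.
    sem = y.std(ddof=1) / np.sqrt(n)
    lo, hi = stats.t.interval(confidence, n - 1, loc=sample_mean, scale=sem)
    return lo <= candidate_mean <= hi, (lo, hi)

# Example: 50 points scattered around a true mean of 10.
rng = np.random.default_rng(0)
y = rng.normal(loc=10, scale=2, size=50)
accepted, ci = validate_average(y, candidate_mean=10.4)
print(accepted, ci)
```

Under this reading of the result, a candidate mean near the interval boundary is the point at which participants switch from accepting to rejecting.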
BibTeX reference
@InProceedings{BSCGV23,
  author    = "Braun, Daniel and Suh, Ashley and Chang, Remco and Gleicher, Michael and von Landesberger, Tatiana",
  title     = "Visual Validation versus Visual Estimation: A Study on the Average Value in Scatterplots",
  booktitle = "IEEE VIS Short Papers",
  month     = "Oct",
  year      = "2023",
  ee        = "https://arxiv.org/abs/2307.09330",
  url       = "http://graphics.cs.wisc.edu/Papers/2023/BSCGV23"
}