Reading 4: Evaluation

by Mike Gleicher on January 30, 2017

Due Date: please read before class on Monday, February 6th. Note that while this is a lot of reading, there is also some reading for the Design School in a Day that you’ll need to do before this week’s Seek and Find (discussion assignment).

These readings are on Box in the folder Reading 4.

A big theme in Visualization (and this class) is “How Do We Know a Visualization is Good?”

This is particularly important, because we’ve already seen that there are many visualizations we can make, many tasks we might want to address, many audiences we might want to reach, many goals we might have, …

So once again, here is a reading designed to give you a variety of perspectives on how we might decide if a visualization is good. The emphasis here is less on specific methods (we’ll talk about some of those later in the semester), and more on getting a range of perspectives.

  1. Chapter 4 of Munzner (link; Munzner_Ch4_Analysis Four Levels for Validation.pdf, 28 pages).
    The main ideas here I like a lot. They come from an earlier paper that I think was an important milestone in the field. The chapter is similar enough to the paper that reading the paper is a little redundant (if you want to see it, check here). If you’re interested in visualization as an academic field, you should be familiar with the paper.
  2. Edward Tufte. The Fundamental Principles of Analytical Design. in Beautiful Evidence (link; Tufte_4-BeautEvid-5-FundamentalPrinciples.pdf, 17 pages)
    Of course, we can’t talk about “what is good” without consulting Tufte for his strong opinions (not that he was ever going to keep them to himself). In hindsight, this Tufte chapter is actually much better at the “how” of making a good visualization, and at trying to distill general principles, than many of the others we’ve read. But it’s Tufte, so it’s still full of his opinions on “what is good.”
    Since we accidentally gave it to you to read last week (and some people cannot get enough of Tufte), please also read: “Graphical Integrity” (Chapter 2 of “The Visual Display of Quantitative Information”; Tufte_1-VDQI-2-GraphicalIntegrity.pdf; 25 pages). Link here.
  3. Bateman, S., Mandryk, R.L., Gutwin, C., Genest, A.M., McDine, D., Brooks, C. 2010. Useful Junk? The Effects of Visual Embellishment on Comprehension and Memorability of Charts. In ACM Conference on Human Factors in Computing Systems (CHI 2010), Atlanta, GA, USA. 2573-2582. Best paper award. DOI 10.1145/1753326.1753716. (get paper at the project page here; 10 pages)
    This is a pretty provocative paper. You can pick apart the details (and many have), but I think the main ideas are important. There is a ton written about this paper (those of the Tufte religion view it as blasphemy). Stephen Few has a very coherent discussion of it here. In some sense, I’d say it’s as useful as the original paper, but I would really suggest you look at the original first. While more level-headed than most, Few still has a Tufte-ist agenda. Reading the Few article is not optional; in some ways, it’s more interesting than the original.
  4. You should read at least one of the papers by Michelle Borkin and colleagues on the memorability of visualization. Again, these papers are very provocative, and provoked some people to be downright mean in attacking them. You don’t need to worry about the details; just try to get the essence. The project website has lots of good information.
    • Michelle Borkin et al. What Makes a Visualization Memorable? [pdf] InfoVis 2013 (10 pages).
      This is another radical idea: “maybe Tufte-ism isn’t all there is, and we can measure it.” Again, we can quibble with the details, but they really are getting at something real here.
    • Michelle Borkin et al. Beyond Memorability: Visualization Recognition and Recall. InfoVis 2015. (pdf; 10 pages)

  5. Chris North, “Visualization Viewpoints: Toward Measuring Visualization Insight”, IEEE Computer Graphics & Applications, 26(3): 6-9, May/June 2006. [pdf] (doi; 4 pages)
    I think this is an important paper (well, it’s a magazine article that is a lightweight version of a paper) because it gets at the challenge of evaluation at the higher levels. Reading the original paper (which details their experiment) isn’t necessary for getting this point, but it does show how hard these kinds of experiments are.

A fair question in all this is to ask “what can we get out of evaluation?” This will be a central theme in our discussion. I’m not going to require you to read any of the writings on it, but here are some optional things you might look at:

  • Lam, H., Bertini, E., Isenberg, P., Plaisant, C., & Carpendale, S. (2011). Empirical Studies in Information Visualization: Seven Scenarios. IEEE Transactions on Visualization and Computer Graphics, 18(9), 1520–1536. http://doi.org/10.1109/TVCG.2011.279

  • Correll, M., Alexander, E., Albers Szafir, D., Sarikaya, A., Gleicher, M. (2014). Navigating Reductionism and Holism in Evaluation. In Proceedings of the Fifth Workshop on Beyond Time and Errors Novel Evaluation Methods for Visualization – BELIV ’14 (pp. 23–26). New York, New York, USA: ACM Press. (http://graphics.cs.wisc.edu/Papers/2014/CAASG14)
    What happens when I let my students rant.

  • Gleicher, M. (2012). Why ask why? In Proceedings of the 2012 BELIV Workshop on Beyond Time and Errors – Novel Evaluation Methods for Visualization – BELIV ’12 (pp. 1–3). New York, New York, USA: ACM Press. (link)
    Me ranting about how evaluation shouldn’t be an end unto itself. The workshop talk was much better than what I wrote.
