Reading 3: Why Visualization?

January 28, 2010

in Assignments

(due Tuesday, Feb 2)

Again, I’d like you to read 3 things to give you 3 different perspectives on the matter.

  1. Chapter 9 of Visual Thinking (the textbook) by Colin Ware. Yes, we’re reading the last chapter first. You might want to skim through the book leading up to it (I basically read it quickly in one sitting). Reading the ending might motivate you to read the whole thing (which we will later). The perspective here is how the perceptual science might suggest why vis is interesting.
  2. Chapter 2 of Tufte’s Visual Explanations (pages 26-53). The perspective here is historical – what can happen when visualizations work or fail. A scan of the chapter is here, and hopefully you remember how to access the protected course reader.
  3. The paper: J.-D. Fekete, J.J. van Wijk, J.T. Stasko, C. North, The Value of Information Visualization.
    In: A. Kerren, J.T. Stasko, J.-D. Fekete, C. North (eds.), Information Visualization – Human-Centered Issues and Perspectives, LNCS 4950, Springer, pp. 1–18, 2008. It’s available here.

Originally, I was going to assign a different 3rd paper (which I still recommend, if you want to read an optional 4th paper): “Views on Visualization” by Jack van Wijk. There’s a copy here. This is an extended version of his best-paper-award-winning “Value of Visualization” paper (which is here).
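
If you’re curious before you read: the “economic model of value” that the Fekete et al. chapter (and van Wijk’s papers) builds on roughly weighs the knowledge a visualization produces against the costs of building, learning, and using it. From memory (the papers’ exact notation may differ), it is something like

F = n·m·W(ΔK) − (C_i + n·C_u + n·m·C_s + n·m·k·C_e)

where n users each run m sessions of roughly k exploration steps, W(ΔK) is the value of the knowledge gained in a session, C_i is the initial development cost, C_u the per-user learning cost, C_s the per-session cost, and C_e the cost of a single exploration step. The visualization pays off when F is positive.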

Please read these things and post some comments about what you think of them. We’ll discuss them in class through the week.

{ 13 comments }

turetsky February 1, 2010 at 8:39 pm

1. I found this to be stating concepts and ideas that are somewhat inherently known, especially if you have taken an undergraduate cognitive science course. The author does a good job of pulling these concepts together and building on them to show the cognitive processes behind interpreting a visualization and why design should integrate these ideas.

2. This was a good historical overview of two examples of visualization, one which (arguably) worked, and one which didn’t. These events took place long enough ago that history has provided hindsight on why certain visualizations worked and certain ones didn’t.

3. I found this paper to be an interesting merge of the previous two readings. It focused on the cognitive processes used to interpret visualizations as well as using historic examples of visualizations that are seen in the Tufte book. I also thought that the equation of the economic model of value was an interesting idea, a mathematical way to determine if the visualization is actually worth it.

Jim Hill February 1, 2010 at 8:50 pm

Chapter 9 in the Colin Ware book was really interesting, and I actually read through chapter one after finishing it. We’ve kind of looked at the brain pipeline in class a little, which is what this chapter started with. I found the topic of the fovea interesting, and after confirming that I couldn’t read the text two lines above or below, I was wondering the following: given that the environment is projected onto the eye and that the fovea is responsible for the highest amount of data going to the brain, is there an ideal distance from the eye that maximizes data?

Items 11 and 12 were also interesting. The idea that memory is just the playback of neural activity and that the brain doesn’t really control itself is a little unsettling. This probably isn’t the place for free-will debates, but it’s hard not to draw the conclusion that we don’t really have any. Aside from that, I was wondering whether the visual system discussed in this chapter plays any part in what happens when I visualize something in my head. For instance, if I close my eyes and visualize the water bottle on my desk, is that somehow being fed into the same system that processes what enters my eye?

One topic of interest to me is the issue of stereoscopic video. The readings touched on the concept briefly, but I wondered what purpose depth perception has. I did a quick internet search and pretty much found that its purpose is to perceive depth, which has a lot of logical uses for moving around, but I wonder what implications it has for the new trend of 3D video in movies, TV, and games.

Chapter 2 from Tufte seemed to focus on the theme of “compared to what,” with the idea that some form of control is required for any visualization to have real meaning. I think this is obvious if you want to visualize the truth. Tufte seems to think that the truth is the only thing that should be visualized, but I can think of a number of politicians who would say otherwise.

One thing that really struck me was the methods of skewing results using visualizations, and I couldn’t help but think about sampling theory and aliasing. The question of whether, given a visualization of a data set, the true meaning of the data can be recovered is interesting. I wonder if there is maybe an equivalent to the Nyquist theorem (the number of samples required for recovering a signal) that says how much clutter can be removed from a visualization without compromising the intended meaning.

My final comment on Tufte is that I couldn’t agree more with Feynman’s statement on reality trumping PR. Too many times have I seen the wishes of the higher ups be demolished by the reality of the situation at hand.

The Value of Information Visualization was interesting because I would have thought that InfoVis is a no-brainer. In fact, seeing how a good visualization can push so much data into the brain so quickly seems to scream that it should be used.

Initially the authors seemed to make the assumption that InfoVis is all about exploration and browsing. The paper said so on page 3 as it was building its case. This was downplayed a little as the paper went on.

One idea that was touched on in all three pieces, but that I really got from this paper, was the idea that visualizations are methods to expand our cognitive abilities. I thought of this as the brain being the CPU and the outside world being the devices, with our senses being the interface. Seeing a table of data is kind of like having a general-purpose FPU where the computations are fast but the brain still needs to provide meaning; having a good visual, like a good chart, is like having a good special-purpose statistics processor that provides the meaning for the CPU.

The paper opened by arguing for certain methods of analysis using the idea that there is a measurable quantity that can promote one method over another. It didn’t really present any method for doing that with visualization; however, it did provide good rules of thumb for when to use a visualization over an algorithmic method.

The economic method was a nice touch. I think money’s the best way to get the attention of the decision makers in the business world. It only focused on instances where the visualization is used over and over, though; it didn’t mention cases where a visualization might produce a better, cheaper method for doing something, in which the actual visualization isn’t repeatedly used but still produces a significant return on investment.

Nate February 1, 2010 at 11:16 pm

In Ware — I’ve got Information Visualization so the comparison isn’t going to be exact — I found the most interesting points of the chapter to be the explanation as to how the visual system works and ways to take advantage of it. It’s great to see clearly “better” examples, such as representing multiple variables (direction, temperature, pressure) as different aspects of one visual unit instead of three separate glyphs, alongside the reason the first produces less cognitive load. The phrase “the world is its own memory” seems a particularly useful thing to keep in mind.

As to Tufte — I love his writing, I agree with him, and boy, is he a jerk. His examples of how visualization can both illuminate (Cholera) and conceal (Challenger) evidence are fabulous. I’m entirely behind his assertion that “there are right ways and wrong ways to show data.” It’s continually amazing to me just how poorly the engineers involved in the Challenger accident managed to communicate, both graphically and in writing.

I do tend to disagree with his statement that, had someone produced the proper chart or table, the disaster would not have happened — or, at least, that another wouldn’t have taken its place. The poor decision making and lack of realistic risk assessment involved go far beyond what better design alone could hope to solve.

In Fekete et al, I was entertained to see many of the examples Tufte uses back again, with a slightly different spin. Modeling the economic value of infovis, however, seemed a little like a stab in the dark — it almost felt like a way to convince a recalcitrant boss of the value of spending time making visualizations.

Shuang February 2, 2010 at 12:29 am

Chapter 9 of Colin Ware’s textbook gave me a basic sense of what visualization work means. The steps of visual thinking are explained as a process of cognition, from the neural machinery of the brain to our sense of the external world. One of the twelve points that interests me is the eighth one: using images, symbols, and patterns as proxies for memory. Once the proxies are established, the corresponding concept can be recalled in a short time. That idea is really useful for presenting personal ideas in writing, by pointing readers to a certain pattern or symbol and having them follow it. On the design side, how to choose good patterns and symbols to optimize the visual thinking process is another issue; tracking eye movements and fixing focus are two related problems.

Chapter 2 of Tufte’s book describes the scientific principle of making controlled comparisons, with examples of both good and bad practice. To visualize the key parts, statistical and graphical reasoning should be taken into consideration. The first example in this chapter shows how people used data to support their viewpoint more than 150 years ago.

The paper, The Value of Information Visualization, revisits some of Tufte’s examples with slightly different viewpoints. One of the features I like is the explanation of statistical data analysis, called data mining here. Figure 6 illustrates a regression result simply and clearly, with all the useful statistics shown on a single plot. Figure 7 does a similar thing but with richer visual content, which somehow cannot be summarized easily in statistical tables. The equation of the economic model of value makes visualization quantitative, which helps improve efficiency. I think the main topic of this paper is to show how and why InfoVis is useful.

hinrichs February 2, 2010 at 6:35 am

Ware, CH. 9: Don’t get me wrong – it’s an entertaining read and has a few thought-provoking ideas – but it starts off with this sentence: “Meaning is what the brain performs in a dance with the external environment”. I defy anyone to tell me what that actually means. It *sounds* like an attempted definition of what meaning is, but I think it’s much more likely that the sentence is meant to make the reader feel good about reading the book. “By reading this sentence, I’m doing a dance!”

I have some real problems with a lot of the language in the rest of the chapter too. How about this one: “Various kinds of information are combined in a temporary nexus of meaning”. “Nexus” simply means “a place where things come together,” so obviously a “nexus of meaning” is where “various kinds of information are combined”. The sentence is completely vacuous. (The only word that adds any meaning is “temporary”.)

The first of the “4 implications” of the active vision model also grated a bit:
– “To support the pattern-finding capability of the brain; that is, to turn information structures into patterns.”
First of all, this is not an “implication” – it’s more of a design goal. Also – what is the difference between “information structures” and “patterns”? How does reading this sentence make it clearer how to do this? What does it mean to *not* turn “information structures” into “patterns”?

Granted, Ware makes the point that verbal reasoning and spatial reasoning are quite different, but having a focus on visual thinking is not a license to dispense with clear writing. (Writing for entertainment, however, can be.) It does, however, make a good contrast with Tufte, who at least writes clearly.

Fortunately the pictures are much more informative; they almost make up for the goofy writing.

Tufte, Ch. 2: I read this book back in August:
http://www.amazon.com/Ghost-Map-Steven-Johnson/dp/1594489254
It appears that Tufte’s book predates this one by almost 10 years.

The section on hiding data through aggregation was interesting and thought-provoking, and felt like a bit of a digression – it didn’t come across as an actual problem with Snow’s presentation, but simply as a good example for talking about the phenomenon.

I felt that he was taking a bit of a cheap shot at Feynman – yes, a real experiment should have had 2 clear glasses, one with ice, one without – but Feynman didn’t have 2 glasses. He had one paper cup, one strip of rubber, and he almost had to do without that much. The point wasn’t to figure out the properties of cold rubber – the engineers had already done that – the point was to compensate for the engineers’ inability to communicate that the rubber really does behave differently when it’s cold. Incidentally, Feynman chose a mock experiment as his way of doing so because that would leverage his authority as a famous scientist in the minds of the public. I don’t think anyone could read that story and come away with a significantly different understanding, and yet Tufte had to belabor the fact that mock science should never masquerade as the real thing. Granted, it’s not a good idea in general to mix mock science and real science, but in this case, it was an extremely effective bit of communicating. One would have to show that Feynman damaged the overall state of public scientific awareness before complaining about it being bad science.

Adrian Mayorga February 2, 2010 at 7:36 am

Tufte – While this is probably a case of “hindsight is 20/20” (one that he uses to take a few cheap shots), Tufte uses the two case studies very effectively. As he demonstrates, when the objective of using visualization is to “discover the truth,” one must formulate a specific question (what is the source of the cholera, will the cold cause O-rings to fail) and make sure that all of the visuals are aimed at exploring and communicating the answers to these questions.

The Value of Information Visualization – I found this paper to be a bit odd. To me it seems like it’s a scattershot of a whole bunch of justifications for visualization, in an attempt to have at least one of them strike a chord with the reader. However, I did like that there is an explanation of automatic analysis, and a clear way to determine whether using a visualization is the best way to go.

lyalex February 2, 2010 at 8:19 am

Chapter 9 by

lyalex February 2, 2010 at 9:00 am

It’s a little bit too much for me to read, but I still enjoyed the chapters. I chose Colin Ware, Ch. 9 first, then Tufte’s, then Fekete’s paper. For me, that order went from easier to harder.
Colin Ware’s Chapter 9 does a good job of summarizing the content of the book into 12 points, while pointing out the 4 implications for good design. I especially like the example about the humpback whales, and the idea of using a ribbon for the tracking is very creative. For me, I think the success of this example is also because it did a proper amount of abstraction: omitting a lot of “too-detailed” information, such as the whole body of a whale, while preserving most of the data about their tracks. It seems to me that a more detailed model would damage the ability of a design to support pattern finding, while a too-simplified model would lose significant data.

I feel Tufte’s chapter is more technical but also interesting, since so many case studies are provided. A more efficient way, I found, is to read Fekete’s paper first (since it gives a more systematic, theory-oriented view), and then go to Tufte’s chapter for comparison and examples.

Finally, comments cannot be edited… it happened to me too….

ChamanSingh February 2, 2010 at 8:22 am

1. Chapter 9: Visual Thinking: Colin Ware
***************************************
This chapter is a “crash course” in human cognitive science. The main idea in this chapter is to emphasize that all graphical designs must be based on the concept of the “economics of cognition”, which means that we humans are more comfortable accepting things that we already know, and any radical change requires a deep cognitive readjustment and is therefore less appreciated.

One thing that is not very clear from this chapter is who the target audience is. Are they painters, mass-communication designers, or scientists who are looking for hidden patterns in their datasets?

This chapter also supports the well-known fact that computers and humans are complementary. Humans are unmatched pattern finders, and computers are superior at churning through and pre-processing high-bandwidth data. The design challenge is to transform data into a form where important features can be easily interpreted by humans.

Chapter 2: Visual and Statistical Thinking:
******************************************
This interesting chapter can be summarized in two sentences from the text.

1. There are right ways and wrong ways to show data; there are displays that reveal the truth and displays that do not.

2. Visual representation of evidence should be governed by principles of reasoning about quantitative evidence.

Two examples are given in the chapter: (1) the London cholera epidemic, and (2) the decision to launch the Space Shuttle.

But in my view, both examples show great human logical thinking in finding the cause of an effect more than they show the mapping tools used to display it. In both examples there were strong hypotheses and correlations, and visualization was just a tool to express them to the masses. Perhaps the main people (John Snow and the shuttle engineers) drew their conclusions with their great intuitive knowledge, without the aid of data visualization. Therefore I think the chapter’s title, “Displays of Evidence for Making Decisions,” is confusing, as the decisions were probably already made.

So perhaps I didn’t like the two examples given in the chapter, but I support the two main points, which are generic.

The Value of Information Visualization:
***************************************
I think this paper repeats concepts given in the earlier two readings.

jeeyoung February 2, 2010 at 9:08 am

Chapter 9 of Ware’s book explained HOW InfoVis can amplify cognition (efficiency and task performance), grounded in an understanding of perception (how the brain perceives and processes information).

Tufte’s chapter points out well the importance of the right design for getting at the true information, and shows WHY InfoVis is useful.

After reading those two chapters, especially Tufte’s, I was confused about where I stand in the visualization area as a statistics student, because what Tufte’s examples do is what I would do. This confusion was resolved after reading the paper (The Value of Information Visualization) – information visualization as an expansion of exploratory data analysis plus information theory, psychological theories, etc. This paper explained well HOW and WHY InfoVis is useful, giving me the big picture after all.

As a relevant example, I have heard that people well trained in abacus calculation are capable of imagining the abacus in their head and doing the calculation using the imaginary abacus. This tells me two things – 1. there are cognitive benefits to using visualization, and 2. the cognitive process becomes more automatic.

Nakho Kim February 2, 2010 at 9:13 am

All three readings touch on the subject of visualization as a cognitive process, with Ware’s last chapter (or rather, the summary of the whole book) laying out the broad principles, Tufte’s chapter showing how to feed visualizations into questions and back, and Fekete et al. giving a nice example of how cognitive processing of visualization can lead to the right questions and findings.

Of the 12 items in Ware’s chapter, I found item 11 – long-term memory as cognitive skills, not repositories – interesting; it has strong meaning not only for visualization but for communication as a whole. It implies that visualization should not just present the data efficiently, but should do so in a way that motivates the cognitive processes of the viewer. It would be great if the corresponding chapter in the textbook elaborated on this subject more. Also, I’d like to see if there is more on “how” patterns can be found, other than the example of simplification demonstrated in the humpback whale case.

As for Tufte, one question: I had the impression that in both main cases (cholera, shuttle) the researchers already knew what they were looking for, with visualization refining where they should look. I wish he had also dealt with cases that go the other way around: how to look for the right questions.

Fekete et al.’s data-mining introduction and economic model of value fit right in where Tufte left off. However, I think the cost–insight efficiency argument is more or less rhetorical, rather than a measurable theory for assessing a specific visualization technique.

punkish February 2, 2010 at 9:50 am

Perhaps the overall, most lasting message I took away from the slew of readings was one made by Fekete et al. — “ask an interesting question, show the right representation, let the audience understand the representation, answer the question and realize how many more unexpected findings and questions arise.” Visualization attempts to identify patterns that can lead to asking more questions.

I keep getting struck by the similarities between vis and computing concepts — pre-attentive processing seems to be an analog of the GPU and level-2 cache memory, attributing bandwidth to the senses, and so on.

But I think there is more — visualization really is an attempt to break us out of expected ways of looking at very large datasets and make us look at them from different angles. We can comprehend a few things at a time, but when faced with very large datasets, we tend not to change our methodology, and plow ahead with what would be appropriate for a few data points but is inappropriate for lots of them. Visualization is an attempt to bring into our main vision what may otherwise lie at the periphery of our vision.

faisal February 2, 2010 at 10:09 am

One of the prominent takeaway points from the Tufte reading was the significance of doing the science right, irrespective of the visual representation. Statistical thinking should be applied when making decisions about how to represent information. I think his assertive style of writing (as discussed in class last week) was also helpful in this chapter for conveying a very critical point. I particularly enjoyed reading Tufte’s critique of Feynman’s experiment towards the end of the chapter.
As far as the “why” theme of this week’s reading assignment is concerned, I think Tufte’s chapter was more about “how” – more precisely, how visualization should be used to support the scientific process of establishing a linkage between cause and effect. This support can come either from using visualization methods for a scientist’s own detective work or for presentation to others in order to convey the “right” meaning.
The third reading, “The Value of Information Visualization,” was mostly a summary of different points already discussed in the other readings from this week’s assignment, and also some classroom discussion. The new information was the economic model of the value of visualization – the details of the model are in the 4th reading.
I couldn’t get to the 4th reading to understand the details of this economic model, but models that try to attach objective measures to concepts that are inherently subjective (in this case, visualization) are generally not practical. Such models are good for earning a best paper award, but other people might not be able to use them, given that the individual terms in the model are still very subjective. Interestingly, the term used in the 3rd paper to summarize the economic model’s profit equation is “obvious insight”.
The book chapter from Colin Ware’s “Visual Thinking for Design” was a good summary of some of the points we discussed in last week’s lectures about pre-attentive processing of certain visual features in a scene, etc.
