DC1 Grading Info

Here is some information on how DC1 was graded.

We graded each assignment, looking at all of the designs. In most cases, both graders looked at the assignment. In some cases, both graders filled out scoring sheets (and assigned scores independently, in which case your grade is the average). In other cases, one grader filled out a score sheet, and the other grader checked it. We did enough cross-checking that we believe there is good inter-rater reliability.

We tried to provide detailed feedback, which is why grading took so long. We also tried to provide consistent scores according to the rubric below, which took additional time since we wanted to make sure scores were comparable across assignments.

Note that the score is for the “final handin” - your 4 designs and the documentation. We do not consider your peer reviews, whether you turned things in on time, whether you did the initial parts of the assignment, etc. All of these will be factored in later when we assign final grades. Your score is a measure of the quality of your designs and documentation.

We posted your score and the grading sheet to DC1-4. Unfortunately, Canvas removes the line breaks from the comments, so it can be hard to read. We are developing a new way to get you the feedback in a more readable form.

We will grade your peer reviews separately. Your final grade for DC1 is 80% the DC1-4 grade and 20% your peer review grade (as specified on the Design Challenge 1 (DC1): One dataset / Four Stories assignment). Your DC1 Redux will be graded as a separate assignment.
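The weighting above can be sketched in a few lines of Python. This is purely illustrative; the function name and inputs are hypothetical and not part of any course tooling.

```python
def dc1_final_grade(dc1_4_score: float, peer_review_score: float) -> float:
    """Weighted DC1 grade: 80% from the DC1-4 score, 20% from peer reviews.

    Both inputs are assumed to be on the same 0-100 scale.
    """
    return 0.8 * dc1_4_score + 0.2 * peer_review_score

# Example: a 90 on DC1-4 and an 85 on peer reviews
print(dc1_final_grade(90, 85))  # → 89.0
```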

Problem/Kudos codes

These are common comments made across many different assignments.

“Problems” (things you lose points for) are given letter codes (A, B, C, …); Kudos (things you gain points for) are given numeric codes (1, 2, 3, …).

Not all comments are here - sometimes we just put individual text.

The comments we give you are not always complete - we might leave some out even if they apply.

  • A - clear misuse of part/whole relationship (e.g., treemap or stacked bar for things that cannot be added)
  • B - unclear use of part/whole encoding (e.g., not obvious if bars are stacking or just on the same axis)
  • C - simple bi-variate design (e.g., a single value per category) - note: this comment is often not given if the “multi-variate” rubric item is already set to no
  • D - unclear story (not sure what is supposed to stand out)
  • E - obvious comparison not supported (e.g., not putting things to be compared along a common axis)
  • F - data dump (design does not seem to support a task other than (maybe) specific retrieval; it provides so many specifics that it becomes difficult to read)
  • G - (questionable/incorrect use of) continuous axis design for non-continuous variable
  • H - not clear that story emerges from the visualization (but can get from caption)
  • I - not clear what the story is (it jumps out from neither the visualization nor the caption) - requires reading rationale
  • J - not clear what the story is even after reading rationale
  • K - asserting a trend from a questionably fit trend line
  • L - questionable use of a diverging scale
  • M - poor captions
  • N - poor labels
  • O - design might obscure the point, rather than enhance it
  • P - basic chart not adapted to make story emerge
  • Q - main correlation that viewer would look for (either implied or stated in rationale/caption) is not exposed in a clear way

Kudos Codes (note: we will often just give Kudos in the summary comments)

  1. Good use of combinations of charts to make a visualization.
  2. Good use of design to focus on elements in a sea of data.
  3. Particularly interesting story (brings together data or focus to create something unexpected).
  4. Particularly compelling design to make the story stand out.
  5. Good use of context to support the story.
  6. Uses design to combat scale.
  7. Visualizations give provenance and other background for reliability and trust.

Rubric

The numbers mark the “base grade ranges”; the exact number you receive represents where in the range you fall. An “89” is “an AB, barely not an A”, etc.

75 BC

  • turns in all files
  • plausible visualizations
  • some flawed visualizations (lack story, bad encodings)

80 B - everything for BC and

  • complete documentation
  • has simple stories
  • avoids “incorrect” encodings
  • uses appropriate designs
  • little adaptation to make story stand out

85 AB - everything for B and

  • multivariate stories
  • good designs that work for stories (not data dumps)
  • some adaptation / details chosen for stories
  • reasonably effective (or good rationale)

90 A - everything for an AB and

  • interesting and clear stories
  • designs well chosen to make stories clear
  • details chosen to make stories stand out
  • effective designs
  • good rationales

95 A+ - everything for an A and

  • the A list, but done really well