In August 2007, Science published a bar graph that illustrates how design decisions made to attract a reader’s eye can also distort and conceal meaning. While the distortions were surely unintentional, the integrity of the story in the data was compromised nevertheless.
The graph shows a time series of retention rates for two cohorts of undergraduate students: one cohort had matriculated in engineering through a First Year Engineering Program (FYEP) and one had not (non-FYEP). The main story is clear: retention rates drop over time for both cohorts, but FYEP students are always retained at a higher rate than non-FYEP students. Yet the graph design distorts one comparison and omits another.
Graph as originally shown in Science. From Fortenberry et al. (2007). Reprinted with permission from AAAS.
First, the appropriate design for displaying discrete values of a time series is a scatter plot, not a bar graph.
Second, the textual annotations in the original graph reveal that the difference in retention rates between the two groups remains nearly constant at about 10 percentage points. Visually, however, the ratio of retention rates seems to increase over time because the ratio of bar heights increases over time: the FYEP bar is at first approximately 1/3 higher than the adjacent bar, then nearly twice as high, then more than 3 times as high.
Using the shorter bar as a unit of comparison to visualize the ratio of the heights of adjacent bars.
The cause of this well-known problem is the non-zero baseline. Naomi Robbins reminds us that not all graphs require a zero baseline, contrary to Darrell Huff’s advice in his 1954 classic How to Lie with Statistics. But a bar graph without a zero baseline inevitably (and sometimes purposefully) exaggerates differences.
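To make the exaggeration concrete, here is a minimal sketch in Python. The retention percentages are assumed round numbers (a gap of about 10 points, consistent with the annotations), not the values published by Fortenberry et al.; with the baseline truncated at 60%, a modest difference appears as a 3-to-1 ratio of bar heights.

```python
# Illustrative only: the retention values below are assumed, not the published data.
def apparent_ratio(tall, short, baseline):
    """Ratio of bar heights when both bars are drawn from a truncated baseline."""
    return (tall - baseline) / (short - baseline)

fyep, non_fyep = 75.0, 65.0  # roughly a 10-point gap, as in the original annotations

print(apparent_ratio(fyep, non_fyep, baseline=0))   # ~1.15: the true comparison
print(apparent_ratio(fyep, non_fyep, baseline=60))  # 3.0: what a truncated axis shows
```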
The caption reinforces the miscommunication by stating that the FYEP course “improved retention … into the third, fifth, and seventh semester,” subtly implying a difference that increases over time rather than remaining constant.
Third, a significant story in these data is (inadvertently) concealed by omitting semester one, the point in time at which both cohorts would be considered 100% retained. Only by including the starting time point can we see the importance of the early semesters.
A redesigned graph that includes semester one and a full scale tells a more nuanced story.
The concealed story was the early impact of the FYEP course, with its higher rate of retention to the third semester. After semester 3, however, the factors affecting attrition seem to act on both groups equally, a point made visible by the connecting lines, which are effectively parallel from semester 3 onward.
One final design point: unlike a bar graph, a scatter plot does not require a zero baseline. I included the full 0-100% range to show that the lowest rate of retention is still above 50%, an important result (as discussed in the prose of the article) compared to the retention rates of non-engineering disciplines, e.g., 42% in biological sciences and 30% in math and physical sciences.
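For readers who want to attempt a similar redesign, the matplotlib sketch below illustrates the approach: points connected by lines, semester one included at 100%, and a full 0-100% scale. The retention values are placeholders chosen only to mimic the shape described above; they are not the published data.

```python
# A minimal sketch of the redesign, with assumed (not published) retention values.
import matplotlib.pyplot as plt

semesters = [1, 3, 5, 7]
fyep      = [100, 80, 70, 65]   # placeholder values
non_fyep  = [100, 70, 60, 55]   # placeholder values

fig, ax = plt.subplots()
ax.plot(semesters, fyep, "o-", label="FYEP")
ax.plot(semesters, non_fyep, "s-", label="non-FYEP")
ax.set_xticks(semesters)
ax.set_ylim(0, 100)             # full scale shows retention stays above 50%
ax.set_xlabel("Semester")
ax.set_ylabel("Retention rate (%)")
ax.legend()
plt.show()
```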
My critique to this point has focused on clarity and a minimalist design aesthetic, hallmarks of the “rhetoric of science” (Kostelnick 2008). However, recognizing that these downward sloping curves represent decisions made by real students invites us to ask about the human stories in these data. Are the students in either group better off? Are the students who leave engineering graduating in other disciplines? Is retention even a concern to students? In light of such questions, the graphs seem inadequate, as if we’ve missed an opportunity to tell important human stories.
The authors do address such concerns in their prose. Retention rates reflect student decisions that are influenced almost exclusively by human factors: “student’s background, college administrative issues, academic and social integration, attitude and motivation, and fit within an institution.” Thus the retention data are a surrogate measure of some combination of these factors.
The original graph displays only six paired values (rate and semester). And though my redesigned graph corrects the distortions of the original, it still displays only eight paired values. Neither graph has the visual impact it might have had if designed to convey the important story the authors tell in their prose.
A final note: a colleague and I spotted the perceptual issues with this bar graph when the article first appeared. With the permission of the first author, Norm Fortenberry, we sent a shorter version of this critique to the Science editor, who responded by posting, in an online addendum, a scatter plot much like the one shown here.
References
- Fortenberry NL, Sullivan JF, Jordan PN, Knight DW (2007). Engineering education research aids instruction. Science 317(5842), 1175-1176. DOI: 10.1126/science.1143834.
- Kostelnick C (2008). The visual rhetoric of data displays: The conundrum of clarity. IEEE Transactions on Professional Communication 51(1), 116-130.
- Robbins N (2012). Must zero be included on scales of graphs? Another look at Fox News’ graph and Huff’s gee-whiz graph. Forbes (online). http://www.forbes.com/sites/naomirobbins/
Terms and conditions for use.
Reprinted AAAS material: Readers may view, browse, and/or download material for temporary copying purposes only, provided these uses are for noncommercial personal purposes. Except as provided by law, this material may not be further reproduced, distributed, transmitted, modified, adapted, performed, displayed, published, or sold in whole or in part, without prior written permission from the publisher.
Original material in this work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.