Johnson’s talk at Google about his book The Ghost Map celebrates the efforts of the mid-19th-century physician John Snow and a local amateur, Henry Whitehead, to find the cause of a cholera outbreak in London. At the center of their efforts is the construction of a map — a map of all cholera-related deaths near a neighborhood water pump, bounded by the walking paths of the neighborhood. Johnson tells us how this map spectacularly illustrated Snow’s theory that cholera was caused by drinking contaminated water, going against the commonly accepted miasma theory, which held that diseases were caused by bad smells, or the airborne particles behind them, and not by carriers such as water.
However, in his paper “Incorporating Quantitative Reasoning in Common Core Courses: Mathematics for The Ghost Map,” which describes quantitative reasoning approaches that could accompany reading or teaching The Ghost Map, the Beloit College professor John R. Jungck urges Johnson’s readers to ask whether quantitative tools such as the cholera map really just spit out the truth, as Johnson seems to suggest.
He reminds us of Florence Nightingale, who herself pioneered the practice of data analysis and visualization. She is credited with inventing the famous coxcomb diagrams to illustrate the causes of mortality among British soldiers in the Crimean War, and she successfully advocated in Parliament for better nursing practices and sanitation. Yet as a contemporary of Snow, even after seeing the map, Nightingale did not believe his theory that cholera was waterborne.
While Johnson blames Nightingale’s disbelief on factors such as ideology, social prejudice, and limited imagination — in essence suggesting that she did not understand Snow’s data — Jungck urges us to ask whether this data and its visualizations might actually support multiple clashing interpretations. He argues that finding the truth is not as straightforward as its revelation through data: it involves argumentation, controversy, and reconciliation among multiple alternative interpretations of the same data, a lengthy but robust process.
Jungck’s argument reminds me of my own changing interpretations of the COVID-19 case numbers over time. While the numbers remain the same, I see 1,000 daily cases very differently now than I did a month ago. This interpretation can also change from person to person: while Nightingale might have found 1,000 COVID-19 cases normal (the new normal, I mean), Snow might have thought them extremely high. And beyond that, behind these numbers is the story of how they are even generated: How many tests were done that day? What kind of tests were they? Where in the country were they done? Can we actually trust these numbers? These are questions that require inquiry beyond the daily case numbers alone. Thus, varying interpretations and seemingly endless questions that demand even more data are enough to remind us that a data set and its visualizations alone cannot completely represent the truth.
Finally, tying back to the mortality bills we read about in Defoe’s A Journal of the Plague Year: at the start of the plague, when the deaths are quite low, the narrator H.F. interprets them as evidence of the spreading infection rather than dismissing them as normal variation. I want to leave you to think about how much of his interpretation was shaped by his anticipation of the plague’s arrival.