Although geared primarily toward the production of static graphics for print publications, Dona M. Wong’s The Wall Street Journal Guide to Information Graphics (2010) provides a wealth of salient and time-honored tips and guidelines that any student of data visualization would be well-advised to follow.
For a lot of self-indulgent reasons, I secretly love The Huffington Post. But well-designed visualizations and interactive interfaces have never been the news organization’s strong suit. While their live coverage of Tuesday night’s GOP primary in Michigan had all the flavor of a classic HuffPo report – updates faster than you can send a tweet, snarky comments, and dramatic headlines – what stood out to me was how they integrated real-time election results into a mapping format. And not only was the map visually appealing, with clean lines, distinctive color choices and a refreshing sense of minimalism, but it also did a good job of letting the user know what was going on across the state as the results were being tallied. The legend uses vote counts to make clear which candidates are leading, while the map lets viewers see which parts of the state Santorum and Romney have claimed.
Having this geographic breakdown is particularly important in Michigan. For one, the notorious swing state is vastly different demographically from one area to another. People in unionized Detroit vote nothing like the more conservative folks in Michigan’s Upper Peninsula. Moreover, knowing who received what votes where matters even more in Michigan because it’s Romney’s home state. Had Romney not come away with big margins in and around his hometown outside Detroit, it would have seriously hurt his momentum going forward. The Michigan vote becomes all the more important to Romney in light of his recent insistence that the auto companies should not have received a bailout and that the country “should let Detroit go broke.” But it doesn’t seem as though Romney’s comments cost him the urban areas entirely, as he carried Detroit and Grand Rapids by wide margins.
So I’ll admit it: I’ve always kind of had a design crush on the Guardian’s website, and I may or may not have tried to emulate it in various other news websites I’ve developed. What I love most about the Guardian’s design is simply its proprietary typeface. That slightly “Georgia”-looking serif with the curved nodules and cut-off “G”s instantly alerts the user that they’re interacting with the Guardian brand. Another strength of the site is that it succeeds where many legacy news organizations fail: it cleanly integrates an array of different content, from videos to columnists’ mugshots to vertical celebrity shoots and landscape scenes of world political affairs and crises. Though it may seem obvious, the coordinated color schemes on the site give the user visual cues about which section she’s reading. Color is perhaps the Guardian’s strongest visual element.
What also makes the Guardian site, in my view, an almost perfect model for for-profit news sites is its interactivity. Designers don’t have to worry about whether the body text of articles will make the page look too visually distracting, as users can simply hover over a picture to read the excerpt. It also likely increases audience engagement, assuming that people who may not otherwise have engaged with a story click or hover on it.
I could go on for days about what a groundbreaking model the Guardian’s website is: how its use of white space around the header gives users a sense of minimalism, say, or the way the site displays its ads. But I won’t. All I’ll say is that it’s so user-friendly it has hopped over the pond to circulate in America.
In contrast with Norman, who argues flatly that programmers should adopt a more immersive, task-centered approach to computer design rooted in cultural conventions, Manovich contends in his paper on human-computer interfaces that designers should instead embrace the new language of the computer medium: the language of the interface. The failure of programmers to make use of the full power of the interface as a language in and of itself, Manovich argues, can be traced back to two competing impulses: representation and control. The desire to make computing “represent” or “borrow ‘conventions’ of the human-made physical environment” inevitably limits the full range of “control,” or flexibility, the computer interface can offer. Although Manovich clearly leaves some room for common ground between representation and control, he at times paints them as almost mutually exclusive. While he is no doubt correct that “neither extreme is ultimately satisfactory by itself,” he particularly laments the arbitrary shoveling of old cultural conventions onto the role of the computer as a control mechanism.
Rather than imitate pre-existing communication media, Manovich asserts that programmers should embrace the “new literary form of a new medium, perhaps the real medium of a computer – its interface” (92). Only when a user has learned this “new language” can he or she have a truly immersive computing experience. This stands in sharp contrast to Norman, who champions a more populist message of usability and rails against the notion that “if you have not passed the secret rites of initiation into programming skills, you should not be allowed into the society of computer users.”
Design is too often designer-centric instead of user-centric, argues Donald Norman in the sixth chapter of his book The Design of Everyday Things. Norman lays out the case that anyone acting as a designer – whether programmer, illustrator or developer – has an unconscious tendency to be device-oriented rather than task-oriented; that is, designers “become experts with the device they are designing,” while users are “often expert at the task they are trying to perform with the device.” Designers should instead pay more attention to usability, which is no easy task given the many challenges they face: demands from profit-driven clients, users with special needs, and users who seek features they don’t need. Indeed, as Norman admits, there is no one-size-fits-all approach to creating user-centric designs, but flexibility helps.
One feels the echo of Steve Jobs’ design philosophies throughout Norman’s work, particularly in his description of the “two deadly temptations for the designer.” Designers too often fall prey to the allure of what he calls “creeping featurism,” the tendency to pile endless features onto a device that needlessly complicate its use, as well as the “worshipping of false images,” the temptation to value technological flashiness over end usability. Particularly in Apple’s later consumer entertainment products, beginning with the iPod, we see an acute awareness of these dangers. Unlike its rival digital music devices at the time, the iPod valued usability over featurism, and prized immersion over control.
Take a look at this fascinating visualization of last weekend’s Super Bowl ads created using a new startup tool called Hotspots.io. What’s unique about this visualization is that it provides an interactive, feature-rich multimedia presentation of social media reaction in real time as it relates to live events. The sheer amount of data displayed – from the total reach, to total mentions, to the amount of money each company spent on advertising – is impressive in its own right. What’s more, because the data is organized into the drop-down menu on the left sidebar, the user can view it in separate bits without having to split attention between multiple data points.
The job of a journalist is to convey the facts. But when the facts conflict with an individual’s preexisting beliefs, they often get pushed aside. That’s where the research of Nyhan and Reifler comes into play. In their 2011 study “Opening the Political Mind,” Nyhan and Reifler conduct a series of experiments to determine whether the process of “self-affirmation,” as well as graphical representations, can help break down a reader’s inherent biases and better communicate the facts at hand regarding politically sensitive issues.
The study is particularly relevant as it applies to data journalism. First, the connection between self-affirmation and a readier willingness to accept uncomfortable facts shows that emotion can be an effective tool in communicating data. As such, it reminds us that our job as data journalists is not only to convey the facts, but to deliver them in an intuitive, visually pleasing package that warms the user emotionally and subconsciously to the data itself. Second, and perhaps most importantly, the study demonstrates the powerful effects that graphical representations can have over text alone. Text often carries subtext, at least in the perception of the audience, while an accurate graphical representation tends to come across as a more objective and authoritative source of information. That’s not to say that graphics can’t skew the facts in many of the same ways text can – indeed, an out-of-scale, poorly designed chart can be as unconsciously deceptive as a “he said, she said” news story – but rather that users tend to be more convinced by ‘seeing’ the data than by reading about it alone.
Setting the guidelines for the social, political and human consequences of research in the database age is an issue that has yet to be fully explored. On one hand, the champions of publicness and digital democracy argue for absolute transparency and data freedom. On the other, privacy advocates consistently take issue with what they see as a potential threat to individual liberty. In their 2011 paper “Six Provocations for Big Data,” Danah Boyd and Kate Crawford attempt to bridge this divide by laying out the basic features of what they call ‘Big Data,’ as well as a broad set of principles that should guide researchers seeking to harness the power of that data for social good.
Perhaps most importantly, Boyd and Crawford identify the basic misassumption that researchers, academics and media professionals often make when interpreting Big Data: they treat data as an objective and infallible source of knowledge when it is really just one piece of the underlying story. Numbers do not speak for themselves, so we must adapt our methods of research to the demands of new technology. As Latour puts it, “Change the instruments, and you will change the entire social theory that goes with them.”
In the first chapter of his book Data Analysis for Politics and Policy, Yale researcher Edward R. Tufte demonstrates the opportunities as well as the challenges of using data to help inform decisions of public policy. First, Tufte sets forth the terms and theoretical frameworks he will use to analyze data. He advocates what he calls a “multivariate analysis,” which takes into account several describing variables rather than just one in order to understand a problem. In scientific settings, it is possible to isolate a single describing (independent) variable from others and provide a control by which to reach a conclusion about the cause of a given response (dependent) variable. But in the real world of social problems and political policy, it is often impossible to parse out the effects of the multitude of describing variables that may be at play in a given situation. For Tufte, that gives rise to the need for a “statistical technique” that “may help organize or arrange the data so that the numbers speak more clearly to the question of causality.” The numbers cannot answer the question of causality, but they can help shed light on it if they are analyzed in a way that takes into account as many different variables as possible.
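Tufte’s point can be made concrete with a toy sketch of my own (not an example from the book): when two describing variables are correlated, a fit on one variable alone folds the other’s effect into it, while a multivariate fit separates the two. Here ordinary least squares via numpy stands in for the kind of “statistical technique” Tufte describes; the data is synthetic and purely illustrative.

```python
import numpy as np

# Synthetic data: two correlated describing (independent) variables,
# as real-world policy variables often are.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=0.5, size=n)

# Response (dependent) variable: depends on BOTH x1 and x2.
y = 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.3, size=n)

# Single-variable view: regressing y on x1 alone inflates x1's
# apparent effect, because x2's contribution is folded into it.
X_single = np.column_stack([np.ones(n), x1])
b_single, *_ = np.linalg.lstsq(X_single, y, rcond=None)

# Multivariate view: including both variables separates their effects,
# recovering coefficients near the true values of 2.0 and 1.0.
X_multi = np.column_stack([np.ones(n), x1, x2])
b_multi, *_ = np.linalg.lstsq(X_multi, y, rcond=None)

print("x1 coefficient, single-variable fit:", round(b_single[1], 2))
print("coefficients, multivariate fit:", [round(b, 2) for b in b_multi[1:]])
```

The single-variable fit attributes roughly 2.5 units of effect to x1, even though its true contribution is 2.0 – exactly the kind of misleading answer that motivates Tufte’s call for analyzing several describing variables at once.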