Critique: Michigan GOP Primary Visualization, via HuffPo

For a lot of self-indulgent reasons, I secretly love The Huffington Post. But well-designed visualizations and interactive interfaces have never been the news organization’s strong suit. While their live coverage of Tuesday night’s GOP primary in Michigan had all the flavor of a classic HuffPo report – updates faster than you can send a Tweet, snarky comments, and dramatic headlines – what stood out to me was how they integrated real-time election results into a mapping format. Not only was the map visually appealing, with clean lines, distinctive color choices and a refreshing sense of minimalism, but it also did a good job of letting the user know what was going on across the state as the results were being tallied. The legend uses numbers to make clear which candidates are leading, while the map lets viewers see which parts of the state Santorum and Romney have claimed.

Having this geographic breakdown is particularly important in Michigan. For one, the notorious swing state is vastly different demographically from one area to another. People in unionized Detroit vote nothing like the more conservative folks in Michigan’s Upper Peninsula. Moreover, knowing who received what votes where matters even more in Michigan because it’s Romney’s home state. If Romney hadn’t come away with big margins in and around his hometown near Detroit, it would have seriously hurt his momentum going forward. The Michigan vote becomes even more important to Romney in light of his recent insistence that the auto companies should not have received a bailout and that the country “should let Detroit go broke.” But it doesn’t seem as though Romney’s comments lost him the urban areas entirely, as he easily carried Detroit and Grand Rapids by huge margins.

Visualization/design critique:

So I’ll admit it: I’ve always kind of had a design crush on the Guardian‘s website, and I may or may not have tried to emulate it in various other news websites I’ve developed. What I love most about the Guardian’s design is simply its proprietary typeface. That slightly “Georgia”-looking serif with the curved nodules and cut-off “G’s” instantly alerts the user that they’re interacting with the Guardian brand. Another strong aspect of the site is that it succeeds where many legacy news organizations fail: it cleanly integrates an array of different content, from videos to mugshots for columnists to vertical celebrity shoots and landscape scenes of world political affairs and crises. Though it may seem obvious, the coordinated color schemes on the site give the user visual cues about which section she’s reading. Color is perhaps the Guardian’s strongest visual element.

What also makes the Guardian site, in my view, the almost perfect model for for-profit news sites is its interactivity. Designers don’t have to worry about whether the body text of articles will make the page look too visually distracting, as users can simply hover over a picture to read the excerpt. It also likely increases audience engagement, assuming it draws clicks and hovers from people who may not otherwise have engaged.

I could go on and on for days about what a groundbreaking model the Guardian’s website is––like how its use of white space around the header gives users a sense of minimalism, or the way in which the site displays its ads. But I won’t. All I’ll say is that it’s so user-friendly that it’s hopped over the pond to circulate in America.

Response to Manovich on “HCI: Representation versus Control”

In contrast with Norman – who argues flatly for programmers to adopt a more immersive, task-centered approach to computer design rooted in cultural conventions – Manovich contends in his paper on human-computer interfaces that designers should instead seek to embrace the new language of the computer medium, the language of the interface. The failure of programmers to make use of the full power of the interface as a language in and of itself, Manovich argues, can be traced back to two competing impulses: representation and control. The desire to make computing “represent” or “borrow ‘conventions’ of the human-made physical environment” often inevitably limits the full range of “control” or flexibility the computer interface can offer. But although Manovich clearly leaves some room for common ground between the impulses of representation and control, he tends at times to paint them as almost mutually exclusive. While he is no doubt correct in his assumption that “neither extreme is ultimately satisfactory by itself,” he particularly laments the arbitrary shoveling of old cultural conventions onto the role of the computer as a control mechanism.

Rather than seek to imitate pre-existing communication mediums, Manovich asserts that programmers should embrace the “new literary form of a new medium, perhaps the real medium of a computer – its interface” (92). Only when a user has learned this “new language” can he or she have a truly immersive computing experience. This stands in sharp contrast to Norman, who champions a more populist message of usability and rails against the notion that “if you have not passed the secret rites of initiation into programming skills, you should not be allowed into the society of computer users.”

Response to “The Design of Everyday Things,” Chapter Six

Design is too often designer-centric instead of user-centric, argues Donald Norman in the sixth chapter of his book The Design of Everyday Things. Norman lays out the case that anyone acting as a designer – whether programmer, illustrator or developer – has an unconscious tendency to be device-oriented rather than task-oriented; that is, designers “become experts with the device they are designing,” while users are “often expert at the task they are trying to perform with the device.” Instead, designers should pay more attention to usability, which is no easy task given the many challenges they face: demands from profit-driven clients, users with special needs and users who seek features they don’t need. Indeed, as Norman admits, there is no one-size-fits-all approach to creating user-centric designs, but flexibility helps.

One feels the echo of Steve Jobs’ design philosophies throughout Norman’s work, particularly in his description of the “two deadly temptations for the designer.” Designers too often fall prey to the allure of what he calls “creeping featurism” – the tendency to pile endless features onto a device, needlessly complicating its use – as well as the “worshipping of false images,” the temptation to value technological flashiness over end usability. Particularly in Apple’s later consumer entertainment products, beginning with the iPod, we see an acute awareness of these dangers. Unlike its rival digital music devices at the time, the iPod valued usability over featurism, and prized immersion over control.

Critique: Super Bowl XLVI ads visualization

Take a look at this fascinating visualization of last weekend’s Super Bowl ads, created using a new startup tool. What’s unique about this visualization is that it provides an interactive, feature-rich multimedia presentation of social media reaction to live events in real time. The sheer amount of data displayed – from total reach, to total mentions, to the amount of money each company spent on advertising – is impressive in its own right. What’s more, because the data is organized into the drop-down menu on the left sidebar, the user can view it in separate bits without having to split attention between multiple data points.

All of that goes without mentioning what’s most unique about this visualization: its ability to display Twitter data in a chronological timeline and line graph, complete with YouTube embeds of the ads that correspond to each point on the timeline. From a programming standpoint, the visualization relies mainly on well-written JavaScript, which in itself is nothing novel. But what really makes this visualization stand out to me as a developer is its ability to tap into the Twitter API to display various statistics that may not be immediately scrapeable on the surface, such as the rate of increase of mentions and reach in real time. I’m guessing the developers have built some sort of algorithm into their application that takes basic Twitter stream data, computes various user-defined statistics from it, then spits them back out on command. However the developers did it, it’s impressive. I’ve submitted my email address to them in hopes of becoming a beta tester of the new tool. I’ll let you know when I get my hands on it.
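To illustrate the kind of computation I suspect is happening under the hood (the tool’s actual implementation is unknown to me, and every function and variable name below is hypothetical), here’s a minimal Python sketch that buckets timestamped mentions by minute and derives a rate-of-increase statistic from them:

```python
from collections import Counter
from datetime import datetime

def mentions_per_minute(timestamps):
    """Bucket raw tweet timestamps into per-minute mention counts."""
    buckets = Counter(ts.replace(second=0, microsecond=0) for ts in timestamps)
    return sorted(buckets.items())

def rate_of_increase(series):
    """Minute-over-minute change in mention volume."""
    return [(t2, c2 - c1) for (t1, c1), (t2, c2) in zip(series, series[1:])]

# Hypothetical sample: three tweets in minute one, five in minute two
stamps = [datetime(2012, 2, 5, 20, 0, s) for s in (5, 20, 40)] + \
         [datetime(2012, 2, 5, 20, 1, s) for s in (2, 10, 25, 40, 55)]
series = mentions_per_minute(stamps)
deltas = rate_of_increase(series)
print(deltas)  # one entry: mentions rose by 2 from minute one to minute two
```

A real implementation would presumably pull the timestamps from the Twitter streaming API and recompute the deltas continuously, but the core statistic is this simple.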

Response to “Opening the Political Mind,” Nyhan and Reifler (2011)

The job of a journalist is to convey the facts. But when the facts conflict with an individual’s preexisting beliefs, they often get pushed aside. That’s where the research of Nyhan and Reifler comes into play. In their 2011 study “Opening the Political Mind,” Nyhan and Reifler conduct a series of experiments to determine whether the process of “self-affirmation,” along with graphical representations, can help break down a user’s inherent biases so as to communicate the facts at hand regarding politically sensitive issues.

The study is particularly relevant to data journalism. First, the connection between self-affirmation and a readier willingness to accept uncomfortable facts shows that emotion can be an effective tool in communicating data. It reminds us that our job as data journalists is not only to convey the facts, but to deliver them in an intuitive, visually pleasing package that subconsciously warms the user to the data itself. Second, and perhaps most importantly, the study demonstrates the powerful effects that graphical representations can have over text alone. Text often carries subtext, at least in the perception of the audience, while an accurate graphical representation tends to come across as a more objective and authoritative source of information. That’s not to say that graphics can’t skew the facts in many of the same ways text can – indeed, an out-of-scale, poorly designed chart can be as unconsciously deceiving as a “he said, she said” news story – but rather that users tend to be more convinced by ‘seeing’ the data than by reading about it alone.

Response to “Six Provocations for Big Data,” Boyd and Crawford (2011)

Setting the guidelines for the social, political and human consequences of research in the database age is an issue that has yet to be fully explored. On one hand, champions of publicness and digital democracy argue for absolute transparency and data freedom. On the other, privacy advocates consistently take issue with what they see as a potential threat to individual liberty. In their 2011 paper “Six Provocations for Big Data,” Danah Boyd and Kate Crawford attempt to bridge this divide by laying out the defining characteristics of what they call ‘Big Data,’ as well as a broad set of principles that should guide researchers seeking to harness the power of that data for social good.

Perhaps most importantly, Boyd and Crawford identify the basic misassumption that researchers, academics and media professionals often make when interpreting Big Data: they treat data as an objective and infallible source of knowledge when it is really just a piece of the underlying story. Numbers do not speak for themselves, so we must adapt our methods of research to the demands of new technology. As Latour puts it, “Change the instruments, and you will change the entire social theory that goes with them.”

Response to Tufte, “Data Analysis for Politics and Policy”

In the first chapter of his book Data Analysis for Politics and Policy, Yale researcher Edward R. Tufte demonstrates the opportunities as well as the challenges of using data to help inform decisions of public policy. First, Tufte sets forth the terms and theoretical frameworks he will use to analyze data. He advocates what he calls “multivariate analysis,” which takes into account several describing variables rather than just one to understand a problem. In scientific settings, it is possible to isolate a single describing (independent) variable from others and provide a control by which to reach a conclusion about the cause of a given response (dependent) variable. But in the real world of social problems and political policy, it is often impossible to parse out the effects of the multitude of describing variables that may be at play in a given situation. For Tufte, that gives rise to the need for a “statistical technique” that “may help organize or arrange the data so that the numbers speak more clearly to the question of causality.” The numbers cannot answer the question of causality, but they can help shed light on it if they are analyzed in a way that takes into account as many different variables as possible.
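Tufte’s point can be sketched with a toy example (all the numbers and variable names below are invented for illustration, not drawn from his book): stratifying on a second describing variable, a rudimentary form of multivariate analysis, changes the apparent effect of the first.

```python
from statistics import mean

# Toy districts: (adopted_policy, high_income, turnout_pct).
# Policy adoption is correlated with income, so a one-variable
# comparison confuses the two effects.
districts = [
    (1, 1, 62), (1, 1, 62), (1, 0, 52),
    (0, 1, 60), (0, 0, 50), (0, 0, 50),
]

def effect(rows):
    """Difference in mean turnout between policy and non-policy districts."""
    with_policy = [t for p, _, t in rows if p]
    without = [t for p, _, t in rows if not p]
    return mean(with_policy) - mean(without)

# One-variable view: ignores income and overstates the policy's effect
naive = effect(districts)

# Multivariate view: hold income constant by comparing within strata
high = effect([r for r in districts if r[1] == 1])
low = effect([r for r in districts if r[1] == 0])
```

Here the naive comparison suggests a policy effect of about 5.3 points, but holding income constant reveals an effect of only 2 points in each stratum – the rest was income all along, which is exactly the kind of confusion Tufte’s multivariate approach is meant to untangle.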


Data visualization, infographic or illustration?

Check out this interactive graphic on the rise of Google. It’s an innovative example of how developers can use a responsive, single-page interface to convey a broad range of chronological information that would otherwise be crammed into a timeline. The interactivity of the graphic compels the user to click through to see what happens next, and provides a more engaging narrative than a simple linear flow would.

Based upon our various understandings of the terms data visualization, infographic and illustration, which category would this graphic fall into? I’d be most inclined to say it’s an infographic rather than an illustration, given that its primary goal is to convey factual information (the chronological rise of Google) rather than just to provide an illustration, which it also does quite nicely. But I wouldn’t call it a data visualization, because it’s not immediately apparent from first glance what the graphic is trying to illustrate, and the story isn’t data-driven. It has a more conventional narrative.

What do you think?

Response to Ayres, Norman and Wolfe

Response to Ayres and Sweller, “The Split Attention Principle in Multimedia Learning”

Paul Ayres and John Sweller apply the split-attention principle to the design of multimedia instruction, asserting that it is “important to avoid formats that require learners to split their attention between, and mentally integrate, multiple sources of information” (135). This assertion is based on the theory of cognitive load, which refers to the amount of information the brain is able to process at any given point in time. The greater the number of sources of information in multimedia design, the higher the cognitive load required to understand it. An “extraneous” cognitive load, as Ayres and Sweller diagnose it, hinders the learning process by requiring users to split their attention and “mentally integrate the multiple sources of information” (135). As such, it is often necessary for the designer of a piece of multimedia instruction to do as much as possible to present the multiple sources of information in an integrated format.

With practice and expertise, the brain can be taught to expedite the process of piecing together disparate sources of information without the need for additional visual aids, as when an algebra student readily identifies the angle measures of a given shape. But for the novice learner, “substantial cognitive resources will need to be devoted to splitting attention between the disparate sources of information and mentally integrating them” (137). As a way to lessen the cognitive load, then, designers can employ visual cues or “referents” that help integrate multiple sources of information. Another way to help learners piece together multiple information sources is what Ayres and Sweller call the “dual mode” of multimedia design, the use of two different sensory channels, usually sight and sound, to lessen the cognitive load.

Response to Donald Norman, The Design of Everyday Things

In The Design of Everyday Things, Donald Norman looks at the design of familiar items to explain how to construct effective visual aids. At the heart of all user-friendly designs, Norman asserts, lie two fundamental components: a good “conceptual model” and a logical “visible structure” (13). An effective conceptual model should allow us to predict the outcome of our actions by explaining how an item works in a theoretical manner. Ideally, conceptual models should be as simple as possible, but the presiding imperative must always be accuracy and clarity of explanation. The item itself must also employ a logical visual structure, including the intuitive use of what Norman calls affordances, constraints and mappings. It should be clear to the eye from visual cues what an item can do (its affordances), what it can’t do (its constraints) and how to connect its various parts to perform an operation (mapping).

Response to Wolfe, Visual Search

Jeremy M. Wolfe lays out the basic principles of visual search, shedding light on the often unconscious ways our brains process sensory information based upon certain visual cues. Perhaps most importantly, Wolfe defines the basic structural features of visual search, including color, orientation, motion, size and scale. While many such basic features may appear obvious, Wolfe goes a step further in pointing out the various psychological factors that can come into play in our processing of visual information. For example, in keeping with Plato’s idea of forms, Wolfe asserts that an object is more than just the sum of its parts; it is an object in its own right. As soon as attention arrives, Wolfe contends, “an object is not seen as collections of features. It is an object having certain featural attributes.” An object carries with it certain mental associations that change the way users perceive it.

On Narrow-Minded Conceptions of What Makes One a “Journalist”

What constitutes a ‘journalist’ is a semantic debate I’ve had dozens of times, particularly in grad school and in my previous full-time job as “Digital Media Manager” (another vague term) at Savannah Morning News. Outside of professional spheres, though, the general public discourse goes something like this:

Random person: So tell me again: Where do you work?

Me: At .

Random person: Oh, cool! So you’re a reporter. What do you write or cover?

Me: Well, I write code.

Random person: So you’re a developer?

Me: Yeah, in a way, but I build news apps and data projects for editorial purposes, so I’m a journalist, too.

Random person: Oh…

Typography, design as discourse

In his essay on text and typography, Lupton touches on the fundamental ideological shift of the digital era: that “the dominant subject is neither reader nor writer but user, a figure conceived as a bundle of needs and impairments” (73). In other words, we cannot continue to view the way we communicate with audiences in a traditional, two-dimensional form where only the information we communicate is important. We have to think about how it will be perceived by others. As designers, we cannot simply apply our own artistic sensibilities to the material we produce. We must always keep the user in mind. In the case of typography, this user-centric approach requires what Katherine McCoy calls “redefining typography as discourse.” Design is more than a work of authorship. It is a work of communication that challenges “readers to produce their own meanings while also to elevate the status of the designer within the process of authorship” (73). Simply put, design is a conversation, not a sermon.

Response to Manovich on the Database

Manovich crystallizes the nature of the database-driven story and its applications to new media by describing both as dissolutions of the conventional narrative form. Stories told in database form need not follow a linear narrative structure with a beginning, middle and end. Instead they are what Manovich calls “collections of individual items, with every item possessing the same significance as the other” (218). For Manovich, this database-driven story might take the form of a Twitter stream, which allows stories to be conveyed in small bits rather than packaged into a predefined narrative. But he also makes sure to address the potential pitfalls of this sort of database-driven story: that we may “have too much information and too few narratives that can tie it all together”; that we may have too many Tweets but no logical or convenient way of piecing them into a usable format (217). Finding a balance between the competing impulses of information and narrative, then, leads Manovich to his undergirding call to action: we must devise a system of what he calls “info-aesthetics,” a theoretical framework that helps us marry the aesthetics of information access (i.e. the database-driven model) with the aesthetics of processing or filtering that turn raw information into a coherent whole (217).