Why does YouTube have a longer lifespan than other platforms?

Chart provided by bitly.

When you’re trying to reach a mass audience, what’s the best platform for sharing your content? The obvious answer is: as many places as you can. But according to a bitly post analyzing traffic patterns, links shared on YouTube have a lifespan of 7.3 hours, compared with 2.8 hours on Twitter and 3.4 hours on Facebook. Why such a disparity? Why do links shared on YouTube live so much longer?

Is it because video has a longer lifespan than other forms of content? Or is it because YouTube offers a different user experience than other social media platforms? While YouTube content is slower to peak, it persists far longer in the online ecosystem than content posted to networks like Twitter and Facebook. The most obvious explanation is that video is a medium that inherently holds our attention longer. We tend to go back, rewatch and share video more than we do text-based content, which stretches out its lifespan.

But there’s another possible explanation for YouTube’s lengthier half-life: the structure of its network. Facebook and Twitter are aggregators, portals that point elsewhere, while YouTube is a platform that hosts its own user-generated content. Because of their vast user bases and captive audiences, Facebook and Twitter attract attention more quickly, but much of it is only surface attention, which may be why links on those networks have a shorter half-life. People visit YouTube videos as a destination; the other platforms mostly act as portals along the way.
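For what it’s worth, bitly’s notion of a link’s “half-life” is the time it takes a link to receive half of the clicks it will ever receive. Here’s a minimal sketch of that idea in Python, assuming you have raw click timestamps for a single link; the function name, the sample data, and the choice to measure from the first click rather than from the traffic peak (as bitly does) are all my own simplifications:

```python
from datetime import datetime, timedelta
from typing import List, Optional

def link_half_life(click_times: List[datetime]) -> Optional[timedelta]:
    """Rough proxy for a link's 'half-life': the time from the first
    click until half of all observed clicks have arrived. (bitly
    measures from the link's traffic peak; starting from the first
    click is a simplification here.)"""
    if not click_times:
        return None
    clicks = sorted(click_times)
    median_click = clicks[len(clicks) // 2]  # timestamp of the middle click
    return median_click - clicks[0]

# Hypothetical example: clicks bunched in the first hour, then a long tail
clicks = [datetime(2011, 9, 6, 12, 0) + timedelta(minutes=m)
          for m in (0, 2, 5, 9, 15, 30, 55, 180, 400, 430)]
print(link_half_life(clicks))  # -> 0:30:00
```

On this measure, a Twitter-style link (clicks front-loaded) yields a short half-life, while a YouTube-style link (a slow, steady tail) yields a long one.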

On the importance of localism

A decade before the rise of the Internet set in motion the disruption of legacy news business models, Phyllis Kaniss foresaw the growing need for local and regional news to unite increasingly fragmented, suburbanized communities.

Should data viz be a specialty or a commodity skill in the newsroom?

An interesting question came up at last Wednesday’s Doing Data Journalism (#doingdataj) panel hosted by the Tow Center for Digital Journalism here at Columbia’s J-School: should there be data specialists in the newsroom, or can everyone be a data journalist? For New York Times interactive editor Aron Pilhofer, who participated in the panel, the question is not so much should everyone do data as will everyone do data. And for Pilhofer, the answer clearly seems to be no:

I kind of naively thought that at one time you could train everybody to be at least a base level of competency with something like Excel, but I’m not of that belief anymore. I think you do need specialists.

I’ve always hated the idea of having technology or innovation ‘specialists’ in a work environment that should ideally be collaborative, so at first I was inclined to disagree with Pilhofer. What won me over was the reasoning behind his claim. For Pilhofer, it’s not that the technology, human talent or open source tools aren’t there for everyone to scrape, analyze and process data; in fact, it’s now easier than ever to organize messy data with simple, often free desktop applications like Excel and Google Refine. The problem is a cultural lack of interest within newsrooms, often at the editorial level, in producing data-driven stories. As Pilhofer put it, in what reads as an indictment of upper-level editors for disregarding the value of data:

The problem is that we continue to reward crap journalism that’s based on anecdotal evidence alone . . . But truly if it’s not a priority at the top to reward good data-driven journalism, it’s going to be impossible to get people into data because they just don’t think it’s worth it.

I totally agree, but with one lurking suspicion. Like the top-level editors, many traditional users (or ‘readers,’ as one might call them) still at least think they like reading pretty, anecdotal narratives, and tend not to care much whether hard data backs them up. In other words, it’s an audience problem just as much as a managerial or institutional one. Some legacy news consumers simply aren’t data literate yet. Because they’re not accustomed to having such data freely available to them, they don’t yet value having it. As the old saying goes, “You can’t miss what you never had.” Yet as traffic and engagement statistics continually confirm, as soon as users have open data readily available to them through news apps and data visualizations, they spend more time exploring the data than they do reading the print narrative.
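On Pilhofer’s tooling point, it’s worth underlining how low the barrier has become. Here’s a minimal sketch of the kind of cleanup Excel or Google Refine handles, done in Python with the pandas library; the file, column names and data are hypothetical, invented for illustration:

```python
import pandas as pd

# Hypothetical messy spreadsheet of campaign contributions
df = pd.read_csv("contributions.csv")

# Normalize stray whitespace and inconsistent casing in donor names
df["donor"] = df["donor"].str.strip().str.title()

# Coerce dollar strings like "$1,200" into numbers
df["amount"] = df["amount"].str.replace("[$,]", "", regex=True).astype(float)

# Drop exact duplicate rows introduced by manual data entry
df = df.drop_duplicates()

# A reporter-friendly summary: top donors by total contribution
print(df.groupby("donor")["amount"].sum().sort_values(ascending=False).head(10))
```

A dozen lines of free, off-the-shelf tooling: the obstacle really isn’t the technology.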

Aron Pilhofer at #doingdataj

Totally agree, but I harbor the lurking suspicion that many traditional readers still like to read pretty narratives and don’t care as much whether the facts back them up. In other words, it’s an audience problem just as much as an editorial one.

Visualization/design critique: Guardian.co.uk

So I’ll admit it: I’ve always had a bit of a design crush on the Guardian’s website, and I may or may not have tried to emulate it in other news websites I’ve developed. What I love most about the Guardian’s design is its proprietary typeface: that slightly Georgia-looking serif, with its curved nodules and cut-off G’s, instantly signals to users that they’re interacting with the Guardian brand. The site also succeeds where many legacy news organizations fail: it cleanly integrates an array of different content, from videos to columnists’ mugshots to vertical celebrity shoots and landscape scenes of world political affairs and crises. Though it may seem obvious, the coordinated color schemes give the user visual cues about which section she’s reading. Color is perhaps the Guardian’s strongest visual element.

What also makes the Guardian site, in my view, an almost perfect model for for-profit news sites is its interactivity. Designers don’t have to worry about article body text making the page look too visually distracting, because users can simply hover over a picture to read the excerpt. That likely increases audience engagement too, assuming it draws hovers and clicks from people who might not otherwise have opened the story.

I could go on for days about what a groundbreaking model the Guardian’s website is: how the white space around the header lends a sense of minimalism, or the way the site displays its ads. But I won’t. All I’ll say is that it’s so user-friendly it’s hopped over the pond to circulate in America.

Data visualization, infographic or illustration?

Check out this interactive graphic on the rise of Google, recently produced by the folks over at OnlinePhD.org. It’s an innovative example of how developers can use a responsive, single-page interface to convey a broad range of chronological information that would otherwise be crammed into a timeline. The interactivity compels the user to click through to see what happens next, and it makes for a more engaging narrative than a simple linear flow would.

Based on our various understandings of the terms data visualization, infographic and illustration, which category does this graphic fall into? I’d be most inclined to call it an infographic rather than an illustration, given that its primary goal is to convey factual information (the chronological rise of Google) rather than simply to illustrate, though it does that quite nicely too. But I wouldn’t call it a data visualization: it isn’t immediately apparent at first glance what the graphic is trying to show, and the story isn’t data-driven; it follows a more conventional narrative.

What do you think?