The Pew Research Center does some regular polling on a number of issues. In particular, they like to gauge the public’s interest in various topics and match that up against the total hours of media coverage the topic is given. The idea is to measure the extent to which the media is actually covering what people want to see.
And it's a good idea. One of the fall-back excuses for the worst excesses in lurid media coverage of Casey Anthony-type subjects is that "people want to see this, so we have to show it." The polling data often shows that the audience's stated desires are at odds with this assertion.
But that’s not quite the end of the story. Behavioral scientists will tell you that the minute someone is aware they’re being observed, their behavior changes. Polling is inherently observational and requires a human operator to ask questions. So how do we know that what people say they want to watch on the news is the same as what they actually want?
We don’t. In fact, polling science calls this a response bias: the tendency for respondents to answer with what they think the person on the other end of the phone call wants to hear rather than what they actually feel.
With this in mind, it's hard to imagine how a poll asking people what they want to hear in the news could possibly be accurate in a literal sense. Nobody wants to read bad news, but many of us feel an obligation to at least appear concerned about things like the economy.
Here at DFE, I recently did a survey of my audience and asked about the various subjects I’ve previously covered. Respondents were asked to tell me whether they’d like to read more or less of a given subject. There are a number of reasons that a poll like this is not representative, starting with the fact that everyone who responded had me in common: they all like the same website/Twitter feed, ergo they have a specific bias that would likely show up in polling.
The poll itself was entirely non-compulsory, allowing respondents to skip any questions they liked. Which means, of course, that it suffers from voluntary response bias: the only people who participate are those who really wanted to, and who therefore have strong opinions on the questions they answered. On the other hand, the poll I conducted was done online, so it didn't suffer the response bias inherent in person-to-person contact. So, while a poll of this nature is far from scientific, I think it points pretty clearly at people's actual opinions rather than at what they believe the poll wants to hear.
And the response was overwhelmingly negative on economic news. This jibes with what I've seen in my click-through rate, the rate at which people click on the links I post to Twitter and Facebook each day, which showed very weak numbers whenever I posted economic news. And while following me on Twitter, responding to the survey, and answering any specific question were all voluntary, it should be noted that I've covered economic news every single Monday for nearly a year. None of my followers were unfamiliar with what I was posting, but they followed me anyway.
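If it helps to see it concretely, a click-through rate of this kind is just clicks divided by impressions, tallied up by topic or by day. Here's a minimal sketch of that arithmetic in Python, using made-up per-link numbers for illustration rather than my actual analytics:

```python
from collections import defaultdict

# Hypothetical per-link records: (date, topic, impressions, clicks).
# These figures are illustrative only, not real analytics data.
posts = [
    ("2011-07-18", "economy", 340, 3),
    ("2011-07-18", "local",   310, 19),
    ("2011-07-19", "science", 295, 22),
    ("2011-07-25", "economy", 355, 4),
]

# Tally clicks and impressions per topic, then compute the click-through rate.
totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
for _date, topic, impressions, clicks in posts:
    totals[topic]["clicks"] += clicks
    totals[topic]["impressions"] += impressions

for topic, t in sorted(totals.items()):
    ctr = t["clicks"] / t["impressions"]
    print(f"{topic}: {ctr:.1%} click-through rate")
```

Run over real data, a tally like this is what shows the economy links consistently lagging the others.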
All of this is fodder for plenty of arguments and debates, to be sure. Do people really want to hear the economic bad news? Does my poll shed any usable light on the subject? What about the veracity of the Pew poll? However you come down on the subject, I think it's important to consider these questions when viewing the results of any poll, let alone the Pew Research poll linked below. Mainstream news services have the unfortunate tendency to just post the data without critical analysis, or worse, with the invested biases of politicians.
Troubled Economy Top Story for Public and Media | Pew Research Center for the People and the Press.