
Facebook’s “Emotion Detector”: why doesn’t Cornell U take some of the heat?

By now, the story is everywhere: Facebook chose to edit its users’ timelines to experiment with whether feeds dominated by good or bad news stories would affect their emotions. Not surprisingly, your friends’ funk spreads to you, even over the “innernets.”

But what’s really got people up in arms is that Facebook manipulated users’ feeds without telling them, and for the express purpose of a scientific experiment. That should upset people, for a lot of reasons. Not the least of them: while it may be true that you’ve given your consent to have your data studied and manipulated for reasons other than you might intend, you didn’t give your consent to have your personal emotional state altered, which in this case is exactly what Facebook did.

What is strange to me in all of this is that Facebook was not alone, yet they alone seem to be taking the blame. When I first heard of the story, more than two weeks ago, I heard it directly from the media arm of one of the universities that took part in the study, Cornell University. The University of California, San Francisco (UCSF) also took part in the Big Data study:

“People who had positive content experimentally reduced on their Facebook news feed, for one week, used more negative words in their status updates,” reports Jeff Hancock, professor of communication at Cornell’s College of Agriculture and Life Sciences and co-director of its Social Media Lab. “When news feed negativity was reduced, the opposite pattern occurred: Significantly more positive words were used in people’s status updates.”

The experiment is the first to suggest that emotions expressed via online social networks influence the moods of others, the researchers report in “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks,” published online June 2 in PNAS (Proceedings of the National Academy of Sciences) Social Science.

Facebook certainly has a lot to answer for. But this should also serve as a warning to would-be Big Data experimenters that Big Data affects little people. If the results of an experiment are spread out over several hundred thousand unwitting participants, that does not mean the experiment is consequence-free, nor should it be.

Update: someone much more familiar with scientific ethics standards and IRBs (Institutional Review Boards) than I am seems to be echoing my concerns. A key passage:

…But while much of the uproar about Facebook’s inappropriate manipulation of human subjects has been (appropriately!) directed at Kramer and his co-authors, missing from the commentary I’ve found on the Web thus far is any mention of the role of the (academic?) reviewers who read the manuscript and ultimately recommended it for publication by the National Academy of Sciences… (Note: Forbes reports that researchers at Cornell passed on reviewing the final paper, although Cornell researchers did help design the study.)

Thanks go to reader @chelseamdo for the find.

Later Update: The Ithaca Voice finds reason to believe, based on a Mashable article, that the Cornell University study may also have received US Army backing. The Army undeniably funded another study by the same boffin, also concerned with shaping dialogue on social media. But Cornell denies that the Facebook study in question received any outside funding:

While Professor Hancock, like many researchers, has conducted work funded by the federal government during his career, at no time did Professor Hancock or his postdoctoral associate Jamie Guillory request or receive outside funding to support their work on this PNAS paper. Initial wording in an article and press releases generated by Cornell University that indicated outside funding sources was an unfortunate error missed during the editorial review process. That error was corrected as soon as it was brought to our attention.