Rumination: Facebook, experiments and morals

TL;DR. Facebook’s emotion experiment was not ordinary market research but psychological experimentation on users, carried out without the ethical oversight such research normally requires – and that, more than privacy or consent, is what makes it troubling.

There has been a great fuss lately about the so-called “Facebook emotion experiment”, an investigation conducted by Facebook-affiliated scientists which involved manipulating the news feeds of around 700,000 Facebook users in order to learn about the mechanisms of emotional response to content posted by others.

The public shock following the publication of the experimental results has been considerable. Facebook’s conduct has been described by many users as “creepy”, “evil” and “terrifying”, and the social media giant has been accused of “emotion manipulation” and even of “communist-style thought control”.

What did the Facebook researchers actually do? In essence, they altered the number and ranking of the stories visible to target users based on the stories’ emotional content – for example, showing a user only the gloomy updates from their friend network for a week, and none of the cheerful ones. By then monitoring the social activity of those users, the researchers were able to conclude that “When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred”.
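For the technically curious, a manipulation of this kind boils down to a sentiment-aware filter sitting on top of the feed ranking. Here is a minimal sketch in Python, assuming a simple word-list classifier (the published study reports using LIWC word counts; the word lists, function names and omission probability below are purely illustrative):

```python
import random

# Tiny illustrative word lists standing in for a real lexicon such as LIWC.
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "gloomy"}

def classify_sentiment(text):
    """Label a post "positive", "negative" or "neutral" by word matching."""
    words = set(text.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def filter_feed(posts, suppress="positive", omission_prob=0.5):
    """Probabilistically omit posts whose sentiment matches `suppress`,
    leaving the rest of the feed untouched."""
    return [post for post in posts
            if classify_sentiment(post) != suppress
            or random.random() > omission_prob]

feed = ["What a wonderful day!", "I feel sad and gloomy today.", "Meeting at 10."]
print(filter_feed(feed, suppress="positive"))
```

Note that the omission is probabilistic: a targeted post is not guaranteed to disappear, it merely becomes less likely to be shown – which matches the study’s own language of emotional content being “reduced” rather than eliminated.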

Leaving the breadth and significance of these findings aside, as professionals in the social media industry we might be interested in whether the Facebook experiment was as morally questionable as has been claimed and, by extension, in where a morally healthy boundary lies for the use of data gathered through social media. So let us ask: what did the Facebook researchers do wrong?

First of all, the problem is not primarily one of lack of disclosure or consent. Among the clauses in Facebook’s User Agreement (yes, that long document that almost no one ever bothers to read) there is one explicitly mentioning consent to “data testing, analysis, research”. Nor is the problem one of violation of privacy, as the data were presented in aggregated form and “none of the data used was associated with a specific person’s Facebook account”. In truth, neither of these points can be dismissed quite so easily; but the real issue with the Facebook experiment lies elsewhere.

In the age of Big Data, we are getting used to having social graphs mined to discover or confirm all kinds of associations. The Facebook experiment, however, differed from such widespread and relatively uncontroversial research projects in that the researchers were not merely observing people’s behaviour and reactions: they were actively manipulating the environment around them. What is more, their intervention was making some people feel worse.

The Facebook researchers reacted to the criticism by claiming that the experiment was just market research of the kind carried out every day by all sorts of companies across the world, and that “sometimes you need to hurt the experience for a small number of users to help make things better for 1+ billion others”. Does this defence stand up to scrutiny?

If a snack producer experiments with the recipe of their snacks in order to gauge people’s tastes, it is easy to classify what they are doing as market research. It is their product, and they have a right to do so; if customers are unhappy with the new recipe, they’ll stop buying the snack. On the other hand, if a university enrols participants for a study on memory, it is clear enough that what they are doing is psychological research. In this case, an ethics committee should be in place to make sure participants’ discomfort is not disproportionate relative to the objective sought, which is the production of knowledge.

The Facebook experiment, however, lies in a grey zone. The results were published in a scientific journal, two universities participated in the study, and the manipulation aspect clearly shows that users were cast as experimental subjects. As such, it is problematic that participating users were not afforded the kind of ethical protection that generally applies in such situations. It is not as if anybody would commit suicide after reading for a week only about dead parrots and people breaking up. Yet it was not for Facebook’s researchers to decide what level of discomfort was acceptable to inflict on users, or even whether inflicting it was justified in the first place.
That’s the difference between being in the social media industry and being in the snack industry: when you mess with your product, to an extent you’re messing with people’s lives as well – and this is not something that can be done light-heartedly.

At this point we might wonder whether there is, in the end, any way to conduct this – undeniably interesting – kind of research without breaking the rules or upsetting people too much.
An interesting possibility is to offer users a choice between two versions of the social platform: a ‘stable’ release for users who do not wish to be on the front line, and a ‘beta’ version where tests are run and the behaviour of the platform evolves according to their results.
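In code terms, such an opt-in scheme can be thought of as a consent-gated feature flag: users on the ‘stable’ channel are simply never eligible for an experiment bucket. Here is a minimal sketch under that assumption (the channel names, function and hashing scheme below are our own illustration, not any real platform’s API):

```python
import hashlib

def in_experiment(user_id, channel, experiment, rollout=0.5):
    """Consent-gated bucketing: 'stable' users are never enrolled;
    'beta' users are assigned deterministically by hashing."""
    if channel != "beta":  # explicit opt-in is a hard precondition
        return False
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    return bucket < rollout

print(in_experiment("alice", "beta", "feed_ranking_test"))   # stable per user
print(in_experiment("bob", "stable", "feed_ranking_test"))   # always False
```

Hashing the user and experiment identifiers together gives every beta user a stable, reproducible assignment without storing any per-experiment state.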

What do you think of this whole affair? You can let us know by leaving a comment – on our Facebook page, of course!
