Experimental evidence of massive-scale emotional contagion through social networks
Adam D. I. Kramera,1, Jamie E. Guilloryb, and Jeffrey T. Hancockc,d
Author Affiliations
a Core Data Science Team, Facebook, Inc., Menlo Park, CA 94025;
b Center for Tobacco Control Research and Education, University of California, San Francisco, CA 94143; and Departments of c Communication and d Information Science, Cornell University, Ithaca, NY 14853
Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved March 25, 2014 (received for review October 23, 2013)
Abstract
Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period, suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.
A few points to encourage thought and debate among readers here:
1. This is by no means the first time Facebook has used the platform to conduct experiments. Beyond the countless experiments it must be running to encourage people to stay on the platform and click on advertisements, Nature published a Facebook experiment in 2012 on using the platform to improve voter turnout. It seems there is something particular about mood priming in this context that has annoyed a lot of people.
2. One of the key ethical principles for research is harm minimisation. For the Facebook experiment, and for field experiments in general, harm minimisation can arguably be achieved by ensuring that participants are not exposed to stimuli beyond the normal range of what they would encounter in any case. For example, I don't think anyone would argue that it would be ethical to experimentally introduce a large number of links about optimal suicide methods to people who hadn't signed up for them. In the mood experiment, however, people were simply receiving diluted versions of what they would receive on a normal day. There are still many who would argue that even exposure to such a mild degree of potential harm is unethical, and this is worth discussing further.
3. Another point is that Facebook and related platforms are likely conducting a substantial volume of experiments on a regular basis. As pointed out in this blogpost by Tal Yarkoni (h/t @kevindenny on twitter), the Facebook News Feed is a "completely contrived" environment. In a sense it is the product of many structural and environmental decisions made by Facebook designers over years, many of which will have been settled by randomised and other types of trials and by customer feedback. Thus the idea that your thoughts and feelings are being manipulated by a company is basically tautological in these environments. Does it make it worse that in this specific instance they were doing it for the purpose of advancing knowledge rather than the usual purpose of gaining revenue share?
4. A general point for many field experiments is that the notions of informed consent and debriefing become more complicated. Many policy field experiments do not elicit consent in the standard way, i.e., explaining to participants clearly what the study is about and asking them to sign a consent declaration confirming that they are happy to participate. Instead, Facebook users sign up to a user agreement governing all manner of things. Most people probably do not read this agreement carefully, and would arguably assume it governs only the service elements of the platform, not experiments. It is certainly a valid topic for discussion how academics should engage in this type of research and how best to ensure, if not consent in the narrow traditional sense, then consent in the sense that a person using a platform is aware that such manipulations are taking place. The extent to which telling people they may be part of an experiment might itself induce different behaviour on their part is an interesting aspect of this debate.
5. Several people have pointed out that one negative consequence of this publicity is that Facebook may stop publishing results of what it is doing, or stop collaborating with academics in making the findings available to publish. Against that, the debate has been useful and doesn't look to have harmed the share price too much (though small off-trend movements in Facebook's share price can amount to a lot of money).
2 comments:
When presented with more positive/negative words, people use more positive/negative words in their updates. Effect sizes range from d = .001 to .02. It is unlikely anyone actually felt any different (emotional contagion should be about feelings, right?). Replace positive/negative with any other type of word (e.g. politicians, animals, days of the week) and you will see similarly small levels of imitation across hundreds of thousands of people with tiny effect sizes, and could follow up with articles dressing these up as profound psychological contagion effects (e.g. political contagion, anthropomorphic contagion, calendar contagion). The only difference is that those studies would probably have gone through a slightly more rigorous ethics process.
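The commenter's point about scale can be sketched with a back-of-envelope calculation. A minimal sketch (the group sizes here are hypothetical round numbers, roughly on the order of the study's ~689,000 users, not the paper's actual analysis): for two equal groups of n users each, the two-sample t statistic is approximately d × √(n/2), so even a trivially small standardized difference like d = 0.02 sails past conventional significance thresholds at this sample size.

```python
import math

def approx_t(d, n_per_group):
    """Approximate two-sample t statistic for standardized
    mean difference d with n_per_group users in each arm:
    t ~= d * sqrt(n/2)."""
    return d * math.sqrt(n_per_group / 2)

# A tiny effect (d = 0.02) with a hypothetical 300,000 users per group
t = approx_t(0.02, 300_000)
print(round(t, 2))  # ~7.75, far above the ~1.96 cutoff for p < .05
```

This is why "statistically significant" and "anyone actually felt different" can come apart so sharply at Facebook-scale samples: significance scales with √n, while the practical meaning of the effect does not.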
Also, the share price is down 2.4% today and the Sandberg apology is covered in pretty much all major news outlets. Even if this share price drop is partially a coincidence, FB probably won't be rushing into any new social science studies with this potential collateral anytime soon.