Here Be Dragons

Stop Freaking Out About Facebook's 'Psychological Experiments'

Other websites have been playing around with data and algorithms for years.

The last time I wrote about Facebook it was under an image depicting Mark Zuckerberg as a fleece-wearing Satan. Since then, the company’s image has taken a bit of a blow. The "revelation" last weekend that the business performed "psychological experiments" to manipulate the moods of its users led to an outpouring of emotion on Facebook walls up and down the land.

But the truth is there’s nothing new or even particularly interesting about the now-infamous study, titled “Experimental evidence of massive-scale emotional contagion through social networks.” Researchers tweaked the algorithm that chooses which stories are displayed in people's feeds, so some users were shown more posts deemed to be "negative" by virtue of the words they contained—sad stuff, break-up stuff, war, famine… that kind of thing, I guess. The subjects then became more likely, by a barely significant amount, to use the same sad language in their own posts. This is the "emotional contagion" the researchers were looking for.
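
To put the mechanics in plainer terms, here’s a minimal sketch of that kind of manipulation, assuming a feed is just a list of text posts. The word list, drop rate, and scoring rule are all invented for illustration, and bear no relation to Facebook’s actual code or the study’s real methodology.

```python
import random

# Illustrative only: the word list, drop rate, and scoring rule below are
# made up; this is not Facebook's actual ranking code or the study's method.

NEGATIVE_WORDS = {"sad", "awful", "hate", "war", "famine", "breakup"}

def is_negative(post_text):
    """Crude check: does the post contain any 'negative' word?"""
    return bool(set(post_text.lower().split()) & NEGATIVE_WORDS)

def build_feed(candidate_posts, in_treatment_group, drop_rate=0.5):
    """Control users see their feed unchanged; treatment users have a share
    of their non-negative posts randomly dropped, so negative posts make up
    a larger fraction of what they see."""
    if not in_treatment_group:
        return candidate_posts
    return [p for p in candidate_posts
            if is_negative(p) or random.random() > drop_rate]

def negative_word_rate(own_posts):
    """Fraction of a user's own subsequent posts that contain a negative
    word -- the crude outcome used to look for 'emotional contagion'."""
    if not own_posts:
        return 0.0
    return sum(is_negative(p) for p in own_posts) / len(own_posts)
```

Compare negative_word_rate() between the treatment and control groups afterwards and you have, in essence, the whole experiment.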

You could extrapolate this to mean that if you show people their friends’ more depressing posts, they get depressed themselves. Whether you would sign up to that theory depends on how much faith you put in sentiment analysis, which tries to mark pieces of text according to how many positive- or negative-sounding words they contain—love, hate, good, bad, etc.

It’s the sort of thing PR people do to impress bosses who aren’t very tech savvy. It also tends to be less accurate the shorter the text being analyzed is, which means it's not great for things like tweets and Facebook updates.
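
To make that concrete, here’s a toy word-counting scorer of the general kind described above. The word lists and example sentences are made up; real sentiment tools use far larger lexicons, but they trip over short, sarcastic, or negated text in much the same way.

```python
# A minimal sketch of lexicon-based sentiment analysis: count positive and
# negative words and take the difference. The word lists are tiny and
# invented for illustration.

POSITIVE = {"love", "good", "great", "happy", "wonderful"}
NEGATIVE = {"hate", "bad", "awful", "sad", "terrible"}

def sentiment_score(text):
    """Return (number of positive words) - (number of negative words)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos - neg

# Short texts give the scorer very little to work with:
print(sentiment_score("Not bad at all, I love it"))  # 0, despite being positive
print(sentiment_score("Great. Just great."))         # +2, despite the sarcasm
```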

The media response to this is neatly summed up by the headline of Laurie Penny’s New Statesman article: “Facebook can manipulate your mood. It can affect whether you vote. When do we start to worry?” Of course, the trite answer is that Laurie Penny can manipulate your mood and that Laurie Penny can affect whether you vote, so when do we start to worry about her, or all the other journalists out there doing the same? That’s maybe a little unfair, but it gets to the heart of the issue, which is that Facebook is simply one player in a far larger game of algorithms and data, the implications of which Penny and other non-technical pundits are only just beginning to grasp.

Most online media brands worth anything are doing something similar all the time. Huffington Post was a pioneer of automated A/B testing in the 00s, testing different versions of headlines in real time to see which would gain more traction with audiences. “But Martin,” someone who types in all caps is about to email me, “that’s nothing like Facebook’s experiment!” Except it is. Huffington Post experimented on its users, exposing them to different content in an attempt to find the text that created the strongest emotional response. The two major differences are that HuffPo used the results to boost profits in real time, and that nobody really gave a shit.
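
For a flavor of what that looks like, here’s a bare-bones sketch of automated headline A/B testing: randomly assign each visitor one of two headlines, count impressions and clicks, and favor whichever variant gets the better click-through rate. The headlines are invented and there’s no significance testing; it’s the general shape of the idea, not anyone’s production system.

```python
import random
from collections import defaultdict

# Invented headlines; a real system would also test for statistical
# significance before declaring a winner.
HEADLINES = [
    "Facebook Ran Psychology Experiments on Its Users",
    "Scientists Secretly Toyed With Your News Feed",
]

shown = defaultdict(int)    # impressions per headline
clicked = defaultdict(int)  # clicks per headline

def serve_headline():
    """Randomly assign one of the candidate headlines to a visitor."""
    h = random.choice(HEADLINES)
    shown[h] += 1
    return h

def record_click(headline):
    """Log that a visitor clicked through on the headline they were shown."""
    clicked[headline] += 1

def best_headline():
    """The variant with the higher click-through rate so far."""
    return max(HEADLINES, key=lambda h: clicked[h] / shown[h] if shown[h] else 0.0)
```

Serve serve_headline() to each visitor, call record_click() when they click through, and best_headline() drifts toward whichever wording provokes the stronger response.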

Should they have? The Coding Conduct blog has a fascinating post looking at the study and the ethical approval the researchers may have needed to obtain. Dave Gorski at Science Based Medicine pointed out that the policies of PNAS, the journal that published the research, require that “Research involving Human and Animal Participants and Clinical Trials must have been approved by the author’s institutional review board.” Since Facebook (like most companies) doesn’t have an IRB, that leaves everyone in a bit of an ambiguous state.

The problem is, it just doesn’t make sense for this kind of study to have to go through ethical approval processes. A vast amount of what you see on the internet is controlled in some way by algorithms similar to those Facebook uses to dictate what appears on your wall, from suggested videos on YouTube to the headlines on Google News. Much of the rest is controlled by humans—the front page of VICE, for example. It’s hard to imagine how you’d bring Facebook’s research into an ethics regime without dragging half the internet into a farce.

Not that Facebook really needs the scientific community, in any case. Back in February I wrote about deep learning, warning that if companies like Facebook seized a monopoly over data and expertise, scientists would be sidelined. Here we have a case in point: Facebook can live without PNAS, but the academic community doesn’t have an alternative source of vital social network data. The power in this situation is entirely one-sided.

That’s the real story here. The idea that Facebook should have sought ethical approval to tweak the ranking of stories is a technologically illiterate fantasy, but people are absolutely right to be skeptical about the sheer brute power that Facebook and its peers wield over our data, and the means of analyzing it.

There’s another big problem with the study that, to my knowledge, nobody else has raised, and that’s repeatability. If you wanted to test the findings by doing the same research yourself, pretty much your only option would be to go back to Facebook and ask them nicely. That should tell you everything you need to know about the health of an information economy where one company holds all the data and scientists are left begging for scraps.

Follow Martin Robbins on Twitter.