The last time I wrote about Facebook it was under a cover image depicting Mark Zuckerberg as a fleece-wearing Satan. Since then, the company’s image has taken a bit of a blow. The “revelation” last weekend that the business performed “psychological experiments” to manipulate the moods of its users led to an outpouring of emotion on Facebook walls up and down the land.
But the truth is there’s nothing new or even particularly interesting about the now-infamous study, “Experimental evidence of massive-scale emotional contagion through social networks”. Researchers tweaked the algorithm that chooses which stories are displayed in people’s feeds, so that some users were shown more posts deemed to be “negative” by virtue of the words they contained – sad stuff, break-up stuff, war, famine, that kind of thing, I guess. The subjects were then – by a barely significant margin – more likely to use the same sad language in their own posts. This is the “emotional contagion” the researchers were looking for.
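To make it concrete, here’s a back-of-the-envelope sketch of the kind of tweak being described. This isn’t Facebook’s actual code, just an illustration that assumes each post already carries a crude sentiment label and that the manipulation works by randomly dropping a slice of positive posts for users in the test group:

```python
import random

def filter_feed(posts, in_treatment_group, omit_probability=0.5, seed=None):
    """Hypothetical feed filter: for users in the treatment group, each post
    labelled "positive" has some chance of being dropped, leaving a feed that
    skews more negative. post["sentiment"] is assumed to be precomputed."""
    rng = random.Random(seed)
    if not in_treatment_group:
        return posts  # the control group sees an untouched feed
    return [
        post for post in posts
        if not (post["sentiment"] == "positive" and rng.random() < omit_probability)
    ]

feed = [
    {"text": "Having the best day!", "sentiment": "positive"},
    {"text": "Everything is terrible", "sentiment": "negative"},
]
print(filter_feed(feed, in_treatment_group=True, seed=42))
```

That’s the scale of intervention we’re talking about: a probability on one kind of post.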
You could extrapolate this to mean that if you show people their friends’ more depressing posts, they get depressed themselves. Whether you would sign up to that theory depends on how much faith you put in sentiment analysis, which tries to mark pieces of text according to how many positive or negative sounding words they contain – love, hate, good, bad, etc, etc.
It’s the sort of thing PR people do to impress bosses who aren’t very tech-savvy. It also tends to get less accurate the shorter the text being analysed, which obviously means it’s not brilliant at things like tweets and Facebook updates.
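If you’ve never seen it done, the mechanics are trivial. Here’s a toy version of lexicon-based sentiment scoring, with placeholder word lists a fraction of the size of the dictionaries real tools like LIWC use:

```python
POSITIVE = {"love", "good", "great", "happy", "awesome"}
NEGATIVE = {"hate", "bad", "sad", "awful", "terrible"}

def sentiment_score(text):
    """Return a score in [-1, 1] by counting positive and negative words.
    The word lists above are tiny placeholders; real lexicons run to
    thousands of entries, but the principle is exactly this crude."""
    words = [word.strip(".,!?'\"") for word in text.lower().split()]
    pos = sum(word in POSITIVE for word in words)
    neg = sum(word in NEGATIVE for word in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love my awful, terrible job"))  # -0.33..., sarcasm is invisible
print(sentiment_score("ugh, Mondays"))                   # 0.0, nothing matched at all
```

The shorter the text, the fewer words there are to match, which is why a two-word status update so often scores as nothing at all.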
The media response to this is neatly summed up by the headline of Laurie Penny’s New Statesman article: “Facebook can manipulate your mood. It can affect whether you vote. When do we start to worry?” Of course, the trite answer is that Laurie Penny can manipulate your mood and that Laurie Penny can affect whether you vote, so when do we start to worry about her, or all the other journalists out there doing the same? That’s maybe a little unfair, but it gets to the heart of the issue, which is that Facebook are simply one player in a far larger game of algorithms and data, the implications of which Penny and other non-technical pundits are only just beginning to grasp.
Most online media brands worth anything are doing something similar all the time. Huffington Post were pioneers of automated A/B testing in the 00s, testing different versions of headlines in real time to see which would gain more traction with audiences. “But Martin,” someone who types in capitals is about to email me, “that’s nothing like Facebook’s experiment!” Except it is. Huffington Post experimented on its users, exposing them to different content in an attempt to find the text that created the strongest emotional response. The two major differences are that HuffPo used the results to boost profits in real time, and that nobody really gave a shit.
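For the avoidance of doubt about what headline A/B testing involves, here’s a toy version. The serving and click-tracking functions are invented for illustration; real systems bolt on significance tests and automatically retire the losing variant:

```python
import random
from collections import defaultdict

impressions = defaultdict(int)
clicks = defaultdict(int)

def serve_headline(variants, rng=random):
    """Pick one of the candidate headlines at random and log the impression."""
    headline = rng.choice(variants)
    impressions[headline] += 1
    return headline

def record_click(headline):
    """Called when a reader actually clicks through."""
    clicks[headline] += 1

def winning_headline(variants):
    """The variant with the best click-through rate so far."""
    return max(variants, key=lambda h: clicks[h] / impressions[h] if impressions[h] else 0.0)
```

Swap “headline” for “news feed story” and “click” for “emotional word count”, and you have the Facebook study in outline.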
Should they have? The Coding Conduct blog has a fascinating post looking at the study and the ethical approval the researchers may have needed to obtain. David Gorski at Science-Based Medicine pointed out that the policies of PNAS – the journal that published the research – require that “Research involving Human and Animal Participants and Clinical Trials must have been approved by the author’s institutional review board.” Since Facebook (like most companies) doesn’t have an IRB, that leaves everyone in a bit of an ambiguous state.
The problem is, it just doesn’t make sense for this kind of study to have to go through ethical approval processes. A vast amount of what you see on the internet is controlled in some way by algorithms similar to those Facebook uses to dictate what appears on your wall, from suggested videos on YouTube to the headlines on Google News. Much of the rest is controlled by humans – the front page of VICE, for example. It’s hard to imagine how you’d bring Facebook’s research into an ethics regime without dragging half the internet into a farce.
Not that Facebook really need the scientific community, in any case. Back in February I wrote about deep learning, warning that companies like Facebook seizing a monopoly over data and expertise would sideline scientists. Here we have a case in point – Facebook can live without PNAS, but the academic community doesn’t have an alternative source of vital social network data. The power in this situation is entirely one-sided.
That’s the real story here. The idea that Facebook should have sought ethical approval to tweak the ranking of stories is a technologically illiterate fantasy; but people are absolutely right to be sceptical about the sheer brute power that Facebook and its peers wield over our data, and the means of analysing it.
There’s another big problem with the study that, to my knowledge, nobody else has raised, and that’s repeatability. If you wanted to test the findings by doing the same research yourself, pretty much your only option would be to go back to Facebook and ask them nicely. That should tell you everything you need to know about the health of an information economy, where one company holds all the data and scientists are left begging for scraps.