We all know that social media sites use algorithms to know what you buy, who your best friends are, and when you’re about to get hitched. Apparently Twitter can even be used to predict crimes, elections, epidemics, and successful TV shows. But now, according to research published last month, it can also reveal a user’s likelihood of being struck down by heart disease.
A University of Pennsylvania team of psychologists, doctors, and computer scientists analyzed 826 million tweets from about 1,300 US counties and found that particular words can predict heart disease, the leading cause of death in the US and Australia.
“High risk was associated with a lot of very negative emotion, aggressive words, and expletives. Words like hate, drama, bored,” Dr. Margaret Kern told ABC News, while “lower risk was actually associated with a lot more positive things. Words like wonderful, friends, drink, company.”
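The study's basic method is simpler than it sounds: tally how often certain words appear in a county's tweets, then check whether those rates move in step with the county's heart-disease mortality. A minimal sketch of that idea, using invented numbers (the word rates, mortality figures, and county count here are all hypothetical, not the study's data):

```python
# Hypothetical sketch: correlate per-county word frequencies with
# heart-disease mortality. All numbers below are invented for
# illustration; the real study used 826 million tweets and
# regression across ~1,300 counties.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One entry per county: relative frequency of the word "hate" in
# local tweets, and heart-disease deaths per 100,000 (made up).
hate_rate = [0.002, 0.005, 0.001, 0.007, 0.004]
mortality = [170.0, 210.0, 150.0, 230.0, 195.0]

print(round(pearson(hate_rate, mortality), 3))  # a strong positive correlation
```

A high coefficient here would mean counties that tweet "hate" more also die of heart disease more; it says nothing about whether any individual tweeter is at risk, which is one reason the skeptics quoted below push back.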
Last year Facebook drew criticism when it published the findings of a mass psychological experiment showing it could make users happier or sadder. In doing so, it prompted many of us to express precisely the kind of negative emotions and words researchers now claim indicate a greater chance of fatal illness.
But does this mean the gods of social media (guys wearing “Keep Calm and Code On” T-shirts) can now use algorithms to break our hearts, and then calculate when those hearts will stop beating?
Nobody doubts that unprecedented amounts of data are currently being collected. But technology writer and commentator Stilgherrian told VICE that little is being achieved beyond individuals being targeted for specific advertising.
“This whole concept is now reaching a height of masturbatory buzz because this is the era of Big Data,” he says. “As soon as you have numbers it gives the illusion of understanding, and with that comes the illusion of power.
“It’s the technologist’s dream. Our data has given us insight! Yeah, but hey, Silicon Valley kid, how do you use that to improve somebody’s health? You don’t. You sit there smugly and go, ‘Oh, this person over here has 0.7 or whatever rate of heart disease.’”
Data is often referred to as the new oil. Companies understand its importance but they don’t necessarily comprehend what they’re sitting on. So while talk of the powers of social media’s predictive analysis might be masturbatory buzz for now, nobody can say for sure what will be extracted from it in the future.
“Perhaps that’s why it’s dangerous. The people collecting it don’t understand this stuff, let alone us,” says Stilgherrian. “When this second dotcom bubble bursts people will be trying to offload these databases for as much as they can get. What happens after that?”
Studies like Pennsylvania’s heart disease research, which made headlines around the world, demonstrate the new trend in large-scale analysis: trawling huge amounts of social data and extracting patterns. It’s not just academics who are doing this—companies and governments are also in on the act.
Even in our current primitive understanding of what all this data means and whether the research is worth a dime, it’s not difficult to imagine how some of it could be used.
Futurist and academic Mark Pesce told VICE that health-based services delivering recommendations by correlating information from multiple streams and devices (your social networks, smart watches, phones, fitness doodads, etc.) aren’t far away.
“They will produce some set of feedback that will look like suggestions around your own behavior, like ‘Don’t have that second coffee or that second piece of cake,’ or ‘Maybe you should walk to work,’” he says. “It’s going to be all that stuff and most of the time it’s going to seem quite banal. A set of suggestions to help us become healthier or happier.”
But with the good will come the bad. Just as these streams can theoretically improve our lives, they can also theoretically harm us. What’s to stop a social media network covertly making us ill so it can sell us a remedy? This sort of concern ultimately comes down to who has access.
“Data security is something we think of because we don’t want our credit card data out there,” Pesce says. “But it’s my well-being data I don’t want shared around. If we start having large pools of well-being data—individually and en masse—and they aren’t secured then, well, that’s a prescription for disaster.”
Follow Luke on Twitter.
Image by Ben Thomson.