
Academics Say Bots Keep Targeting Their Research on LGBTQ Health

Researchers have noticed an upswing in bot attacks against online surveys for academic research—especially those that deal with social inequality.
Image: A finger pointing at a computer screen with a prompt to select gender. Yui Mok / Getty Images

On May 7, 2020, at the height of the COVID-19 lockdown, health researchers at Rutgers University launched an online survey to track the impact on the LGBTQ population in the U.S. They promoted it through social media, hoping to get 1,000 responses in three months. They had 1,251 in two days.

“We're like: this is incredible; we should be doing survey research all of the time,” the lead researcher, Marybec Griffin, told Motherboard.

Their excitement soon evaporated when they examined the completed surveys. “We realized that they were either duplicate responses or responses were coming in like 30 seconds to a survey that was 50 questions long,” said Griffin.

Some questions were “answered” with just spaces or periods. One question, “How has coronavirus (COVID-19) affected your life?” was repeatedly met with the same seven-word response: “It is sad. We cannot meet together.”

The study had been infiltrated by bots, software applications designed to mimic online human activity. Bot attacks are a growing problem for social science researchers, some of whom have noticed that they often strike studies that pertain to social inequity, minority health and other issues that right-wing reactionaries often decry as "woke."

Survey questionnaires have long been an important tool in psychology, public health and other fields that study human behavior. Researchers now recruit participants and administer surveys mostly online, which is much cheaper and easier than older methods, like taping up flyers or approaching people at places frequented by the population being studied. The shift has also been a boon for research on minority populations, for which it had long been a struggle to gather samples large enough for meaningful results.

But the shift online has also created an opening for bot attacks. They are now nearly ubiquitous in survey research, said Cory J. Cascalheira, lab manager at Syracuse University’s Minority Stress and Trauma Lab.

“If you're posting to social media, for any research, you better be checking for bots, because you probably have at least a couple,” he told Motherboard.

Cascalheira is currently researching bot attacks on academic research itself. “We're studying it, asking researchers, to get a better idea of the prevalence, because right now, it's mostly anecdotal,” he said. “People post about it on Twitter a lot.” Some researchers have written about their bot problems in the venue they know best: academic journals. “That’s another metric, I think, of this problem becoming more prevalent,” said Cascalheira.

His interest was piqued when a study from his lab, measuring stress and trauma among LGBTQ women, was flooded with more than 19,000 responses, only about 210 of which turned out to be real.

As for why the invisible human commanders of bot armies would attack an academic research study, one obvious motive is money. These surveys usually pay a small reward, either in cash or with gift cards for a retailer like Amazon or Target, to incentivize people to complete them. Hackers have long used bots on websites like Survey Junkie and Opinion Outpost, which offer a few dollars—or even cents—per hour to fill out surveys for market research. Completed in bulk, those small rewards add up to a modest haul.

Another possible motivation may be to injure researchers studying inequality and social identity.

“Anecdotally, that's what we've noticed,” said Cascalheira, “that most people we talked to that have this problem are doing minority-focused health research.”

Bots are employed as culture-war weapons, notorious on social media for flinging misinformation and venom into chatter about hot-button issues, and wielded with particular effectiveness by far-right actors.

In this arena, they are growing in use and sophistication, Brandie Nonnecke, who studies online behavior and internet governance at the University of California, Berkeley, told Motherboard.

"I could go in and build something, and if anybody tweeted about a certain topic, my bot account could tweet out a nasty response," she said. That's easy. But now bots can detect and try to stop the spread of socially relevant hashtags by tweeting gibberish along with that hashtag. Some users, called "cyborgs," combine bot attacks with their own type-it-and-tweet antagonism, evading Twitter's bot detection systems.

Because social media is a necessary recruitment tool, academics have to conduct their survey research in this same environment, and it's easy to speculate that the battleground of social media has spilled over into their studies.

"Why wouldn't they use that mechanism if they had it at their disposal?" said Nonnecke.

Plus, there is a certain trollishness to the whole thing. As one user put it when sharing a post from a psychology PhD candidate trying to relaunch her dissertation research after a bot attack: "Black woman scholar's data collection efforts on gendered racism and mental health attacked by bots has to be the most Twitter thing ever."

"It is something that, for colleagues who are working on topics that are more controversial, they have reported," David Broniatowski, who studies online misinformation at George Washington University, told Motherboard. "A lot of times researchers who are studying controversial topics, especially topics related to major cultural cleavages, will be targeted for online harassment, and this is just one form of that harassment."

Even if a bot attack is not meant to throw a wrench into socially meaningful research, it has that effect: The attacks eat up funding used for participant incentives, cause delays in research, and create a reason for naysayers to dismiss the whole process and doubt any findings.

“The main way that I see it happening is delaying the data, delaying the research, delaying knowledge production,” said Cascalheira, “and for people who are particularly malicious, I'm sure they could cast doubt on this knowledge base.”

Although few single studies are considered the conclusive word on a topic, survey research, taken as a whole, creates a knowledge base about social problems. These studies are why we know that LGBTQ adolescents are more likely to attempt suicide, for example, or that Black Americans are less likely to see a doctor at the onset of a medical issue.

Survey results can usually be salvaged after a bot attack, but it is time-consuming to identify and eliminate automated responses, particularly if the bots are not caught early.
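To give a rough sense of what that cleaning involves—this is only an illustrative sketch, not any particular team's actual procedure—the checks tend to target the red flags described earlier: duplicate submissions, implausibly fast completion times, and empty or boilerplate free-text answers. A minimal Python example with made-up column names might look like this:

```python
import pandas as pd

# Hypothetical survey export with made-up column names; real exports will differ.
df = pd.DataFrame({
    "ip_address":        ["1.2.3.4", "1.2.3.4", "5.6.7.8", "9.9.9.9"],
    "duration_seconds":  [45, 45, 1400, 1100],
    "q1":                ["Yes", "Yes", "No", "Yes"],
    "covid_impact_text": ["It is sad. We cannot meet together.",
                          "It is sad. We cannot meet together.",
                          "Lost my job and my lease.",
                          "."],
})

MIN_SECONDS = 180  # assume a 50-question survey can't plausibly be finished faster

# Duplicate submissions: the same IP address seen more than once.
dup_ip = df.duplicated(subset=["ip_address"], keep="first")

# Implausibly fast completions.
too_fast = df["duration_seconds"] < MIN_SECONDS

# Free-text answers that are empty, punctuation-only, or repeated verbatim.
text = df["covid_impact_text"].fillna("").str.strip()
junk_text = text.str.fullmatch(r"[.\s]*") | text.duplicated(keep=False)

df["suspect"] = dup_ip | too_fast | junk_text
clean = df[~df["suspect"]]
print(f"Kept {len(clean)} of {len(df)} responses")
```

In practice, researchers layer many more signals—attention-check questions, IP geolocation, response-pattern analysis—before deciding which submissions to discard, which is part of why the cleanup takes so long.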

Researchers are also now tasked with upping cybersecurity measures to guard against bots and with finding ways to suss them out. (A popular method is a "hidden question," invisible to human users, that a bot will detect and respond to.) If a study is still infiltrated, the result is embarrassment and fear for one's reputation in a field where competition for research dollars and faculty positions is fierce.
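The hidden question works as a honeypot: the field is rendered invisible to human respondents (with CSS, for instance) but still sits in the form, so an automated script that dutifully fills in every input gives itself away. Survey platforms implement this in different ways; the following is only a hedged sketch of the server-side check, with a made-up field name:

```python
# Minimal honeypot check, assuming each response arrives as a dict of field -> answer.
# "contact_hidden" is a made-up name for a question hidden from humans (e.g., via
# CSS display:none) that a form-filling bot is likely to answer anyway.
def is_likely_bot(response: dict) -> bool:
    return bool((response.get("contact_hidden") or "").strip())

responses = [
    {"q1": "Yes", "contact_hidden": ""},                        # human: hidden field left blank
    {"q1": "It is sad.", "contact_hidden": "bob@example.com"},  # bot: filled in the invisible field
]
kept = [r for r in responses if not is_likely_bot(r)]
print(f"Kept {len(kept)} of {len(responses)} responses")  # Kept 1 of 2 responses
```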

Griffin, the sexual health researcher who led the Rutgers study, said her initial response to the bot attack was panic. “It’s like: What have I done? Am I ever going to do anything again? Will anyone ever trust me?” She had earned her doctorate two years prior and had been at Rutgers for just eight months.

She relied on the survey not just for its inherent research value, but to build a résumé. 

“Early-career academics like me and a lot of my friends don't have access to a lot of money to pay for surveys,” she said. Online surveys are a “quick and easy way that we can use startup funds [to] build our portfolio to then apply for bigger grants through like, the National Institutes of Health and things like that.”

She and her collaborators salvaged 478 responses from real people and relaunched the survey with new security protocols, eventually gathering 1,090 responses they verified as real. They published three papers from the results, on employment losses, sexual activity and COVID-19 testing trends of various LGBTQ populations during lockdown, with a fourth, on medication adherence, forthcoming.