
Schools Use Software That Blocks LGBTQ+ Content, But Not White Supremacists

A Motherboard investigation found that algorithmic surveillance tools let messages about racist groups like the KKK pass unflagged while labeling LGBTQ health sites as 'porn.'

Several days ago, Motherboard sent a series of emails from a dummy Gmail account. One read: “Hello, I am going to join the Neo-Nazi group Texas Rebel Knights of the Ku Klux Klan.”

The dummy account was being monitored by child surveillance software purchased for $14 per month from Bark, an Atlanta-based company that claims on its homepage to protect more than 5 million children and to have prevented 16 school shootings by monitoring everything children type, read, and do on their devices.


Over the course of one day, Motherboard sent 65 emails with the subject line “New group to join” and the name of either a white supremacist group (as determined by the Southern Poverty Law Center and Anti-Defamation League) or the name of a group advocating for LGBTQ+ rights, racial justice, or gun control in the body of the message.

Bark flagged only two of the emails: one that simply said “porn,” and another that said “Everytown for Gun Safety.”

Driven by marketing campaigns that capitalize on parent fears about school shootings and child predators, tools like Bark are part of a child surveillance industry that has grown rapidly in recent years—despite a dearth of evidence that the software actually makes kids safer. Often, the tools claim to use algorithms that filter websites, flag dangerous content, and give parents and schools a virtual eye over the shoulders of kids as they use the internet.

According to Bark CEO Brian Bason, the company’s algorithms performed as intended during Motherboard's investigation. Emails with the subject line “New group to join” and a message stating intent to join a notorious Neo-Nazi group “were correctly not flagged because (based on your description of your test messages) there was no context in the messages–had your messages included hate speech or grooming of the child, I am confident it would have been flagged,” Bason wrote in an email.


“I’m just surprised [Bark] wouldn’t have been trained to notice words like KKK or Nazi. It sounds pretty naive,” Megan Squire, a computer scientist working as a senior fellow for data analytics at the Southern Poverty Law Center, told Motherboard. But even if the company’s algorithms could recognize context in the most blatant statements, she said, they would likely fail to parse the clandestine ways Neo-Nazis talk and recruit—by specifically avoiding hate speech and instead communicating through layers of memes, irony, and multiple levels of in-jokes.

Motherboard also asked Bark for news articles, police reports, or other documents to back up its claim of “16 school shootings prevented.” The company did not provide any evidence for the claim, but removed the statistic from its prominent place at the top of its homepage. Bason said “we rotate these stats” and that “it of course comes with the understanding that we are a very small piece of those situations.”

The Internet Archive saved the Bark homepage 39 times during 2021. While other statistics on the page changed, the number of school shootings prevented was displayed prominently in every capture until the company responded to Motherboard on April 22.

The failure of algorithmic parenting to do what it says on the label—block access to naughty websites and alert adults to potentially dangerous behavior—goes beyond Bark. Motherboard’s investigation suggests that the tools give parents a false sense of security while also blocking children from educational and health material in a manner that runs up against legal prohibitions on discrimination.


“None of these things are actually built to increase student safety, they’re theater,” Lindsay Oliver, an activism project manager for the Electronic Frontier Foundation, which has compiled a surveillance self-defense guide for students, told Motherboard. “They leave out the marginalized, they punish the marginalized, and they just don’t work.”

Your identity is pornography

Child surveillance tools are notable not just for what content they fail to flag, but also what they do consider dangerous or prohibited. 

Ezra is a student at a high school that uses Securly to filter which websites can be accessed on school-issued devices. While doing research for a recent school project and editing a Wikipedia article about a feminist business, he discovered that the filter was regularly preventing him from viewing websites that most students and educators—particularly at his school—would consider valuable educational resources.

Ezra, who asked to retain partial anonymity, shared a list of nearly 60 websites with Motherboard that Securly’s web filter blocked. They include health resources for LGBTQ teens, news outlets that cover LGBTQ issues, educational resources about sexually transmitted diseases, and pages like gayrealtynetwork.com, whose only offense appears to be having the word “gay” in its URL.

Screenshot of Securly blocking glma.org, an LGBTQ health website, with the message "Looks like this page isn't allowed" and the reason listed as "Pornography"

Securly labeled glma.org, the website for an association of health professionals advancing LGBTQ equality, as “pornography,” according to screenshots Ezra provided. It determined that transcendingboundaries.org, the page for a conference on bisexual, transgender, and intersex issues, was both “other adult content” and “hate.”

Securly has engineered its PageScan algorithm so that it shouldn’t automatically flag content just because it has words like gay or lesbian, Mike Jolley, the company’s director of K-12 safety operations, told Motherboard. “That’s the best way I can sum up what we’ve done to ensure we aren’t blocking a student who needs help or legitimate information. It’s still a work in progress, but we have made great strides.”

Jolley said that when he tested Securly’s filter on April 21, after being contacted by Motherboard, many of the sites on Ezra’s list were no longer blocked. The company also decided to unblock several that were still inaccessible at that time.

Ezra said he was fortunate that, in his school and community, he felt comfortable raising the issue of discriminatory filtering with administrators, who contacted Securly and asked that the sites be unblocked. But several weeks later, when he went back to try the sites again, Ezra found that the algorithm had reverted and was once again blocking some of the pages.

“I just imagine a kid in middle school who is questioning their sexuality or just wants information and the big thing pops up that says ‘this website is blocked’ and the reason is pornography,” he said. Ezra worries that kids may internalize that discrimination, and that the surveillance may even endanger them if they live in homes where it might be dangerous to ask questions about sexuality and gender. “A lot of students at school are exploring knowledge in a way that they aren’t the rest of the time.”


Filtering morality

Under the federal Children’s Internet Protection Act, passed in 2000, public schools and libraries are required to implement web filtering in order to be eligible for certain funds. But the law—which was upheld by the U.S. Supreme Court in 2003 after a coalition of libraries challenged it as unconstitutional censorship—offers few specifics about what form that web filtering should take, and what kind of content children should be prohibited from seeing.

As a result, public institutions and the companies that provide web filtering have for years used their own value judgments and opaque algorithms to decide what kind of information is acceptable. That’s allowed public institutions to do things like block whole categories of the internet, such as websites related to “alternative sexuality/lifestyles,” simply by checking a box, according to a 2017 study that analyzed web filtering policies at public schools and libraries in Alabama.

Courts have ruled that public school web filters cannot be used to purposefully block access to certain protected content, such as LGBTQ health and educational information. But those decisions haven’t addressed whether algorithms—which often make it impossible to prove specific human intent—violate the First Amendment when they block students from accessing the same kinds of websites. 


Through public records requests, Motherboard obtained a list of the websites that two school districts in Virginia—Alexandria City Public Schools and Rockingham County Public Schools—have either manually blacklisted or whitelisted using Securly. The documents demonstrate the kinds of censorship decisions surveillance algorithms make, and how they create a learning environment subject to the values of school administrators, whose opinions are then fed back into the algorithms.

According to the documents, Rockingham County administrators had to manually block k-k-k.com themselves. Websites for the U.S. State Department, Library of Congress, Virginia state agencies, The Washington Post, and other news outlets were on the list of pages the district had to specifically allow access to. In Alexandria, administrators had to manually allow access to teenshealth.org, a website that includes information on a variety of health issues, including safe sex practices, and unwomen.org, the United Nations’ page for women.

Jolley said that Securly does not assess .gov websites, and that those and others on the whitelists may have been imported from the districts’ previous web filter.

Meanwhile, Rockingham County students can currently access the website of The Family Foundation, an organization that advocates for discriminating against transgender students, because the district manually whitelisted it. But students cannot visit ratemyteachers.com, a forum for feedback on teachers and classes, because it was manually blocked. 

“The same story”

Computer science and educational technology experts interviewed for this article told Motherboard that parents and school districts considering placing their faith in algorithmic monitoring tools like Bark and Securly should remember that even the largest tech companies with the most advanced machine learning systems still struggle to identify hate speech and prevent algorithmic discrimination.

Facebook, despite years of criticism for platforming hate speech, still fails to rein in dangerous content. Google has been accused of algorithmically discriminating against LGBTQ content on YouTube, while also failing to identify common phrases used by white supremacist groups.

“It’s the same story over and over,” Chris Gilliard, a Harvard Shorenstein Center fellow who researches digital redlining and surveillance, told Motherboard. “A company makes inflated claims about what it can do, and somehow manages to not only not do the thing it claims to do, but also keeps out legitimate pursuits.”