For Muslims across the country, surveillance is part of everyday life. From smart street lights focused on San Diego’s mosques to the entrapment of three young Somali men in Minneapolis, the surveillance is ever present. Under President Obama, the U.S. government began the Countering Violent Extremism (CVE) program, inspired by a similar counter-terrorism initiative in the UK called Prevent. More recently, the program has extended beyond the physical and into the digital landscape.
Despite CVE's record as a civil rights disaster, top presidential candidates are proposing that we continue to fund and expand the program. 2020 presidential hopefuls Kamala Harris and Pete Buttigieg have publicly expressed support for the CVE program. Harris’ domestic terrorism plan promises $2 billion over the course of 10 years, while Buttigieg pledged $1 billion. Although both Harris and Buttigieg promote the idea that CVE will be used to counter white supremacist violence, the program has historically focused on the entrapment and surveillance of Muslims.

The internet and social media have led to an evolution in surveillance, Nicole Nguyen, an assistant professor and researcher at the University of Illinois, told Motherboard. “The security industry views social media, and the internet more generally, as a key tool in how so-called terrorists recruit and radicalize.”

Intended to promote a community policing strategy to “prevent violent extremism,” CVE’s pilot programming launched in three cities in 2014: Los Angeles, Boston, and Minneapolis. Those targeted for surveillance were overwhelmingly Black Muslims, including Somali youth in both Minneapolis and Boston. The Department of Homeland Security awarded the first CVE grants, totaling $10 million per year, in 2016.

Between 2014 and 2016, the Boston Police Department (BPD) used social media monitoring software called Geofeedia, according to documents obtained by the American Civil Liberties Union. The software allowed BPD to scan posts collected from platforms like Facebook, Twitter, Instagram, YouTube, Yik Yak, and Flickr. BPD began by monitoring hashtags like #BlackLivesMatter and words associated with protest.
However, the Boston Regional Intelligence Center (BRIC) targeted what it referred to as “Islamic Extremism Terminology.” That included monitoring the use of basic Arabic phrases in everyday conversations, along with the hashtag #MuslimLivesMatter. Both the BRIC’s and BPD’s surveillance borrowed from CVE logic that paints Muslims as inherently suspicious and, the ACLU wrote, raised “serious civil liberties concerns.”

“Even in cases where BPD searched for keywords actually related to terrorist groups, like ‘ISIS,’ a review of the posts BPD collected pursuant to that search term revealed that the surveillance turned up nothing criminal or even suspicious. The posts mentioning ISIS were either jokes or references to current events,” Privacy SOS wrote.

Although most CVE discussions focus on DHS funding, money flows through multiple sources. In 2018, Operation 250, a program started by University of Massachusetts Lowell students and faculty in the Center for Terrorism and Security Studies, received a $1 million grant from the National Institute of Justice (NIJ). While Operation 250 claims to focus on all forms of extremism and online radicalization, Fatema Ahmad, deputy director of the Muslim Justice League, noted that the organization’s own name, a nod to the claim that 250 American citizens have left to join ISIS, betrays its focus.

“I worry about youth because, I think, just broadly, society accepts surveillance of youth,” Ahmad told Motherboard. “For youth here in Boston, they are just experiencing every level of being surveilled in every moment.”
In addition to nonprofits and law enforcement, academia also plays a key role in justifying these efforts. Operation 250 partners with both Harvard and Georgia State University.

BPD stopped using Geofeedia in 2016 after the ACLU of Northern California revealed that the company marketed itself as a tool to monitor protesters. But Geofeedia’s name can also be found in another city where CVE has taken root: Chicago, where police contracted with the company between 2014 and 2016.

In 2014, Chicago Public Schools received a $2,197,178 grant from the NIJ to start Connect and Redirect to Respect (CRR), a social media monitoring program that uses the CVE framework to target street gangs. Both counterterrorism and counter-gang strategies assume that people vulnerable to extremism, or gang violence, can be identified through risk factors. Children identified as vulnerable to gang violence are subject to interventions that may include the Chicago Police Department’s gang school safety team, “composed of gang enforcement police officers with no specialized training for working with children,” Nguyen said.

“For us, identifying ‘at-risk’ students and then conducting interventions to ‘off-ramp’ them from the pathway to violence uses CVE logics to address gang violence,” Nguyen added. “Connecting anti-Black and anti-Muslim racisms (and their intersections) is important for social movement work, even if we know that these social formations employ different narratives and create different outcomes for communities. And, importantly, many Black Muslims live at these intersections.”
In addition to CRR, Chicago is also home to Life After Hate, an organization that brands itself as helping people leave white supremacist groups. Life After Hate gained support as an “anti-racist” organization after President Donald Trump cut the organization’s CVE funding in 2017, something referenced in both Harris’ and Buttigieg’s campaign strategies. However, In These Times noted that the group has a “troubling history of collaborating with Islamophobic ‘war on terror’ federal programming.”
In 2016, Life After Hate applied for a DHS grant with plans to expand its work to target “jihadism.” The organization wanted to use a program developed by Moonshot CVE, a company that describes itself as using technology to “disrupt violent extremism.” Its program, “Digital Shepherds,” developed in partnership with the UK’s Home Office, would “automate the process of identifying individuals at risk of radicalization.” Life After Hate planned to do so by using “publicly available data posted on Facebook to identify individuals at risk of falling into the orbit of extremist organizations” and assigning each user a risk score.

Algorithms like Digital Shepherds that derive risk scores by weighing variables, such as identification with a violent extremist ideology and frequency of engagement, rely on debunked radicalization theories. A discredited 2007 New York Police Department report, “Radicalization in the West,” provided the ideological foundation for the NYPD Intelligence Division’s mass surveillance of Muslims, the ACLU noted.

Using algorithms furthers the idea that potential terrorists can be identified through a set of checklists, even when the criteria are skewed to target Muslims. Issues of bias have popped up in similar algorithmic systems designed to detect hate speech, which multiple studies have found to be biased against Black people.

Social media companies themselves are not quiet bystanders but often active participants in these programs. YouTube, Google’s Jigsaw, and Moonshot CVE have all collaborated to develop the Redirect Method, which finds users searching for keywords like “ISIS” and redirects them to videos debunking the extremist group’s narratives. Facebook works with both Life After Hate and Moonshot CVE, too.

“There’s a push for social media companies to identify speech that would radicalize people or even figure out algorithms that would take people away from this path,” said Ahmad.
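To make the checklist critique concrete: a risk score of this kind is, at bottom, a weighted sum over a list of criteria someone chose. The feature names and weights below are invented for illustration (they are not from Digital Shepherds or any real system), but they show the core problem: whoever picks the checklist and its weights decides who looks “risky.”

```python
# Hypothetical sketch of a checklist-style "risk score." All feature
# names and weights are invented for illustration only.
WEIGHTS = {
    "flagged_keyword_count": 2.0,   # posts matching a chosen keyword list
    "engagement_frequency": 1.5,    # how often the user posts or interacts
    "network_overlap": 3.0,         # ties to already-flagged accounts
}

def risk_score(features: dict) -> float:
    """Weighted sum over the checklist; missing features count as zero."""
    return sum(WEIGHTS[name] * features.get(name, 0) for name in WEIGHTS)

# Two users with identical behavior, except whose words match the keyword list:
user_a = {"flagged_keyword_count": 0, "engagement_frequency": 5}
user_b = {"flagged_keyword_count": 4, "engagement_frequency": 5}

print(risk_score(user_a))  # 7.5
print(risk_score(user_b))  # 15.5
```

The asymmetry is built in before any data arrives: if the keyword list skews toward Arabic phrases or Muslim-associated hashtags, as in the BRIC monitoring described above, the same behavior produces a higher score for one community than another.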
But when algorithms are built using CVE logics that frame Muslims, and Black communities in particular, as suspicious, all that happens is that anti-Black Islamophobia becomes embedded in code.

Through CVE programming and logic, social media surveillance has taken on many forms. While CVE is often described as a “soft” approach to counterterrorism that connects youth to resources, there’s nothing soft about surveillance. As Nguyen pointed out, “This approach ignores how children receive these services on the understanding that they might be ticking time bombs or budding violent gang members, rather than children deserving of such services as children.”