Less than a month after the mass shooting at a high school in Santa Fe, Texas, left 10 people dead, Gov. Greg Abbott unveiled a new app that encourages students to report each other’s suspicious behavior – online or offline – with just a few taps of a button.
The app is called iWatch, and tips can be submitted anonymously. A tip could be a screenshot of a Facebook post, a link to a blog post, or a photo of someone in real life. The app recommends keeping “Location Services” turned on so authorities can accurately identify the location of the report. Users are also asked to rate the incident’s level of suspiciousness – “Slightly Suspicious,” “Moderately Suspicious,” or “Highly Suspicious” – and specify whether the subject of the report was attempting to conceal his or her behavior.
The tips are sent to the Texas Department of Public Safety’s Intelligence and Counterterrorism Unit. From there, the data can be shared with a little-known 9/11-era intelligence network called Fusion Centers. Fusion Centers were designed as hubs where local, state and federal authorities could share intelligence in order to stop the next big terror attack. Six of the 78 fusion centers across the United States are in Texas, and Abbott wants to repurpose them to help prevent the next school shooting by monitoring iWatch tips and using software to comb through teens’ public social media for potential threats.
“Everybody in the state of Texas never wants to see another occasion where innocent children are gunned down in their own schools,” Abbott said when he unveiled the app as part of a bigger school safety plan in May.
Rather than limiting access to guns – a hot-button issue, especially in red states like Texas – legislators are looking for other ways to catch would-be school shooters, and have zeroed in on monitoring teens’ social media. But experts say that relying on social media mining tools and anonymous tips submitted through an app, as Texas is proposing, could lead to an onslaught of false information and make it harder to identify tips related to genuine threats.
The attention to social media and reporting is a response to the fact that both Dimitrios Pagourtzis, the 17-year-old Santa Fe shooter, and Nikolas Cruz, the 19-year-old shooter in Parkland, Florida, gave strong hints about their plans online before they committed the shootings. For example, two out of three photos on an Instagram account linked to Pagourtzis were firearm-related. Cruz’s threats were more explicit: he posted comments like “I wanna shoot people with my AR-15” and photos of himself with his face covered, brandishing long knives.
States across the country are increasingly embracing smartphone technology, apps and social media surveillance in a bid to stop mass shootings — before they happen. Sandy Hook Promise, a non-profit set up in the wake of the 2012 mass shooting at Sandy Hook Elementary School, created an app called “Say Something” which allows people to submit tips about potential school shooting threats to their local police.
Florida’s Attorney General Pam Bondi says she is currently developing an app based on Michigan’s OK2SAY reporting system. Meanwhile, in Ohio, the director of the Greater Cincinnati Fusion Center has recently floated the idea of partnering with school districts to share information about disturbing student behavior. And there are a number of private data mining companies that cater to school districts seeking to monitor students’ social media.
The iWatch app and the repurposing of the fusion centers are just two components of Abbott’s proposal, which also includes improving access to mental health services and ramping up security at schools. But civil liberties experts are concerned that the focus on social media could lead to over-policing of ordinary adolescent behavior – which often looks anything but ordinary.
“Exhorting people to constantly report each other can create an atmosphere of mutual suspicion and surveillance,” said Jay Stanley, senior policy analyst for the American Civil Liberties Union’s Speech, Privacy and Technology Project. “We’re talking about adolescents here. They do all sorts of unstable things.”
According to Abbott’s proposal, at least one of Texas’ “Fusion Centers” uses artificial intelligence to comb through public-facing social media to flag problematic language or words. “When large amounts of data are examined by software algorithms, keywords associated with potential threats are identified, and law enforcement is able to quickly intervene to prevent school violence,” Abbott wrote in the report.
This, he suggests, is a cheaper and more effective alternative to some of the private companies out there, like Social Sentinel, which the University of Virginia contracted to monitor students’ social media feeds after the deadly white supremacist rally in Charlottesville last August. Katy Independent School District, not far from Houston, Texas, is currently mulling a three-year contract, also with Social Sentinel, which carries a price tag of more than $80,000 per year. The company would “scan 12 different social media sites for words like ‘kill,’ ‘gun,’ and similar threatening words,” Abbott said.
While private companies like Social Sentinel are limited to public data, fusion centers could access private data, like chat groups or texts, with a warrant. “They would have to convince a judge that there was probable cause that a crime had been or was going to be committed,” explained Faiza Patel, co-director of the Brennan Center’s Liberty and National Security Program. “From what I’ve seen of school shooting cases, there is rarely this level of evidence available.”
A 2013 Brennan Center report found that fusion centers had not only been a very expensive enterprise, costing the federal government a total of $1.4 billion by that point, but had never produced concrete evidence that the money was well spent. Instead they raised concerns from lawmakers on both sides of the aisle about racial and ethnic profiling.
“Fusion centers gather information about 'suspicious activity,' which is really broadly defined,” said Patel. “They got a lot of information coming into them, which was basically a lot of junk.”
Then there are also the limitations of artificial intelligence when it comes to reading social media. “There’s a high need for context in social media postings,” said John Hollywood, a senior operations researcher focusing on criminal justice, homeland security, and information technology at the RAND Corporation, a global policy think tank. “Today the only way you can get at the underlying situation is through knowing the people involved.”
Stanley pointed out that half the kids in America are currently playing the video game Fortnite, which gained more than 125 million registered players in less than a year. Players often discuss tactics in forums or groups online while trying to kill each other. The AI could pick up on these discussions of guns or shooting, for example, and flood the reporting system with false flags.
In another example, Michael Schmitt, an 18-year-old high school senior in West Caldwell, New Jersey, could face up to ten years behind bars for posting what he thought was an innocent freestyle rap, but which authorities interpreted as a credible threat to shoot up his school. When another student heard the song, which Schmitt posted to Soundcloud and then promoted on his Twitter and Snapchat, she reported him to the authorities. His high school went on lockdown, a SWAT team was called, and Schmitt was later arrested and charged with creating a “false public alarm.”
“We’re going to plug this infrastructure we built for catching terrorists into teenage angst,” said Stanley. “It’s going to flood the authorities with false positives and it could have an enormous chilling effect where children feel like they can’t say anything unusual or offbeat, for fear they might come to the attention of the authorities.”
Cover image: People visit a memorial for the victims of the Santa Fe High School shooting at the high school on May 21, 2018 in Santa Fe, Texas. (Photo by Brendan Smialowski / AFP)