This Algorithm Will Try to Predict Which Gang Threats on Twitter Turn into IRL Violence

Desmond Patton of Columbia University thinks that using an algorithm to identify which gang members are in distress or making threats could give social workers and others a chance to help them.

Photo via Flickr user Chris Yarzab

Spend a few minutes with Desmond Patton, an assistant professor of social work at Columbia University, and his research starts to sound like the premise of a sci-fi thriller. But when he explains how young people and gang members are increasingly threatening each other on social media platforms, it actually seems possible to identify violent crimes before they happen.

Patton has devoted much of his academic life to youth violence and understanding how and why gang members use social media platforms. Now, for the first time, Patton and a few data-science collaborators are trying to build on that research to create an algorithm that can read tweets and determine the likelihood that the 140-character messages will lead to physical violence. The project's collaborators include Kathleen McKeown, director of Columbia's Data Science Institute, and Jamie Macbeth, a computer scientist at Fairfield University in Connecticut. They're working with Patton to develop an algorithm to decode gang member language and plan to eventually test it in New York and Chicago, cities that know a thing or two about gang violence.

I sat down with Patton to learn more about the project, and how it could change the way we address violence in marginalized communities.

VICE: How did you get interested in analyzing gang activity on social media?
Desmond Patton: I've been studying youth violence for some time and did my PhD and initial studies in Chicago. I was following young people to learn how they navigate violence, and social media kept coming up in conversation.

About three years ago, there was a big beef between really prominent rappers in Chicago, and they were beefing on Twitter. The lesser-known rapper tweeted out his location, and he was killed in that precise location just hours after he sent out a threat. And then Chief Keef came on Twitter and was making fun of his death, laughing at it. That really showcased to me that social media can be a vector for violence, particularly for people living in marginalized spaces.

Is it your sense that the kind of activity you're seeing on social media is changing the way gangs behave, or is it just making gang activity available for study in a way that it wasn't before?
I think it's a little bit of both. I think the difference is now youth who are part of gangs are born digital. For anybody who is a millennial, social media is a part of their life. They have different mediums to express themselves versus 20, 30 years ago [when] gangs [did] a lot of [graffiti] tagging—perhaps social media allows them to tag electronically. I also think the presence of social media does change the way in which gangs can communicate with one another. They communicate outside of their gangs [and] they can enact criminal activity as well.

It is a game changer for them in terms of how they can do business and communicate threats.

Is social media making it easier to recruit gang members?
A lot of people ask me about whether this is a recruitment tool, and I don't have evidence of that. I don't think gangs need social media to recruit people, 'cause at the end of the day, it's really about the neighborhood and what's happening in the neighborhoods; that's why youth join gangs. But the lived experiences in urban neighborhoods are colliding with how we use technology. I think youth can express everything that's happening in the neighborhood, but it also allows them to posture, to get attention. And I think sometimes it gets away from them because it's so fast, so quick. Retweet tools and the ability to post messages [that] reach millions of people are exciting for a lot of people, but they can be part of the problem, too.

One of the main things you're working on now is developing an algorithm that can actually determine when a threat is likely to lead to violence. Tell me about how that works.
Really what we're trying to do is say, "Hey, these are some conditions and factors that you should pay attention to; let's teach a machine to pick up on those particular conditions and detect when they're happening online." If we can automate that process, then perhaps we can develop ways of getting that information back to the community: to social workers, violence prevention workers, people who are really hands-on in this space. Maybe we can give you an alert on your phone or computer, so that when Chris or John are in a high-tension back-and-forth on Twitter, you can call up John or Chris and say, "Hey, come in for a minute," or go to them and interact in their space before the behavior that happened online becomes a criminal thing.
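
If that sounds abstract, here is a minimal sketch, assuming Python and scikit-learn, of the general "teach a machine, then alert a person" pattern Patton describes: train a text classifier on tweets that community experts have already labeled, then notify an outreach worker when a new post scores above a threshold. The training examples, labels, threshold, and alerting function below are illustrative assumptions, not details of the Columbia team's actual system, which would rely on far more annotated data and local vernacular.

```python
# A toy illustration (not the Columbia team's code) of training a tweet classifier
# on expert-labeled examples and raising an alert for high-risk posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in training data: tweets annotated by community interpreters.
tweets = [
    "pull up, you know where I'm at",     # aggression
    "rip bro, the pain is too much",      # loss/grief
    "great game tonight with the squad",  # everyday talk
    "new mixtape drops friday",           # everyday talk
]
labels = [1, 1, 0, 0]  # 1 = may warrant outreach, 0 = other

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

def maybe_alert(tweet: str, threshold: float = 0.5) -> None:
    """Ping a violence-prevention worker when the predicted risk is high."""
    risk = model.predict_proba([tweet])[0][1]
    if risk >= threshold:
        print(f"ALERT ({risk:.2f}): {tweet}")  # stand-in for a phone or desktop notification

maybe_alert("the pain is unbearable")
```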

Do you have ideas yet about how you might teach an algorithm to do that?
One of the things, I think, is direct threats: when someone is calling out an individual or group online. So, if I'm pissed off and tag your Twitter handle, then I think you should pay attention to that. Emojis are really important, and I think they get left out of the conversation a lot. They are amplifiers to text and can also be used independent of text as threats. So if you see a gun emoji, or an angry face, or a bomb, those are things that you should really pay attention to. In addition to that, locations: when people are tweeting out very specific locations online, or putting in an address or a neighborhood, those are things you should really pay attention to as well.
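
Those three cues translate naturally into a rule-based filter. Below is a minimal sketch in Python; the emoji lexicon, the location regex, and the one-point-per-signal scoring are assumptions made for illustration, not the project's actual rules, which would be grounded in annotated tweets and local vernacular.

```python
import re

# Illustrative threat emojis drawn from the interview: gun, angry face, bomb.
THREAT_EMOJIS = {"\U0001F52B", "\U0001F620", "\U0001F4A3"}

# Very rough cue for explicit street addresses; a real system would use a
# gazetteer of local neighborhoods and blocks, which we don't have here.
LOCATION_PATTERN = re.compile(
    r"\b\d{1,5}\s+(?:[NSEW]\.?\s+)?\w+\s+(?:st|street|ave|avenue|blvd|dr|drive|rd|road|block)\b",
    re.IGNORECASE,
)

def score_tweet(text: str) -> dict:
    """Flag the signals Patton names: direct call-outs, threat emojis, explicit locations."""
    signals = {
        "direct_callout": bool(re.search(r"@\w+", text)),  # tagging someone's handle
        "threat_emoji": any(ch in THREAT_EMOJIS for ch in text),
        "explicit_location": bool(LOCATION_PATTERN.search(text)),
    }
    signals["score"] = sum(signals.values())  # naive: one point per signal present
    return signals

print(score_tweet("@rival pull up to 6300 S King Dr \U0001F52B"))
# {'direct_callout': True, 'threat_emoji': True, 'explicit_location': True, 'score': 3}
```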

I get how this algorithm could potentially be a better information filter, but it also seems like depending on how widely it's deployed, it could be a Minority Report tool where police can act on potential future crimes.
I think they're already doing that. What we already know is that the police use social network analysis to identify high-risk individuals and groups in urban areas. What I'm proposing is to say, "Well let's take a deeper look. OK, you think you know which users or individuals are high risk—well let's really take a closer look at the language to make sure that what we're seeing is not just violence, but could be other things."

The issue is that, at face value, everything that comes from a black and brown person could be identified as being highly violent or aggressive. And it might be, but let's take a deeper look. All I'm saying is, if we have an opportunity to do a more in-depth approach, why not do it—especially if we're interested in an era of better community policing or building trust and relationships with black and brown communities around policing.

I'm always hearing the police saying, "Oftentimes we're doing social work and we really want to prevent children [from entering] the criminal justice system." Well, this might be a way to do that.

How do you code very specific vernacular to ideas about when people are likely to commit violence or are in pain? It seems like people aren't tweeting, "I'm having trauma."
That is a challenge and one of the things that we do to intervene in that is to work with youth to be interpreters. There's never a situation where we are pulling tweets, analyzing tweets, and not having them validated by youth in those contexts. So they are really interpreters to help us understand, "This tweet is more violent than this tweet," and, "This is what we think is going on."

Can you think of a tangible case you studied and how an algorithm like this might have changed the outcome?
I've been doing a lot of work on one gang member in Chicago named Gakirah Barnes, who made headline news because she was a female gang member who was a shooter in her gang, and it's highly unusual for a shooter to be female. She allegedly had 15 to 20 [shootings] associated with her by the age of 17. So she was no joke.

But it came to my attention because she really started to ramp up her Twitter communications after one of her good friends was killed. And this was not the first time a good friend had been killed; it was the tenth or 15th, just trauma after trauma after trauma. During this time, we looked at her communications in the two-week period after her friend was killed, allegedly by Chicago police, and then the two weeks until [Barnes] herself was killed by a rival gang. And right after the death of her best friend, she tweeted out: "The pain is unbearable."

To me, that is a very provocative and emotional text that just kind of sat in space. And then two weeks later, she's dead. So what if someone was able to detect that text and support her? My understanding of her is that she was tough, she was a gang member, she wasn't very approachable. But she was not tough online—she was quite vulnerable online. How can we intervene in that vulnerability and talk to her, do something to help her in that moment? That's where an algorithm could have been helpful.

Do you think a gang member with that kind of street rep would've been receptive to intervention?
I don't know. But I'm a social worker, so I would say, "Who cares, let's try." Everyone is worthy of trying.

Follow Alex Zimmerman on Twitter.