
Facebook legally can’t roll out its suicide prevention AI in Europe

Facebook wants to leverage its huge investment in artificial intelligence (AI) to automatically detect when its users are exhibiting suicidal behavior — except in Europe, where the region’s strict privacy laws won’t allow the company to offer the new technology.

Facebook has been testing its proactive detection system on text posts in the U.S. to automatically detect when people are expressing thoughts about suicide. Now, it’s ready to go global and significantly expand the system to include live video streaming, a feature some users have used to broadcast suicides and suicide attempts.



But in Europe, Facebook won’t be able to offer the service. Strict privacy rules there require that Facebook get express consent from each of its 250 million users in the European Union. New laws coming into effect in 2018 will further restrict the use of AI, giving users the right to “insulate themselves from the effect” of Facebook’s new technology.

And the initial version, which has been in testing in the U.S. since March, is working, Facebook says. In the last month alone, its AI automatically identified more than 100 cases requiring attention, which were then flagged to first responders.

The updated system monitors for various indicators of worrying behavior, including comments like “Are you OK?” and “Can I help?” on someone’s page, allowing it to flag videos that might otherwise have gone unreported.
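Facebook has not published how these signals are combined, but as a rough illustration of the general idea, comment-based signals could feed a simple review flag. The sketch below is a minimal assumption-laden example in Python; the phrase list, threshold, and function names are illustrative, not Facebook’s actual system, which would be a trained classifier drawing on far more features.

```python
# Hypothetical sketch only: Facebook has not published its detection model.
# This assumes a simple phrase heuristic over comments as one signal.
from dataclasses import dataclass

# Illustrative phrases drawn from the article's examples.
CONCERN_PHRASES = ("are you ok", "can i help")

@dataclass
class Comment:
    author: str
    text: str

def concern_score(comments: list) -> float:
    """Return the fraction of comments containing a concerned phrase."""
    if not comments:
        return 0.0
    hits = sum(
        any(phrase in c.text.lower() for phrase in CONCERN_PHRASES)
        for c in comments
    )
    return hits / len(comments)

def should_flag_for_review(comments: list, threshold: float = 0.3) -> bool:
    """Escalate to human reviewers when enough comments look worried."""
    return concern_score(comments) >= threshold

if __name__ == "__main__":
    sample = [
        Comment("a", "Are you OK?"),
        Comment("b", "Can I help?"),
        Comment("c", "great stream"),
    ]
    print(should_flag_for_review(sample))  # True: 2 of 3 comments match
```

Any such threshold trades false alarms against missed cases, which is part of why the company routes flagged content to trained human reviewers rather than acting automatically.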

Now, Facebook is ready to roll out the system globally.

“We are starting to roll out artificial intelligence outside the U.S. to help identify when someone might be expressing thoughts of suicide, including on Facebook Live,” Guy Rosen, Facebook’s vice president of product management, said in a blog post announcing the initiative.

But not quite the entire world: “This will eventually be available worldwide, except the EU,” Rosen added.


Tightly regulated

Facebook would not confirm the exact reason the company isn’t bringing its new proactive detection tool to Europe. “This is a sensitive issue,” a spokesperson told VICE News, adding that the company had spoken to experts and other stakeholders in Europe before making the decision.

The European Commission did not immediately respond to a request for comment on Facebook’s decision.

According to one legal expert, however, Facebook’s decision is rooted in Europe’s strict privacy laws.

“In Europe, health information is much more tightly regulated and so any data processing which concludes that the individual suffers depression or other medical condition would require explicit consent,” Ashley Winton, a partner at the law firm McDermott, Will & Emery, told VICE News in an email.

While several loopholes exist in the current law and in the upcoming General Data Protection Regulation (GDPR), which comes into effect in May 2018, none of them would allow Facebook to roll out its automated tool without first getting the express consent of every single user.

Facebook regularly talks to people in Europe about the situation, but “at this time,” a spokesperson said, the company won’t be rolling out the new tools in the region.

Hugely dangerous

There’s a suicide somewhere on Earth every 40 seconds, according to World Health Organization figures. And suicide is the second-leading cause of death among people ages 15 to 29.

With the rise of live video streaming on Facebook, the company has had to come to terms with multiple cases in which users broadcast their suicides or suicide attempts live on the platform.


Despite its efforts to help prevent suicide, the company has faced backlash for failing to remove videos of suicides posted on its site. In December 2016, 12-year-old Katelyn Nicole Davis streamed her suicide live on another platform, Live.me, and the video was reposted to Facebook, where it remained easily accessible for two weeks.

Despite that history, Facebook says its new tools will augment the systems already in place that allow users to manually report what they think may be suicidal behavior, though not everyone is convinced of the benefits.

“From both my individual and professional opinion, I think it’s hugely dangerous with potentially disastrous outcomes,” Paul Scates, a longtime mental health campaigner who works with the U.K.’s National Health Service, told VICE News. He worries about who will be entrusted to “safeguard the process and ensure people secure appropriate support and guidance.”

Several suicide prevention charities, including Samaritans and Young Minds, declined to comment on the effectiveness of Facebook’s new tool, citing the sensitive nature of the topic.

Still, Facebook says its team includes a dedicated group of specialists who have specific training in suicide and self-harm. The company will also use AI to prioritize the order in which reported posts, videos, and live streams are viewed.
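Facebook hasn’t detailed how that prioritization works. As a rough sketch of the general idea, a review queue might order items by a model-assigned risk score so the most urgent reports surface first; everything below (the class, field names, and scores) is an illustrative assumption, not the company’s published system.

```python
# Hypothetical sketch of review-queue prioritization; the scoring model and
# report fields are assumptions, not Facebook's published system.
import heapq
import itertools

class ReviewQueue:
    """Reviewers pop the highest-risk report first; ties go to the oldest."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # insertion order for tie-breaking

    def add(self, content_id: str, risk_score: float) -> None:
        # heapq is a min-heap, so negate the score for highest-risk-first.
        heapq.heappush(self._heap, (-risk_score, next(self._counter), content_id))

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]

if __name__ == "__main__":
    q = ReviewQueue()
    q.add("post-123", 0.40)
    q.add("live-456", 0.92)  # a live stream scored as high risk
    q.add("post-789", 0.15)
    print(q.pop())  # live-456 is reviewed first
```

The practical point of such an ordering is speed for time-sensitive content like live streams: a high-scoring report jumps the queue rather than waiting behind routine ones.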


Cover image: Gina Alexis, center, mother of 14-year-old Nakia Venant, who’d live-streamed her suicide on Facebook the week before, pauses as she talks about her daughter during a news conference, Wednesday, Jan. 25, 2017, in Plantation, Fla. (AP/Alan Diaz)