Twitter is expanding its policy on posting private information with the goal of limiting the risk of harassment, intimidation, and doxing. While the intent of the policy is good, the wording of the policy itself has led to confusion about what is actually banned, and some free speech experts worry it will make it harder to share newsworthy videos and photos on Twitter.
Specifically, Twitter says that "sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm" and it is thus banning "media of private individuals without the permission of the person(s) depicted."
Twitter wrote that this update will allow its moderators “to take action on media that is shared without any explicit abusive content, provided it’s posted without the consent of the person depicted.”
“When private information or media has been shared on Twitter, we need a first-person report or a report from an authorized representative in order to make the determination that the image or video has been shared without their permission,” Twitter wrote.
Regarding news media and journalism, Twitter wrote that there are cases where users may share pictures or videos “as part of a newsworthy event due to public interest value, and this might outweigh the safety risks to a person.”
In that case, Twitter will assess the context and “may allow” it.
“We would take into consideration whether the image is publicly available and/or is being covered by mainstream/traditional media (newspapers, TV channels, online news sites), or if a particular image and the accompanying tweet text adds value to the public discourse, is being shared in public interest, or is relevant to the community,” the blog post read.
This is what worries experts: the policy appears to be too vague, and makes Twitter the judge of what is newsworthy, who is a public figure, and what is in the public interest.
“They're trying to caveat by saying that it doesn't apply to content shared ‘in the public interest’ but that's...pretty vague,” Evan Greer, the director of digital rights group Fight for the Future, told Motherboard in an online chat. “It's going to create a very difficult job for human moderators to assess the context in each instance, and seems likely to lead to over-moderation or removal of legitimate content. If activists protest outside, say, the new CEO of Twitter's house and tweet a photo of the protest, is that covered? If a trans person films someone verbally harassing them, is that covered? Without more transparency and safeguards in place, it just seems like this is a policy that will be abused by people with power to censor legitimate online criticism.”
Greer argued that Twitter needs to clarify and list what is allowed in order to make it harder for people to abuse the new policy.
Twitter did not immediately answer a request for comment.
Jeff Jarvis, a journalism professor at the City University of New York, told Motherboard in a phone call that “we need to protect the notion of what is public” and that he does not think “there should be a presumption of privacy in public.” Generally, photographers are allowed to take photos and videos of people in public (meaning they are walking down the street, at a protest, in a park, or are otherwise not on privately owned land).
Referring to the video of the murder of George Floyd, Jarvis said he is concerned about how the new policy would apply to something like that, given that neither Floyd nor his killer, Derek Chauvin, was a public figure at the time the video was recorded.
Ultimately, Jarvis said that Twitter is putting itself in the position of judging the intent of the user who posts a video or picture, “and that is hard.”
Cathy Gellis, a lawyer who specializes in civil liberties and technology, said that ultimately Twitter has the right to decide what content is allowed, or not allowed, on its platform, but that doesn’t mean its current policy is the right one. It’s also worth noting that all of this is highly subjective, and the decisions will be made by human content moderators who may disagree on questions of newsworthiness or who counts as a public figure.
“It just doesn't put them in a very sensible position, they're going to now decide who's a private person and who's a public person,” Gellis said in a phone call. “And they're going to get this right? And they're going to get this right at scale, and not make any mistakes?”