Facebook Is Rating Users' Trustworthiness, But It Won't Say How

In an effort to fight fake accounts and misinformation, Facebook is implementing a scale that ranks users’ trustworthiness from zero to one.

Ranking systems are a cornerstone of the social internet. They help us decide where to eat on Yelp, what to watch on Netflix, and which vendors to buy from on Amazon. Now, it looks like they’ll help us decide who to trust on social media, too.

According to the Washington Post, Facebook is implementing a new system that ranks users’ trustworthiness on a scale from zero to one. The company said the ranking system is part of an effort to fight misinformation spread by fake accounts.


Until now, the development of the ranking system has been kept a secret to prevent bad actors from figuring out how to game the system, Facebook product manager Tessa Lyons told the Post. According to Lyons, the ranking system is only “one measurement among thousands of new behavioral clues” that Facebook uses to assess the legitimacy of users and their content.

Read More: How to Permanently Delete Your Facebook Account

Previously, Facebook relied on users to report misinformation and content that violates the site’s terms of service, which it would then forward to fact checkers outside the company. Earlier this year, it rolled out a program that let users rank the credibility of news items in their feed. The problem was that this system was abused by users who reported legitimate items as untrue, which required an automated filter to assess whether reported posts were in fact likely to be false.

“If people only reported things that were [actually] false, this job would be so easy,” Lyons told the Post. “People often report things that they just disagree with.”

Now the company appears to be using more sophisticated analyses to identify harmful content, but the criteria it uses to establish these metrics are largely kept under wraps.

“One of the signals we use is how people interact with articles,” Lyons told the Post. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact checker, then we might weight that person’s future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true.”
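In rough terms, the mechanism Lyons describes resembles reputation weighting: a user whose past "false news" reports were confirmed by fact checkers counts for more than one who flags everything. The sketch below is purely illustrative; the function names, the neutral default, and the simple accuracy ratio are assumptions, not Facebook's actual formula.

```python
# Illustrative sketch of reporter-credibility weighting, NOT Facebook's code.
# A user's weight is their historical report accuracy; an article's
# "likely false" score averages the weights of users who flagged it.

def reporter_weight(confirmed_reports: int, total_reports: int) -> float:
    """Weight a user's future reports by how often past ones were confirmed."""
    if total_reports == 0:
        return 0.5  # assumption: no history gets a neutral weight
    return confirmed_reports / total_reports

def article_false_score(reports: list[tuple[int, int]]) -> float:
    """Aggregate weighted flags into a single 0-to-1 'likely false' score."""
    if not reports:
        return 0.0
    weights = [reporter_weight(c, t) for c, t in reports]
    return sum(weights) / len(reports)

# One accurate reporter (9 of 10 confirmed) versus three indiscriminate ones:
reports = [(9, 10), (1, 50), (0, 40), (2, 60)]
print(round(article_false_score(reports), 3))  # prints 0.238
```

Under this toy model, a flood of flags from indiscriminate reporters barely moves an article's score, which matches Lyons' stated goal of discounting people who "report things that they just disagree with."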

According to the Post, Lyons declined to offer more examples of the signals used to determine the trustworthiness of Facebook users, lest this information be used to game the system.

At a time when major Silicon Valley companies like Google and Twitter are struggling to deal with misinformation on their sites, Facebook’s ranking system is at least an attempt at a solution, even if it’s an opaque one. On the other hand, the creepiness of an algorithm judging your trustworthiness might just be enough to cause some users to leave Facebook once and for all.