

Facebook Gives Users Credibility Rating Scores

Facebook has developed a user rating system that assesses users' trustworthiness and scores them on a scale of zero to one. The move is an attempt to stop the spread of fake news and misleading information. The announcement comes shortly after Facebook made headlines for removing popular pages belonging to controversial news host Alex Jones and organizations such as Telesur.

According to Tessa Lyons, the Facebook product manager charged with ending fake news on the platform, fake news is as much a problem as users who flag stories not because they are untrue but because they dislike them.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.

Companies in Silicon Valley Are Increasingly Rating Users

Lyons made it clear that there are thousands of behavioral clues that Facebook uses to determine a user’s credibility. So the trustworthiness score is only one of many credibility rating tools which Facebook uses to determine users’ reputations. What is uncertain at this point is whether everyone on Facebook has been assigned a reputation score, and to what use Facebook intends to put the scores.

So two things are in play here: Facebook is fighting fake news, and it is also fighting users who spread fake news or flag credible content as untrue. Meanwhile, the system will identify users who habitually report others' stories as false, while also monitoring which publishers' content users value.

Since reports of Russian interference in the 2016 U.S. presidential election, many technology companies have been developing new systems to identify users who pose security risks to their platforms. Twitter has begun analyzing the behavior of other accounts within a user's network to determine whether that user's tweets should be automatically retweeted to millions of other users.

The exact criteria determining how a user is rated are unknown, and tech companies are in no hurry to reveal them, since they believe that disclosing how the systems work would corrupt the assessment process. This creates another dilemma for the companies, because it makes them appear less than transparent to their audiences.

Users’ Behavior on Social Media Will Be Used To Determine Their Credibility

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab within the Harvard Kennedy School. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”

First Draft is one of Facebook's third-party fact-checking partners. When a Facebook user flags content as problematic, the social media site forwards it to fact-checkers, who verify whether the content is actually problematic. Over time, the fact-checkers learn whether the individuals flagging and reporting fake news are themselves credible.

If a post flagged as false turns out to be true, Facebook and the fact-checkers quietly rate the user as unreliable, and that user's future false-news reports may be taken with a grain of salt. If a post reported as false is confirmed to be false, the user is rated as credible and more weight will be given to his or her future alerts, Lyons explained.
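The mechanism Lyons describes amounts to a feedback loop between user reports and fact-checker verdicts. The sketch below is a minimal, hypothetical illustration of such a loop in Python: the zero-to-one score range comes from the article, but the Reporter class, the update rule, and the step size are assumptions for illustration, not Facebook's actual algorithm.

```python
# Hypothetical sketch of the report/fact-check feedback loop described above.
# NOT Facebook's actual algorithm: the update rule, step size, and weighting
# are illustrative assumptions; only the 0-to-1 score range comes from the article.

class Reporter:
    def __init__(self, score: float = 0.5):
        # Every user starts at a neutral trustworthiness score between 0 and 1.
        self.score = score

    def update(self, report_was_correct: bool, step: float = 0.05) -> None:
        # Nudge the score up when fact-checkers confirm the user's flag,
        # down when the flagged post turns out to be accurate after all.
        self.score += step if report_was_correct else -step
        self.score = min(1.0, max(0.0, self.score))

    def report_weight(self) -> float:
        # Reports from higher-scoring users would carry more weight in the
        # queue of items forwarded to third-party fact-checkers.
        return self.score


if __name__ == "__main__":
    user = Reporter()
    # The user flags a post as false; fact-checkers confirm it is false.
    user.update(report_was_correct=True)
    # The user flags another post; fact-checkers find it is actually true.
    user.update(report_was_correct=False)
    print(f"trust score: {user.score:.2f}, report weight: {user.report_weight():.2f}")
```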

 

