Facebook turns the tables and starts measuring your credibility
Attack is sometimes the best form of defence, and with Facebook’s credibility being heavily questioned, the social media giant has decided to start tracking the trustworthiness of users.
August 22, 2018
Some might find the concept of being evaluated by Facebook somewhat uncomfortable, especially considering recent events which have made CEO Mark Zuckerberg and his cronies as trustworthy as a child-snatcher in a playground, but it is a necessary step to clean up the platform. In a sense, Facebook is building the foundations to crowdsource its fight against fake profiles and misinformation.
While Facebook now employs a team of reviewers to judge whether posts fall outside the platform’s rules, the battle against misinformation and hate speech starts with users flagging content they deem inappropriate. Of course, people’s standards vary, which is the main difficulty in judging what should be appropriate for the world and what shouldn’t, but the credibility score seeks to identify those who are trying to abuse the system.
According to the Washington Post, users will be scored between one and ten depending on the reliability of their feedback when flagging content as inappropriate. Details of how this is done are thin on the ground right now, intentionally so, but the aim is to find those who flag content as inappropriate when it isn’t; political opponents, for example, or perhaps those who would benefit financially from market confusion.
There are of course those who simply enjoy trolling others, and ideological warriors who don’t want to accept certain truths, or who promote lies. After introducing the flagging feature in 2015, Facebook noticed certain people abusing the system, flagging content they simply disagreed with. Disagreeing with an opinion is fine, that is the user’s choice, but that opinion should not impact the credibility of a post when the judgement is not based on hard fact. By identifying those who flag content as inappropriate when it is not, Facebook’s fact-checking team can become much more efficient.
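Facebook has not disclosed how the score is calculated, but the general idea of weighting a user’s flags by their track record can be sketched. The following is a minimal, hypothetical illustration: the names (`Flagger`, `flag_weight`) and the smoothed-ratio formula are assumptions for the sake of example, not Facebook’s actual method.

```python
# Hypothetical sketch of flag-reliability scoring. Not Facebook's
# actual, undisclosed algorithm: each user's flags are weighted by
# how often their past flags were upheld by reviewers, so serial
# mis-flaggers carry less influence over what gets reviewed.

from dataclasses import dataclass


@dataclass
class Flagger:
    upheld: int = 0      # flags that reviewers agreed with
    rejected: int = 0    # flags that reviewers dismissed

    def reliability(self) -> float:
        """Smoothed fraction of this user's flags that were upheld."""
        total = self.upheld + self.rejected
        # Laplace smoothing: a user with no history starts at 0.5
        return (self.upheld + 1) / (total + 2)


def flag_weight(flaggers: list[Flagger]) -> float:
    """Combined weight of all flags raised against one post."""
    return sum(f.reliability() for f in flaggers)


# A post flagged by two users with good track records outweighs one
# flagged by three users whose flags are routinely dismissed.
reliable = [Flagger(upheld=9, rejected=1), Flagger(upheld=8, rejected=2)]
abusive = [Flagger(upheld=0, rejected=10) for _ in range(3)]
assert flag_weight(reliable) > flag_weight(abusive)
```

The smoothing term matters in a scheme like this: a brand-new account starts at a neutral 0.5 rather than zero, so the system neither trusts nor ignores users before they have any history.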
Unfortunately for Facebook, the task is more complicated still, as some of those who promote or flag content incorrectly will not fit the standard fake-news profile. Take eco-warriors trying to save the planet by attacking the reputation of oil companies. They might promote inappropriate content, or flag something simply because the company does not sit well with their principles. While they might be doing it for what they consider good reasons, it is still misinformation and belongs in the same category as more nefarious campaigns. Fake news is fake news; there is no such thing as justification.
Such a strategy from Facebook shows just how complicated battling misinformation while maintaining credibility has become. The algorithm will aim to identify these individuals and assess the risk associated with their activities. Twitter already does this to a degree: the assessed risk of a profile factors into how widely its posts are spread across the platform. It seems the algorithm will be used to aid Facebook’s reviewers in assessing flagged content, but also to contain the risk posed by nefarious actors.
As mentioned before, how the algorithm actually works is hazy right now. While this might make people uncomfortable, not knowing how they are being judged, it is completely necessary. If Facebook publicises the rules and how it is coming to such conclusions, the same nefarious actors will find a way to beat the system, making it completely redundant.
Although the idea of human fact-checkers will make Joe and Jane Bloggs feel safer on the platform, it is completely impractical at scale. As the tsunami of misinformation continues to grow, artificial intelligence increasingly looks like the only option to keep such platforms honest and trustworthy.