By Sarah Katz, Cybersecurity Specialist
Given the controversy surrounding foreign fake news that allegedly influenced recent presidential elections in countries such as the United States and France, concern over the ubiquity of this misinformation has skyrocketed over the past two years. In particular, social media platforms such as Facebook and Twitter have received major backlash for failing to effectively protect their users by not monitoring fake news more closely.
Global Information Sharing
Over the past two years, social media giants have placed particular emphasis on protecting users from fake news as well as graphic violence and pornography.
However, well-known platforms such as Facebook, Instagram, and Twitter serve billions of users worldwide, so any fake news mitigation practice must account for misinformation emerging in a multitude of languages. The trouble is, there are hardly enough content moderators to spot all of this unsavory material.
Unfortunately for the 2016 election era and potentially its upcoming 2020 counterpart, content moderation's prioritization of extreme violence and sexual content leaves ample opportunity for misinformation to slip through the cracks, especially when that misinformation appears within seemingly legitimate news stories, often in languages that the majority of moderators cannot read.
For Content Reviewers
After checking sources against any known blacklisted websites, a content moderator should next confirm whether they can read the language in which the material appears. When the content is in a language the moderator does not know, moderation policy should require checking article titles for buzzwords with the help of Google Translate.
Because such buzzwords are clickbait by nature, content moderators should search the text in question for these terms in a variety of languages, namely Chinese and Russian, so that a foreign-language article does not pose as much of a moderation barrier. For languages written in different character systems, reviewers should take note of how the terms appear and use their notes as a reference point when perusing the headlines of shared articles.
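The buzzword check described above can be sketched in a few lines of Python. The word lists below are illustrative placeholders only, not a vetted moderation vocabulary, and the sketch assumes any machine translation (e.g., via Google Translate) happens separately, as the article recommends:

```python
# Minimal sketch of a multilingual buzzword scan for article headlines.
# BUZZWORDS is a hypothetical, illustrative list; a real moderation team
# would maintain and regularly update its own per-language vocabulary.

BUZZWORDS = {
    "en": {"shocking", "exposed", "you won't believe"},
    "ru": {"шок", "сенсация"},    # illustrative Russian terms
    "zh": {"震惊", "揭秘"},        # illustrative Chinese terms
}

def flag_headline(headline: str) -> list[str]:
    """Return every known buzzword found in a headline, in any language."""
    lowered = headline.lower()
    return sorted(
        word
        for words in BUZZWORDS.values()
        for word in words
        if word in lowered
    )

hits = flag_headline("SHOCKING report exposed by insiders")
print(hits)  # ['exposed', 'shocking']
```

Keeping the terms grouped by language mirrors the article's note-taking advice: a reviewer who cannot read a character system can still match the stored forms against headlines mechanically.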
How to Spot Misinformation
As with determining the legitimacy of any webpage, moderators are encouraged to check the news source websites with the following precautionary checklist in mind:
1) Pop-ups: Do multiple pop-ups and ad banners appear when trying to navigate the webpage?
2) HTTP vs. HTTPS: In the address bar at the top of the webpage, does the far left read http or https? Tip: While not a guarantee of legitimacy, https websites tend to be more secure, as their traffic is encrypted.
3) URL Redirect: Does the article link seem to redirect to multiple different URLs before the actual destination page loads? Two optimal free tools for verifying the safety of URLs are VirusTotal and Urlquery.
4) Hyperlink Match: Does the link in the address bar match the name of the website you are trying to reach?
5) Typosquat: Examine the link for a typosquatted domain or URL. For example, consider facbook.com, rather than the proper facebook.com.
Links become especially dangerous when they lead to a login portal that prompts you to enter your email or social media credentials, at which point the attacker gains access to your actual account. Once again, VirusTotal and Urlquery are invaluable free resources for safely testing suspect domains and URLs before opening them on one's machine.
Finally, when presented with a foreign-language news article, whether on social media or elsewhere on the Internet, Google Translate's output should be taken with a grain of salt.
Once more, content reviewers should always cross-reference any search terms against articles from reliable news sources in their native language to ensure, as best as possible, that nothing is taken out of context. Since certain idioms and slang are easily lost in translation, one should never rely on Internet translation for more than one search term at a time.
Conclusion: Head above Water in Cyberspace
The good news is that thwarting these risks does not always require a technical pedigree, only close attention to detail and, most importantly, an understanding of what one is looking at and looking for.
Cyberspace is tricky terrain. New breeds of hackers and threats are constantly evolving, and many flock to social media platforms to spread fake news and malicious links. The aforementioned tips are just a few pointers in a vast array of ever-changing techniques needed to ensure that users and content moderators alike stay safe and properly informed.
About the Author
Sarah Katz is a UC Berkeley alumna, cybersecurity specialist and award-winning fiction author. She earned a nomination for the 2018 Women in IT Security Champion of the Year Award for being one of a select few former Facebook content moderators willing to speak on the issue of user privacy on social media. Updates on Katz’s work in security and writing can be found at www.facebook.com/authorsarahkatz on Facebook and @authorsarahkatz on Twitter.