Fake news, fake readers, and real manipulation: should online networks start censoring?
November 6, 2017
There is one place in the West where one can still call for religious groups to be murdered, African-Americans to be enslaved, and elections to be rigged:
Too often, the Internet, and more specifically social networks, have turned into a space where racial hatred, defamation and other crimes are met with total impunity. Several factors explain this phenomenon.
First, anonymity: on social media, users quickly become verbally aggressive because they do not have to expose their real identities.
Second, there is the general feeling that anything can be said on the Internet – who will check the veracity of what is posted, and where it comes from, anyway? No one, because as Professor Alan Dershowitz of Harvard University brilliantly puts it:
On the Internet, all truths are born equal. There is no editor-in-chief on the Web.
Third, even if someone were to check and report controversial, fake or racist content, what would be the outcome? Legally, it is very hard to sue over content published on social media, as legal rules on incitement or defamation vary from state to state while social networks are global by nature.
The list of issues related to the lack of content supervision on social media is impressively long: online threats, shaming, defamation, hate speech, incitement, fake news, fake readers and followers (bots), and, ultimately, election manipulation. We are talking about one of the major challenges of the century. Social media have emerged in our lives relatively recently, and we are now standing at that precise moment in history when governments and regulators have to come up with checks and balances if they do not want social media to become lawless zones.
History offers a parallel: when the printing press emerged in Europe in the 15th century, one of the first texts printed and published was a blood-libel pamphlet calling for Jews to be punished (read: killed) for their so-called crime. It took centuries for the press to develop fact-checking practices and professional standards. Now is the time to apply, adapt, and create rules that guarantee information remains reliable on the Internet as well.
With the American election scandal over fake Facebook accounts paid for by Russia, it is now clear that although online manipulation may sound like an online game, it is not. It has very real consequences. Here is an additional example: in late 2015, responding to rising social media incitement against immigrants, German prosecutors opened an investigation into the possible criminal liability of senior Facebook executives. Following this move, an agreement was reached between the German government, Facebook, Google and Twitter to have content that violated German law removed within 24 hours. Facebook has since gone further and announced a project to tackle online hate in Europe. Yet this move did not prevent Facebook from letting advertisers target users interested in anti-Semitic topics, as an investigative study showed just a month ago.
The problem with social networks is that their revenue mainly comes from ads, and the unwritten rule is that whoever pays for ads gets the exposure they are looking for. And while Facebook has repeatedly committed to taking down offensive content within 24 hours, this simply has not happened. There are many reasons for this failure. The first challenge is the lack of a clear definition of what "offensive content" looks like. Too often, Facebook has argued that "While it may be vulgar and offensive, distasteful content on its own does not violate our policies", policies which protect free speech. The second challenge is a practical loophole: the time needed to process hate-speech reports and investigate them far exceeds the stated "24 hours" reaction window.
However, there are grounds to think that social networks will soon come up with solutions to take down illegal content on their “digital territories”:
It is in their own interest to restore users' trust and safety
Lawmakers in Europe, and now in the United States, indicate that they will pursue social networks' corporate responsibility if necessary
Facebook has already taken down content when its reputation was truly at stake: in 2013, the withdrawal of advertising from Facebook by 15 companies, including Nissan UK, pushed Facebook to update its policies on gender-based violence and take down a group calling for violence against women in the UK. If it was done once, it can be done again. However, Facebook (and the other social network giants) apparently do need a reminder that advertisers will withdraw if offensive content is not removed.