Facebook moderators can inspect private messages of users suspected of terror links
Pressured by European governments, Facebook, Twitter and Google are trying to tackle extremist propaganda and recruitment on their social networks and sites.
It’s an endeavour for which the companies have no good, pre-existing, overarching plan: the relative newness of social media, and of its use for propaganda, means they have to devise novel ways of clamping down on the most extreme cases. They are effectively winging it – trying out new approaches and attempting to react quickly whenever the perpetrators change tactics.
They have called in former government agents and other expert advisers to help draw up those plans, but on a day-to-day basis it’s up to artificial intelligence to flag suspicious posts, and up to human moderators to get to the bottom of each flagged post or message.
Human operators have the final say on whether content needs to be removed, and on whether a threat is credible enough for law enforcement to be notified.
These moderators have special clearance to investigate accounts suspected of belonging to users with links to terrorist groups. According to The Guardian, this clearance authorizes them to rifle through flagged profiles – including private messages – to see who these individuals are talking to and about what, and to check where they have been traveling.
“The team’s highest priority is to identify ‘traveling fighters’ for Isis and Al-Qaida,” The Guardian reports. “Someone would be categorized as such if their profile has content that’s sympathetic to extremism and if they had, for example, visited Raqqa in Syria before traveling back to Europe. When a traveling fighter is identified – which according to one insider takes place at least once a day – the account is escalated to an internal Facebook team that decides whether to pass information to law enforcement.”
While, at first glance, this seems like the most logical approach, there is much about it that will trouble privacy and human rights advocates.
For one thing, Facebook’s efforts are guided, for better or for worse, by the US State Department’s list of designated terrorist groups.
For another, Facebook has still not explained how it makes sure that the human moderators – and, more generally, its entire Community Operations team – do not overstep their mandated boundaries or make mistakes that could have grave consequences for some users.