Orlando Shows the Limits of Facebook’s Terror Policing
Both leading up to and during his deadly assault on the Orlando nightclub Pulse, Omar Mateen accessed his Facebook account. He posted threatening status updates and searched for keywords related to the attack, according to the head of a Senate committee briefed by law enforcement. Now that senator wants to know whether Mateen’s social media activity could have been used to prevent the attack.
Facebook declined to comment on the Orlando incident, unsurprising given that the investigation is ongoing. But the company has been clear in the past about what it does and does not do to assist law enforcement in potential terrorism cases. Its systems are among the best, analysts say, at flagging and removing content that’s potentially terrorism-related. But the balance between vigilance and censorship isn’t easily struck. And even a more aggressive posture, at least in this case, likely wouldn’t have helped.
In his letter Thursday to Mark Zuckerberg, Senator Ron Johnson (R-WI) detailed Mateen’s Facebook activity the night of June 12, when he would go on to take 49 lives with a Sig Sauer MCX rifle and a Glock 17 handgun. The content of his posts and the trajectory of his searches are both alarming and upsetting.
“The real muslims will never accept the filthy ways of the west,” Mateen wrote. “You kill innocent women and children by doing us airstrikes..now taste the Islamic state vengeance.” He finished his string of posts with a particularly chilling missive: “In the next few days you will see attacks from the Islamic state in the usa.”
Johnson, who heads the Senate Committee on Homeland Security and Governmental Affairs, further reveals that Mateen conducted searches on the site for a speech by ISIS leader Abu Bakr al-Baghdadi in the weeks prior to the attack, and had previously “used Facebook to conduct frequent law enforcement and FBI searches, including searching for specific law enforcement offices.”
It’s quite a revelation, one that has prompted Johnson to ask Facebook to provide all of the data the company has about Mateen’s online activities, as well as “the information available to Facebook prior to and during this terrorist attack.”
The implication is clear: Perhaps Facebook could have done something to stop this. Like most things, it’s not quite that simple.
A Global Neighborhood Watch
Johnson’s letter raises several questions about what Facebook could and could not have done, but it’s important to understand what the company actually does on a daily basis to quash terrorism on its site. As it turns out, a lot. In fact, says Steven Stalinsky, executive director of the Middle East Media Research Institute, Facebook is the best at this.
“Facebook is the leader in dealing with this issue of cyber-jihad,” says Stalinsky. “They’ve been on top of this, probably for a year and a half, more than any of the large social media companies.”
Part of what sets Facebook apart is the way it identifies potential terrorist activity and removes it. As with other contentious content types, it relies on its 1.6 billion users to report posts, which are then vetted by a team that determines what further action to take. It’s “see something, say something” applied across the world’s largest digital commune.
When it comes to terrorism, Facebook has a strict zero-tolerance policy. Terrorist organizations or offshoots that try to form Facebook pages are shut down, as are individuals who are reported as having praised acts of terror. Any reported post that implies imminent harm—terrorism or other violence, including self-harm—moves to the front of the queue.
All of which might still sound like a slow-moving process, but in practice Facebook manages to act with surprising speed. Stalinsky, whose group monitors terrorist activity on social media, says that MEMRI will often notice a page for an official arm of ISIS or al-Qaeda appear on Facebook, only to have it disappear before researchers can circle back to include it in a report.
That’s especially impressive, says Stalinsky, because Facebook has been under siege of late, a byproduct of other social networks finally taking terrorist activity more seriously.
“Recently when Twitter started shutting down accounts, a lot of these people have tried to go back to Facebook,” says Stalinsky. “There’s kind of a migration going on.”
Deleting an account doesn’t make a cyber-jihadist go away, any more than covering up an ant hill destroys the colony. They just find somewhere else to dig. Right now, jihadists are digging into Facebook. So Facebook’s anti-terrorism machine is faster and more effective than its peers’. Why, then, was it neither fast nor effective enough in Orlando?
Case by Case
What’s not clear in Senator Johnson’s letter is the timing of Mateen’s Facebook posts. However, the New York Daily News reports that Mateen posted within minutes of his attack. If that’s the case, then it’s difficult to argue that Facebook was in a position to prevent anything, regardless of how quickly the cogs of its review system turn. This is the same reason Facebook is unable to prevent atrocities like real-time murder on Facebook Live.
Regardless of how much lead time there may have been in Orlando, though, Facebook could make its system faster with an approach that’s more brute-force, more automated, or both. Automatically flagging the word “ISIS,” for instance, or other high-correlation terms, would certainly cast a wider net than relying on user reports alone.
A system like that presents several obvious problems of its own. Automatically flagging “ISIS” would require Facebook to review posts from the many, many people decrying the group’s activities on the site. It would also make the review queue that much longer, and make it that much harder to pick out the posts that pose an actual threat. Facebook could avoid that slowdown by automatically suspending anyone who says anything terrorism-related, but that would lock a large number of people out of their accounts for no reason other than protesting the very threat Facebook wants to shut down.
And that’s before you even get into civil liberties. Privacy advocates at the Electronic Frontier Foundation have derided Facebook’s speech policing even as it is.
“Companies should resist government pressure because they are incredibly bad at policing their users’ content,” wrote the EFF’s Amul Kalia and Aaron Mackey earlier this year. “Rather than enacting rational policies, they tend to over-censor.”
Facebook, in particular, regularly draws the ire of privacy and security advocates for how it treats its users. Expanding its terrorism policing would rightly raise concerns about collateral damage.
“If they did more, probably everyone would criticize them for saying they’re going overboard,” says Stalinsky. “It’s a fine line between enforcing free speech and dealing with terrorists using their platform.”
For now, Facebook hasn’t figured out the perfect system. Even a perfect one, though, might not have stopped Omar Mateen. Will Facebook ever be able to stop every terrorist on its network?
“That’s probably close to impossible,” Stalinsky says.