Mandated Reporting of “Terrorist Activity” to the Government is a Terrible Idea


A bill under consideration in the U.S. Senate would create an obligation for social media companies and others to report undefined “terrorist activity” to the U.S. government.  This terrible idea would bring innocent people under government surveillance for protected expression, while doing nothing to make us safer.  The Senate should drop the provision.

Legal scholar Jeffrey Rosen once urged the “deciders” of social media to act as if they were required by the First Amendment to permit all legal speech on their platforms.  Fortunately, the moderators at social media companies have declined this “free-speech imperialism,” instead establishing and maintaining complex, nuanced, and evolving policies and practices that allow them to act responsibly in the face of enormous challenges.  The Senate proposal treats social media as if they had adopted Rosen’s bad advice and had refused to adopt policies to deal with an urgent social problem.

Social media platforms regulate themselves.  Their general policies allow them to block or remove content after it has been posted and to cancel accounts of those who do not live up to their standards. As Twitter puts it: “We reserve the right at all times (but will not have an obligation) to remove or refuse to distribute any Content on the Services, to suspend or terminate users, and to reclaim usernames without liability to you.”

It is a legitimate concern that terrorist organizations like Al-Qaeda, ISIS and Al-Shabaab use social media to spread propaganda and recruit fighters. Government officials have properly inquired into what social media platforms are doing to keep their systems free of this activity and looked to them to work with law enforcement and others to help thwart terrorist plots.

It turns out that social media companies are doing their part. As I’ve said in earlier posts, they already have specific policies in place to keep their systems free of terrorist propaganda.  Facebook prohibits “dangerous organizations” that are engaged in “terrorist activity or organized criminal activity,” removes content “that expresses support for” violent or criminal behavior, and bars “supporting or praising leaders” of these organizations, or “condoning their violent activities.”  Twitter has a similar policy, saying its users “may not make threats of violence or promote violence, including threatening or promoting terrorism.”

These social media policies are not just words on paper.  They are effective in limiting the reach of terrorist groups. As the Brookings Institution reported earlier this year, “Thousands of accounts have been suspended by Twitter since October 2014, measurably degrading ISIS’s ability to project its propaganda to wider audiences.”

Platforms are also vigilant about actual terrorist plots or other conduct that would create a serious and imminent danger to public safety, and they disclose this information to law enforcement or other authorities who are in a position to lessen the danger.  Facebook prohibits the use of its system to “facilitate or organize criminal activity that causes physical harm to people, businesses or animals, or financial damage to people or businesses. We work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.”

This too has been effective. At a recent Senate hearing, FBI Director James Comey said, “Our experience is that Twitter has been very cooperative… They don’t want people using Twitter to engage in criminal activity, especially terrorism.”

So, a new requirement to report “terrorist activity” to the government is not needed to keep social media free of this noxious material or to encourage them to work with law enforcement in connection with crimes or serious danger to the public.

A mandated reporting requirement would be harmful.  It would create substantial risks to the vitality of public discussion of issues of public importance and to the rights of citizens to be free from unreasonable government-mandated surveillance.

The new reporting requirement would confuse criminal activity with speech that doesn’t belong on social media.  Social media platforms remove content or delete accounts to ensure that their users have a safe and rewarding experience.  They also report activity to law enforcement when criminal activity is likely. But the two activities are distinct and the two standards are not the same:  praising ISIS is not the kind of thing social media want on their systems, but it is not a crime.  A new reporting requirement on “terrorist activity” might well mean that everyone who posts something that runs afoul of social media content rules on terrorism will be reported to the government, putting lawful, constitutionally protected speech under a cloud of suspicion.

Valuable speech in this area could be put at risk. Suppose someone redistributes a terrorist video, noting that it is a remarkably sophisticated piece of propaganda, analyzing the video, audio and editing techniques and psychological appeals at work in the video, and ending with an appeal for a more effective response to the challenge.  This kind of speech is allowed on social networks now and is not reported to the government. The Senate proposal might discourage it by creating the fear that it will be reported to the government as terrorist activity.

Another difficulty is that the proposed reporting requirement would circumvent established legal methods for the government to obtain information that it thinks might be relevant to a crime.  It contains no requirement for the government to serve a warrant, a subpoena, or even a National Security Letter.  Instead, it requires social media companies simply to turn over information on their users to the government when they know of “terrorist activity.” At a time of continuing national and international concern about U.S. government surveillance, this new demand for warrantless surveillance is astonishingly tone deaf.

Comparisons to mandated reporting of child pornography are mistaken.  Current law requires companies to report child pornography to the National Center for Missing and Exploited Children, which then passes the information on to law enforcement. But child abuse images are not the same as “terrorist activity.”  For one thing, they are intrinsically criminal.  Ernie Allen, former President and CEO of the NCMEC, says they are “crime scene photos.” In addition, they are so objectively recognizable that the process can be automated using systems such as Microsoft’s PhotoDNA that are astonishingly accurate. According to security experts, the software “accurately identifies images 99.7 percent of the time and sets off a false alarm only once in every 2 billion images.” It is widely used by social media platforms. Nothing comparable can be done for “terrorist activity.”
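The contrast the paragraph above draws can be made concrete with a minimal sketch of hash-list matching. PhotoDNA itself uses a proprietary robust perceptual hash that survives resizing and re-encoding; this illustration substitutes an ordinary cryptographic hash purely to show the shape of the process, and the hash list and function names here are hypothetical, not part of any real system:

```python
import hashlib

# Hypothetical list of hashes of known illegal images, standing in for
# the hash lists NCMEC distributes. (A real deployment would use a
# robust perceptual hash like PhotoDNA, not MD5; MD5 is used here only
# because it is built into the standard library.)
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder entry
}

def image_hash(image_bytes: bytes) -> str:
    """Return a hex digest for the uploaded image bytes."""
    return hashlib.md5(image_bytes).hexdigest()

def should_report(image_bytes: bytes) -> bool:
    """Flag an upload only on an exact match against the known list."""
    return image_hash(image_bytes) in KNOWN_HASHES
```

The point for the policy argument is that matching against a hash list is mechanical and objective: the system makes no judgment about meaning, intent, or context, which is precisely why nothing comparable can be built for “terrorist activity.”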

Finally, the financial industry’s experience with suspicious activity reports (SARs) should warn policy makers that reporting requirements can lead to wasteful defensive filings. Financial institutions are required “to report any suspicious transaction relevant to a possible violation of law or regulation.”  Despite good intentions, the program is widely viewed as a bureaucratic monster, creating mountains of reports that no one reads, filed in an effort to avoid expensive fines for non-compliance. The program has clearly spun out of control.  Financial institutions filed 62,473 suspicious activity reports in 1996; by 2005 that number had grown to almost 1 million, and financial institutions spent $7 billion to comply. Despite efforts to deter defensive filings, financial institutions are still advised: When in Doubt, File a SAR.
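Simple arithmetic on the figures cited above shows the scale of the problem (treating “almost 1 million” as exactly 1,000,000 filings, an approximation for illustration):

```python
# SAR filing figures cited in the paragraph above.
sars_1996 = 62_473
sars_2005 = 1_000_000            # "almost 1 million" (approximation)
compliance_cost = 7_000_000_000  # $7 billion spent to comply (2005)

growth_factor = sars_2005 / sars_1996        # roughly 16x in nine years
cost_per_report = compliance_cost / sars_2005  # about $7,000 per report

print(f"Filings grew roughly {growth_factor:.0f}x between 1996 and 2005")
print(f"Compliance cost works out to about ${cost_per_report:,.0f} per report filed")
```

A sixteen-fold increase in filings, at a compliance cost on the order of thousands of dollars per report, is the “defensive filing” dynamic the paragraph describes.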

As the SARs experience makes clear, companies react defensively to legal liability.  Greg Nojeim at the Center for Democracy and Technology noted that if reporting terrorist activity becomes mandatory, the “natural tendency will be to err on the side of reporting anything that might be characterized as ‘terrorist activity’ even if it is not. And their duty to report will chill speech on the Internet that relates to terrorism.”

The current emergency with respect to ISIS should not blind us to the long-term problems with putting in place a legal requirement to report an indefinitely broad range of non-criminal comment and speech to the government as “terrorist activity.”

As legitimately concerned as we are about ISIS activity, we do not face an emergency in which a recalcitrant industry must be coerced to act in the public interest. Social media companies are not pretending terrorism is someone else’s problem. They are already reporting what needs to be reported and taking down what needs to be taken down. So the only responsible government reporting program would be no program at all. At best, such a program would merely continue the status quo, but now with an overlay of wasteful compliance activity. At worst, it would lead to intrusive over-reporting that would flood law enforcement and intelligence agencies with information they cannot use, while at the same time creating threats to protected speech and activity and validating the international impression that U.S. tech companies are simply conduits for their government’s surveillance activities.  Just say no.

Mark MacCarthy, Senior Vice President, Public Policy at SIIA, directs SIIA’s public policy initiatives in the areas of intellectual property enforcement, information privacy, cybersecurity, cloud computing and the promotion of educational technology. Follow Mark on Twitter at @Mark_MacCarthy.