What are the Responsibilities of Tech Companies in an Age of International Terrorism?


Yesterday, at George Washington University, an energetic panel of government officials, scholars and policy advocates from business and civil society discussed the role of tech companies in an age of international terrorism.  There is more to this thorny issue, but the panel began with a good outline of the issues at stake.

The panel met at a sad time, as the world mourns the loss of life from the attacks in Brussels.  Coming after attacks in Paris, San Bernardino and Istanbul, we are clearly at a critical juncture in the struggle against violent extremism.  And that made the panel’s topic tragically timely and relevant.

So what are the responsibilities of tech companies in an age of international terrorism?  I’d say that they have three:

  • They have take-down responsibilities.
  • They have countervailing responsibilities to foster free speech and association.
  • And they have an affirmative responsibility to take steps to counter violent extremism.

These responsibilities are not legal requirements, at least under U.S. law.

One of the three pillars of the U.S. Internet regime is Section 230, which makes it clear that Internet platforms are not the publishers of the third-party material that appears on their systems.  This legal principle has allowed Internet companies to grow, providing substantial free speech and associational benefits to the public and enhancing the discussion of controversial issues of public importance.

But the lack of legal liability does not mean the lack of social responsibility.  After all, Section 230, which recently celebrated its 20th anniversary, was named the Good Samaritan provision because it was designed to allow Internet actors to take action against problematic material on their services without thereby becoming liable for allowing it to appear there in the first place.

The last decade has seen the rise and public acceptance of the socially responsible Internet intermediary.  Search engines, online payment systems, social networks, online marketplaces, web hosting companies and ISPs all have policies and procedures in place reasonably designed to stop the use of their systems for harmful conduct or speech.  Each company has the flexibility to craft the policy that works best for its users and for the public, and to vary it depending on the type of speech or conduct.  It takes time and judgment to apply these policies to particular cases.

We have seen these internal policies at work in the case of ISIS. Facebook has zero tolerance of ISIS accounts and support, saying “We don’t allow praise or support of terror groups or terror acts, anything that’s done by these groups and their members.”

Twitter has a similar internal rule stating that “The use of Twitter by violent extremist groups to threaten horrific acts of depravity and violence is of grave concern and against our policies, period.”

Moreover, the policies are effective in limiting the reach of terrorist groups. As the Brookings Institution reported earlier this year, “Thousands of accounts have been suspended by Twitter since October 2014, measurably degrading ISIS’s ability to project its propaganda to wider audiences.”

There must be limits to these take-down responsibilities, however.  Companies also need to keep their systems open enough that they can be used successfully as platforms for free speech and association.

A third responsibility is for tech companies to partner with the government in crafting a counter-terrorism narrative.  The solution to bad speech is always more speech, and these initiatives seem to fit that description.

The State Department has run a program like this since the mid-2000s, and just recently rebranded it as the Global Engagement Center under the purview of the Under Secretary for Public Diplomacy and Public Affairs.  In the words of the Under Secretary, the goal is “to help optimize these kinds of third-party messages and help optimize these groups and do some seed funding with those groups and help them figure out how to message against this pernicious message that’s out there.”

Michael Lumpkin, the head of the new group, says that instead of trying to teach government employees all sorts of social media tricks, he wants to bring in “people from Silicon Valley and Madison Avenue who can help us work through our approach and the information battle space.”  The government is in an “information battle” with ISIS; it wants to move beyond tweeting at ISIS, and it wants the help of tech companies to do it.

There have been several meetings between White House personnel and tech leaders as well to discuss how best to work together.

The panel debated the extent to which tech companies should take down content on their own, and how willing they should be to work with government on crafting messages to counter violent extremism.

Mark MacCarthy, Senior Vice President, Public Policy at SIIA, directs SIIA’s public policy initiatives in the areas of intellectual property enforcement, information privacy, cybersecurity, cloud computing and the promotion of educational technology. Follow Mark on Twitter at @Mark_MacCarthy.