Social Media as a Terrorist Platform? #ISIS #TwitterTerror

Entrepreneurs and social media businesses beware! If your communications platform is used to support terrorism, you could be held liable. Twitter may be learning that lesson the hard way in light of a recent lawsuit filed against the social media giant.

Lloyd Fields was killed in Amman, Jordan during an ISIS-sponsored attack on November 9, 2015, while training Jordanian police. Tamara Fields, Lloyd’s widow, filed suit in the Northern District of California on January 13, 2016, claiming that Twitter knowingly permits ISIS to recruit new terrorists, fund terrorism, and spread propaganda.

ISIS’s use of Twitter is well-documented. The Brookings Institution published a study in 2015 examining the militant Islamic terrorist group’s spread across the worldwide social media platform. The Brookings study estimated that between September and December 2014, ISIS supporters used at least 46,000 unique Twitter accounts, although not all were concurrently active. The ISIS-supporting accounts averaged about 1,000 followers each and were much more active than non-supporting accounts, some authoring as many as 150 to 200 tweets a day. The study found that most of ISIS’s successful Twitter activity came from between 500 and 2,000 accounts that tweeted in high volumes.

Tamara Fields’ suit presents a unique challenge to the rising tide of social media as a global communications platform. In an effort to promote social media innovation and “promote the continued development of the Internet and other interactive computer services,” Congress passed 47 U.S. Code § 230, entitled “Protection for Private Blocking and Screening of Offensive Material.” Section 230 is part of what is colloquially known as the Communications Decency Act. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, the statute shields social media sites such as Twitter from liability for the content of user posts. This would seem to bar suits brought against Twitter for what ISIS posts on the platform.

However, Mrs. Fields’ suit is brought under 18 U.S. Code §§ 2339A and 2339B of the Anti-Terrorism Act — “Providing Material Support to Terrorists” and “Providing Material Support or Resources to Designated Foreign Terrorist Organizations,” respectively. Under these sections, whoever knowingly or with willful blindness provides material support or resources to a terrorist organization is liable. Although these are ordinarily criminal statutes, Mrs. Fields seeks to hold Twitter liable under them via § 2333(a) of the same act, which allows U.S. nationals injured by acts of international terrorism to bring civil suits predicated on those criminal provisions.

In a unique application of the law, the suit does not attempt to hold Twitter accountable for ISIS’s speech, but rather for providing material support to terrorism. On this theory, Twitter moves from being merely a social media tool to an active participant in global terrorism simply by providing its platform.

Many who have written about Fields’ case believe that it either lacks merit or would be extremely difficult to prove. However, her claim may be stronger than it first appears. The Anti-Terrorism Act defines “material support and resources” to include (among other things) a “service.” Twitter is a communication service provided by Twitter Inc. The linchpins of this case are: 1) whether Twitter knowingly (or with willful blindness) provided this service to ISIS; and 2) whether Twitter’s conduct was a substantial factor in Lloyd Fields’ death. With regard to the latter, the Brookings Institution’s findings suggest that Twitter contributed to the spread of ISIS by providing both a propaganda platform for recruiting and a way to privately message recruits. As to the former, Twitter may not be able to claim in good faith that it does not provide a service to a specific set of people (ISIS) when it provides the platform for free to everyone in the world. Anyone can sign up for a Twitter account, and the ISIS-run accounts appear to have contributed to the spread of the terrorist group. Mrs. Fields’ arguments may have more merit than many give them credit for. It remains to be seen whether the facts will show that Twitter should be held liable under the circumstances.

Should Fields’ suit prove successful, it could have far-reaching consequences for social media and the technology industry. If hosting terrorism-related content is itself considered the same as providing material support to terrorism, the enormous burden of policing user accounts and content will shift to the hosting site. Instead of relying on users to report terrorist activity, sites like Twitter and Facebook would likely need to actively screen for such activity in order to avoid hosting the content and facing liability. This could limit user expression if certain posts must be “approved” before they are published. Further, direct-messaging features in social media would potentially have to be monitored for terrorist recruiting and communication.

In what may be an effort to stay ahead of public criticism, Twitter updated the “Twitter Rules” on December 31, 2015, to include a section addressing “hateful conduct.” That section explains Twitter’s stance that while the company believes in freedom of expression and “speaking truth to power,” such ideals mean little if certain voices are silenced by fear. Therefore, the Twitter Rules now prohibit speech that directly promotes violence against, directly threatens, or incites harm against other people on the basis of race, national origin, and religious affiliation (among other bases).

Interestingly, Twitter’s tracking abilities may give the company an unprecedented ability to police its users’ accounts and posts. Features such as hashtag trending (global, regional, and local), hashtag search, and suggested posts based on trending hashtags make identifying currently popular subjects straightforward. Although these abilities would seem to make it easy for Twitter to identify and remove harmful content, complications arise in practice. Take, for example, the “hashtag” feature itself — should Twitter automatically flag every account that posts “#ISIS” as a potential terrorist threat? Many news outlets use the same hashtag to draw attention to the news they post on social media. In fact, one prevalent use of Twitter is “hashtag activism,” where individuals who otherwise could not contribute to awareness of a subject use hashtags to bring attention to issues. Screening hashtags such as #ISIS might cut down on ISIS recruiting efforts, but it would also limit the ability of concerned global citizens to join the dialogue about ISIS, as the simple screening sketch below illustrates. Moreover, once terrorist groups figure out how a social media platform is tracking them, they will likely adopt methods to avoid such tracking. If Twitter is found liable for providing material support to terrorism through its platform, tracking such activity may prove a daunting task.
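To make the false-positive problem concrete, here is a minimal sketch in Python of the kind of naive hashtag screening described above. The account names, tweets, and the flag_tweet helper are all hypothetical illustrations, not Twitter’s actual moderation systems or data.

```python
# Minimal sketch of naive hashtag-based screening. All account names,
# tweets, and the FLAGGED_HASHTAGS set are hypothetical illustrations,
# not Twitter's actual moderation logic or real data.

FLAGGED_HASHTAGS = {"#isis"}

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains any flagged hashtag."""
    words = {word.lower().rstrip(".,!?") for word in text.split()}
    return bool(words & FLAGGED_HASHTAGS)

tweets = [
    ("propaganda_account", "Join the fight #ISIS"),                         # intended target
    ("news_outlet", "Breaking: coalition airstrikes hit #ISIS positions"),  # false positive
    ("hashtag_activist", "We must counter #ISIS recruiting online"),        # false positive
]

for account, text in tweets:
    if flag_tweet(text):
        print(f"flagged: {account} -> {text}")

# All three tweets are flagged, even though only the first is the kind of
# content the screen is meant to catch: the hashtag alone cannot tell
# propaganda apart from news coverage or activism.
```

Distinguishing these cases would require analyzing context well beyond the hashtag itself, which is exactly the kind of burden the suit could shift onto platforms.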

Although the potentially massive burden of policing user communication will likely factor into the outcome of this case, it could represent a landmark decision in social media liability and should be closely monitored by users and businesses alike.

UPDATE: On February 5, 2016, Twitter announced that it had suspended over 125,000 accounts since mid-2015 for promoting or threatening terrorism. The announcement coincides with multiple reports that the Obama administration has begun to pressure social media companies to counter terrorist activity online. It is unclear whether this announcement is in response to the lawsuit filed against Twitter discussed above, or what effect it will have upon the pending litigation. — Noah Downs