Devil is in the Details: Interpreting Counterterrorism Legislation to Avoid an Unconstitutional Result

Patrick Griffo
Volume 29, Issue 3

Introduction

While delivering remarks to the Washington Institute for Near East Policy, then-Acting Director of the National Counterterrorism Center Russ Travers described the 2009 attempted bombing of Northwest Airlines Flight 253 to illustrate the challenge of identifying true threats.1Russell E. Travers, Acting Dir., Nat’l Counterterrorism Ctr., Counterterrorism in an Era of Competing Priorities, Remarks before the Washington Institute for Near East Policy (Nov. 8, 2019), https://perma.cc/MY3J-V59T. A diplomatic cable reported that a father in Abuja, Nigeria was concerned that his son might be affiliated with Yemeni extremists who were planning a terrorist attack, but the cable went unnoticed.2Id. At the time, counterterrorism analysts did not “connect the dots” between the son’s statements and any imminent danger.3Id. A month later, the son attempted to blow up a flight over Detroit, Michigan.4Id. Reflecting on this experience and others, Travers commented, “Terrorism information is unlike anything I’ve ever seen. It’s always ambiguous. It’s invariably incomplete. It’s often completely bogus.”5Greg Myre, With ISIS and Al-Qaida Weakened, U.S. Faces an Evolving Anti-Terror Mission, NPR (Mar. 27, 2019, 7:20 AM), https://perma.cc/59S9-KPB8. His comments emphasized the challenges not only of preempting acts of terrorism but also of defining terrorist speech.6See id. Separating truly threatening speech from bluster is anything but straightforward.7See id.

The 2009 attempted bombing in Detroit was an act of international terrorism, but prominent attacks by domestic extremists have shifted the national security community’s focus to domestic terrorism.8See id.; see also Meryl Kornfield, What Happened at the Capitol ‘Was Domestic Terrorism,’ Lawmakers and Experts Say, Wash. Post (Jan. 7, 2021, 3:21 PM), https://perma.cc/H7YF-ZXNZ (describing initial reactions to the storming of the U.S. Capitol by political protestors on January 6, 2021, and the debate over whether to label the attackers as domestic terrorists). Domestic terrorism’s overlap with domestic politics presents a unique challenge for defining extremism.9See Seth G. Jones, Catrina Doxsee, Nicholas Harrington, Grace Hwang & James Suber, Ctr. for Strategic & Int’l Stud., The War Comes Home: The Evolution of Domestic Terrorism in the United States 1 (2020), https://perma.cc/9TL8-AGTA. Domestic violent extremists (“DVEs”)10A DVE is an individual within the United States who operates without direction from a foreign terrorist group or foreign power and who seeks to further political or social goals through violence. Strong rhetoric or advocacy for violent tactics alone does not constitute extremism. U.S. Dep’t of Homeland Sec., Homeland Threat Assessment 17 n.6 (2020). mobilize members based on political and societal issues, ranging from immigration to the environment.11Id. at 18; see also Myre, supra note 5. These groups do not exist in a political vacuum; extremist organizations use political messaging on social media platforms as vehicles for spreading information and recruiting new members.12Jones et al., supra note 9, at 7–8. Online political commentary reinforces DVE ideology, and groups amplify political messaging that aligns with their mission.13See U.S. Dep’t of Homeland Sec., supra note 10, at 19. The point at which otherwise harmless political speech becomes extremist speech is not always obvious.14See id.

Congressional efforts to address the threat of online domestic terrorist content have encountered First Amendment issues.15See, e.g., Emily Birnbaum, Freshman Dem Finds Voice in Fight Against Online Extremism, The Hill (Mar. 13, 2020, 6:00 AM), https://perma.cc/A2MA-R9SK (“[Drafting a counterterrorism statute] is further complicated by the fact that there is no one global definition of ‘terrorism,’ let alone ‘extremism’ or ‘hate speech,’ terms that are treated differently by every social media platform and country.”). Congress and legal academics have proposed legislation that would require private companies to either report or regulate “terrorist activity” appearing in social media and online forums.16Requiring Reporting of Online Terrorist Activity Act, S. 2372, 114th Cong. (2015) (lacking any definition of “terrorist activities” found in online content); see also Alexander Tsesis, Terrorist Speech on Social Media, 70 Vand. L. Rev. 651, 688 (2017) (describing Senators Richard Burr and Dianne Feinstein’s proposed legislation requiring social media companies to turn over public information related to terrorist activity). Some draft legislation has used broad definitions of international and domestic terrorism,17See, e.g., See Something, Say Something Online Act of 2020, S. 4758, 116th Cong. (2020) (relying on the definitions of international and domestic terrorism in 18 U.S.C. § 2331). while other proposals have omitted definitions of terrorism altogether.18See, e.g., S. 2372. In either case, a vague definition of terrorism invites First Amendment challenges to proposed counterterrorism legislation.19See S. 4758; S. 2372; Birnbaum, supra note 15.

Regulatory and reporting proposals that vaguely define online “terrorist activity” content ignore the First Amendment presumption against content-based speech regulation and risk being void for vagueness.20See Tsesis, supra note 16, at 688, 690 (discussing First Amendment limits to counterterrorism legislation). Courts presume that any content-based regulation of speech is invalid; to rebut this presumption, the government must prove that the speech incites imminent harm, poses a true threat, or materially supports a terrorist organization.21Holder v. Humanitarian L. Project, 561 U.S. 1, 26 (2010); Virginia v. Black, 538 U.S. 343, 359–60 (2003); Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam); Watts v. United States, 394 U.S. 705, 708 (1969) (per curiam). Should existing counterterrorism proposals become law, courts would likely hear First Amendment challenges to these laws, requiring them to interpret the statutes’ vague definitions of terrorism and weigh those definitions against First Amendment jurisprudence.22When faced with statutes containing vague definitions, the Supreme Court has adopted a limiting construction. The Court has noted that outright invalidation of a statute for vagueness should not be “casually employed.” United States v. Williams, 553 U.S. 285, 293, 304 (2008). Courts could avoid ruling that a counterterrorism statute is facially invalid by instead adopting one of two alternative definitions of terrorist activity.23Defining terrorism is a legislative issue, but courts, when faced with declaring whether a counterterrorism statute is facially unconstitutional, can sidestep this “strong medicine” and apply a limiting construction that avoids a ruling on unconstitutionality. Williams, 553 U.S. at 293 (“Invalidation . . . is ‘strong medicine’ . . . .” (quoting L.A. Police Dep’t v. United Reporting Publ’g Corp., 528 U.S. 32, 39 (1999))); see also Eric S. Fish, Constitutional Avoidance as Interpretation and as Remedy, 114 Mich. L. Rev. 1275, 1282 (2016). When reviewing a regulatory statute with a vague definition of terrorist activity, courts could adopt the “active indoctrination” interpretation; when reviewing a reporting statute, courts could adopt the “passive indoctrination” interpretation.24See Gabriel Weimann, Terrorism in Cyberspace, Fathom (2015), https://perma.cc/CAH5-V9FX (describing terrorists’ online tools for marketing and communicating messages, which fuel recruitment directly and indirectly); Tsesis, supra note 16, at 654–55 (describing terrorist methods for inciting violence and planning attacks via the internet); see also discussion infra Part II. Active indoctrination refers to a principal forming a conspiratorial agreement with another party, which involves speech that the First Amendment does not protect.25See discussion infra Part II. Passive indoctrination refers to a principal who generally inspires but does not directly recruit another party.26See discussion infra Part II. By relying on the longstanding legal principle that conspiratorial speech does not enjoy First Amendment protections, these alternative definitions avoid the Supreme Court’s strong presumptions against content-based speech regulation and against vague restrictions on speech.27See discussion infra Part II.

Using “active indoctrination” to define terrorist activity, the government can regulate speech directly because conspiratorial speech—which includes imminent threats and true threats of harm—does not enjoy First Amendment protections.28See discussion infra Part II. Using the “passive indoctrination” definition, the government can require companies, once they become aware of terrorist-generated online content, to report that public content to law enforcement.29See discussion infra Part II.

Part I of this Comment provides background information on the rise of online terrorist activity, jurisprudence on extremist speech, and recent legislative proposals to curb the spread of online terrorist information.30See infra Part I. Part II proposes active indoctrination and passive indoctrination as alternative legal constructions that courts could adopt when addressing counterterrorism statutes’ constitutionally suspect definitions of terrorism.31See infra Part II. To support this proposition, this Part explains how active indoctrination and passive indoctrination target only threatening and conspiratorial speech, without implicating broader forms of protected speech.32See infra Part II. This Part concludes by responding to legal counterarguments for more sweeping, constitutionally suspect government regulation of online content.33See infra Part II.

I.     Background

Section A provides an overview of online terrorist activity, including trends in domestic terrorist violence and technology companies’ attempts to curb terrorist-generated online content.34See infra Section I.A. Section B includes a brief background on criminal definitions of terrorism and these definitions’ inherent subjectivity.35See infra Section I.B. Section C explains the Supreme Court’s strong presumption against content-based speech regulation and outlines speech that the First Amendment does not protect, including imminently harmful speech, truly threatening speech, speech materially supporting terrorism, and conspiratorial speech.36See infra Section I.C. Section D explains the circumstances under which a court may find a statute void for vagueness.37See infra Section I.D. Section E explains the Supreme Court’s use of the constitutional avoidance canon to interpret alternative constructions of statutes.38See infra Section I.E. Section F concludes this Part by describing recent legislative proposals to address online terrorist speech.39See infra Section I.F. This Comment focuses solely on speech directly conspiring to commit or advocate for acts of terrorism.40Some legal scholars have described online terrorist propaganda as a subset of misinformation, also known as “fake news.” T. Nobel Foster & David W. Arnesen, Legal Strategies for Combating Online Terrorist Propaganda, 21 Atlantic L.J. 45, 46–47 (2019). However, in the context of the First Amendment, this Comment treats terrorism as a call to commit violence, not a call to spread disinformation. This approach aligns with the Supreme Court’s analysis of speech advocating violence. See generally Virginia v. Black, 538 U.S. 343, 347–48, 359–60 (2003) (outlining true threat doctrine); Brandenburg v. Ohio, 395 U.S. 444, 447–48 (1969) (per curiam) (outlining incitement doctrine).

A.     Terrorism and Online Terrorist Activity

Though the nation’s counterterrorism efforts in the early 2000s focused largely on international terrorist organizations, the U.S. government has more recently described domestic terrorist organizations as the most serious threat facing the homeland.41U.S. Dep’t of Homeland Sec., supra note 10, at 18–19; see also Myre, supra note 5. Over the last two decades, websites and social media have become terrorist organizers’ primary tools for planning and inciting acts of violence.42See, e.g., Weimann, supra note 24 (describing lone wolf actors becoming radicalized through online messaging campaigns). Technology companies have undertaken their own efforts to remove terrorist content, but DVEs’ relationship with domestic political activity complicates efforts to separate non-violent political speech from extremist content.43See Jones et al., supra note 9, at 3, 7–8; Evelyn Douek, Verified Accountability: Self-Regulation of Content Moderation as an Answer to the Special Problems of Speech Regulation, Lawfare (Sept. 18, 2019, 1:47 PM), https://perma.cc/Z3NV-CF89.

DVE operations are distinct from those of international terrorist groups.44U.S. Dep’t of Homeland Sec., supra note 10, at 17 n.6. The attacks across the United States on September 11, 2001, involved a relatively large group of organizers affiliated with and financed by an international terrorist organization.45Nat’l Comm’n on Terrorist Attacks upon the U.S., The 9/11 Commission Report 172 (2004), https://perma.cc/MMA3-VLMZ; see also Weimann, supra note 24 (describing the increase in online content radicalizing lone wolf actors). Conversely, DVEs operate on a smaller scale.46See Weimann, supra note 24. Counterterrorism researchers have described “narrowcasting,” a domestic terrorist recruitment method that uses targeted propaganda to reach sub-populations based on age, location, or ideology.47Id. By using social media platforms like Facebook and Twitter, domestic terrorists can tailor messages to vulnerable demographics, such as financially or racially marginalized groups.48See id.

The Federal Bureau of Investigation (“FBI”) has acknowledged the increasing threat posed by DVEs.49Worldwide Threats to the Homeland: Hearing Before the H. Comm. on Homeland Sec., 116th Cong. 17 (2020) (statement of Christopher Wray, Director, Federal Bureau of Investigation). FBI Director Christopher Wray stated before a congressional committee that as of 2020, DVEs were responsible for a larger number of deaths than extremists linked to foreign terrorist organizations.50Id. The year 2019 was the deadliest for DVE attacks since the Oklahoma City bombing in 1995.51Id. Director Wray noted that DVEs are motivated by a variety of political factors: far-right politics, far-left politics, and reactions to perceived government abuse are all drivers.52See id. at 17–18. Both the FBI and the Department of Homeland Security affirmed that racially motivated violent extremists have been the most lethal threat of all domestic extremists since 2001.53Id. at 17; U.S. Dep’t of Homeland Sec., supra note 10, at 18.

Social media and online forums are a key resource for DVE recruitment, especially as the COVID-19 pandemic compounds the conditions that drive radicalization.54Inst. for Econ. & Peace, Global Terrorism Index 2020: Measuring the Impact of Terrorism 88 (2020) [hereinafter Global Terrorism Index 2020]. Data analysis by the National Consortium for the Study of Terrorism and Responses to Terrorism (“START”) suggests that extensive social media use may accelerate the radicalization of future DVE recruits.55Nat’l Consort. for the Study of Terrorism and Responses to Terrorism, The Use of Social Media by United States Extremists 3 (2020) [hereinafter START, Use of Social Media]. Roughly seventy-three percent of extremists in START’s database passively consumed terrorist-generated propaganda, participated in extremist dialogue, or communicated with other extremists.56Id. Yet extremists rarely use social media to carry out attacks; in fact, only nine percent of domestic attackers in START’s database used social media to facilitate attacks.57Id. at 6. Online subcultures linked to spreading DVE propaganda have flourished during the COVID-19 pandemic.58Global Terrorism Index 2020, supra note 54, at 88; see also START, Use of Social Media, supra note 55, at 3. Social isolation, work disruptions, and unexpected layoffs have created an environment in which vulnerable individuals seek solace in DVE-driven propaganda and conspiracies.59U.S. Dep’t of Homeland Sec., supra note 10, at 17.

Technology companies are the main gatekeepers for moderating online speech, including terrorist-generated content.60Douek, supra note 43. In the wake of several high-profile attacks, including an incident where a man posted an anti-immigrant manifesto on Facebook before killing twenty-three people in a Texas shopping center,61In August 2019, a gunman killed twenty-three people and injured several others at a Walmart in El Paso, Texas. Angela Kocherga, Two Years After Walmart Mass Shooting, El Paso Leaders See Inaction and Betrayal by Texas Officials, Tex. Trib. (Aug. 3, 2021, 6:00 AM), http://perma.cc/UZ3T-TWAP. Before initiating the shooting, the man posted an anti-immigrant manifesto on Facebook calling for the elimination of Hispanic people in the United States. Tim Arango, Nicholas Bogel-Burroughs & Katie Benner, Minutes Before El Paso Killing, Hate-Filled Manifesto Appears Online, N.Y. Times (Aug. 3, 2019), https://perma.cc/WR4J-CGLG. After the attack, media reports suggested that the attacker likely drew inspiration from an earlier attack by a white supremacist in Christchurch, New Zealand. Id. counterterrorism experts criticized social media companies for not doing enough to respond to online extremism.62Davey Alba, Catie Edmondson & Mike Isaac, Facebook Expands Definition of Terrorist Organizations to Limit Extremism, N.Y. Times (Sept. 18, 2019), https://perma.cc/C9ML-34RB. In response, Facebook expanded its definition of “terrorist organizations” and strengthened its control over hate and extremist speech.63Id. Facebook also expanded its self-regulation initiative, which involved hiring moderators to detect and remove speech that violates its terms and conditions.64See Douek, supra note 43. Terrorism analysts welcomed the move but noted that without law enforcement coordination, Facebook’s efforts will have a limited impact on curbing DVE violence.65Alba et al., supra note 62.

Another challenge in addressing domestic terrorist activity is its relationship with domestic politics.66See Jones et al., supra note 9, at 2. The line separating extreme speech from political speech is not well defined.67See id. DVEs have aligned their messaging with far-right and far-left movements, and extremists have exploited protests and political rallies to accelerate their messaging campaigns.68Id. at 4. Counterterrorism analysts have acknowledged that highly publicized political events like elections provoke large spikes in online DVE recruitment.69See id. at 7. Perhaps the most troubling illustration of DVEs exploiting domestic politics occurred during the 2021 storming of the U.S. Capitol while Congress certified the electoral college vote.70Laurel Wamsley, On Far-Right Websites, Plans to Storm the Capitol Were Made in Plain Sight, NPR (Jan. 7, 2021, 10:29 PM), https://perma.cc/N9S2-DPVU. Before the attack, media outlets reported that DVEs were using social media to encourage violent attacks against members of Congress during the vote certification.71Id. Political commentators dismissed these statements as mere political bluster, yet the attack on the Capitol proved otherwise.72Id. Genuine threats to commit violence against the U.S. government were intermingled with online political debate leading up to the riot.73Id.

B.     Defining Terrorism

DVEs do not exist in a political vacuum, which complicates the process of defining terrorist content.74See Jones et al., supra note 9, at 7. “Terrorist activity” is an inherently vague term.75See, e.g., Requiring Reporting of Online Terrorist Activity Act, S. 2372, 114th Cong. § 2(a) (2015); see also Tsesis, supra note 16, at 653 (summarizing that terrorists communicate both political ideology and incitement to violence). Legal definitions regularly describe the commission of an act of “terrorism,” but these definitions categorize conduct and spoken threats under the same “terrorism” label.76See, e.g., Terrorism, Black’s Law Dictionary (11th ed. 2019). Further, terrorism’s definition varies between legal and social contexts.77See Louise Richardson, What Terrorists Want: Understanding the Enemy, Containing the Threat 3 (2007).

Legal dictionaries define terrorism in a way that collapses both conduct and speech under the same definition.78See Terrorism, Black’s Law Dictionary (11th ed. 2019). Black’s Law Dictionary provides a scholarly definition of terrorism: “the use or threat of violence to intimidate or cause panic, [especially] as a means of achieving a political end.”79Id. It further distinguishes domestic and international terrorism as geographic subsets of terrorism.80Id. Black’s includes a notation explaining that a “terroristic threat” is a “threat to commit any crime of violence with the purpose of (1) terrorizing another, (2) causing the evacuation of a building, place of assembly, or facility of public transportation, (3) causing serious public inconvenience, or (4) recklessly disregarding the risk of causing such terror or inconvenience.”81Threat, Black’s Law Dictionary (11th ed. 2019); see also Model Penal Code § 211.3 (Am. L. Inst. 1985).

The U.S. Code uses similar language for its criminal definitions of terrorism, but no federal law expressly prohibits “domestic terrorism.”82See 18 U.S.C. §§ 2331(1), (5); Peter G. Berris, Michael A. Foster & Jonathan M. Gaffney, Cong. Rsch. Serv., R46829, Domestic Terrorism: Overview of Federal Criminal Law and Constitutional Issues 1 (2021) (“Although defined in federal law, there is no federal criminal provision expressly prohibiting ‘domestic terrorism,’ as the terms defining domestic terrorism are not elements of criminal offenses.”); Nicholas J. Perry, The Numerous Federal Legal Definitions of Terrorism: The Problem of Too Many Grails, 30 J. Legis. 249, 257 (2004) (“Despite being located within the federal crime code, the § 2331 definitions of terrorism are not elements of criminal offenses.”). The Code’s primary criminal definition divides terrorism into its domestic and international forms.8318 U.S.C. §§ 2331(1), (5). The definition of domestic terrorism emphasizes “acts dangerous to human life” coupled with a knowledge requirement: acts must “appear to be intended” to support some coercive end, such as intimidating or coercing a civilian population, influencing a policy, or affecting government conduct through destructive means.84Id. § 2331(5). The definition also requires that the acts “occur primarily within the territorial jurisdiction of the United States.”85Id. § 2331(5)(c). Despite falling under the criminal title of the Code, the “domestic terrorism” definition does not itself create a criminal offense.86Perry, supra note 82, at 257. Conduct under this definition may nonetheless be a crime under other substantive criminal laws prohibiting violence, or be relevant to criminal sentencing.87Berris et al., supra note 82, at 1–2. Generally, prosecutors use broader offenses (e.g., killing, maiming, or committing assault) or more specific offenses (e.g., material support to terrorists or illegal firearms transactions) to prosecute alleged terrorists.88Perry, supra note 82, at 255, 258.

In a social science context, defining terrorism is even more nuanced. The social science definition of terrorism involves six elements: (1) political inspiration driving (2) the threat of or actual violence (3) to send a message (4) that is usually symbolically significant (5) conducted by “sub-state groups, not states” (6) against victims who are different from “the audience the terrorists are trying to reach.”89Richardson, supra note 77, at 4–5. Counterterrorism scholars have warned that these elements involve some level of subjectivity because of the term’s subtext.90See id. at 3. The historic use of “terrorism” as a pejorative has undermined its objective use in academic settings, with some scholars arguing that overuse has deprived the term of its original meaning.91Id.

C.     The Supreme Court’s Presumption Against Content-Based Speech Regulation

Vague definitions of terrorism are problematic because the Supreme Court maintains a strong presumption against content-based speech regulation, and recent counterterrorism legislative proposals directly confront this presumption.92See infra Section I.C.1. The Supreme Court’s treatment of First Amendment issues has, over time, developed a general presumption against content-based regulation.93See Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam) (stating the general principle that mere advocacy of violence does not permit the state to proscribe speech, but that the state may proscribe advocacy directed to inciting imminent violence). The Court applies the strict scrutiny standard to any content-based regulation.94Reed v. Town of Gilbert, 576 U.S. 155, 163–64 (2015); Williams-Yulee v. Fla. Bar, 575 U.S. 433, 443 (2015); see also Tsesis, supra note 16, at 672. On matters related to terrorist speech, the Supreme Court permits content-based restrictions in the following instances: speech constituting imminent lawless action, speech constituting true threats, and speech serving as material support to a terrorist organization.95Holder v. Humanitarian L. Project, 561 U.S. 1, 26 (2010); Virginia v. Black, 538 U.S. 343, 359–60 (2003); Brandenburg, 395 U.S. at 447; Watts v. United States, 394 U.S. 705, 708 (1969) (per curiam). Likewise, conspiratorial speech often falls outside First Amendment protections because conspiratorial agreements are ingredients of an independent crime.96See United States v. Rahman, 189 F.3d 88, 115 (2d Cir. 1999) (“To be convicted under [the seditious conspiracy statute], one must conspire to use force, not just advocate the use of force.”).

  1. Presumption Against Content-Based Speech Regulation

Any government effort to regulate online terrorist activity runs the risk of being deemed an unconstitutional content-based speech regulation. Over the last century, the Supreme Court has shifted from deference to national security interests toward stricter First Amendment protections.97Compare United States v. Alvarez, 567 U.S. 709, 716–17 (2012) (plurality opinion) (“[T]he Constitution ‘demands that content-based restrictions on speech be presumed invalid . . . .’” (quoting Ashcroft v. Am. C.L. Union, 542 U.S. 656, 660 (2004))), and Brandenburg, 395 U.S. at 447 (“[C]onstitutional guarantees of free speech . . . do not permit a State to . . . proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”), with Abrams v. United States, 250 U.S. 616, 623 (1919) (“[T]he plain purpose of their propaganda was to excite . . . .”), and Schenck v. United States, 249 U.S. 47, 52 (1919) (“[Proscribable] words . . . are used in such circumstances and are of such a nature as to create a clear and present danger . . . .”). Today, content-based speech regulation is presumptively unconstitutional unless speech crosses the threshold into unlawful conduct.98See United States v. Stevens, 559 U.S. 460, 468 (2010); Brandenburg, 395 U.S. at 447; cf. Reed, 576 U.S. at 163–64 (“[L]aws that cannot be ‘justified without reference to the content of the regulated speech’ . . . must also satisfy strict scrutiny.” (quoting Ward v. Rock Against Racism, 491 U.S. 781, 791 (1989))); Williams-Yulee, 575 U.S. at 443 (“[I]n our only prior case concerning speech restrictions on a candidate for judicial office, this Court and both parties assumed that strict scrutiny applied.”); Tsesis, supra note 16, at 672 (describing that courts almost always review content-based restrictions under strict scrutiny).

Before the 1960s, the Court was more accepting of government speech regulation and focused on the “plain purpose” of threatening words.99See, e.g., Abrams, 250 U.S. at 623 (arguing that the plain purpose of the defendant’s propaganda was to incite sedition and impair the nation’s war effort); Schenck, 249 U.S. at 51 (arguing that the tendency of the pamphlet was to obstruct military recruiting and hamper the war effort). Cases in the early twentieth century set a low bar: words “of such a nature as to create a clear and present danger that they will bring about the substantive evils” fell within the scope of unlawful speech.100Schenck, 249 U.S. at 52. Courts had wide latitude to interpret “the effect” of words without specifying the clear and present danger they posed.101See id. The Supreme Court decided its then-most influential First Amendment cases in the shadow of war, likely influencing those cases’ outcomes.102See Dennis v. United States, 341 U.S. 494, 516–17 (1951) (plurality opinion); Abrams, 250 U.S. at 623; Schenck, 249 U.S. at 51. Though some lower courts broke from this trend of approving government speech regulation,103Masses Pub. Co. v. Patten, 244 F. 535, 542–43 (S.D.N.Y. 1917), rev’d, 246 F. 24 (2d Cir. 1917). it was not until the 1960s that the Supreme Court began to scale back this deference.104See Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam). With its decision in Brandenburg v. Ohio,105395 U.S. 444 (1969). the Supreme Court marked a clear departure from earlier cases: “mere abstract teaching” of lawless conduct is not sufficient to permit government regulation of speech.106See id. at 448–49; Tsesis, supra note 16, at 666.

At the turn of the twenty-first century, the Court reaffirmed a strong presumption against content-based speech regulation.107United States v. Stevens, 559 U.S. 460, 468, 470 (2010). The Court in United States v. Stevens108559 U.S. 460 (2010). held that the state and federal governments have no general power to restrict speech based solely on “its message, its ideas, its subject matter, or its content.”109Id. at 468. The Court reaffirmed that any content-based regulation is presumed invalid and that the government bears the burden of rebutting this presumption.110Id.

To rebut the presumption against content-based regulation, the government must prove that speech falls into a category where the judiciary has traditionally permitted restrictions.111Id. at 468–69 (describing historic and familiar categories under which the government has the authority to regulate speech). These exceptional categories include obscenity, defamation, fraud, and incitement, as well as speech integral to criminal conduct.112Id. Though not exhaustive,113Id. at 472 (“Maybe there are some categories of speech that have been historically unprotected, but have not yet been specifically identified or discussed as such in our case law.”). this list of historically recognized classes of unprotected speech presents a common theme: a proximity between speech and harmful conduct.114See Stevens, 559 U.S. at 468; Eric T. Kasper & Troy A. Kozma, Absolute Freedom of Opinion and Sentiment on All Subjects: John Stuart Mill’s Enduring (and Ever-Growing) Influence on the Supreme Court’s First Amendment Free Speech Jurisprudence, 15 U. Mass. L. Rev. 2, 15 (2020) (noting English philosopher John Stuart Mill’s argument that the only appropriate limit on the freedom of speech was when one’s speech might serve as an immediate incitement to harm others).

  2. Strict Scrutiny Review of Content-Based Speech Regulation

The Supreme Court applies a strict scrutiny standard to any content-based speech regulation, so courts analyzing a statute regulating online terrorist content would apply that standard.115Reed v. Town of Gilbert, 576 U.S. 155, 163–64 (2015); Williams-Yulee v. Fla. Bar, 575 U.S. 433, 443 (2015); see also Tsesis, supra note 16, at 672. When reviewing a challenged statute, a court first determines whether the regulation is content based or content neutral.116Some legal scholars argue that the Supreme Court in Humanitarian Law Project used an intermediate scrutiny standard to evaluate whether the material support statute violated the First Amendment, but in later cases, the Court clarified that it has consistently used the strict scrutiny standard for any content-based speech regulation. See Tsesis, supra note 16, at 672. But see Williams-Yulee, 575 U.S. at 444 (citing Humanitarian Law Project as an example of a narrowly tailored statute under strict scrutiny review). If government regulations target speech because of its “communicative content,” including its subject or function, courts apply the strict scrutiny standard.117Reed, 576 U.S. at 163. For a statute to receive intermediate scrutiny, it must be content neutral and advance an important governmental interest.118Ward v. Rock Against Racism, 491 U.S. 781, 791 (1989). Content-neutral regulations do not refer to the content of the regulated speech,119Id. and the government’s interest must be unrelated to the suppression of free speech and must not burden speech more than necessary to achieve that interest.120Humanitarian L. Project, 561 U.S. at 26–27.

The path to withstanding strict scrutiny review is narrow: content-based restrictions must serve a compelling state interest and be narrowly tailored to achieve that interest.121Ward, 491 U.S. at 791 (“Government regulation of expressive activity is content neutral so long as it is ‘justified without reference to the content of the regulated speech.’” (emphasis omitted) (quoting Clark v. Cmty. for Creative Non-Violence, 468 U.S. 288, 293 (1984))); Tsesis, supra note 16, at 672. The Supreme Court has acknowledged that content-based restrictions rarely satisfy both requirements.122Williams-Yulee v. Fla. Bar, 575 U.S. 433, 444 (2015). The Court has held that the grave danger of terrorist activity may indeed be a compelling state interest, but it has been exacting in deciding whether a regulation is narrowly tailored.123See Humanitarian L. Project, 561 U.S. at 45 (Breyer, J., dissenting). But see Williams-Yulee, 575 U.S. at 442–43. If a restriction is too broad, it intertwines informative speech with extremist speech, threatening the exercise of “vital” democratic rights.124Williams-Yulee, 575 U.S. at 442–43.

Courts would use a strict scrutiny standard to analyze a regulatory counterterrorism statute restricting online terrorist content.125Reed v. Town of Gilbert, 576 U.S. 155, 163–64 (2015); Williams-Yulee, 575 U.S. at 443; see also Tsesis, supra note 16, at 672. The government would have to prove that a counterterrorism statute seeking to regulate speech serves a compelling governmental interest and is narrowly tailored to achieve that goal.126See Humanitarian L. Project, 561 U.S. at 26–27. A court would almost certainly consider the threat posed by online terrorist activity a compelling state interest but would be less likely to find that a content-based regulation is narrowly tailored.127See id. at 45 (Breyer, J., dissenting). But see Williams-Yulee, 575 U.S. at 442–44 (finding that a content-based speech regulation was narrowly tailored). Even precise language in a counterterrorism statute could nonetheless be overly broad if it stifles legitimate political speech.128Grayned v. City of Rockford, 408 U.S. 104, 114 (1972).

  3. Speech Constituting Imminent Lawless Action and True Threats

The First Amendment does not protect online terrorist speech inciting imminent lawless action or speech espousing a true threat. The “incitement doctrine” allows the government to prohibit speech that constitutes a clear and present danger of imminent lawless action, and the “true threat” doctrine permits the government to proscribe “fighting words” likely to provoke violent action.129Virginia v. Black, 538 U.S. 343, 359 (2003); Brandenburg v. Ohio, 395 U.S. 444, 448–49 (1969) (per curiam). By inviting violence, these forms of speech cross into substantive criminal conduct.130See Black, 538 U.S. at 359; Brandenburg, 395 U.S. at 448–49.

The incitement doctrine proscribes speech inciting imminent lawlessness.131Brandenburg, 395 U.S. at 447–49. The Supreme Court established the modern standard for the incitement doctrine in Brandenburg, rejecting the government’s effort to ban “dangerous” speech.132Id. The case marked a pivot toward restraining content-based speech regulation133See id. at 447. and mirrored the reasoning of earlier lower court rulings.134Masses Pub. Co. v. Patten, 244 F. 535, 542–43 (S.D.N.Y. 1917) (refusing to endorse the position that the government can regulate speech that generally advocates for violence), rev’d, 246 F. 24 (2d Cir. 1917). In Brandenburg, a leader of the Ku Klux Klan appealed his conviction under an Ohio law criminalizing speech that advocated criminal conduct.135Brandenburg, 395 U.S. at 445. In striking down the Ohio law, the Court drew a sharp distinction between the abstract teaching of criminality, which is protected, and preparing a group for violent action, which is not.136Id. at 447. More recently, the Court clarified that speech advocating for the use of force is protected unless there are signs of imminent force.137See, e.g., United States v. Stevens, 559 U.S. 460, 468–69 (2010) (reaffirming that the government has no general power to restrict speech because of its content and that it cannot create new categorical exceptions to the presumption against content regulation); Virginia v. Black, 538 U.S. 343, 347–48 (2003) (holding that the government may regulate speech that is a “true threat,” meaning the speaker has an intent to commit a violent act). The incitement doctrine is fact-sensitive: as legal scholars note, a speaker’s environment may influence the likelihood that speech incites lawlessness.138Tsesis, supra note 16, at 666–67 (arguing that had the Klan member been directly inciting a crowd instead of delivering filmed statements, Brandenburg’s holding may have been different).

“True threats” are unprotected forms of speech.139Black, 538 U.S. at 358–60 (“‘True threats’ encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.”). True threats are statements in which a speaker communicates a serious intent to commit unlawful violence.140Id. A true threat brings speech and conduct closer together.141Id. The Court in Watts v. United States142394 U.S. 705 (1969) (per curiam). laid the foundation for the modern true threat standard, holding that crude political language in isolation does not constitute a true threat.143Id. at 707–08. The case involved a protestor who, during an anti-war speech, expressed a willingness to shoot the President if given the opportunity.144Id. at 706–07. The Supreme Court considered his statement in the context of a political speech.145Id. at 707–08. The Court found that while the speech itself was crude, it was at worst “political hyperbole.”146Id. at 708. Given the conditional nature of the statement and the speech’s political setting, the Supreme Court held that the defendant’s statements did not qualify as a “true threat” against the President’s life.147Id. (internal quotation marks omitted).

At the turn of the twenty-first century, the Supreme Court in Virginia v. Black148538 U.S. 343 (2003). clarified that true threats fall outside First Amendment protections.149Id. at 347, 359. Cases involving Ku Klux Klan members burning crosses were consolidated before the Virginia Supreme Court, which held that a state statute banning cross burning was an unconstitutional content-based regulation of speech.150Id. at 360. The Supreme Court overturned the state court’s ruling.151Id. at 361. The Court held that true threats—including “fighting words” that are likely to provoke a violent action—are not protected under the First Amendment because they are precursors to illegal conduct.152Id. The burning of a cross, an action historically associated with violent crimes such as lynching, communicated an intent to commit an unlawful act of violence.153Id. at 353–54.

  4. Speech Constituting Material Support for Terrorism

Like speech expressing imminent harm or true threats, speech that materially supports a terrorist organization does not enjoy First Amendment protection.154See 18 U.S.C. §§ 2339A–2339B; Holder v. Humanitarian L. Project, 561 U.S. 1, 28, 39 (2010). Two federal material support statutes enable the government’s prosecution of individuals supporting terrorists or terrorist organizations: (1) 18 U.S.C. § 2339A, which covers material support for offenses that might be committed by terrorists; and (2) 18 U.S.C. § 2339B, which covers material support for designated foreign terrorist organizations.15518 U.S.C. §§ 2339A–2339B; see also Charles Doyle, Cong. Rsch. Serv., R41333, Terrorist Material Support: An Overview of 18 U.S.C. § 2339A and § 2339B, at 1 (2016). Section 2339A predates § 2339B by two years, and § 2339B adopted § 2339A’s definition of “material support.”156Antiterrorism and Effective Death Penalty Act (AEDPA) of 1996, Pub. L. No. 104-132, §§ 303(a), 323, 110 Stat. 1214, 1250 (codified as amended at 18 U.S.C. §§ 2339A–2339B); Violent Crime Control and Law Enforcement Act of 1994, Pub. L. No. 103-322, § 120005, 108 Stat. 1796, 2022 (codified as amended at 18 U.S.C. § 2339A). The statutes prohibit individuals from providing presumed terrorists or a designated terrorist organization with any “material support or resources,” including support in the form of “training” and “expert advice or assistance.”15718 U.S.C. §§ 2339A–2339B. Congress passed sections 303(a) and 323 of the Antiterrorism and Effective Death Penalty Act—also known as the “material support” statutes—to criminalize providing material support to a designated foreign terrorist organization and to modify the criminalization of material support to presumed terrorists.158Antiterrorism and Effective Death Penalty Act (AEDPA) §§ 303(a), 323. The current definition of material support includes any “property,” “training,” or “advice or assistance” provided to a presumed terrorist or a designated terrorist organization.15918 U.S.C. §§ 2339A–2339B. Under § 2339B, the U.S. Secretary of State has the authority to designate terrorist organizations.1608 U.S.C. § 1189.

When the § 2339B material support statute faced a First Amendment challenge in Holder v. Humanitarian Law Project,161561 U.S. 1 (2010). the Supreme Court held that the statute did not ban pure political speech but only speech coordinated with and assisting violent foreign terrorist organizations.162See id. at 39. Congress narrowly drew the statute to target speech providing material support to foreign terrorist organizations.163See id. at 36. Opponents of the law argued that the prohibition on “advice or assistance,” like technical knowledge or consultations, impacted political advocacy.164See id. at 10–11; Reply Brief for Cross-Petitioners at 10–11, Holder v. Humanitarian L. Project, 561 U.S. 1 (2010) (No. 09-89). However, Chief Justice John Roberts reasoned that the statute, as applied to the plaintiffs, did not violate First Amendment protections because the particular forms of support that the plaintiffs sought to provide, including training and expert knowledge, could enable a foreign terrorist organization to commit future attacks.165See Humanitarian L. Project, 561 U.S. at 27 (“The Government is wrong that the only thing actually at issue in this litigation is conduct, and therefore wrong to argue that [a previous case adopting intermediate scrutiny] provides the correct standard of review.”). If a person provides terrorists with training or expert knowledge, this aid transcends mere advocacy and ultimately supports those organizations’ violent objectives.166See id. at 38. While Humanitarian Law Project examined § 2339B material support to designated terrorist organizations, lower courts have applied a similar rationale to rebut claims that § 2339A restrains protected speech, while acknowledging distinctions between the two statutes.167See, e.g., United States v. Amawi, 545 F. Supp. 2d 681, 683–84 (N.D. Ohio 2008).

  5. Conspiratorial Speech

The conspiratorial agreement, which binds the parties together, is the key element of conspiracy.168See Dennis v. United States, 341 U.S. 494, 511 (1951) (plurality opinion); Daniel Hoffman, Online Terrorism Advocacy: How AEDPA and Inchoate Crime Statutes Can Simultaneously Protect America’s Safety and Free Speech, 2 Nat’l Sec. L.J. 200, 203 (2014); David B. Filvaroff, Conspiracy and the First Amendment, 121 U. Pa. L. Rev. 189, 191 (1972). First Amendment protections do not apply to speech that facilitates a criminal conspiracy because conspiratorial speech is merely an ingredient of criminal activity.169See Dennis, 341 U.S. at 511. Prosecutors often charge terrorist suspects with conspiracy because preparatory speech is not protected by the First Amendment.170See 18 U.S.C. §§ 371, 373; see also Thomas Healy, Brandenburg in a Time of Terror, 84 Notre Dame L. Rev. 655, 669 (2009) (“The most prominent theme is that Brandenburg has been limited to advocacy of unlawful conduct and has not been applied to related categories of speech, such as threats, solicitations, criminal instructions, or words amounting to conspiracy.”); Elizabeth M. Renieris, Note, Combating Incitement to Terrorism on the Internet: Comparative Approaches in the United States and United Kingdom and the Need for an International Solution, 11 Vand. J. Ent. & Tech. L. 673, 682 (2009) (“The three classic inchoate offenses, especially prominent in the context of terrorism prosecutions, are attempt, conspiracy and solicitation.”).

Common law conspiracy requires proof of an agreement between two or more persons to commit a crime.171See United States v. Alvarez, 610 F.2d 1250, 1255 (5th Cir. 1980) (“The conspirator must knowingly agree to join others in a concerted effort to bring about a common end.”). This agreement can be either express or implied.172See id.; 16 Am. Jur. 2d Conspiracy § 13 (2022). An express agreement involves direct communication between parties assenting to commit the act, while an implied agreement is inferred from acts that further the conspiracy.17316 Am. Jur. 2d Conspiracy § 13 (2022). The agreement is the bedrock of criminal conspiracy; it establishes the communion of purpose uniting all participants.174Id.; Filvaroff, supra note 168, at 191. In addition to an agreement, federal conspiracy offenses generally require an overt act in furtherance of the agreement.175The overt act need not be criminal; any overt act in furtherance of the agreement inherently supports the conspiracy. Courts rely on these overt acts to infer whether an implied agreement has been reached. See Alvarez, 610 F.2d at 1255; 16 Am. Jur. 2d Conspiracy § 13 (2022). When a terrorist conspirator is bound by an implied agreement, any overt acts—such as preparing conspirators for the attack—are in furtherance of this agreement.176See Alvarez, 610 F.2d at 1255 n.5; see also 16 Am. Jur. 2d Conspiracy § 13 (2022).

Prosecutors rely on conspiracy statutes to prosecute terrorism suspects whose preparations fail or whose attacks are thwarted by law enforcement.177See Healy, supra note 170, at 669 n.90; Renieris, supra note 170. Using conspiracy law, the government can target “ingredients” of a crime—such as speech—instead of waiting until the crime is executed.178Dennis v. United States, 341 U.S. 494, 511 (1951) (plurality opinion) (“If the ingredients of the [crime] are present, we cannot bind the Government to wait until the catalyst is added.”). Both terrorist advocacy and criminal conspiracy involve an express or implied “agreement” to commit an illegal act.179See Hoffman, supra note 168, at 203. Prosecutors rely on this agreement element to prosecute alleged terrorists because agreements rise to the level of criminal conduct, which falls outside First Amendment protections.180See Healy, supra note 170, at 669; Renieris, supra note 170.

Conspiracy is an effective tool for prosecuting online terrorist recruitment because it separates criminal speech from mere advocacy.181See Hoffman, supra note 168, at 246–47. The Supreme Court considers general advocacy to be protected speech under the First Amendment, and the government may only prohibit advocacy if it poses imminent harm, constitutes a true threat, or provides material support to a terrorist organization.182See Holder v. Humanitarian L. Project, 561 U.S. 1, 26 (2010); Virginia v. Black, 538 U.S. 343, 359–60 (2003); Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam); Watts v. United States, 394 U.S. 705, 708 (1969) (per curiam). Unlike mere advocacy to use force, conspiring to use force does not implicate constitutionally protected speech.183United States v. Rahman, 189 F.3d 88, 115 (2d Cir. 1999) (per curiam) (“To be convicted under [federal conspiracy statutes], one must conspire to use force, not just to advocate the use of force. We have no doubt that this passes the test of constitutionality.” (emphasis omitted)). In reaching an agreement to commit a crime, an individual engages in criminal conduct that is more dangerous than speech alone.184Id. (“[A] line exists between expressions of belief, which are protected by the First Amendment, and threatened or actual uses of force, which are not.”). More simply, “[i]t is the existence of the conspiracy that creates the danger.”185Dennis v. United States, 341 U.S. 494, 511 (1951) (plurality opinion). Criminalizing conspiracy does not overly burden speech because charges are directed at agreements to use force, not general discussions.186See Rahman, 189 F.3d at 115; see also Dennis, 341 U.S. at 511.

D.     Statutes Void for Vagueness and Overbreadth Under the First Amendment

Any proposal targeting terrorist-generated content online must clearly define prohibited forms of speech or it risks being deemed “void for vagueness.”187See Grayned v. City of Rockford, 408 U.S. 104, 108–09 (1972). The overbreadth doctrine is a similar but distinct concept that deems a statute facially invalid if it prohibits a substantial amount of protected speech.188This Comment addresses the vagueness doctrine more thoroughly than overbreadth because recent counterterrorism legislative proposals use vague definitions of terrorist activity more than overly broad categorizations. The vagueness and overbreadth doctrines overlap slightly, but the former focuses on precise legal definitions and prosecutorial interpretations while the latter focuses on practical issues with overly broad speech categorizations. See United States v. Williams, 553 U.S. 285, 292–93, 305 (2008).

A vague law does not provide individuals with fair warning and violates basic due process principles.189Id. at 292–93. Due process requires that laws give a person of ordinary intelligence a reasonable opportunity to know what conduct is and is not prohibited.190Id. Vague laws do not provide necessary guidance to those who execute laws and, consequently, over-delegate authority, leading to arbitrary and discriminatory enforcement.191Id. at 305.

When a statute imposes limitations on speech, the void for vagueness principle applies with particular strictness.192See NAACP v. Button, 371 U.S. 415, 432 (1963). But see Holder v. Humanitarian L. Project, 561 U.S. 1, 20–21 (2010); United States v. Rahman, 189 F.3d 88, 115 (2d Cir. 1999) (per curiam). As applied to speech, a vague statute is predisposed to chill otherwise protected speech.193Button, 371 U.S. at 433. The unclear guidance and the threat of sanctions deter the public from engaging in legal speech, and the resulting chilling effect limits speech in the same manner as the actual application of sanctions.194Id. The First Amendment “needs breathing space to survive.”195Id. The government must be explicit when outlining prohibited forms of speech.196Id.

Even in the context of national security, legislation aimed at terrorist speech must be narrowly tailored to survive review under the “void for vagueness” doctrine.197Humanitarian L. Project, 561 U.S. at 20–21; Rahman, 189 F.3d at 115. In United States v. Rahman,198189 F.3d 88 (2d Cir. 1999) (per curiam). the defendant challenged his prosecution under laws prohibiting seditious conspiracy by alleging that the statute’s caption, which used the term “seditious,” was vague.199Id. at 116. The court held that the main text of the statute was sufficiently specific because it targeted only preparatory actions: “conspiracy to levy war against the United States and to oppose by force.”200Id. (emphasis added). The court acknowledged that the “seditious” caption may indeed have been vague, but the main text provided sufficient clarification.201Id. (“There is indeed authority suggesting that the word ‘seditious’ does not sufficiently convey what conduct it forbids to serve as an essential element of a crime.”). Similarly, the Supreme Court in Humanitarian Law Project held that a counterterrorism statute prohibiting material support to terrorist organizations was not void for vagueness.202Humanitarian L. Project, 561 U.S. at 21. However, the material support statute included exhaustive definitions of the types of proscribed conduct and specific knowledge requirements.20318 U.S.C. § 2339A; Humanitarian L. Project, 561 U.S. at 21. Congress amended the material support statute multiple times, including during the litigation challenging the statute’s constitutionality.204Humanitarian L. Project, 561 U.S. at 8, 13.

Relying on a similar but distinct principle, courts use the overbreadth doctrine to determine whether a law prohibits too broad a category of conduct and is therefore facially invalid.205United States v. Williams, 553 U.S. 285, 292 (2008). Overbreadth analysis involves two steps: first, the court construes the challenged statute; second, the court determines whether the statute, as construed, criminalizes a substantial amount of protected conduct.206Id. at 292–97. In a First Amendment context, courts compare the statute’s incidental capture of legitimate speech with its intentional capture of unprotected speech.207Id. at 292. A statute can contain precise language but still be overbroad if it reaches constitutionally protected speech.208Grayned v. City of Rockford, 408 U.S. 104, 114–15 (1972). The overbreadth doctrine focuses on how a government executes and applies a statute to conduct, and courts commonly avoid sustaining an overbreadth challenge in favor of reading a narrower application of a statute.209Victoria L. Killion, Cong. Rsch. Serv., R45713, Terrorism, Violent Extremism, and the Internet: Free Speech Considerations 35–36 (2019) (describing how courts have used the overbreadth doctrine sparingly, and providing examples of when courts adopted alternative applications of a law to avoid holding a statute facially overbroad). Courts have called the overbreadth doctrine “strong medicine,” applicable in a narrow range of cases.210See, e.g., Williams, 553 U.S. at 293.

In the context of imprecise criminal definitions in counterterrorism legislation, the overbreadth doctrine is less applicable than the void for vagueness principle because courts generally avoid ruling that a statute is facially overbroad.211See Todd M. Gardella, Note, Beyond Terrorism: The Potential Chilling Effect on the Internet of Broad Law Enforcement Legislation, 80 St. John’s L. Rev. 655, 675 (2006) (“Thus far, the courts dealing with [Patriot Act cases] have been more amenable to the vagueness argument than the overbreadth argument . . . .” (footnotes omitted)). A law prohibiting extremist speech may be overly broad if it captures both political advocacy and extremist content,212Grayned, 408 U.S. at 114–15. but courts will construe a statute in a more limited manner if it does not specifically address speech.213Killion, supra note 209. A generally overbroad law is distinct from vague definitions within the law, and courts are more open to considering void for vagueness challenges because vague definitions fail to provide the public with fair warning and are predisposed to arbitrary enforcement.214Compare Williams, 553 U.S. at 292–93 (analyzing a statute’s overall breadth and its impact on collateral speech), with Grayned, 408 U.S. at 108–09 (analyzing a statute’s predisposition to arbitrary enforcement).

E.     Constitutional Avoidance and the First Amendment

Courts can rely on the constitutional avoidance canon and apply a limiting construction to avoid ruling that a law is facially invalid.215Fish, supra note 23, at 1282. Especially in the context of the First Amendment, the Supreme Court has been hesitant to issue broad constitutional rulings on vague statutes when an alternative constitutional interpretation is available.216See id.; Lisa A. Kloppenberg, Avoiding Serious Constitutional Doubts: The Supreme Court’s Construction of Statutes Raising Free Speech Concerns, 30 U.C. Davis L. Rev. 1, 54–55 (1996).

The modern avoidance canon provides that when a court has doubts about a statute’s constitutionality, the court first determines whether a constitutional construction is possible and, if so, adopts this interpretation.217Fish, supra note 23, at 1282. Courts engage in either “tiebreaking” between two plausible interpretations or “rewriting” the statute’s language to conform with the constitutional interpretation.218Id. at 1285. Some Justices consider the tiebreaking approach a more reasonable tool than striking down a vague law.219Id. at 1285–87. Even so, the distinction between tiebreaking and rewriting is nebulous; when a court adopts an interpretation that Congress did not intend, the court often prefaces its holding by asserting that it is merely choosing between two plausible alternatives.220Id. at 1288.

On First Amendment matters, the Supreme Court has adopted a minimalist approach that avoids broad holdings that strike down laws as unconstitutional.221See Andrew Nolan, Cong. Rsch. Serv., R43706, The Doctrine of Constitutional Avoidance: A Legal Overview 19 (2014). The Roberts Court has consistently issued narrow First Amendment rulings that preserve statutes containing vague language.222Id. On matters of national security like counterterrorism, the Court has shown a willingness to interpret statutory language in a manner that avoids directly confronting constitutional prohibitions against content-based speech regulation.223See Holder v. Humanitarian L. Project, 561 U.S. 1, 35–36 (2010). In Humanitarian Law Project, Chief Justice Roberts avoided engaging in a content-based speech regulation analysis.224Id. at 35–36. Instead, he interpreted the material support statute as a restriction on aiding future terrorist activity.225Id. at 35.

F.     Legislative Proposals Addressing Online Terrorist Speech

Congress and legal academics have responded to the threat of online terrorist propaganda with a variety of legislative approaches, including requirements for technology companies to regulate or report content embodying “terrorist activity.”226See See Something, Say Something Online Act of 2020, S. 4758, 116th Cong. §§ 4(a), 5 (2020) (threatening revocation of liability protections for social media companies that fail to report terrorist content); Raising the Bar Act of 2019, H.R. 5209, 116th Cong. (2019) (proposing a voluntary terrorist content reporting program); Requiring Reporting of Online Terrorist Activity Act, S. 2372, 114th Cong. § 2(a) (2015) (requiring technology companies to report online terrorist activity once those companies are made aware). Criticism has focused on vague or absent definitions of terrorism.227Press Release, Ron Wyden, U.S. Senator for Or., Wyden Statement on Social Media Reporting Bill (Dec. 8, 2015), https://perma.cc/LBB5-3WLL. Three examples of counterterrorism proposals illustrate this challenge: (1) proposals requiring technology companies to remove content that is “imminently dangerous,” “truly threatening,” or materially supporting terrorism;228Tsesis, supra note 16, at 685. (2) proposals requiring technology companies to report content once companies are aware; and (3) proposals that remove liability protections for technology companies that fail to report content.

  1. Three-Doctrines Proposal

The first proposal example, the “three-doctrines proposal,” would require a technology company to remove any online content that is “imminently dangerous,” “truly threatening,” or that materially supports a terrorist organization.229Id. A free speech scholar put forward this legislative proposal as a method for removing terroristic speech without infringing on protected forms of speech.230Id.

Despite being tailored to First Amendment exceptions, the three-doctrines proposal nevertheless encounters the Supreme Court’s presumption against content-based speech regulation.231Healy, supra note 170, at 712 (“[W]e should instead view criminal advocacy as fully protected speech subject only to the qualifications of strict scrutiny.”). And the proposal does not make rebutting that presumption any less challenging.232See id. The proposal prohibits three categories of unprotected speech—imminently harmful terrorist speech, truly threatening terrorist speech, and speech providing material support to a terrorist organization—but imminent harm and true threat are nuanced categories.233Id.; see also Tsesis, supra note 16, at 707. The Supreme Court continues to cite Brandenburg’s imminent harm standard and has been reluctant to accept government judgment calls.234See Elonis v. United States, 575 U.S. 723, 738–39 (2015); United States v. Alvarez, 567 U.S. 709, 715–17 (2012) (plurality opinion). The government can proscribe speech constituting a true threat,235Tsesis, supra note 16, at 707. but the three-doctrines proposal contains no explicit intent requirement, which the Court requires to prove a true threat.236See Elonis, 575 U.S. at 738–39; Tsesis, supra note 16, at 670.

The three-doctrines proposal may also be void for vagueness because scholars have yet to reduce each doctrine to clear legislative text.237Tsesis, supra note 16, at 690. Proponents do not define how these standards would appear in a statute, possibly because each of the three doctrines has various subtleties.238Id. at 706–07. The incitement doctrine is difficult to reduce to legislative language given its fact-sensitive nature.239Id. at 667. For example, legal scholars note that had the speaker in Brandenburg been delivering his extremist speech before an angry crowd instead of by film, the Supreme Court might have found his words to be sufficient signs of imminent force.240Id. Similarly, the true threat doctrine relies heavily on historical context, which may be difficult to capture in legislative text.241See Virginia v. Black, 538 U.S. 343, 356 (2003). In Black, the statute at issue targeted cross burning, and the Supreme Court noted cross burning’s historical association with arson and lynching.242Id. at 348, 356; id. at 381 (Souter, J., concurring in the judgment in part and dissenting in part). The material support doctrine presents a different challenge: the Supreme Court has upheld the doctrine only in specific, as-applied challenges addressing organizations engaged in both conduct and speech.243Holder v. Humanitarian L. Project, 561 U.S. 1, 18–20 (2010). A court may be less willing to approve the material support doctrine’s application to all forms of online terrorist content.244Id. In Humanitarian Law Project, the material support doctrine targeted only a narrow category of speech, most of which overlapped with criminal conduct.245Id. at 26.

  2. Reporting Proposal

The second example, the “reporting proposal,” would require technology companies to report “terrorist activity” content as soon as the company has knowledge that such content exists.246While the Requiring Reporting of Online Terrorist Activity Act received no vote after being introduced and has yet to be reintroduced, the proposal is the best example of Congress proposing a reporting requirement. Leading U.S. Senators on the Senate Select Committee on Intelligence introduced the bill, and since its introduction, the bill has generated a moderate amount of discussion regarding its lack of a definition for “terrorist activity.” Requiring Reporting of Online Terrorist Activity Act, S. 2372, 114th Cong. (2015); Tsesis, supra note 16, at 685, 688. This legislative proposal targets “terrorist activity” being shared on technology company websites.247S. 2372. A reporting requirement must be narrowly tailored to avoid implicating Fourth Amendment protections against compelling private companies to conduct government searches. In Skinner v. Railway Labor Executives’ Ass’n, the Supreme Court held that the Fourth Amendment applies when the government “encourag[es], endors[es], and participat[es]” in a private search so that it may “share the fruits” of that search without encountering legal barriers. 489 U.S. 602, 614–16 (1989). Federal appeals courts have held that Fourth Amendment protections do not apply when the government compels a private company to report content only after the company is aware of the prohibited content. This knowledge requirement avoids Fourth Amendment protections because it does not mandate that any search take place. See 42 U.S.C. § 13032(b)(1), repealed by Providing Resources, Officers, and Technology to Eradicate Cyber Threats to Our Children Act of 2008, Pub. L. No. 110-401, 122 Stat. 4229 (2008); United States v. Richardson, 607 F.3d 357, 366–67 (4th Cir. 2010). Yet the bill’s sponsors failed to define “terrorist activity” altogether.248S. 2372.

The proposal’s mere use of a reporting requirement does not raise First Amendment concerns because reporting is distinct from content-based speech regulation.249See Tsesis, supra note 16, at 690 (distinguishing the reporting of offending posts from courts issuing warrants requiring companies to take down content). Reporting public speech is not subject to the same strict legal standard as regulating public speech.250Id. The distinction between public and private speech is important; the former is at the heart of First Amendment protections of political discourse while the latter involves private entities engaged in commercial activity.251See Snyder v. Phelps, 562 U.S. 443, 452 (2011) (“[W]here matters of purely private significance are at issue, First Amendment protections are often less rigorous.”). Reporting public content avoids the issue of content regulation because, like federal statutes mandating the reporting of child pornography, it relays information to authorities without making content-based judgments.252See United States v. Stevens, 559 U.S. 460, 468 (2010) (addressing the presumption against content regulation); Tsesis, supra note 16, at 699 (comparing a child pornography statute to legislation requiring the reporting of terrorist content).

The major constitutional challenge for a reporting proposal stems from vague or nonexistent definitions of “terrorist activity.”253Tsesis, supra note 16, at 688 (“Critics point out, however, that the bipartisan bill does not define what constitutes ‘terrorist activity’ and thus mandates that businesses make content-based judgment calls.”). A statute that lacks a specific legal definition of terrorist activity runs the risk of being void for vagueness.254See United States v. Rahman, 189 F.3d 88, 116 (2d Cir. 1999) (per curiam). Any criminal statute impacting speech must include sufficiently specific criminal definitions to avoid arbitrary or discriminatory enforcement.255Id. (quoting Kolender v. Lawson, 461 U.S. 352, 357 (1983)). A statute that poorly defines “terrorist activity” is predisposed to arbitrary enforcement.256See id. Without clearly defining “terrorist activity,” the reporting proposal also imposes a practical challenge on technology companies.257Press Release, Wyden, supra note 227. Companies would have to decide independently what online content portrays “terrorist activity.”258See id. One senator critical of the bill argued that instead of producing more reporting, the proposal would produce less.259Id. He argued that companies would be incentivized not to look for content at all, effectively “st[i]ck[ing] their heads in the sand” to avoid the reporting obligations.260Id. Any vague “terrorist activity” definition leaves technology companies to make their own content-based judgments.261See Tsesis, supra note 16, at 687.

Even if the proposal contained an academic definition of terrorism, it would still encounter practical difficulty separating political speech from extremist speech. A counterterrorism expert acknowledged that even a multi-factor test for defining terrorist content is inescapably subjective.262Richardson, supra note 77, at 3–4. Any proposal targeting online extremist content will confront this line-drawing problem.263See Jones et al., supra note 9, at 7–8. DVEs adopt language from mainstream political movements, and extremist speech becomes intermingled with otherwise benign political speech.264See id. Without a specific definition of terrorist activity, congressional efforts may incidentally target some political speech.265See id.

  3. Loss-of-Immunity Proposal

The third proposal example, the “loss-of-immunity proposal,” compels technology companies to report suspicious transmissions by eliminating the liability shields those companies currently enjoy.26647 U.S.C. § 230; See Something, Say Something Online Act of 2020, S. 4758, 116th Cong. (2020); Steven Beale, Online Terrorist Speech, Direct Government Regulation, and the Communications Decency Act, 16 Duke L. & Tech. Rev. 333, 334 (2018). The loss-of-immunity proposal would amend section 230 of the Communications Decency Act (“section 230”) to compel technology companies to report content related to “major crimes,” including terrorism.267S. 4758. Section 230 grants immunity from civil lawsuits to internet providers and website operators for third-party content that is false or harmful.26847 U.S.C. § 230(c)(1) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). The proposal uses the U.S. Code’s primary definition of domestic and international terrorism, though this definition does not separately define terrorist threats.269S. 4758; see also 18 U.S.C. §§ 2331(1), (5). If a technology company becomes aware of suspicious content but fails to report it, the statute strips the company of section 230 liability protection, and the company can face a civil lawsuit.270S. 4758. Technology companies consider section 230 an essential device for operating any internet service or website.271Beale, supra note 266, at 348.

Despite its novel approach of avoiding direct government regulation of speech, the loss-of-immunity proposal implicates First Amendment protections related to vague definitions of terrorism.272Id. at 334. The proposal’s cross-reference to the U.S. Code’s terrorism definition leaves technology companies to determine how that definition applies to online content.273S. 4758, 116th Cong. (2020); see also 18 U.S.C. §§ 2331(1), (5). And the proposal itself does not separately define terrorist threats.274S. 4758. One trade association noted that the legislation exposes technology companies to “crippling liability” if they fail to report evidence of a crime, yet risks users’ privacy if companies over-report.275Application for Leave to File Amicus Brief of Internet Ass’n, Facebook, Inc., Glassdoor, Inc., Google LLC, and Reddit, Inc. in Support of Respondents at 16, Murphy v. Twitter, Inc., 274 Cal. Rptr. 3d 360 (Ct. App. 2021) (No. A158214). The proposal’s definition risks being void for vagueness because it strips liability protections without giving proper notice of what the proposal requires.276See, e.g., Papachristou v. City of Jacksonville, 405 U.S. 156, 162 (1972) (striking down a vagrancy ordinance for vagueness in part because the ordinance did not give fair notice). But see NAACP v. Button, 371 U.S. 415, 433 (1963) (finding that “vagueness . . . does not depend upon absence of fair notice to a criminally accused”). If a technology company has no choice but to report private content or lose immunity, the resulting chilling effect could curb technology companies’ willingness to share otherwise protected speech.277See Button, 371 U.S. at 433.

II.     Proposal: Passive and Active Indoctrination as Alternative Definitions for “Terrorist Activity” Appearing in Counterterrorism Legislation

This Part proposes that courts could adopt a legal framework recognizing two definitions of terrorism—active indoctrination and passive indoctrination—in an effort to avoid an unconstitutional ruling on counterterrorism legislation imposing regulatory or reporting requirements.278Though Congress could include active and passive indoctrination as definitions in counterterrorism legislation, this analysis focuses on giving courts a solution should they encounter a counterterrorism statute that imprecisely defines terrorist activity. This solution avoids the “strong medicine” of invalidating laws based on vagueness and overbreadth. United States v. Williams, 553 U.S. 285, 293 (2008) (quoting L.A. Police Dep’t v. United Reporting Publ’g Corp., 528 U.S. 32, 39 (1999)); see also Tsesis, supra note 16, at 654; Weimann, supra note 24. Section A proposes that, when faced with a regulatory statute using an imprecise definition of “terrorist activity,” courts could use the constitutional avoidance canon and construe the statute to regulate only “active indoctrination,” which Congress can restrict without violating the First Amendment.279See Healy, supra note 170, at 669. Active indoctrination refers to a terrorist forming a conspiratorial agreement with another party.280See 16 Am. Jur. 2d Conspiracy § 13 (2022). Section B proposes that, when interpreting a reporting statute, courts could interpret “terrorist activity” as “passive indoctrination,” which is precise enough to avoid implicating First Amendment protections against vague laws.281See Tsesis, supra note 16, at 690 (distinguishing the reporting of offending posts from courts issuing warrants requiring companies to take down content). Passive indoctrination refers to a principal who generally advocates for a terrorist cause but does not directly recruit another party.282See Tsesis, supra note 16, at 654; Weimann, supra note 24.

A.     Active Indoctrination

When faced with a counterterrorism statute using an imprecise definition of terrorist speech, courts could construe “terrorist activity” to refer only to “active indoctrination,” which avoids invalidating the statute on First Amendment grounds.283See Fish, supra note 23, at 1284 (explaining the constitutional avoidance doctrine). This definition targets conspiratorial speech, which implicates neither a vague categorization nor a content-based speech restriction.284See United States v. Alvarez, 610 F.2d 1250, 1255 (5th Cir. 1980).

Active indoctrination involves a terrorist principal reaching an agreement with an agent to commit a specific act of terrorism.285See 16 Am. Jur. 2d Conspiracy § 13 (2022). The agreement can be express—such as a terrorist recruiter directly asking a subordinate to commit an act and the subordinate agreeing to assist—or it can be implied—such as a terrorist recruiter requesting support and a subordinate responding through supportive actions.286See id.

Interpreting “terrorist activity” as “active indoctrination” aligns with the Supreme Court’s modern use of the constitutional avoidance canon, which favors narrow rulings and alternate constructions.287See Fish, supra note 23, at 1282. In Humanitarian Law Project, the Court narrowly construed a counterterrorism statute to impact only speech collateral to other forms of support to a terrorist organization.288Holder v. Humanitarian L. Project, 561 U.S. 1, 28 (2010). Had the Court been less modest, a broad ruling on the statute’s constitutionality might have jeopardized the material support statute.289See id. at 25, 28. In reviewing a counterterrorism regulatory statute, the Court could simply state that it is choosing a “plausible” alternative to the statute’s definition of terrorist activity.290See Fish, supra note 23, at 1280.

“Active indoctrination” relies on conspiracy law to avoid reaching speech protected by the First Amendment.291See United States v. Alvarez, 610 F.2d 1250, 1255 (5th Cir. 1980) (“The conspirator must knowingly agree to join others in a concerted effort to bring about a common end.”). Agreements are an essential element of conspiracy, and these agreements separate conspiratorial speech from mere advocacy.292Id. Conspiracy and active indoctrination share an underlying principle: advocacy rises to the level of criminal conduct through an agreement to commit an act of terrorism.293See id.; 16 Am. Jur. 2d Conspiracy § 13 (2022). Actively recruiting an individual to commit an act of terrorism is not protected speech because, as with conspiracy, the speech becomes the means for the commission of the crime.294See Dennis v. United States, 341 U.S. 494, 511–12 (1951) (plurality opinion); United States v. Rahman, 189 F.3d 88, 115 (2d Cir. 1999) (per curiam). Like conspiracy, active indoctrination focuses only on agreements to commit acts of terrorism, not general discussions or advocacy regarding terrorism.295See Dennis, 341 U.S. at 511.

A statute regulating only active indoctrination is less constitutionally suspect under strict scrutiny review; targeting active indoctrination serves a compelling national security interest and is narrowly tailored to target only conspiratorial speech.296See Williams-Yulee v. Fla. Bar, 575 U.S. 433, 443 (2015); Tsesis, supra note 16, at 672. The Supreme Court has held that curbing the threat of terrorist activity is a compelling state interest, and online terrorist recruitment associated with active indoctrination is indeed a national security threat.297See Holder v. Humanitarian L. Project, 561 U.S. 1, 28–29 (2010). But see Williams-Yulee, 575 U.S. at 442–43 (explaining that exacting scrutiny is often applied in First Amendment challenges). The active indoctrination definition is tailored to impact only conspiratorial speech.298See Williams-Yulee, 575 U.S. at 442–43 (noting that under exacting scrutiny, speech limitations are upheld “only if they are narrowly tailored to serve a compelling interest”). Active indoctrination does not include advocacy for acts of terrorism; it targets only speech constituting an express or implied agreement.299See Dennis, 341 U.S. at 511–12.

Using the active indoctrination definition avoids the presumption against content-based speech regulation because First Amendment protections generally do not apply to conspiratorial agreements.300See Healy, supra note 170, at 669. A content-based speech regulation targets speech based on “its message, its ideas, its subject matter, or its content.”301United States v. Stevens, 559 U.S. 460, 468 (2010) (quoting Ashcroft v. Am. C.L. Union, 535 U.S. 564, 573 (2002)). As an element of conspiracy, a conspiratorial agreement crosses a threshold from mere speech to criminal conduct.302See Hoffman, supra note 168, at 246–47. Actively recruiting an individual to complete a specific crime involves speech that becomes the means for the commission of that crime.303See Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017) (“Specific criminal acts are not protected speech . . . if speech is the means for their commission.”); Hoffman, supra note 168, at 246–47.

Active indoctrination plainly defines the prohibited forms of speech and is therefore unlikely to be deemed void for vagueness.304See Grayned v. City of Rockford, 408 U.S. 104, 108–09 (1972). Active indoctrination emphasizes the express or implied agreement reached between a principal and an agent; like conspiracy, active indoctrination targets preparatory actions.305See United States v. Rahman, 189 F.3d 88, 116 (2d Cir. 1999) (per curiam). As the Second Circuit acknowledged in Rahman, conspiracy is a specific criminal definition with an easily understood meaning.306Id. The definition gives adequate notice to potential offenders, and prosecutors can enforce active indoctrination regulations without making arbitrary determinations.307See Grayned, 408 U.S. at 108–09.

B.     Passive Indoctrination

When reviewing a reporting statute, courts could use the constitutional avoidance canon to interpret the definition of “terrorist activity” as “passive indoctrination.”308See generally Fish, supra note 23, at 1282. This alternative definition avoids implicating First Amendment protections against content-based speech discrimination and replaces vague definitions that invite arbitrary enforcement.309See United States v. Stevens, 559 U.S. 460, 468–69 (2010); NAACP v. Button, 371 U.S. 415, 433 (1963). But see Holder v. Humanitarian L. Project, 561 U.S. 1, 20–21 (2010) (holding that the material support statute was not unconstitutionally vague).

Passive indoctrination involves a principal who generally inspires an agent to commit an act of terrorism without the principal’s specific tasking.310See Weimann, supra note 24. Independent advocacy for terrorism falls under the passive indoctrination category and, if regulated directly, would be subject to the presumption against content-based speech regulation.311See Humanitarian L. Project, 561 U.S. at 24 (“Thus, any independent advocacy in which plaintiffs wish to engage is not prohibited . . . . On the other hand, a person of ordinary intelligence would understand the term ‘service’ to cover advocacy performed in coordination with, or at the direction of, a foreign terrorist organization.”). Passive indoctrination encapsulates propaganda aimed at recruiting lone wolf actors.312Tsesis, supra note 16, at 655. By defining terrorist content as “passive indoctrination,” Congress can execute a reporting scheme requiring technology companies to report public terrorist content once they are aware that such content exists.313The distinction between reporting public content and otherwise private content has Fourth Amendment implications. The government can require a private company to provide publicly shared content if the company is already aware of such content and does not engage in a compulsory search. However, the government cannot compel a private company to search for content. Doing so would force the company to act as a “government agent” and perform a Fourth Amendment search on behalf of the government. See Skinner v. Ry. Lab. Execs.’ Ass’n, 489 U.S. 602, 614–15 (1989); see also Tsesis, supra note 16, at 690.

A court using the constitutional avoidance canon can limit the definition of “terrorist activity” to passive indoctrination.314See Fish, supra note 23, at 1282; Weimann, supra note 24, at 3. When a constitutional and plausible statutory construction is available, courts may adopt this construction as an alternative.315See Nolan, supra note 221, at 10, 26. The Supreme Court has adopted a minimalist approach to constitutional judgments, especially when a ruling has national security implications.316See, e.g., Humanitarian L. Project, 561 U.S. at 20–21 (adopting a narrow ruling to preserve the counterterrorism statute’s overall purpose). The passive indoctrination definition targets online terrorist propaganda without implicating broader types of speech, and the definition flows naturally from a counterterrorism statute’s purpose.317See generally START, Use of Social Media, supra note 55, at 3; Weimann, supra note 24. Given the gravity of harm posed by online terrorist propaganda, a minimalist approach to reviewing counterterrorism proposals would benefit national security.318See Humanitarian L. Project, 561 U.S. at 33–34.

Passive indoctrination does not include conspiratorial speech, which Congress can regulate directly without violating the First Amendment.319See United States v. Alvarez, 610 F.2d 1250, 1255 (5th Cir. 1980) (discussing the bounds of conspiratorial speech). In the United States, lone wolf actors—inspired by online propaganda—have supplanted organized terrorist networks as the most virulent terrorist threat.320Weimann, supra note 24 (describing the increase in online content radicalizing lone wolf actors). This online content does not rise to the level of a conspiratorial agreement.321See Alvarez, 610 F.2d at 1255 (defining conspiracy). When a domestic terrorist organization spreads propaganda online, recruiters are not reaching affirmative agreements with agents.322See Hoffman, supra note 168, at 202–03. Roughly seventy-three percent of extremists in START’s database passively consumed or spread terrorist-generated propaganda.323START, Use of Social Media, supra note 55, at 3. This passive consumption alone does not cross the threshold into criminal conduct.324See Alvarez, 610 F.2d at 1255; 16 Am. Jur. 2d Conspiracy § 13 (2022).

A reporting scheme for online content that passively indoctrinates followers does not encounter the First Amendment’s presumption against content-based speech regulation.325Tsesis, supra note 16, at 689. Public speech enjoys expansive First Amendment protections, but reporting public speech does not impede these protections.326See Snyder v. Phelps, 562 U.S. 443, 452 (2011). A statute requiring the reporting of public content serves an important purpose: detecting threatening speech that presages a crime.327See United States v. Stevens, 559 U.S. 460, 468–71, 498 (2010). Reporting legislation would produce important intelligence and aid law enforcement in determining at what point speech may pose an imminent harm or true threat.328See id. at 498; Virginia v. Black, 538 U.S. 343, 347–49 (2003).

Conversely, counterterrorism legislation that regulates passive indoctrination would encounter the general presumption against content-based speech discrimination.329Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam). Decades of federal court cases solidified this strong presumption,330John Fee, Speech Discrimination, 85 B.U. L. Rev. 1103, 1104, 1117–20 (2005). despite early twentieth-century cases taking a more generous view of the government’s national security authority.331Dennis v. United States, 341 U.S. 494, 516–17 (1951) (plurality opinion) (“[The appellants’] conspiracy to organize the Communist Party and . . . advocate the overthrow of the Government . . . by force and violence created a ‘clear and present danger’ . . . .”); Abrams v. United States, 250 U.S. 616, 623 (1919) (holding that the plain purpose of the appellant’s speech was to excite sedition, riots, and revolution in the midst of war); Schenck v. United States, 249 U.S. 47, 52 (1919) (“When a nation is at war many things that might be said in time of peace are such a hindrance to its effort that their utterance will not be endured so long as men fight and that no Court could regard them as protected by any constitutional right.”). Categorical exceptions to the First Amendment do not apply to passive forms of terrorist indoctrination.332See Black, 538 U.S. at 359 (holding the government may regulate true threats, which involve a speaker communicating a serious intent to commit a crime). The government has no general power to restrict online terrorist advocacy absent some showing of imminent force or truly threatening speech.333See id. When a principal posts a message that indirectly encourages an agent to commit acts of terrorism, such speech is not a “true threat” because it lacks a serious expression of an intent to commit a crime.334See id. (explaining that true threats involve a serious expression of an intent to commit a crime). The vast majority of online terrorist content—the websites, chatrooms, and social media posts that inspire recruits to act—falls within this category.335Tsesis, supra note 16, at 654–55 (describing how, over the past decade, terrorists have used websites, Facebook, and Twitter as indispensable mediums for outreach).

Passive indoctrination coupled with a reporting statute solves this problem by avoiding the Court’s strict scrutiny review of content-based speech regulations. Reporting requirements are not content-based speech regulations because they do not compel the removal of speech.336See Reed v. Town of Gilbert, 576 U.S. 155, 163–64 (2015); Williams-Yulee v. Fla. Bar, 575 U.S. 433, 442–43 (2015); Tsesis, supra note 16, at 672. Even though a reporting requirement examines the content of reported speech, it is not a regulation, so strict scrutiny does not apply.337See Reed, 576 U.S. at 162–64; Williams-Yulee, 575 U.S. at 443; Tsesis, supra note 16, at 672.

The passive indoctrination definition is specific enough to avoid being deemed void for vagueness because it provides technology companies with adequate notice of impacted content. Defining terrorist activity as passive indoctrination leaves prosecutors with less room for arbitrary enforcement.338See NAACP v. Button, 371 U.S. 415, 432, 435 (1963). But see Holder v. Humanitarian L. Project, 561 U.S. 1, 20–21 (2010) (holding that the material support statute was not unconstitutionally vague); United States v. Rahman, 189 F.3d 88, 115 (2d Cir. 1999) (per curiam) (holding that a seditious conspiracy law was not unconstitutionally vague). Under the passive indoctrination definition, technology companies can focus on reporting a definite type of content: material inspiring an agent to commit an act of terrorism without a principal’s specific tasking.339See Weimann, supra note 24. One of the chief criticisms of reporting proposals is that a vague definition of terrorist activity leaves technology companies to make their own content-based judgments.340Press Release, Wyden, supra note 227. By using the passive indoctrination definition, the government can provide technology companies with clearer guidelines on what content they must report.341See id.

C.     Counterarguments

Passive and active indoctrination are not perfect definitions. Both involve some degree of overlap: some online actors directly recruit a lone wolf to commit a specific crime, while others post inspiring messages that indirectly persuade a lone wolf to act of his or her own volition.342Weimann, supra note 24 (noting that attackers often are encouraged by other online entities who radicalize and train them). Nevertheless, these two definitions draw a distinct border between constitutionally protected speech (i.e., passive indoctrination) and speech that falls within an exception to First Amendment protections (i.e., active indoctrination).343See United States v. Alvarez, 610 F.2d 1250, 1254–55 (5th Cir. 1980); 16 Am. Jur. 2d Conspiracy § 13 (2022). Active indoctrination provides courts with a definition of terrorism that targets particularly dangerous terrorist conspiracies, and passive indoctrination captures residual forms of online extremist content driving the spread of DVE propaganda.344See Jones et al., supra note 9, at 7.

Opponents may argue that active indoctrination captures too narrow a category of speech to effectively curb the increase in domestic terrorist activity.345See Tsesis, supra note 16, at 701 (noting that “[c]ourts have repeatedly found [that] incitement, true threat, and material support laws” target unprotected speech). Active indoctrination targets only conspiratorial agreements to engage in acts of terrorism.346See Alvarez, 610 F.2d at 1255; 16 Am. Jur. 2d Conspiracy § 13 (2022). As noted previously, only nine percent of extremists use social media and online forums to facilitate attacks.347START, Use of Social Media, supra note 55, at 6 (referencing a study of social media activities by extremists conducted between 2005 and 2016). A broader definition may indeed target a larger number of extremists, but a broad definition would confront a legal reality: the Supreme Court’s strong presumption against content-based speech regulation.348See United States v. Stevens, 559 U.S. 460, 468–69 (2010); Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam). A narrow, constitutional definition of terrorism nevertheless contributes more to national security than a broad definition that a court deems unconstitutional and void.349See Stevens, 559 U.S. at 468–69; Brandenburg, 395 U.S. at 447–48.

Critics may also point out that prosecutors already use criminal conspiracy to prosecute principals recruiting agents, and that active indoctrination is therefore a redundant definition.350See Hoffman, supra note 168, at 246–47. This argument, however, ignores the distinction between prosecuting individuals for conspiracy and regulating online extremist speech.351Id. Active indoctrination focuses only on the latter.352See Alvarez, 610 F.2d at 1255; 16 Am. Jur. 2d Conspiracy § 13 (2022). For the purpose of removing online terrorist content, the active indoctrination definition adapts conspiracy law principles as a guide to avoid regulating speech protected under the First Amendment.353See Alvarez, 610 F.2d at 1255; 16 Am. Jur. 2d Conspiracy § 13 (2022).

Opponents may also argue that the passive indoctrination definition, coupled with a reporting statute, does not aggressively address the surge in DVE activity online.354See Weimann, supra note 24 (addressing the rise of new media in cyberterrorism). Several legislative proposals focus on addressing propaganda that passively indoctrinates agents to commit acts of terror; removing this content outright may reduce DVE violence.355Tsesis, supra note 16, at 684, 688. Yet this criticism ignores First Amendment limitations (chiefly, the presumption against content-based speech regulation) as well as the power of open-source intelligence.356Id. at 662, 690. A reporting statute with a specific definition would be an enforceable tool to gain intelligence on online radicalization and preempt imminent attacks.357Id. at 655 (discussing online radicalization attempts). While intelligence sharing is an imperfect solution to stopping the spread of online propaganda, open-source intelligence has been effective in preempting terrorist attacks.358Andrew V. Moshirnia, Valuing Speech and Open Source Intelligence in the Face of Judicial Deference, 4 Harv. Nat’l Sec. J. 385, 395 (2013). It informs law enforcement of imminent attacks without implicating First Amendment protections against speech regulation.359Id. (discussing examples of shared information on imminent attacks).

Conclusion

First Amendment protections are sensitive to minor distinctions. Legislation that fails to precisely define terrorism in reporting and regulatory proposals risks being deemed unconstitutional.360E.g., See Something, Say Something Online Act of 2020, S. 4758, 116th Cong. (2020); Requiring Reporting of Online Terrorist Activity Act, S. 2372, 114th Cong. (2015); see also Tsesis, supra note 16, at 688 (describing a proposed act with a vague definition of terrorism). Federal courts have imposed successively higher burdens on regulating the content of speech,361See, e.g., Holder v. Humanitarian L. Project, 561 U.S. 1, 26 (2010); Virginia v. Black, 538 U.S. 343, 359–60 (2003); Grayned v. City of Rockford, 408 U.S. 104, 108–09 (1972); Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam). and even in the context of a reporting statute, broad statutory language is susceptible to being void for vagueness.362Watts v. United States, 394 U.S. 705, 708 (1969) (per curiam).

Recent counterterrorism proposals contain vague definitions of terrorism that run up against both the presumption against content-based speech regulation and the void for vagueness doctrine.363See discussion supra Section I.F. Courts could rely on the constitutional avoidance canon to limit the definition of terrorist activity to “active indoctrination” and “passive indoctrination” in the context of regulatory and reporting statutes, respectively.364See discussion supra Part II. Regulatory statutes can rely on the active indoctrination definition to prohibit terrorist content comparable to conspiratorial agreements, and reporting statutes can rely on the passive indoctrination definition to share open-source intelligence on online terrorist activity.365See discussion supra Part II.

Policies surrounding terrorism and free speech are incredibly nuanced, yet the material threat of online terrorism is painfully stark. Our nation will undoubtedly face challenges in spotting online domestic extremists hiding behind the cloak of political bluster. Irony aside, the evolving nature of terrorism requires that courts use innovative legal solutions to balance our nation’s security with its historic commitment to preserving speech.
