Countering Violent Extremism (CVE) online in the U.S.- Part 1 of 7

Google’s recent conference on countering violent extremism (CVE) precipitated a ‘surge’ (that term is so 2007 now) in discussion over how the U.S. should mitigate what is perceived to be the growing threat of homegrown extremism spread via the Internet. On Twitter, a sizable group intermittently deliberates the need for and method to accomplish the CVE goals put forth at the Google conference and echoed in the third tenet of the administration’s new CVE strategy- “Countering Violent Extremist Propaganda While Promoting Our Ideals”.

Joshua Foust (Google Wants to Fight Extremism) and Will McCants (Don’t Be Evil) wrote what I believe are the two best responses to this debate.  I’ve been discussing with a group of colleagues what might be an alternative to the Google CVE approach of peppering individuals ‘susceptible to extremism’ (a.k.a. confused adolescent boys & lost, lonely loser men) into submission through a barrage of positive email spam and heart-warming YouTube videos. Having worked on and studied ten years of U.S. GWOT strategic communications, I hear echoes of previous efforts that failed to counter much of anything and in many cases exacerbated the problem of extremism.  Additionally, the online CVE approaches recently put forth by ‘experts’ usually prove quite expensive to execute and extremely difficult to assess.

I’m already guessing my take on the use of social media to counter violent extremists using social media will not make social media zealots particularly happy.  That being said, I hope this series of posts initiates some discussion about how the U.S. might implement a sensible CVE strategy in cyberspace- a strategy scaled such that the benefits outweigh the costs of implementation.  Note, I don’t disagree with the objective of CVE.  However, I’ve grown quite wary of strategies that lack any reasonable method for their implementation.

My comments below are just thoughts for now and I’m not convinced I’m correct.  I look forward to any and all discussion on the topic and hope that some feasible solutions might be revealed.  That being said, here are some of the sub-questions I have created with regard to this topic.  They are not in any particular order, but they generally focus on two approaches: 1) shutting down extremist websites and their content & 2) counternarratives against extremist rhetoric.  These are just questions I considered as I weighed the options for countering violent extremism:

  1. Should the U.S. Government (USG) notify Internet Service Providers (ISP) when their terms of service are being broken by people posting extremist content?
  2. What does the USG think will be accomplished by shutting down extremist websites?
  3. Who would be responsible for identifying and tagging extremist content in the USG? (Essentially, what is extremist content and who will decide what is extremist content?)
  4. AQ extremism has driven our CVE thinking, but what about other domestic groups that advocate extremism?
  5. What happens when the USG starts policing businesses based on their terms of service?
  6. What will extremists do when their websites get shut down?
  7. Do we expect extremists to listen to our ‘counternarratives’?
  8. If they don’t want to listen to our ‘counternarratives’, then what else could be done?

These eight questions are not likely to encompass all of the issues that need to be addressed. I would enjoy hearing any thoughts on what else should be included.  But for now, I’ll start with question #1:

1) Should the U.S. Government (USG) notify Internet Service Providers (ISP) when their terms of service are being broken by people posting extremist content? 

Yes, the USG should tell ISPs that they are hosting extremist content.   My larger issues with this are 1) enforcement and 2) costs versus benefits.

As I understand it, the recommendation by most is that DHS would enforce this provision.  Surfing the Internet, identifying extremist content, notifying ISPs, then ensuring removal of extremist content is a cumbersome bureaucratic mess requiring a lot of resources.

How much federal time and how many resources should we commit to the removal of these websites? Many of these sites are run by young guys who, when shut down one day, can immediately open up another website the next- thus draining government time, money and effort.  DHS could also spend tons of resources chasing websites with no following at all.

How many resources are we willing to commit to the elimination of a website that may or may not lead to a rare event- a violent extremist attack?  In my opinion, we would be committing large amounts of effort to counter only one of the inputs to radicalization.  I still think the websites should be shut down.  My issue focuses more on “bang for buck.”  Since the takedown of extremist content has only a small impact on the proliferation of extremist content (but is probably still necessary), how can we reduce the costs of policing this content?

Enough for now and more to come in posts 2 through 7….
