Countering Violent Extremism (CVE) online requires private sector support and intervention. Unlike with national borders or illegal products, governments have few, if any, mechanisms to control the Internet and the distribution of its content. In parts #1, #2, #3, and #4, I explored several problems with trying to identify and remove extremist content residing in the private sector. Today, I’ll shift to:
5) What happens when the U.S. government starts policing businesses (primarily ISPs) based on their terms of service?
Right now, it appears a few ISPs and content hosts have decided to police themselves just enough to keep consumers and governments sufficiently content to stay off their backs without impeding their services. However, their argument becomes a bit absurd when it comes to enforcing their own terms of service: “We try, but there is so much data that we can’t help it; our product is so good we just can’t stop extremist content from moving through our service.”
Here’s a hypothetical example: A water company delivers water to an entire city, and 0.01% of the water turns out to be poisoned, resulting in a handful of deaths. Would the citizens say, “It’s only a few people, no big deal, so we’ll just let it go for now and won’t hold the water company responsible because most of the water is really clean”? No way!
ISPs operate in a fashion similar to water companies, except the product moving through ISP pipes is information rather than water. Is poisonous information as dangerous as poisonous water? Depending on one’s perspective, extremist content is a weapon, and its transport into the U.S. via ISP pipes should result in regulation and/or action. This analogy is admittedly a bit extreme, but I use it to illustrate questions that are fundamental to our online CVE approach: Can information be a weapon? Should we protect the freedom of all speech, regardless of its content? My guess is ‘no’ on both, but I’m not sure I can identify the appropriate middle ground.
Businesses, by design, maximize profits and minimize costs. Today, there is no incentive for ISPs to slow down content upload and weaken their competitive advantage in order to filter out extremist content. I suspect their push toward “wanting to counter violent extremism online” is twofold. First, it’s good public relations. It’s probably cheaper for them to project a desire to counter violent extremism online than it is to actually counter violent extremism online. Second, by calling for an increased CVE effort online, they will likely advocate for government funding to deal with extremism. Essentially, this would mean the government would be funding ISPs and other web companies to counter a problem they created by not filtering their content. These companies would receive funding to offset their costs while maintaining or even increasing their revenues.
I also wonder whether companies would reduce their internal policing of extremist websites if the government takes on the role of identifying extremist content and notifying the providers. A smart company might think, “Well, the government will now tell me what counts as extremist content, so I’ll reduce my internal policing staff and resources and just wait for the government to tell me what content to take down. This also saves our company the headache of dealing with customers who want to argue about our judgment on what is extreme.” Essentially, government policing of extremist content may give ISPs a disincentive to police their own hosted content.
(A quick note: Some may think my comments above are anti-ISP. Actually, if I operated an ISP, I doubt I would work vigorously to remove extremist content either. The purpose of a business is to provide a product or service and earn profits. By unilaterally pursuing the removal of extremist content, these ISPs would only be raising their internal costs and hurting their competitive advantage.)