Informal Internet Censorship: The Counter Terrorism Internet Referral Unit (CTIRU)
The CTIRU’s work consists of filing notifications of terrorism-related content with platforms, for them to consider for removal. The unit says it has secured the removal of over 300,000 pieces of extremist content.
Censor or not censor?
The CTIRU considers its scheme to be voluntary, but detailed notification under the e-Commerce Directive has legal effect, as it may strip the platform of liability protection. Platforms may gain “actual knowledge” of potentially criminal material when they receive a well-formed notification, with the result that they would be regarded in law as the publisher from that point on.[1]
At volume, any agency will make mistakes. The CTIRU is said to be reasonably accurate: platforms say they decline only 20 or 30% of its requests. Even so, that leaves considerable scope for error. Errors could unduly restrict the speech of individuals: journalists, academics, commentators and others who hold normal, legitimate opinions.
A handful of CTIRU notices have been made public via the Lumen transparency project.[2] Some of these reveal very poor decisions to send a notification. In one case, UKIP Voices, an obviously fake, unpleasant and defamatory blog portraying members of the UKIP party as cartoonish but vile racists and homophobes, was treated as an act of violent extremism, and the CTIRU filed two notices to have it removed. It is hard to see how the site could fall within the CTIRU’s remit, as its content is clearly fictional.
In other cases, we believe the CTIRU requested the removal of extremist material that had been posted in an academic or journalistic context.[3]
Some posters, for instance at wordpress.com, are notified by the service’s owner, Automattic, that the CTIRU has asked for content to be removed. This affords greater potential for a user to contest or object to a request. However, the CTIRU is not held to account for bad requests. Most people will find it impossible to stop the CTIRU from filing requests against lawful material, and companies may action those requests anyway, even though removing legal material is clearly beyond the unit’s remit.
When content is removed, there is no requirement to tell people who try to view it that it was removed because it may be unlawful, which laws it may have broken, or that the police asked for its removal. Nor is there any advice for people who have seen the content, or who return to view it, warning them that it may have been intended to draw them into illegal and dangerous activities, or telling them how to seek help.
There is also no external review, as far as we are aware, although external review would help limit mistakes. Companies regard the CTIRU as quite accurate, citing a 70 or 80% success rate for its requests. That still means 20 or 30% of requests are declined: potentially a large number that should not have been filed, and that might not have been accepted if put before a legally trained and independent professional for review.
Because many companies perform little or no review of their own, and because requests for the same content are filed to many companies at once, a single bad request can lead to removals in error on some services and not others. Any rate of error should therefore be concerning.
Crime or not crime?
The CTIRU is organised as part of a counter-terrorism programme and claims that its activities warrant operating in secrecy, including rejecting freedom of information requests on the grounds of national security and the detection and prevention of crime.
However, its work does not directly relate to specific threats or attempt to prevent crimes. Rather, it is aimed at frustrating criminals by giving them extra work to do, and at reducing the availability of material deemed to be unlawful.
Taking material down via notification runs against the principles of normal criminal investigation. Firstly, it means that the criminal is “tipped off” that someone is watching what they are doing. Some platforms forward notices to posters, and the CTIRU does not suggest that this is problematic.
Secondly, even if the material is archived, a notification results in the destruction of evidence. Account details, IP addresses and other evidence normally vital for investigations are destroyed.
This suggests that law enforcement has little interest in prosecuting those who post the content at issue. Enforcement agencies are more interested in the removal of content, potentially prioritised on political rather than law-enforcement grounds, as removal is sold by politicians as a silver bullet in the fight against terrorism.[4]
Beyond these considerations, because removing material has an impact on free expression, and because the police may make mistakes, the CTIRU’s work should be treated as a matter of content removal, subject to scrutiny, rather than as a secretive one.
Statistics
Little is known about the CTIRU’s work, but it claims to be removing up to 100,000 “pieces of content” from around 300 platforms annually. This statistic is regularly quoted to Parliament and presented as evidence that major platforms are failing in their responsibility to remove content. It has therefore had a great deal of influence on the public policy agenda.
However, the statistic is inconsistent with the transparency reports of the major platforms, where we would expect most of the takedown notices to be filed. The CTIRU insists that its figure is based on individual URLs removed. If so, much further analysis is needed to understand the impact of these URL removals, as the implication is that most of the material must be hosted on small, relatively obscure services. On the CTIRU’s own methodology, a single video uploaded to 100 websites counts as 100 removals, so the headline figure may represent a far smaller number of distinct items.[5]
Additionally, the CTIRU claims that no other management statistics are routinely created about its work. This seems implausible, and, if true, negligent. For instance, the CTIRU should know its success and failure rate, or the categorisation of the different organisations or belief systems it targets. A failure to collect routine data implies that the CTIRU is not ensuring it is effective in its work. We find this position, produced in response to our Freedom of Information requests, highly surprising and something that should be of interest to parliamentarians.
Lack of transparency increases the risk of errors and bad practice at the CTIRU, and reduces public confidence in its work. Given the government’s legitimate calls for greater transparency from platforms on these matters, it should apply the same standards to its own work.
Both government and companies can improve transparency at the CTIRU. The government should provide specific oversight, much as CCTV and biometrics each have a Commissioner. Companies should publish notifications, redacted if necessary, to the Lumen database or elsewhere, and should make full notifications available for analysis to any suitably qualified academic, under the least restrictive agreements practicable.
FoIs, accountability and transparency
Because the CTIRU is situated within a terrorism-focused police unit, its officers assume that their work is focused on national security matters and the prevention and detection of crime. The Metropolitan Police therefore routinely decline requests for information related to the CTIRU.
The true relationship between CTIRU content removals and matters of national security and crime prevention is likely to be subtle rather than direct and instrumental. If the CTIRU’s removals are instrumental in preventing crime or national security incidents, then the process should not be informal.
On the face of it, the CTIRU’s position seems illogical and inconsistent: it maintains that it only files informal requests for possible content removal, yet also that this activity is a matter of national security and crime prevention which means transparency requests must be denied.
The Open Rights Group has filed requests for information about key documents held, staffing and finances, and available statistics. So far, only one has been successful: a request to confirm what the CTIRU means by a “piece of content”.
During our attempts to gain clarity over the CTIRU’s work, we asked for a list of the statistics it keeps on file, as discussed above. This request was initially turned down on grounds of national security. However, on appeal to the Information Commissioner, the CTIRU claimed that no such statistics existed. This suggests that the Metropolitan Police did not trouble to establish the substance of the request, but simply declined it because it related to the CTIRU.[6]
We recommend that the private sector take specific steps to help improve the situation with the CTIRU.
Recommendations to Internet platforms:
1. Publication of takedown requests at Lumen
2. Open academic analysis of CTIRU requests
[1] European E-Commerce Directive (2000/31/EC) https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32000L0031
[2] https://www.lumendatabase.org
[3] Communication with Automattic, the publishers of wordpress.com blogs
[4] https://www.theguardian.com/uk-news/2017/sep/19/theresa-may-will-tell-internet-firms-to-tackle-extremist-content and https://www.bbc.co.uk/news/uk-42526271 for instance
[5] https://www.whatdotheyknow.com/request/ctiru_statistical_methodology “A terrorist group may circulate one product (terrorist magazine or video) – this same product may be uploaded to 100 different file-sharing websites. The CTIRU would make contact with all 100 file sharing websites and if all 100 were removed, the CTIRU would count this as 100 removals.”
[6] Freedom of Information Act 2000 (FOIA) decision notice: Commissioner of the Metropolitan Police Service, ref. FS50722134, 21 June 2018. https://ico.org.uk/media/action-weve-taken/decision-notices/2018/2259291/fs50722134.pdf