Lessons and questions for the IWF
Last week’s outrage over the blocking of Wikipedia by the UK’s major ISPs, after the Internet Watch Foundation (IWF) added an image hosted by the online encyclopedia to their blocklist and then removed it, may have died down. But lingering questions about the UK’s internet censorship practices remain unanswered.
According to this report in the Guardian:
“A spokeswoman for the IWF said that to her knowledge it was the first time in its decade-long history that any image or page banned by the IWF had been reassessed, and the first time that any page or image on Wikipedia had been banned.”
These two IWF firsts could well be related. Until the activities of the IWF and the ISPs who employ their blocklist were brought into the open by their catastrophic interaction with Wikipedia, it was nearly impossible to scrutinise decisions made by the IWF.
We fully appreciate the sensitivity of the work carried out by the IWF. The efforts the IWF makes on its website to be open about how it goes about that work put many other organisations to shame. We also understand and applaud the fact that the activities of the IWF are audited by independent legal and technical experts. And we’re pleased that a consumer representative was recently appointed to the IWF Board.
But improvements can be made, both at the IWF and at the ISPs who employ the IWF blocklist. Users at many ISPs will have seen error 404 (“file not found”) messages when trying to reach blocked pages. We think it would be more appropriate for ISPs to ensure that their users see error 403 (“forbidden”) pages, which make clear that access to the requested material has been deliberately blocked. For example, Demon return the following error message to those attempting to access images on the IWF blocklist:
“We have blocked this page because, according to the Internet Watch Foundation (IWF), it contains indecent images of children or pointers to them; you could be breaking UK law if you viewed the page.”
This way, internet users will know what’s going on, and will be better placed to spot when a URL has been added to the blocklist in error or as the result of a misjudgement.
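To make the distinction concrete, here is a minimal sketch, in Python, of how a block page might be served with the correct status code: requests for blocked URLs receive an HTTP 403 response carrying an explanatory message (wording modelled on Demon’s above), rather than a misleading 404. The paths, port and blocklist contents here are illustrative assumptions; real ISP filtering happens in network infrastructure rather than a toy web server, but the status-code distinction is the same.

```python
# Minimal sketch (not any ISP's real implementation) of serving an explicit
# block page with HTTP 403 instead of a misleading 404. The blocklist and
# message wording below are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical blocked paths; in practice these would come from the IWF list.
BLOCKED_PATHS = {"/blocked-example"}

BLOCK_PAGE = (
    "<html><body><h1>403 Forbidden</h1>"
    "<p>We have blocked this page because, according to the Internet Watch "
    "Foundation (IWF), it contains indecent images of children or pointers "
    "to them; you could be breaking UK law if you viewed the page."
    "</p></body></html>"
)

class FilterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in BLOCKED_PATHS:
            # 403 tells the user access was deliberately forbidden,
            # unlike 404, which implies the page simply does not exist.
            body = BLOCK_PAGE.encode("utf-8")
            self.send_response(403)
        else:
            body = b"<html><body><p>Not blocked.</p></body></html>"
            self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve locally for demonstration purposes only.
    HTTPServer(("localhost", 8080), FilterHandler).serve_forever()
```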
For the IWF’s part, we think they could do a lot better at notifying people when a URL is added to the blocklist. Firstly, this means notifying site owners. The Wikipedia community had to work out for themselves that a URL on their site had been targeted by the IWF. If site owners are responsible enough to publish contact details on their website, the IWF should be able to notify them when a URL has been added, and to point them to the routes of appeal.
Secondly, the IWF should review the feasibility of contacting ISPs hosting illegal content outside of the UK. Notification and takedown is a lot more effective than blocking, and it also allows for more transparency, as site owners and ISPs know what’s going on, and have better access to routes of appeal. If banks can now remove phishing sites hosted outside of the UK within a matter of hours, it shouldn’t take the IWF an average of a month to have illegal images removed [.pdf].
Finally, we would support introducing judicial oversight to the IWF’s decision-making processes. The IWF do have a trained team to assess images against sentencing guidelines, and IWF assessors can contact the police for their opinion. But this system can only ever decide whether an image is “potentially illegal” – it does not replace the independent scrutiny of a judge. This could be seen as an expensive imposition, but we believe it’s worth it – censorship should not come cheap in a civilised democracy.