Mass Surveillance
The Online Safety Bill puts a spy in your pocket
The deployment of client-side scanning on private messaging systems was trailed in a research paper published by the technical directors of GCHQ and the National Cyber Security Centre (NCSC). Joining the dots, the paper likely connects to a political move in the Online Safety Bill and signals a paradigm shift in online surveillance.
What is client-side scanning?
The research paper entitled “Thoughts on child safety on commodity platforms” discusses technologies for tackling child sexual abuse material on social media and messaging services. The paper comes out in favour of client-side scanning, even though the authors acknowledge there are risks to users’ security and privacy.
Client-side scanning is a technology for moderating content on encrypted messaging services. It involves software that resides on a user’s smartphone and checks images being uploaded for matches against a database of prohibited content. The database does not contain the actual images, but instead what are known as “neural hashes” – rather like digital fingerprints – that enable two images to be compared. Where there is a match, the system will take a pre-programmed action to remove the image or report it to the authorities.
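To make the mechanism concrete, here is a minimal illustrative sketch in Python. It is not Apple’s NeuralHash or any deployed system: toy_perceptual_hash, MATCH_THRESHOLD and HASH_DB are hypothetical stand-ins for a real perceptual-hash model and a curated database of fingerprints.

```python
# Illustrative sketch only; not any vendor's actual implementation.
# A perceptual ("neural") hash maps an image to a short fingerprint such that
# visually similar images produce similar fingerprints, so matching is done
# by distance, not exact equality.

MATCH_THRESHOLD = 4        # hypothetical: max Hamming distance counted as a match
HASH_DB: set[int] = set()  # fingerprints of prohibited images, supplied by a curator

def toy_perceptual_hash(data: bytes, bits: int = 64) -> int:
    """Toy stand-in for a neural hash: one bit per chunk of the input, set when
    that chunk's mean byte value exceeds the overall mean. Real systems use a
    trained model operating on decoded image pixels."""
    chunk = max(1, len(data) // bits)
    overall = sum(data) / max(1, len(data))
    h = 0
    for i in range(bits):
        block = data[i * chunk:(i + 1) * chunk]
        if block and sum(block) / len(block) > overall:
            h |= 1 << i
    return h

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def check_before_upload(image_bytes: bytes) -> int | None:
    """Return the matching database fingerprint, if any. Runs on the device,
    on the plaintext image, before anything is sent."""
    h = toy_perceptual_hash(image_bytes)
    for known in HASH_DB:
        if hamming_distance(h, known) <= MATCH_THRESHOLD:
            return known   # a pre-programmed action follows: block, flag or report
    return None
```

The design point to notice is that the database holds only fingerprints, not images, and that a “match” is a distance judgement rather than an exact comparison – which is also why false positives are possible.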
The relationship between people and their phones is, unsurprisingly, personal. The thought that their private and possibly most intimate communications are being monitored for the government’s purposes will be abhorrent to many.
Dr Monica Horten, ORG Policy Manager
The threat to encryption
The Online Safety Bill is widely understood to contain a mandate for client-side scanning, but it does not say so specifically. It is encapsulated in the covert phrase ‘publicly or privately’ within three Clauses [192, 188 and 104]. Clause 192 contains an interpretation of “content communicated publicly or privately”. Clause 188 defines what Ofcom should consider when deciding if content is communicated “privately”, and Clause 104 gives Ofcom powers to mandate private services to install monitoring technology. The regulator can require providers either to develop their own system or to implement an Ofcom-accredited system, both of which must meet Home Office-approved standards.
However, in all 230 pages of the Bill, it is never stated directly what “privately” means. The only formal confirmation has come from Ministerial statements in Parliament that the “responsibilities are the same” for encrypted messaging services as for public social media platforms. [See Damian Hinds 4 November 2021 Q287]
A Guardian article citing the two authors of the paper, Ian Levy of the National Cyber Security Centre and Crispin Robinson of GCHQ, provides some pointers as to the likely thinking behind this undefined legal language. The cybersecurity chiefs are quoted as saying that the technology could “protect children and privacy at the same time”. They say they are writing in a personal capacity, but given their day jobs, it is reasonable to assume their words are in line with the desired government approach.
Their paper concludes that the benefits in tackling child sexual abuse outweigh the risks, and that those risks can be overcome. This is questioned by other technical experts. And is this sufficient to justify legislating to force service providers to implement it – as the Online Safety Bill does?
The wide-ranging Ministerial powers in the Bill present a worrying prospect that a future government could add to the scanning requirement and there would be minimal, if any, scrutiny from Parliament.
Dr Monica Horten, ORG Policy Manager
Client-side scanning is being developed to tackle child sexual abuse material and remove it from encrypted messaging systems. It is claimed to provide a method of checking images without breaking the encryption: the images are matched while the user is uploading them, before they are encrypted, so technically the encryption itself is never broken. However, it does break the premise of encryption that the communication will not be interfered with.
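As a rough sketch of where that check sits in the send path – again assuming the hypothetical check_before_upload from the sketch above, with placeholder functions standing in for encryption and reporting:

```python
# Where the scan sits in a hypothetical send path. encrypt_for_recipient() and
# report_match() are placeholders; the point is that the check runs on the
# device, on the plaintext, before end-to-end encryption is applied.

def encrypt_for_recipient(data: bytes, recipient: str) -> bytes:
    """Placeholder for end-to-end encryption (e.g. a Signal-style protocol)."""
    raise NotImplementedError

def report_match(matched_hash: int, recipient: str) -> None:
    """Placeholder for the pre-programmed action: block the send and/or report."""
    raise NotImplementedError

def send_image(image_bytes: bytes, recipient: str) -> bytes | None:
    match = check_before_upload(image_bytes)   # scan the plaintext on-device
    if match is not None:
        report_match(match, recipient)
        return None                            # message never leaves the device
    return encrypt_for_recipient(image_bytes, recipient)  # ciphertext to transmit
```

The ciphertext is never decrypted in transit, which is the basis of the claim that the encryption is not broken; but the plaintext has already been inspected before it was ever protected.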
False positives and mission creep
There are question marks around the technology and the likelihood of false positives. The paper admits that “automated detection assumes a non-zero false positive rate” and that it is relatively easy to create “benign images that generate false positives”. This article demonstrates how it can be done. When Apple announced its version of client-side scanning in August 2021, it was forced to pause the rollout a month later after researchers showed that its ‘NeuralHash’ algorithm could generate false positives.
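To illustrate the point with the toy sketch above (real attacks on neural hashes used gradient-based methods against the actual model, not this naive search), a simple greedy hill-climb is enough to nudge a benign input until its fingerprint falls within the match threshold of a chosen target:

```python
import random

def craft_false_positive(benign: bytes, target_hash: int,
                         max_steps: int = 50_000) -> bytes | None:
    """Greedy hill-climb on the toy hash: keep a random byte change only if it
    moves the fingerprint closer to the target. Illustrates why a distance-based
    matcher necessarily admits engineered (and accidental) false positives."""
    if not benign:
        return None
    data = bytearray(benign)
    rng = random.Random(0)
    best = hamming_distance(toy_perceptual_hash(bytes(data)), target_hash)
    for _ in range(max_steps):
        if best <= MATCH_THRESHOLD:
            return bytes(data)       # a "benign" input that now triggers a match
        i = rng.randrange(len(data))
        old = data[i]
        data[i] = rng.randrange(256)
        d = hamming_distance(toy_perceptual_hash(bytes(data)), target_hash)
        if d < best:
            best = d
        else:
            data[i] = old            # revert changes that don't help
    return None
```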
One of the risks highlighted in the Levy-Robinson paper is the possibility that bad actors could subvert the system. The paper highlights the possibility of manipulating the detection system in order to conduct unauthorised surveillance or tracking. It also points to the critical role of the database curators, and the potential for coercion by third parties who could require them to insert additional types of content. There is nothing inherent in the technology that limits its use to child sexual abuse material: hashes relating to any type of image could be inserted into the database.
Such interference with the database could be done for political motives. Governments could exert political pressure via legislation, as we are seeing with the Online Safety Bill. The wide-ranging Ministerial powers in the Bill present a worrying prospect that a future government could add to the scanning requirement and there would be minimal, if any, scrutiny from Parliament.
The Online Safety Bill includes a requirement that providers report positive matches of child sexual abuse material to the National Crime Agency. This is not an inherent function of the technology, but a specific mandate of the UK government. It means that any false positives would also be reported. This raises concerns about safeguards for privacy rights. There is no process for innocent users to defend themselves against any allegations arising, and legal scholars remind us that the right to a fair trial could be compromised. [See Ludvigsen, Nagaraja and Daly].
Stop the Spy Clause
Privacy concerns are a central issue. Even Levy and Robinson acknowledge this, although they are, I feel, a little dismissive in suggesting that it is a manageable risk “in the majority of cases”. The scale of the deployment means this is a mass surveillance tool. It would be on every smartphone in the country, operating 24/7, checking for matches against all of our content. It is a vastly disproportionate measure, and given the uncertainties around the technology, should be approached by policy-makers with caution.
The relationship between people and their phones is, unsurprisingly, personal. The thought that their private and possibly most intimate communications are being monitored for the government’s purposes will be abhorrent to many. The public image of the spy is the sharp-suited James Bond, not an algorithm in our coat pocket.
Don’t Scan Me!
Online Safety Bill policy hub
Discover more about the risk to privacy and freedom of expression in our policy papers.
Find out more