Online harms: Freedom of expression remains under threat
Today (15 December) the government has published its long-awaited full response to the Online Harms White Paper ahead of the publication of the Online Safety Bill next year.
Open Rights Group remains concerned about the framework’s implications for freedom of expression and privacy.
Private messaging
Government has indicated that private messaging, including interpersonal communications as well as closed social media groups, will fall within the scope of the online harms framework. Private messages may be subject to interception and scanning on the assumption that CSAM or terrorist content is being exchanged. If encryption is used to protect private messaging, then companies may need to show that illegal content is nevertheless being dealt with. Encryption and privacy thus become a privilege dependent on wider corporate policies, rather than something we are entitled to.
The government omits to say what the “stringent legal safeguards to protect users’ rights” will be in a regulatory framework which appears to presume guilt. These matters will be delegated to Ofcom to resolve in a ‘code of practice’ pertaining to private communications.
It seems that the necessary use of privacy and security safeguards to protect private messaging from unlawful and third-party interception could count both as fulfilling a “duty of care” and as a violation of it.
“Legal but harmful”
To protect freedom of expression, the proposed online harms regime will require content providers to treat illegal and ‘legal but harmful’ content differently. Content which is undoubtedly illegal, within the rule of law, is relatively easy for content providers to manage and mitigate. Requiring content providers to regulate content which is ‘legal but harmful’, on the other hand, creates an obligation to measure the risk, likelihood, and results of any given ‘harm’ against subjective standards in order to achieve objective legal compliance.
Government still has not set out its definition of harm or risk. It says: “The legislation will set out a general definition of harmful content and activity.” Unfortunately, this is the central question which sets the thresholds for regulatory action, and it appears to remain unanswered.
Similarly, the response says that “to meet the duty of care, companies in scope will need to understand the risk of harm to individuals on their services and put in place appropriate systems and processes to improve user safety.” We still do not know how risk is to be quantified, or for which individuals. While there are good statements about proportionality, the duty of care remains surprisingly ill-defined and open to many potential interpretations.
Creating a complex and varied harms mitigation regime around content which is ‘legal but harmful’ is neither feasible nor practical for all but the largest platforms and content providers. In practice, this can only result in ‘collateral censorship’, where service providers and administrators will feel that they have no choice but to remove what may be perfectly innocent and harmless content, rather than risk falling foul of a regulatory system which threatens financial and criminal sanctions for failure. The chilling effect on free speech seems hard to avoid.
The response states that companies in scope will have a duty to create effective and accessible reporting and redress mechanisms. These structures are meant to safeguard against infringement of rights, such as taking down too much content. Nevertheless, appeals are rarely sufficient to protect rights, as people are generally reluctant to invest the time and energy an appeal requires. Appeals may flag problem areas, but do not by themselves remove the costs to free speech.
Government has erred before by providing insufficient transparency in content takedowns, such as in the Nominet domain takedown system, and we are encouraged to see a commitment to getting that transparency right from the start. Likewise, Government has indicated that penalties for misapplying terms and conditions could also apply to excessive content removal. There certainly will need to be incentives against misapplication, and also against over-broad terms and conditions.
Terms and conditions will take on a new meaning in this regime. Until now, they have permitted arbitrary action by platforms, while affording users few rights. In the future, however, they will become a battleground in which the extent of speech and the enforceability of takedowns is decided. Once settled, enforceability is potentially a good thing, but the opportunity to pressurise companies into extending what is disallowed is deeply problematic.
Two-tier system
To avoid criticism for placing heavy burdens on small, innovative companies, the Government wants to create a two-tier system, in which the greater regulatory burdens fall on the large social media companies. This is unlikely to satisfy the demands of groups that have pushed for online harms legislation, as arguably many of the riskier behaviours are found on smaller, emerging services.
The strategic problem is that Government will find itself favouring the larger platforms as more compliant, and easier to command-and-control, for content purposes. This kind of content regulation may need to be tailored to the different needs of small and large companies, but the long-term danger is that the state prefers large corporations to deliver its policy goals, whether these are content or surveillance. The danger to rights and democratic discourse is that monopolistic companies provide a monotonous environment. We should be aiming to create social media diversity to provide a range of experiences and content.
Managerial liability
We welcome government’s partial climbdown on demands for director and senior managerial liability for speech and content offences. Today’s response states that government “will reserve the right to introduce criminal sanctions for senior managers who fail to respond fully, accurately, and in a timely manner, to information requests from the online harms regulator”, and that this option will not be introduced without secondary legislation. Sanctions, in this system, will only be a last resort for systemic failures of regulatory engagement, rather than a retaliatory option for content disputes which could have a chilling effect on free speech.
Holding out the prospect of corporate director and senior managerial liability, and of prison sentences, nevertheless sends the wrong message. These are not acceptable steps in a democratic society. Apply this tool to Hungary, Hong Kong or Turkey and it becomes obvious why.
We know that online harms will not be tackled by turning the issue into a series of “winnable” ad-hominem witch-hunts, arrests, and trials. Nor will they be tackled by criminalising everyday site administrators, managers, and moderators for the ways members of the public misuse their services. Government should remove these clauses rather than leaving them hanging, like a sword of Damocles, to be enacted when newspapers or others demand it.
Media and journalistic content
To ensure freedom of expression, government has signalled its intent to exempt newspaper and journalistic content, published on their own sites, from the scope of the regulation. “Below the line” comments under these articles will also be exempt.
This approach, however, views media content as existing in walled gardens. It does not address what will happen when media content is shared and discussed on platforms, and both the content and the ensuing discussion risk falling foul of the content regulation principles. The Government says “legislation will include robust protections for journalistic content shared on in-scope services.”
It is difficult to see how this could work in practice without creating spaces in which sharing newspaper content exempts users from platform terms and conditions. In short, post something from your favourite tabloid, and constraints have to be loosened. This makes no sense.
What is next?
It is often easy to spot badly designed policies. They start with large announcements, are pushed through on the basis of relatively narrow concerns, and then take years to deliver. In the end, they tend to be disappointing or are simply forgotten. Remember the Internet cut-offs in the Digital Economy Act 2010? Or age verification in the DEA 2017?
There is already an interesting divergence between the UK’s proposals for a hands-on regulator and wide-ranging duty of care, and relatively narrow approaches being suggested elsewhere. The political pressure that has come with multiple grievances is very real; the problems are real; yet it is not obvious that a duty of care really has the potential to deal with them while the market is concentrated and dependent on an attention model.
The main drivers of the problems are, after all, the reach of the platforms, their desire for interaction, users who wish to exploit the vulnerabilities this creates, and the lack of choice for users to determine their experience with those platforms. A duty of care does not address the fundamentals, for all the talk of a systemic approach, but rather asks for the most politicised and extreme issues to be risk-assessed.
Nevertheless, we do appreciate that the Government has understood that these problems are not easy to solve. Problems will emerge as Parliament assesses any future bill and begins to understand the weaknesses of the approach; it is likely to call for stronger, and more dangerous, interventions. Further problems may come if Ofcom is tasked as intended; it too is likely to have to disappoint the Online Harms Bill’s proponents.
We will continue to engage with government and Ofcom on these and the other issues raised by the online harms framework, and will work to protect your rights to privacy and freedom of expression. We will also be advocating approaches that could really address the problems of platform power: competition and greater control over the way you are profiled and information is prioritised for you. Rights and principles are the answer in a complex world.