Legal opinion finds Online Safety Bill may breach international law

  • A legal opinion has found that there are “real and significant issues” regarding the lawfulness of a clause in the Online Safety Bill, which would require social media platforms to proactively screen their users’ content and prevent them from seeing anything deemed illegal.
  • The opinion finds that there is “likely to be significant interference with freedom of expression that is unforeseeable and which is thus not prescribed by law”.1
  • This represents “a sea change in the way public communication and debate are regulated in this country”, as the Bill would require online content to be screened and blocked before it is even posted, with tech companies facing sanctions if they fail to comply.

Open Rights Group has received legal advice from Dan Squires KC and Emma Foubister of Matrix Chambers, which states that measures in the Online Safety Bill may involve breaches of international law.

The legal opinion examines Clause 9(2) of the Bill, which places a duty on online platforms, such as Facebook and Twitter, to prevent users from “encountering” certain “illegal content”. The Opinion confirms that this duty “amounts to prior restraint”: it will require platforms to screen, intercept and block online communications before they have been posted. This, they warn, represents “a sea change in the way public communication and debate are regulated in this country”, as people may be censored before they have even been able to post online.

Dr Monica Horten, policy manager for freedom of expression at Open Rights Group, said:

“It’s one small clause with a huge impact. It up-ends the existing legal order on tech platforms. This legal opinion confirms that these measures could compel online platforms to remove content before it has even been posted, under threat of fines or imprisonment. There are no provisions for telling users why their content has been blocked.

“As well as being potentially unlawful, these proposals threaten the free speech of millions of people in the UK. It is yet another example of the government expecting Parliament to pass a law without filling in the detail.

“This stark warning cannot be ignored. As the Bill reaches its final parliamentary stages, it’s vital that peers press the government on whether it will require tech companies to pre-screen social media posts, and how, in those circumstances, the Bill could protect online freedom of expression.”

Risk to freedom of expression

Until now, the law has allowed individuals to post without prior restriction by the government or tech platforms; if content is reported as illegal or breaches the site’s terms and conditions, the platform takes it down and is not liable for the content.2

This Clause turns that premise upside down. Under Clause 9(2)(a), tech companies are required to “prevent users encountering” any illegal content. The only way to ensure that users are prevented from encountering such content is to stop it from ever appearing on the platform in the first place.

The online platform is made strictly liable for policing illegal content, under threat of large fines and the imprisonment of its management if it fails to do so. This will incentivise companies to err on the side of caution and block content rather than risk sanctions.

The platform must make its own determinations of what is illegal, creating the risk that “a significant number of lawful posts will be censored”. The Opinion highlights the particular challenges of identifying whether content that could appear to relate to fraud, terrorism, or immigration crimes is actually illegal. For example:

“Assisting unlawful immigration under section 25 of the Immigration Act 1971 is listed in Schedule 7. The government has expressly indicated that this provision may be used to prevent people from posting photos of refugees arriving in the UK by boat if they “show that activity in a positive light”1. It is difficult to predict how an AI algorithm will decide what constitutes showing such an image in a “positive light”. What if the image was posted without any text? Removing these images prior to publication, without due process to decide whether they are illegal, may have a chilling effect on a debate of public importance relating to the treatment of refugees and illegal immigration.”

The measures are not ‘prescribed by law’

The Opinion notes that there is a significant risk of interference with freedom of expression which is not “prescribed by law”. In order to be prescribed by law, the measure must (a) have “some basis in domestic law” and (b) be “compatible with the rule of law”, which means that it should comply with the twin requirements of “accessibility” and “foreseeability”. The law “must afford adequate legal protection against arbitrariness and accordingly indicate with sufficient clarity the scope of discretion conferred on the competent authorities and the manner of its exercise.” The Opinion concludes that Clause 9(2)(a) fails on all these counts and is therefore not itself lawful.

This is because users of platforms will not be able to foresee and predict in advance what kinds of posts will be removed. Content being blocked by means of prior restraint amounts to a straightforward interference with the right to express and receive that information. That may happen wrongfully if the AI or other algorithm misidentifies legitimate content as priority illegal content.

Users will not be able to foresee and predict in advance which content will be subject to prior restraint because of both the vast range of broadly defined illegal content, and the lack of transparency and accountability about how it will be identified and removed. Decisions on illegality will not be made by the courts, but by private providers who will be granted an over-broad discretion to determine if content is illegal without having to provide evidence.

Redress for affected individuals

Due to the lack of transparency and accountability, and the removal of content at scale, it is especially vital to have safeguards that ensure redress for those affected. As it stands, the Online Safety Bill contains no requirement for individuals to be notified that their content has been blocked, no timescale for responding to complaints, and no obligation on companies to give a reason why content has been removed. The Opinion notes: “There do not appear to be any enforcement processes for a failure adequately to address complaints and no entitlement to compensation” and that “the complaints process is very likely to be ineffective”.

Risks of discrimination

The Opinion also warns that “Adverse and unintended consequences are likely to include that the prior restraint provisions disproportionately and unjustifiably impact certain ethnic or other minority groups”.

In order to comply with the provisions in the Bill, online platforms will have to use algorithms that automatically ‘identify’ content as illegal and remove it before it is posted. There is a significant and growing body of research that demonstrates that automated technology frequently contains inherent biases. The use of such technology creates a clear risk that the screening systems will disproportionately block content relating to or posted by minority ethnic or religious groups.

The Opinion notes that the proposals “will give rise to interference with freedom of expression in ways that are unforeseeable and unpredictable and through entirely opaque processes, and in ways which risk discriminating against minority religious or racial groups […] This would be contrary to Article 14 of the Convention, taken with Article 10”.

ORG has shared the legal opinion with peers and urged them to address the threats posed by Clause 9(2). Read the legal briefing here.

Footnotes

  1. Measures must be foreseeable and accessible in order to comply with international law.
  2. E-commerce regulations: www.legislation.gov.uk/uksi/2002/2013/contents/made

Defend Democratic Expression

What’s legal to say should be legal to type.