Age verification in the Online Safety Bill
The draft Online Safety Bill’s provisions on mandatory age verification don’t just threaten your rights to privacy and freedom of expression – they also threaten the integrity of the Internet’s architecture.
As part of its goal to make the UK the “safest place in the world to be online”, government’s draft Online Safety Bill includes provisions which will mandate age verification processes for all sites, services, or applications offering user-to-user content or communication which can be accessed in the UK.
To be clear, what government is mandating here is not age verification as we have traditionally known it, for example, blocking access to explicit adult content or your local brewpub’s draught menu.
What is being proposed is age checks on sites and services with user-to-user content or communication across the board, meaning all content, all sites, all services, and all users, all the time, excepting sites which are deemed ‘child safe’.
These checks may take the form of direct age verification, linked to something like a passport or a credit card, or they may take the form of age assurance, which uses other information such as metadata, facial recognition, or behavioural profiling to estimate the user’s age.
Either way, both technological requirements have disturbing implications for your digital rights. And either way, government does not seem to grasp the unintended consequences of what they are about to legislate.
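Both approaches boil down to the same gating decision. The sketch below is purely illustrative – the `Visitor` type and `may_serve` function are our own invention, not anything specified in the Bill – but it shows why every visitor ends up being checked: whichever signal a service relies on, a visitor with no age signal at all must be treated as a child.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visitor:
    # Direct age verification: a claim backed by an official document
    # or payment card check.
    verified_age: Optional[int] = None
    # Age assurance: a probabilistic estimate inferred from signals
    # such as metadata, facial analysis, or behavioural profiling.
    estimated_age: Optional[int] = None

def may_serve(visitor: Visitor, adult_threshold: int = 18) -> bool:
    """Return True only if the service can treat the visitor as an adult.

    Under the Bill's duty-of-care logic the gate must fail closed:
    a visitor with no age signal is treated as a child.
    """
    if visitor.verified_age is not None:
        return visitor.verified_age >= adult_threshold
    if visitor.estimated_age is not None:
        return visitor.estimated_age >= adult_threshold
    return False  # no age signal at all -> assume child
```

The detail to notice is the final `return False`: it is the fail-closed branch that forces services to check everyone, since an unchecked adult is indistinguishable from a child.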
Although the Bill’s impact assessment notes that “We expect only a small percentage of the highest risk businesses that are likely to be accessed by children to be required to implement age verification systems”, the Bill itself has been drafted in a way that means every business in scope will need to implement some form of age verification or assurance.
They will need to do this not to shield children from subjectively harmful content, but to meet their compliance requirements: to know which of their visitors are children, and which of us are adults.
How did we get here?
Two years ago, the ICO’s draft Age Appropriate Design Code attempted to mandate age verification across all sites and services, for all content, for all users, regardless of scope, proportionality, or risk. This was due to pressure from children’s rights groups, as well as from the age and identity verification software lobby, which has sought to steer the UK’s post-Brexit Internet regulation regime towards creating a market for its products.
Those aspects of the Code were negotiated out, but we always knew that the push would return. It has indeed returned, with a vengeance, in the draft Online Safety Bill.
Part 2, Chapter 4 states that:
(1) A provider of a regulated service must, at a time set out in section 27, carry out an assessment—
(a) to determine whether it is possible for children to access the service or any part of the service, and
(b) if it is possible for children to access the service or any part of the service, to determine whether the child user condition is met in relation to the service or any part of the service.
(2) If a provider provides more than one regulated service, an assessment under subsection (1) must be carried out for each service separately.
(3) A provider is only entitled to conclude that it is not possible for children to access a service, or a part of it, if there are systems or processes in place that achieve the result that children are not normally able to access the service or that part of it.
(4) The “child user condition” is met in relation to a service, or a part of a service, if—
(a) there are a significant number of children who are users of the service or of that part of it, or
(b) the service, or that part of it, is of a kind likely to attract a significant number of users who are children.
The explanatory notes go on to say that
A provider can only conclude that it is not possible for a child to access its service if it has robust systems or processes in place which result in children not normally being able to access the service. These could be effective age-verification measures or an equivalent technology which identifies and prevents children accessing the service.
In other words – and in government’s eyes – if you are not using a process to age-verify all your visitors, whether that is identity verification or age assurance, you are violating your duty of care to children, and you are breaking the law.
You then become liable for the consequences.
Are age verification products safe?
Some age verification and assurance providers make the case that their products are safe. However, the sector remains largely unregulated outside of basic GDPR provisions, which is deeply problematic. And government, as you know, is keen to water down GDPR provisions in favour of “innovation”, which in this context means keeping age verification providers very lightly regulated.
Ahead of the Online Safety Bill, backbenchers in Parliament have introduced a Bill, the Age Assurance Minimum Standards Bill, to establish, as the name suggests, minimum privacy, ethics, and human rights standards around the use of these technologies. However, this approach would not provide sufficient protection across the huge range of commercial applications which will be required to implement the technology.
More to the point, by creating the appearance of guardrails around blanket age verification requirements, MPs are seeking to legitimise the practice and to present it as safe and ethical. In reality, it is technically risky, morally dubious, and nearly impossible to regulate.
As a backbench bill, the Age Assurance Minimum Standards Bill has little chance of succeeding, which makes it all the more important for Parliament’s pre-legislative scrutiny committee to look at the unintended consequences ahead.
What will mandatory age checking mean in practice?
The Online Safety Bill’s mandatory age checking processes present serious risks, regardless of whether a site attempts to infer your age through indirect data collection or verify it through direct identification.
The first and most obvious risk from blanket age verification is consent fatigue. The simplest way to put it is this: if you hate cookie consent pop-ups, welcome to the wonderful world of age verification pop-ups. You will be required to prove your age and identity, linked to some official form of identification or via a third-party intermediary, on every site you visit and every service you use. (Once you get through that process, then you can move on to cookie consents.)
The second risk is the chilling effect that these blanket age verification processes will have on your digital rights to privacy and freedom of expression. You will no longer be able to read some websites without proving your identity. You will no longer be able to say some things without proving your identity. You will no longer be able to seek certain information without proving your identity. And in the view of some age verification proponents, the only reason you would be opposed to any of that is if you have something to hide.
The third risk comes from the age verification and assurance processes themselves. These processes may collect many different pieces of personally identifiable information in order to profile you and establish your likely age. In doing so, they will create massive privatised databases of personal Internet browsing data – databases which would be very appealing to governments and hackers alike.
Regardless of whether a service provider chooses a data-intensive form of age verification or a third-party provider of age assurance, either solution creates a barrier to public access which can only be opened by providing personal data. That is, in fact, what the law intends it to do. This will be excellent news for the identity verification provider market, if for no one else.
And the fourth risk, as will be obvious, is the risk that this will pose to Internet architecture. Large parts of the UK Internet will, for all intents and purposes, be gated off behind a giant state-mandated identity verification wall. That wall, as a form of content filtering, will act as an additional technical layer within the UK’s Internet architecture – a wall not duplicated in any other western country. As the Internet Society has noted, this kind of mandated content filtering puts at risk four of the five fundamental networking properties of the Internet.
In that light, “making the UK the safest place in the world to be online” is the first step towards the creation of a British ‘splinternet’, cut off from the global network.
But why stop at just age checks?
While the thought of mandatory age verification across all sites and services will horrify many people, it must be understood in context as part of a wider push to mandate stronger identification of social media users, and to expand the scope of mandatory verification from age to other areas, at the expense of anonymity and privacy. That is a bigger question, which we will come back to in a later post; however, it is no secret that this scope creep makes us extremely worried.
The combination of an aggressive commercial lobby, and deeply entrenched political views about how to manage online harms, is providing an extremely dangerous momentum behind the Online Safety Bill. As MPs gather to scrutinise the Bill, they must also consider where this dangerous momentum might be taking us.