Digital Privacy

AI Regulation: Will Labour Promote Growth and Protect Rights?

In the King’s Speech, Labour committed to binding regulation on “the handful of companies developing the most powerful AI models” to ensure the safe development of these technologies. These plans sit alongside manifesto commitments that stressed the importance of maintaining strong legal safeguards and framed the industrial strategy as a driver of economic growth and technological development.

A DOUBLE-EDGED SWORD

Taken at face value, Labour’s commitment signals a departure from the previous Government’s refusal to introduce binding legal requirements on AI as part of its “pro-innovation approach to AI regulation”. As such, these announcements ought to be greeted with mild relief. However, details are still lacking, and the new Government’s overarching commitment to economic growth, while not bad in itself, could be a double-edged sword. Economic growth can indeed be an important means of raising standards of living. However, the past 15 years have taught us that pursuing growth in the absence of regulations that even out power imbalances and protect individuals leads to job losses and poor working conditions, exacerbates inequality, and degrades the business environment. In other words, deregulation undermines inclusive growth, economic justice and the very foundations on which the modern economy rests.

These concerns are far from speculative: while some voices in the Labour Party have stressed that only by “building public trust and confidence” and “addressing these legitimate worries can leaders seize the opportunities presented by AI and other fast-developing technologies”, former Prime Minister Tony Blair and his namesake Institute have been pushing a gold-rush narrative which prioritises the fast adoption of AI over any other consideration, including the technical limitations of this technology and its well-known risks of bias, discrimination and inaccuracy.

Blair’s narrative didn’t originate in a vacuum: it follows four years of a blind, Brexit-driven ideological belief that deregulation, lack of accountability and the freedom to “move fast and break things” would leave technology companies free to innovate and “unleash growth” across the economy. At the same time, the TBI’s narrative contrasts with more balanced papers from Labour Together, who have emphasised the risks of AI and the need for regulation. Likewise, the temperament of the new Government appears to combine a desire for speed with a commitment to caution and competence; time will tell which tendencies win out and where.

Perhaps the most significant risk lies in the UK’s ambition to use AI to support public service delivery. This will be a tempting area for new AI deployment, as AI may appear cheaper than other kinds of intervention, and vendors will inevitably oversell what it can deliver. Left unchecked, AI has the potential to exacerbate existing inequalities, produce biased and discriminatory results, and ultimately undermine the standards of living and the well-being of people who are already marginalised by society.

We have already seen this in the context of policing, where AI systems are disproportionately used to target racialised, working class and migrant communities. Here, AI can undermine the presumption of innocence and carries particular risk to people’s rights, safety and liberty. As a result, ORG and 16 other rights organisations have today written to the Home Secretary asking for a ban on so-called ‘predictive policing’ and biometric AI systems, and for transparency, accountability, accessibility and redress for all other uses.

LABOUR’S AI AND TECH STRATEGY

Ultimately, if the Labour Government wants to promote long-term, inclusive growth, and deliver the change they promised to the public, they will need to hit the reset button on the UK’s digital policies first.

It is understood that following the announcement in the King’s Speech, there will be a consultation aimed at informing what a UK AI Bill should look like.

This should be seen in context: in their Manifesto for Change, the Labour Party has centred its ambition to drive innovation and AI development on a combination of Industrial Policy, infrastructural development and public procurement strategy, which includes:

  • Using the Government’s Industrial Strategy to support the development of AI technologies and the construction of new datacentres in the UK;
  • Creating a “National Data Library” to support the delivery of “data-driven public services whilst maintaining strong safeguards”;
  • Creating a new Regulatory Innovation Office to support the Government in introducing binding regulation on the technology companies developing AI models.

This may signal a welcome step-change in the UK’s approach to innovation and AI: industrial strategies are powerful public policy tools that can indeed support innovation effectively while ensuring that technological development aligns with broader societal expectations. Public investments can be directed toward critical infrastructures that create solid foundations for innovation, while public procurement can help ensure that public money is spent on technologies that are developed responsibly, operate transparently and meet regulatory and safety standards.

However, the Government seem willing to make their future AI Bill narrow in scope, as legal requirements that only apply to “the most powerful AI models” could exempt a large number of AI applications that may not be powerful but are still dangerous. For instance, the faulty fraud-detection algorithm deployed in the Netherlands was far from sophisticated, yet it harmed thousands of families who were wrongly accused of fraud and led to the collective resignation of the Dutch Government.

Likewise, the infamous A-levels algorithm was rather trivial in its functioning, but failed so spectacularly that young people took to the streets in the middle of a global pandemic to chant “fuck the algorithm”. Thus, a narrow focus on “powerful models” that leaves aside the societal impact of AI and the need for robust guardrails to govern its adoption is likely to fall short of addressing the risks, the challenges, and ultimately the obstacles to the adoption of this technology.

THE ROTTEN FOUNDATIONS OF UK DIGITAL POLICIES

Labour’s plans will also have to deal with the hefty legacy left by the Conservatives. In 2020, the National Data Strategy argued that lowering regulatory standards would help “the widespread uptake of digital technologies”. This argument was later replicated by the TIGRR report which, notably, argued that the “GDPR should at a minimum be reformed to permit automated decision-making [that is to say, against the will of the individuals affected by such decisions] and remove human review of algorithmic decisions”.

This brings us to the UK Data Protection Reform, a four-year-long disastrous attempt to get rid of the few legal safeguards that are currently in place to ensure the safe deployment and use of AI systems. Indeed, the Data Protection and Digital Information (DPDI) Bill would have removed the right not to be subject to automated decision-making, which prohibits computers from making unaccountable and life-changing decisions about you.

Likewise, the DPDI Bill would have prevented individuals from asking for their data to be corrected or deleted by an AI system, and allowed organisations to deploy high-risk technologies such as AI even in the absence of measures that would mitigate their impact and risks. Finally, the Bill would have given the Department for Work and Pensions access to the bank accounts of virtually any UK resident for the purpose of tackling benefit fraud. It is worth noting that DWP algorithms were already found to incorrectly accuse people of fraud in two-thirds of cases, leading 200,000 UK residents to be wrongly investigated on accusations of fraud.

The Conservatives doubled down on this approach by adopting “A pro-innovation approach to AI regulation”, which set out non-statutory principles and then asked UK regulators to implement these principles in their work. In doing so, they not only failed to provide answers and guardrails for the deployment of AI technologies, but optimistically transferred this responsibility onto regulators, without giving them the additional statutory means that would allow them to actually do the job.

HIT THE RESET BUTTON

While the UK claimed to pursue a “world-leading” race for deregulation, other governments around the world were discussing or adopting laws to establish governance requirements and to mitigate some of the risks to individuals who are affected by these systems. For instance, the European Union adopted a first-of-its-kind regulation on Artificial Intelligence – the EU AI Act. The legislation bans certain uses of AI, and requires organisations to identify and mitigate risks that could arise from the deployment of an AI system before it is introduced. However, these efforts were limited and flawed: the final text included loopholes, carve-outs and exemptions, particularly in law enforcement and immigration contexts, and did not focus on harm to rights.

Elsewhere, countries like the United States, Canada, Brazil and even China have been adopting or introducing legal instruments which are aimed at establishing protections and good governance processes to prevent harms caused by AI systems. Nothing yet is perfect, but the UK can now learn from the good and the bad, rather than pausing indefinitely.

The UK’s global isolation in digital regulation is highly consequential for the new Government’s ambitions to “unlock growth”: new technologies are, by definition, prone to errors that will be externalised and paid for by the lowest link in the chain – be they workers, consumers, patients, or individuals. This can be avoided only if the Government puts in place robust, rights-based regulation that ensures the safe deployment of these systems, independent oversight, and clear and effective avenues for redress.

The new Government can choose to ditch the techno-lunatic ideology that has polluted the last four years of UK digital policy development, and work toward providing the robust and legally binding guardrails that innovation needs in order to be socially accepted, widely adopted and, ultimately, trusted by the public it ought to serve. Or they can yield to the pressure coming from the TBI and tech lobbyists and follow in the steps of the previous Government, leaving technology companies free to “move fast and break things”, fuelling societal resistance toward technological adoption and, ultimately, eroding the foundations upon which a modern economy and a healthy market should rest.