Duty of care: an empty concept
There is every reason to believe that the government and opposition are moving towards a consensus on introducing a duty of care for social media companies to reduce harm and risk to their users. This may be backed by an Internet regulator, which might decide what kinds of mitigating actions are appropriate to address the risks to users on different platforms.
The idea originated in a series of papers by Will Perrin and Lorna Woods and has been taken up most recently in a Science and Technology Committee report and by NGOs including the children’s charity 5Rights.
A duty of care has some obvious merits: it could be grounded in objective, evidence-based risks and ensure that mitigations are proportionate to those risks. It could take some of the politicisation out of the current debate.
However, it also has obvious problems. For a start, it focuses on risk rather than process. It draws attention away from the fact that interventions regulate social media users just as much as platforms. And it does not by itself ensure that impacts on free expression will be considered, tracked or mitigated.
Furthermore, because a duty of care model pays little attention to process, platform decisions that have nothing to do with risky content are not necessarily subject to better decision-making, independent appeals and so on. Rather, as has happened with German regulation, processes can remain unaffected when they fall outside the duty of care.
In practice, a lot of disturbing or offensive content is already banned on online platforms. Much of this would not be in scope under a duty of care, yet it is precisely this kind of material that users most often complain about, either because it is not removed when they want it gone, or because it is removed incorrectly. Any model of social media regulation needs to address these problems, but a duty of care is unlikely to touch them.
There are also many questions about the kinds of risk in play, whether to individuals in general, to vulnerable groups, or to society at large, and about the evidence required to trigger action. The truth is that a duty of care, if cast sensibly and narrowly, will not satisfy many of the people who are demanding action; equally, if the threshold to act is low, it will quickly be seen as a mechanism for wide-scale Internet censorship.
It is also a simple fact that many decisions platforms make about legal, non-risky content are not the business of government to regulate. This includes decisions about what legal content is promoted and why. For this reason, we believe that a better approach might be to require independent self-regulation of major platforms across all of their content decisions. This requirement could be a legislative one, but the regulator would need to be independent of both government and platforms.
Independent self-regulation has never truly been tried; voluntary agreements have filled its place instead. We should be cautious about moving straight to government regulation of social media and social media users. The government refuses to regulate the press in this way because it doesn’t wish to be seen to be controlling print media. It is pretty curious that neither the media nor the government is spelling out the risks of state regulation of the speech of millions of British citizens.
That we are in this place is of course largely the fault of the social media platforms themselves, who have failed to understand the need for, and value of, transparent and accountable systems to ensure they are acting properly. That, however, just demonstrates the problem: politically weak platforms that have built monopoly positions on data silos are now being sliced and diced at the policy table for their wider errors. It’s imperative that, as these government proposals progress, we keep focus on the simple fact that it is end users whose speech will ultimately be regulated.