Free expression online
Musk and Zuck: Engineering Free Speech
Under the guise of protecting free speech, an alarming alignment between government power and Big Tech’s corporate power is unfolding. While publicly claiming to defend free speech, Elon Musk and Mark Zuckerberg are in fact aligning their corporate power with the political interests of the second Trump administration. Unsurprisingly, their claims have provoked alarm.
The politico-corporate bubble
What does it mean for global social media giants to be promoting “free speech” under Donald Trump? Are Meta and X now claiming that misinformation and disinformation are not a significant challenge; or that we should protect the speech of liars and frauds? The challenges of dealing with misconceptions and lies, especially when sponsored by state actors, are not at all trivial, and solutions are not easy to implement at scale.
At Meta, the appointment of Joel Kaplan, a Trump ally, as its head of global affairs marks a significant shift in the company’s direction. Following this appointment, Zuckerberg announced changes at Meta in a video in which he addressed issues with the company’s approach to fact-checking and algorithmic moderation systems. Both of these issues warrant closer examination.
Algorithmic moderation
Firstly, the flaws of algorithmic moderation systems are well-documented. Open Rights Group campaigned against mandating these automated censorship tools during the debates on the Online Safety Act. We warned that governments could pressure social media companies to deploy algorithms in ways that align with political agendas, effectively enabling prior restraint censorship of content. However, simply removing one kind of unaccountable algorithmic moderation and replacing it with another, looser kind of moderation still leaves users disempowered and likely more vulnerable.
Furthermore, when we consider its impacts, what content is promoted and why is just as important as what is removed. All of the approaches suggested by Meta are top-down and centralised; as of today, we are forced to accept their rules if we want to engage with Meta’s products.
Subject to T&Cs
Secondly, Meta’s loosening of moderation rules is not a one-way street to greater freedom of expression. Meta’s products serve a vast diversity of communities, and changes to limitations can be both positive and negative. Ensuring civility between users is a legitimate and necessary aim for any online community, but it is also incredibly hard to achieve when moderating at scale.
Meta’s content acceptability rules are designed to enable easy and swift decisions, but they present arbitrary hard lines (this, not that) and are problematic as a result. For example, their restrictions on nudity and adult material have frequently caused problems, as have bans on “violent” material. However, changes designed to allow people to be more insulting or offensive could easily result in legitimising toxic behaviour. Many of the exclusions which have been removed look very likely to result in more hate speech, rather than more useful free speech.
This matters: for example, speech on Facebook denying the existence of the Rohingya as an ethnic group in Myanmar was a precursor to genocidal violence, and, having previously been added to Meta’s exclusions, is now once again permissible. Problems enforcing content rules at these extremes, especially in non-English language contexts, are very serious, so presenting the problems of moderation as solely about over-enforcement while removing references to real-world harms is not a good way to signal Meta’s direction of travel.
Decisions to change moderation rules at scale need to be based on human rights principles alongside evidence of their likely impact, and should preferably iterate slowly. Meta’s team needs to explain what evidence and human rights analysis lies behind their decision, and how they believe they will continue to ensure users’ safety. Making politically motivated announcements to present moderation changes has understandably undermined public confidence in their decision-making.
Fact-checking to Community Notes
Thirdly, fact-checking panels—composed of experts appointed by centralized authorities—have their own limitations. These systems depend on trust in the appointing authority, raising the question: what happens if that authority abuses its power? And who oversees the fact-checkers – Quis custodiet ipsos custodes?
Community-driven solutions, such as the Community Notes feature, offer a different approach but raise the philosophical and ethical problem of the “tyranny of the majority”. In any case, while useful mitigations against misinformation, neither fact-checkers nor community notes directly address the underlying reasons why untrue and unreliable content is created and spread. Some of those reasons lie in the platforms’ business models and their desire to drive advertising engagement, through mechanisms such as recommendation algorithms, which tend to promote extreme content over more nuanced opinions.
Other reasons are about the society we live in today, and may be very hard to address – from the desire for simple explanations, to social alienation, to the failure of politics to produce a sense of fairness. High-spending domestic and foreign actors deliberately attempt to manipulate these concerns. The problems generated by bad actors with deep pockets need more fundamental actions, which it is the responsibility of governments to co-ordinate.
Big Tech / Big Media
The immediate concern generated by the realignment of social media’s corporate and political interests by figures like Musk and Zuckerberg has familiar parallels. Historically, media companies were criticised for similar behaviour. It was once said that winning a general election in the UK required the backing of the Murdoch news empire. Today, Musk, the world’s richest man, has acquired Twitter (now X) and uses it as a platform to amplify his worldview, including directly attacking the UK government. Big Tech’s platform power has become a political weapon, enabling figures like Musk to attack politicians, while on occasion still suppressing journalistic free expression.
Zuckerberg, meanwhile, speaks of previous political pressures to censor content and a new political direction under the Trump administration. Yet his past compliance with such pressures and current alignment with the new presidency reveal a troubling pattern. Whereas Musk uses his platforms to impose his perspective, Zuckerberg appears to have aligned Meta (which owns four of the most popular social media platforms: Facebook, Instagram, Messenger and WhatsApp) with the prevailing political climate, in order to ensure that Meta is better able to maintain its dominance and model of surveillance capitalism.
Ending monopolies
The solution to these abuses of power lies in dismantling Big Tech’s monopolistic grip on the Internet. As we have argued repeatedly, this requires shifting power to social media’s users, through greater use of competition powers and the enforcement of data protection rights. The aim has to be social media plurality – just as we aim for media plurality in traditional media.
Over the past year, users have increasingly abandoned platforms like X in favour of alternatives such as Mastodon, the Fediverse, and BlueSky. However, many users remain trapped in Big Tech’s “walled gardens” because their content, connections, and networks are deeply embedded within them. How often do we stay on Facebook or X simply because a specific relative, organisation, or journalist we follow remains active there? Writ large, these “network effects” keep Big Tech heavily entrenched, for users, organisations and advertisers alike.
Competition
Political leaders concerned about misinformation and Big Tech’s abuses of power (and our privacy and personal data) should take concrete steps to promote fair competition online. Encouraging decentralised social media models, such as the Fediverse, where communities host their own content independent of any single corporate entity, could be a crucial step forward. Interoperability standards that allow seamless communication across platforms could further reduce Big Tech’s stranglehold. Other measures can be taken to allow alternative prioritisation and moderation engines, even within centralised systems, as envisaged by BlueSky.
Legislation
Likewise, greater enforcement of data protection law could begin to dismantle the unfair exploitation of personal data: the profitable but unlawful profiling of users, and the harvesting of their data to build Artificial Intelligence products that are further designed to cement Big Tech monopolies.
The alternative looks dire. Measures that concentrate on the symptoms of the social media mess – like the Online Safety Act – can at best only deal with the most problematic content. At worst, they result in safe online spaces closing down because they face inappropriate levels of compliance risk. With a resurgent Trump administration, even measures like the UK’s OSA and the European Digital Services Act will come under pressure through trade-related threats made to the UK and EU about their approach to US corporations. The underlying agenda will be to promote a changed information environment that favours Trump’s allies, whether in the US or Europe; this is only possible because social media’s power is concentrated and monopolistic.
Breaking these monopolies and fostering diverse, user-driven platforms is essential to ensuring a free and fair society.