Digital Privacy

Bad Ads: Targeted Disinformation, Division and Fraud on Meta’s Platforms

This report lays out clear evidence of how Meta enables bad actors to use its targeted advertising system to manipulate elections, spread disinformation, fuel division, and facilitate fraud.

Executive Summary

Meta’s social media platforms, Facebook and Instagram, sit at the intersection of the attention economy and surveillance capitalism. Meta’s business model is built on maximising user attention while tracking behaviours and interests and harvesting personal information. This surveillance is used to categorise people into ‘types’. Meta uses this profiling to sell the attention of these ‘types’ to would-be advertisers – a practice known as surveillance advertising.

This report brings together existing and new evidence of how bad actors can use, and have used, Meta’s targeted advertising system to access the attention of certain types of users with harmful adverts. These ‘bad ads’ seek to mislead, to divide, and to undermine democracy. Through a series of case studies, it shows how bad actors — from political campaigns to financial scammers — have used Meta’s profiling and ad-targeting tools to cause societal harm.

The case studies in this report examine how bad actors use Meta’s advertising systems to spread bad ads across five areas:

  • Democracy
    Voter suppression, the targeting of minorities, electoral disinformation, and political manipulation by the Trump campaign, Musk-backed dark money groups, and Kremlin-linked actors.
  • Science
    The COVID infodemic, vaccine disinformation and climate crisis obfuscation.
  • Hate
    Sectarian division, far-right propaganda, antisemitism, and Islamophobia.
  • Fear
    Targeting of vulnerable communities, UK Home Office migrant deterrence, and the reinforcement of trauma.
  • Fraud
    Deepfake scams, financial fraud, and the use of targeted adverts to facilitate black market activities on Meta’s platforms.

This report evidences the individual and collective harms enabled by Meta’s advertising model. Three major issues emerge, each requiring urgent action:

The Transparency Problem

Meta’s ad system is insufficiently transparent about the profiled targeting categories advertisers choose. This opacity facilitates harmful advertising and prevents public scrutiny of disinformation, fraud, and manipulation.

Recommendation
Meta must be required to publish full ad targeting details in its public Ad Library, including all demographic, interest-based, and behavioural categories used by each advertiser for each advert. Greater transparency would deter some forms of harmful targeting and enable greater public scrutiny of ad targeting on Meta’s platforms.

The Moderation Problem

Meta’s moderation relies heavily on users reporting bad ads once they are already circulating, rather than preventing them from appearing in the first place. Meta’s largely automated approval process and lax approach to ad moderation enable targeted disinformation and harm.

Recommendation
Meta must significantly expand both the human and technological resources allocated to pre-publication ad moderation, tackling obvious disinformation, fraud, and harmful ads upstream of publication rather than downstream of harm. A useful starting point is provided by the Digital Services Act, whose provisions already establish transparency obligations covering both the content moderation decisions of online platforms and the criteria advertisers use to target advertisements. The DSA also introduces anti-dark-pattern provisions that prohibit online service providers from misleading, tricking, or otherwise forcing users into accepting targeted advertising against their best interests.

The Profiling Problem

Meta’s business model is built on profiling users by harvesting vast amounts of personal and behavioural data, yet it offers no effective opt-out of surveillance and targeting that would let users protect themselves from the harms evidenced in this report. Additionally, there is little awareness of how users can opt out of their data being used to train generative AI.

Recommendation
Users should be presented with a clear and explicit opt-in option for profiling and targeting, and be warned that opting in means they can be targeted by bad actors seeking to mislead or defraud them. Given the invasive nature of the data collection and Meta’s inability to demonstrate it can protect citizens from harm, informed consent must be required above and beyond the acceptance of lengthy terms and conditions. For users who do not opt in to surveillance advertising, Meta should adopt contextual advertising within broad geographies – targeting ads based on the content users are currently engaging with rather than on the surveillance and profiling of citizens.

Despite Meta’s public assurances that it does not allow disinformation, voter suppression, hate and division, or fraudulent adverts on its platforms, the case studies in this report demonstrate it consistently enables these harms. This is not solely a problem of policy enforcement, but an issue with the fundamental architecture of Meta’s opaque and poorly moderated advertising model built on surveillance, profiling, and microtargeting.

Meta’s surveillance advertising, combined with its unwillingness to mitigate the harms it enables, continues to facilitate societal harm. Without legal or regulatory intervention, these threats to democratic integrity, public safety, and social stability will persist, and citizens globally will continue to be deprived of the right to a reality that is not shaped by opaque, targeted advertising systems available for hire by bad actors.