Vast Networks of Fake Accounts Raise Questions About Meta’s Compliance with the EU's DSA
Reset documented the existence of several massive networks comprising hundreds of thousands of inauthentic Facebook pages. The sheer scale of the networks and the common characteristics of their individual assets suggest that they were generated with the help of automated software or scripts.
Our analysis indicates that these networks predominantly consist of dormant accounts and were likely set up for commercial purposes. Individual pages are occasionally activated to engage in various malpractices, including pro-Kremlin advertising campaigns and scam advertising. Through these activities, the networks violate multiple platform policies, including Meta's Terms of Service, its rules on coordinated inauthentic behaviour (CIB), its advertising policies, and its Community Guidelines.
Our investigation zooms in on the activities of a network of 242,000 pages launched in late 2021 and used to disseminate both Russian propaganda and scam ads throughout 2022 and 2023. This is the first attempt to reveal the scale of an operation that has remained active for more than a year after researchers first detected it.
The network has grown exponentially since 2022, spending tens of thousands of euros on ads that violate Meta's Terms of Service. While the Russian propaganda ads primarily targeted French and German audiences, the commercial ads promoted potentially dangerous scam products, phishing, and malware to audiences in more than 32 countries.
Meta has known about the network since at least September 2022, but to this day it has failed to shut down the network's malign activities, exposing consumers to significant risks and jeopardising democratic integrity in the EU.
Reset also identified a second ecosystem of three interconnected networks engaged in similar malign activities. This ecosystem exceeds 340,000 pages.
Without effective mitigation from Meta, there is effectively no limit on the size of these networks, given that new accounts can be set up by automated means at almost no cost. Meta's apparent failure to detect a basic form of automation raises further questions about the company's ability to tackle more sophisticated automation, such as swathes of content produced by generative AI.
In view of next year's European elections, our findings raise the question: how does Meta intend to prevent such networks from being used to target millions of voters with disinformation and Russian propaganda? Its failure to mitigate the risks caused by large-scale inauthentic activity on the platform raises further questions about Meta's compliance with the EU's new digital rulebook, the Digital Services Act.