
Italian Elections Content Moderation Efficacy Report


Italy’s snap election on September 25 took place amid a rising tide of social media distortion, disinformation, and online hate campaigns. During critical moments such as these, the proper enforcement of platform content moderation policies is essential. Yet the review of the Italian social media landscape presented in this study clearly demonstrates that all of the major Big Tech platforms failed to apply consistent and transparent enforcement procedures in Italy. To evaluate platform compliance with their own rules and standards, we performed an in-depth examination of the actions platforms were in fact taking to mitigate the dissemination of content that violates their Terms of Service and (in many cases) Italian law. Were users able to report such content, and would their notifications be handled swiftly and diligently? Would users be able to understand and appeal content moderation decisions? What share of notified content would remain online a week after it had been flagged via the platforms’ content moderation systems?

For this report we collaborated with LUISS Data Lab to assess the effectiveness and implementation of Facebook’s, Twitter’s, and YouTube’s systems for moderating user-reported content in Italy. In a prior investigation, we had already scrutinised the ability of Very Large Online Platforms (VLOPs) to moderate large-scale coordinated inauthentic behaviour in the country. What we found was a general failure by the three social media platforms to respond to flagged content and enforce moderation policies in a timely, fundamental rights-sensitive way.

The EU’s Digital Services Act (DSA), the landmark piece of legislation that seeks to address a host of online harms, including those associated with electoral processes, was not yet in force at the time of Italy’s election. However, the major digital media companies have all agreed to a Code of Practice on Disinformation that pledges consistent and transparent enforcement of rules. They have not met these standards. Our investigation offers valuable insights into the content moderation practices of VLOPs, and it may help inform the design of regulatory standards and set expectations for the nascent DSA framework.

Our key findings are as follows:

  • User-reported content in clear violation of platform rules is almost never removed by social media giants in Italy. In cases where content was flagged for review for violating platform Terms of Service, Facebook chose to delete the problematic content only 12% of the time. YouTube and Twitter fared even worse, with YouTube removing flagged content just 6% of the time and Twitter just 5%. Reported content included incitements to violence and racist hate speech. Disinformation that clearly violated platform Terms of Service, including posts that denied the Bucha massacre, was not removed in a single case, even after it was flagged via platforms’ notification systems.
  • Removal rates were higher for content that experts assessed to be illegal under Italian law, but most such content still remained online. Out of 53 illegal comments that we flagged to Facebook, only 17 (32%) were removed; out of 18 illegal comments that we flagged to Twitter, only 1 (6%) was removed; out of 93 illegal comments that we flagged to YouTube, only 7 (8%) were removed.
  • User-reported content was rarely addressed in a timely or transparent manner. In a stark indication of serious deficits in platform notice and action procedures during the monitoring period, YouTube did not respond to any user notifications of potentially violating comments under videos. Facebook replied to user notifications of reported content only 45.3% of the time, while Twitter did so just 13.3% of the time. When Facebook and Twitter did respond to user notifications, the average time span between notification and response was 3 days and 1 day respectively.
  • When Facebook and Twitter did respond to user notifications, their responses often included no information about the content moderation decision or its reasoning, nor about redress possibilities. Some 40% of the responses we received from Facebook merely stated that the company was not able to review our notification because of a high overall volume of user notifications. Another 11% of Facebook’s responses stated that the company was not able to review our notifications because of “technical issues”.
  • Content that violated platforms’ Terms of Service or Italian law had been publicly available on the platforms for months, and in some cases for more than a year, before being reported. The comments reported during the monitoring period had been online for between 123 and 495 days prior to being flagged for review.

These findings suggest that, to comply with the forthcoming DSA as well as their commitments under the new EU Code of Practice on Disinformation, Facebook, Twitter, and YouTube will need to dramatically improve their notice and action systems and invest in human as well as technical content moderation capacities.

In the sections of the report that follow, we discuss our research design, the results of our monitoring process, and the responsiveness (or general lack thereof) of platform notice and action procedures. We then conclude with recommendations regarding next steps for stronger compliance with the Code of Practice on Disinformation and the DSA.

Download assets

  • Italian Elections Full Report