
Case Study: Bratislava Terrorist Attack


Overview & Key Findings

On 12 October 2022, a radicalised attacker shot and killed two people and injured a third outside an LGBTQ+ bar in Bratislava, Slovakia. Prior to the attack, the shooter shared his manifesto – which contained extremist ideology, hate speech, and harmful conspiracy theories – on multiple file-sharing platforms. After the shooting, the attacker posted about the event on Twitter and 4chan before committing suicide. Local authorities subsequently reclassified the attack from murder to an act of terrorism, and debate over the incident spread rapidly on social media, with many users posting hateful comments aimed at further injuring the LGBTQ+ community.

In an effort to mitigate further damage and incitements to violence after the attack, the Slovak regulator – the Council for Media Services (CMS) – sought to collaborate with Facebook, YouTube, and Twitter by flagging content and engaging in intensive bilateral communications. Despite the urgency, the platforms’ content moderation efforts were very slow, even in clear-cut cases.

Given this troublingly low degree of cooperation, this report analyses the content moderation policies and responsiveness of Twitter, Facebook, Instagram, and YouTube in the context of the shooting in Bratislava. Particular insight was gained by collaborating with the CMS to share process details and results, and to push for greater oversight of the analysed platforms. Key findings are as follows:

  • Content Moderation System Failures: The content moderation systems of the analysed platforms failed to identify extremist, terrorist, and hateful content both before and after the terrorist attack in Bratislava, regardless of the language used.
    • Pre-Attack: Even though these systems perform best when analysing English-language content, Twitter’s content moderation tool failed to detect problematic content published by the terrorist prior to his attack – despite the posts clearly violating both platform Terms of Service (ToS) and local law.
    • Post-Attack: Platforms like Facebook, Instagram, and YouTube failed to sufficiently moderate hateful content after the attack and to enforce their content policies efficiently. On these platforms, more than 10% of the 300 most toxic comments detected under posts related to the terrorist attack contained hate speech against the LGBTQ+ community.
  • Slow and Insufficient Platform Responses: According to the CMS, Facebook’s response to the national regulator’s inquiries to remove inciting content related to the attack was slow and contradictory. In the first three weeks after the shooting, the CMS flagged 66 relevant posts that it deemed to be in violation of Facebook’s community standards. Facebook removed only six of the flagged posts, with an average response time of eleven days, and failed to provide any reasoning as to why the remaining content was not removed.
  • Repeated ToS Offenders Not Penalised: The most frequent authors of potentially illegal or harmful content were “repeat offenders” who had repeatedly violated platform Terms of Service and had been reported by the CMS prior to the terrorist attack.
  • Insufficient Content Moderation Resources: The problematic content reported by the CMS was usually sent to a third-party fact-checker hired by Facebook to perform the review. In a stark signalling of market priority, there is only one Facebook-contracted fact-checker for all of Slovakia. The limited resources Facebook dedicates to fact-checking in small markets pose yet another obstacle to the quick and efficient review and removal of potentially harmful content.
  • Outdated Counter-Terrorism Policies: Platform policies on extremism and terrorism differ from platform to platform, resulting in inconsistent application of content moderation policy to posts that can relate to life-and-death situations. Worse, current platform ToS tend to focus on terrorist organisations despite the recent surge in attacks perpetrated by lone actors not formally affiliated with a specific terrorist organisation. While both Meta and Twitter do account for such a possibility in their respective ToS, the scope of existing content moderation policies that address lone actors is far too limited and vague when compared to those that apply to terrorist groups. For example, in Meta’s policy on terrorism, the only references to lone actors appear in the illustrative examples, not in the actual wording of the policy. Together, these failures stymie counter-radicalisation and counter-terrorism efforts.
  • Inefficient Content Moderation Poses Systemic Risk: Major mainstream social media platforms have repeatedly failed to remove illegal content and other content in clear violation of their terms of service, even when it was proactively reported by users, as demonstrated in Italy, Germany, and France. The inability to prevent the spread of terrorist content online is not limited to the instant case, but rather represents a pervasive systemic problem embedded in the functioning of social media platforms, especially when it comes to smaller markets deemed to be politically less significant.

In the sections that follow, this report provides further background regarding the Bratislava terrorist attack and the social media platforms that were utilised by the shooter prior to and after the assault. The role of the CMS as the designated national regulatory authority in Slovakia is then discussed before moving to an evaluation of platform policy and policy implementation in the context of the shooting. Next, an analysis of platform cooperation with the CMS is offered – highlighting disparities in treatment for small-market countries and inadequacies in compliance with the future obligations of the Digital Services Act. The report concludes with a series of lessons learned and recommendations intended to tackle societal risks, including radicalisation and polarisation, and improve content moderation efficacy of digital platforms.
