Report

Recommendations for protecting the German election

Reset and HateAid conducted nationwide research on German internet users to better understand the effects of social media on the culture of debate, disinformation, and hate in Germany. In summary: nine months after the Capitol Hill insurrection, Big Tech continues to put the German election at risk. See the full report for findings and recommendations.

Summary of findings

Insufficient protection against criminal content

A recent nationwide survey among eligible voters shows that three out of four Germans expect platform operators to take stronger action against hateful content on social networks.3 Meanwhile, Facebook does not even comply with the existing legal requirements of the NetzDG.

Insufficient enforcement of community standards

“We don’t allow hate speech on Facebook. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” (Facebook Community Standards) And yet, comments that we reported for violating Facebook’s hate speech policy were deleted by the platform in only 50 per cent of cases—if the company reacted at all.

Recommendations

In the medium term, the German federal government should advocate for systemic regulatory approaches at the EU level, such as those contained in the EU Commission’s proposals for a “Digital Services Act” and for AI regulation.

The federal government and all parties should push platforms to implement the following measures (the proposals are based on the experience of the US elections, the work of HateAid and other NGOs, and the expertise of various international research institutions):

Above all else, Facebook must comply with the law and the associated obligation to delete and report illegal content. Content that is reported by users as breaking the law must be recorded and processed as such, instead of letting it “disappear” with reference to the company’s own guidelines. Lawmakers should ensure the supremacy of legal regulation over private company rules.

Prevention of (digital) violence

  • Platform operators should carry out risk analyses, inform parties, candidates and authorities about which persons or groups are particularly affected by hate, and provide early warnings, for example about increased calls for violence.
  • Use of human moderators in groups and on pages with more than 20,000 members, where the risk of hate speech, disinformation, and verbal or physical violence is particularly high.
  • No automated recommendations for political pages, groups, profiles or websites that spread disinformation or incite hate and violence.
  • All corporate policies to protect the federal election should come into effect before September, not after problems are already out of control.
  • Consistent, comprehensible and effective enforcement of the platforms' own policies by psychologically supervised and trained staff who speak German and understand the cultural and contextual nuances of speech.
  • Protection of women from image-based sexualised violence in particular: once intimate photos or manipulated images (including deepfakes) appear on the Internet, it is often almost impossible to stop them from spreading; even when they are removed, they are re-uploaded over and over again. Tech companies have the means to prevent this but do not use them. Companies should therefore immediately set up a cross-platform database of images marked as illegal; the existing databases for terrorist content and child sexual abuse material could serve as a model (see the sketch after this list).
  • Real-time transparency of online election advertising,12 including all targeting parameters, and friction in the advertising system so that platforms cannot profit from ads that are clearly inflammatory and racist.
  • Use of the News Ecosystem Quality Score to curate and recommend newsworthy content, as Facebook did before the US election.13
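The cross-platform database proposed above would, in essence, match fingerprints of newly uploaded images against fingerprints of images already marked as illegal, much like the existing hash-sharing databases for terrorist content. As a minimal sketch of the underlying idea, the Python snippet below uses a simple average hash; the hash size, distance threshold and in-memory database are illustrative assumptions, not any platform's actual implementation.

```python
from PIL import Image

HASH_SIZE = 8  # 8x8 average hash -> 64-bit fingerprint (illustrative choice)

def average_hash(path: str) -> int:
    """Compute a simple perceptual (average) hash of an image."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether a pixel is brighter than the image mean.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical shared database of fingerprints of images marked as illegal.
known_illegal_hashes = set()

def is_reupload(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose fingerprint is close to a known illegal image."""
    h = average_hash(path)
    return any(hamming(h, k) <= max_distance for k in known_illegal_hashes)
```

Because such a hash survives small edits like rescaling or recompression, a match within a few bits catches typical re-uploads without the platforms having to exchange the images themselves.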
Documentation and sanctioning of (digital) violence

  • To ensure the documentation and reporting of offending content, independent researchers and auditors must be given access via programming interfaces to all posts and comments on public pages and in public groups associated with political parties or dedicated to political issues.14 (A hypothetical sketch of such an interface follows this list.)
  • Improved cooperation with law enforcement, especially regarding fake and multiple profiles, which according to our research spread the majority of illegal content. So far, requests from law enforcement authorities are answered only arbitrarily; in most cases, reference is made to the headquarters in Ireland or even the USA as the place of data storage, and when data is obtained, it is usually worthless.
  • Systematic recording of contributions that contain digital violence by an independent agency (e.g. the Lumen database). In addition, platforms should offer differentiated transparency reports with a thematic breakdown.
  • Weekly reporting on the measures taken to prevent the (automated) dissemination of punishable content, as Twitter has done recently.15
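As an illustration of what access "via programming interfaces" could look like, the sketch below pages through the public posts of a political page over a hypothetical REST endpoint. The base URL, parameter names and response fields (posts, next_cursor) are invented for this example; existing research tools such as CrowdTangle follow a broadly similar pattern.

```python
import requests

# Hypothetical endpoint; no such public API is confirmed by the report.
API_BASE = "https://transparency.platform.example/v1"

def fetch_public_posts(page_id: str, since: str, token: str) -> list:
    """Page through all public posts of a political page (hypothetical API)."""
    posts, cursor = [], None
    while True:
        params = {"page_id": page_id, "since": since}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            f"{API_BASE}/posts",
            params=params,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        posts.extend(data["posts"])
        cursor = data.get("next_cursor")
        if not cursor:
            return posts
```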
Please see the following report for more details and information.

Download assets

  • Recommendations for protecting the German election