EU's Digital Services Act & U.S. Democracy


The EU’s proposed laws would make for more prepared and resilient democracies in the face of algorithmically supercharged violence and hate

The ominous attacks on American democracy, followed by a private company’s unilateral decision to de-platform a sitting President, require innovative solutions to prevent the poisoning of our democracies, both by white supremacists and the companies that profit from giving them such a platform. The EU’s proposed regulation for online platforms, the Digital Services Act, offers an example of how governments can move beyond simple binary discussions about whether content should stay up or be removed, and who should have the power to make that decision. These new rules could incentivize platforms to crack down far more forcefully on hate speech and coordinated attempts to spread electoral disinformation. At the same time, they would introduce checks on Big Tech’s excessive powers over free speech and civic discourse.

1) Tech companies would have done more, and they would have done it sooner

  • The tech companies did delete, demote or demonetize some of the lies and hatred propagated by Trump and his supporters ahead of the U.S. elections. But their efforts fell far short, despite researchers and civil society organisations warning of unfettered hate campaigns for months (such as hundreds of thousands of Facebook comments inciting violence against Democratic candidates). Under a regulatory framework like the DSA, the tech companies would not have gotten away with this gross negligence - whether we are talking about strictly defined illegal content such as incitement to riot, or about large-scale manipulation such as the use of fake accounts to spread electoral disinformation.
  • First, the DSA would establish legal certainty for the companies. Whenever Facebook or Twitter cracked down on hate and disinformation, Trump essentially threatened to pull the plug on their business model by making them legally liable for their content moderation decisions. Similarly, in the EU the tech companies currently risk being held liable if they proactively search for illegal content, such as incitement to riot, unless the government has explicitly told them to do so. If enacted, the DSA would reassure platforms that liability exemptions would remain intact when they “carry out own-initiative investigations” (Art. 6) - essentially a guarantee that the law would continue not to treat them as “publishers” when they take more control of content on their services.
  • Second, under the DSA the companies could face hefty fines for failing to delete illegal content, such as calls for violence against election candidates, if users or third parties had notified them of that content in advance. The law would have required them to pay special attention to profiles with “particularly wide reach”, such as Trump’s Twitter account. The burden of proof would be on the platforms to show that they had done enough to prevent the spread of that content on their services.
  • Third, the new EU rules would go beyond strictly defined illegal content and also hold platforms accountable for harmful-but-not-illegal content. As laid out in the European Democracy Action Plan, tech companies would have to commit to specific and verifiable KPIs for tackling disinformation and deceptive online behaviour. Also, for the first time ever, the EU would define the intentional manipulation of civic discourse as a “systemic risk”. This would come with potent ex-ante risk management requirements and strong verification audits to ensure that platforms are doing what they say. Essentially, Facebook, YouTube or Twitter would have been forced to conduct risk assessments long before the US elections and outline how they would make sure that their algorithms do not amplify political violence, or how they would prevent domestic and foreign groups from building networks of fake accounts to skew the electoral outcome. Those risk assessments would have been published - with the companies’ shareholders and app store providers among the audience. This would naturally increase pressure on platforms to keep their house in order.
  • Tech companies would have been audited to verify whether their self-published transparency reports are truthful and whether their measures to prevent harm to the public are sufficient. Regulators could have requested interviews with Zuckerberg or Dorsey on short notice, and if their answers were unsatisfactory, the regulator could have imposed interim measures. In fact, the regulator could have raided Facebook’s and Twitter’s European headquarters without prior announcement to check whether the platforms were complying with the law and their own commitments - such as their bold claims about cracking down on the violent QAnon cult or electoral misinformation.
  • Under the DSA, the regulator could have called on platforms to activate so-called crisis protocols, allowing them, for instance, to exchange information in real time - such as on new groups or hashtags used to organise violence against elected representatives (#StormTheCapitol) - or to access and promote verified information about election outcomes.

2) Free speech would have been strengthened

  • While morally appropriate, Twitter’s Trump ban rightly caused concern about the unchecked power a few large companies wield over democratic discourse - globally. Under the new EU rules, tech companies would be required to enforce and communicate their community standards transparently and consistently - including for heads of state. Moreover, Twitter could be requested to share historical archives of Trump’s Tweets with researchers, making sure they remain on the public record.
  • Crucially, the DSA would strengthen the rights of all users whose posts have been wrongly removed. In fact, the companies would break the law if they did not provide users with explanations of their takedown decisions, and did not offer affected users access to complaint-handling mechanisms and external dispute settlement. Such fundamental rights safeguards would have delegitimized the censorship cries of Trump supporters during the election campaigns - cries that deterred companies from taking action against organised hate and electoral disinformation.

3) Civic watchdogs could have done their jobs

  • Many of the atrocities were planned in the open - on public social media groups and pages. Under the DSA, regulators could have forced the platforms to cooperate with researchers and civil society organizations focused on election-related disinformation and incitement. For instance, platforms would have been bound to cooperate with external “trusted flaggers” - including fact-checking organizations authorized to flag electoral disinformation or hate speech to the platforms. Such a legal duty to cooperate could have prevented Facebook’s systematic failure to identify and label the disinformation spread by Trump and his supporters ahead of the elections.
  • Also, platforms could have been obliged to share public interest data with vetted researchers, so they could study the impact of platform technology on election integrity, public security or the rights and safety of vulnerable groups. So far, platforms have failed to deliver on their promises to provide researchers with even the most basic data sets, causing outrage and frustration among the academic community. Without such data access, the U.S. will not be able to draw sensible lessons from the horrific attacks on democracy.

4) Law enforcement would have been better prepared

  • The proposed DSA would oblige companies to inform law enforcement if they become aware of information giving rise to a suspicion of serious criminal offences involving a threat to the life or safety of persons (Art. 21). The companies would have had to notify law enforcement as soon as they gained knowledge of violence being organized on their services - weeks in advance - and it would have been hard for them to deny such knowledge.

While the DSA would bind companies to democratic principles, independently of their relative market shares, the EU has also proposed a complementary law, the Digital Markets Act (DMA), to crack down on Big Tech’s abusive business practices. By challenging the oligopoly, the DMA would further curtail Big Tech’s arbitrary powers over democratic discourse.