Offering a broad policy framework to address the digital threat to democracy and recommending specific proposals.
Originally posted at New America Foundation.
The crisis for democracy posed by digital disinformation demands a new social contract for the internet rooted in transparency, privacy and competition. This is the conclusion we have reached through careful study of the problem of digital disinformation and reflection on potential solutions. This study builds on our first report, Digital Deceit, which analyses how the structure and logic of the tracking-and-targeting data economy undermines the integrity of political communications. In the intervening months, the situation has only worsened, confirming our earlier hypotheses and underlining the need for a robust public policy agenda.
Digital media platforms did not cause the fractured and irrational politics that plague modern societies. But the economic logic of digital markets too often serves to compound social division by feeding pre-existing biases, affirming false beliefs, and fragmenting media audiences. The companies that control this market are among the most powerful and valuable the world has ever seen. We cannot expect them to regulate themselves. As a democratic society, we must intervene to steer the power and promise of technology to benefit the many rather than the few.
We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.
Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms, including:
- Real-time and archived information about targeted political advertising;
- Clear accountability for the social impact of automated decision-making;
- Explicit indicators for the presence of non-human accounts in digital media.
Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetised, especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:
- Consumer control over data through stronger rights to access and removal;
- Full transparency for users about the extent of data usage, and meaningful consent;
- Stronger enforcement with resources and authority for agency rule-making.
Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:
- Stronger oversight of mergers and acquisitions;
- Antitrust reform including new enforcement regimes, levies, and essential services regulation;
- Robust data portability and interoperability between services.
No single solution to the problem of digital disinformation is likely to change outcomes. Only a combination of public policies, each necessary and none sufficient on its own, that truly addresses the business model underlying the internet will begin to show results over time. Despite the scope of the problem we face, there is reason for optimism. The Silicon Valley giants have begun to come to the table with policymakers and civil society leaders in an earnest attempt to take some responsibility. Most importantly, citizens are waking up to the reality that the incredible power of technology can change our lives for the better or for the worse. People are asking questions about whether constant engagement with digital media is healthy for democracy. Awareness and education are the first steps toward organising and action to build a new social contract for digital democracy.