Governing Speech Online
How do we want to be moderated?
The turning point in the governance of freedom of expression in the era of digital platforms invites us to question the status of "private" regulation of digital content. This necessarily raises the questions: "What kind of Internet do we want?" and "How can we ensure responsible governance of the major digital players that articulates freedom of expression with the protection of the public?"
Treating the public problem of digital freedoms as a collective undertaking of knowledge production, this research project examines these issues from the point of view of regulators and public actors, but also, and above all, of Internet users.
Context
In recent years, social media platforms have been at the heart of a major debate on content moderation. Many concerns have been raised about the spread of hate speech, jihadist propaganda and false information on digital platforms. In this context, the regulation of web giants has often been presented as a necessary response to guarantee democratic balance and geopolitical stability within our societies. Faced with pressure from public authorities and civil society, platforms have been forced to broaden their rules and policies, expand their moderation teams, outsource moderation operations, and add new algorithmic detection technologies based on artificial intelligence (AI). Despite this strengthening of moderation, the public debate remains deeply divided, between those who criticize platforms for their inaction and passivity and those who see strengthened moderation as an attack on freedom of expression.
Objectives and methodology
This research project aims to explore the norms of the sayable as they are being made: their boundaries and complexities, their controversies and debates, their normative practices and expectations. By pursuing this line of questioning, the project seeks to identify and explore possible forms of democratic regulation of content on the Internet, so that it remains a space for debate, engagement and freedom. Drawing on different methodologies from the social sciences and design research, this project aims to:
- Qualify and identify the boundaries of the sayable, from the point of view of regulators, digital actors and Internet users.
- Explore the behaviors and practices of users confronted with speech they find offensive or hateful.
- Experiment collectively with content moderation through participatory workshops.
This research project is funded by the Good In Tech Chair and the McCourt Institute.
This work is supported by the European Union – Horizon 2020 Program under the scheme INFRAIA-01-2018-2019 "Integrating Activities for Advanced Communities", Grant Agreement n. 871042, SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics.