
PhD students' seminar

Duncan Cassells and Tim Faverjon, both PhD candidates at the médialab, will present their research at the seminar on 26 November 2024.

Event, Research seminar

Room H405, 28 rue des Saints Pères, 75007 Paris

Connecting opinion dynamics with group identity and empirical data - by Duncan Cassells

Opinion dynamics - the study of theoretical models of opinion change, informed by social psychology - is typically used to study opinion formation and change within populations, since observing both opinions and interactions in real time is difficult. This approach, however, raises a question: what can simulated experiments on fictitious populations actually tell us about reality? This talk presents two efforts to bridge that gap. The first introduces a notion of group identity, leading to distinct in-group and out-group behaviours; the second tests existing models against empirical opinion distributions found in social media data. Together, these two strands bring theoretical models into closer contact with reality.
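The abstract does not specify which model family is used, but a minimal sketch of the kind of mechanism described - a bounded-confidence opinion update in which in-group partners are more persuasive than out-group ones - might look like the following. All parameter names and values here are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                          # number of agents
opinions = rng.uniform(-1, 1, N) # initial opinions in [-1, 1]
groups = rng.integers(0, 2, N)   # binary group identity per agent

EPS = 0.4     # bounded-confidence threshold: interact only if opinions are close
MU_IN = 0.3   # convergence rate toward in-group partners
MU_OUT = 0.05 # weaker convergence toward out-group partners

for step in range(50_000):
    # pick a random pair of distinct agents
    i, j = rng.integers(0, N, 2)
    if i == j:
        continue
    diff = opinions[j] - opinions[i]
    if abs(diff) < EPS:
        # in-group interactions pull opinions together faster than out-group ones
        mu = MU_IN if groups[i] == groups[j] else MU_OUT
        opinions[i] += mu * diff

print(f"final opinion spread: {opinions.std():.3f}")
```

Varying MU_OUT (for instance, making it negative to model repulsive out-group contact) changes whether the population converges, polarises, or fragments, which is exactly the kind of behaviour one would want to compare against empirical opinion distributions.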

Political Patterns in Algorithms: Explaining and Mitigating Bias in Content Recommendations - by Tim Faverjon

Recommendation algorithms use traces of users' online behaviour to suggest content. Research suggests that this behaviour can vary with users' political attitudes, which in turn shapes the recommendations they receive. Methods of algorithmic explanation can reveal the mechanisms by which models leverage such information. This raises the question: what happens to recommendations if we attempt to remove political information from the model?

We present a method of "political explanation" to open the black box and measure the political information embedded in the algorithmic representation space. This approach enables us to assess and adjust the influence of political features on content recommendations directly from within the model. In a case study using URL-sharing data from X (formerly Twitter), we trained a recommendation algorithm and leveraged political attitude estimates to identify and analyze political patterns embedded within the model. Our findings reveal that certain dimensions of the model capture specific political attitudes of users, steering recommendations toward partisan content. When we adjusted these politically sensitive dimensions, we effectively reduced political bias in recommendations but also observed a reduction in content diversity, as recommendations shifted toward mainstream sources. We discuss these results and their potential impact on compliance with the EU's new Digital Services Act (DSA).
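The abstract does not detail the intervention, but one common way to "adjust a politically sensitive dimension" is to estimate a political direction in the embedding space from users with known attitude scores and project it out before scoring. The sketch below illustrates that generic idea; every function name, shape, and the random stand-in data are hypothetical, not the authors' method.

```python
import numpy as np

def political_direction(user_embeddings: np.ndarray,
                        attitude_scores: np.ndarray) -> np.ndarray:
    """Least-squares direction in embedding space most predictive
    of a scalar political-attitude score."""
    X = user_embeddings - user_embeddings.mean(axis=0)
    y = attitude_scores - attitude_scores.mean()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w / np.linalg.norm(w)

def remove_direction(embeddings: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Project embeddings onto the hyperplane orthogonal to d."""
    return embeddings - np.outer(embeddings @ d, d)

# Hypothetical usage with random stand-ins for learned embeddings:
rng = np.random.default_rng(1)
users = rng.normal(size=(500, 32))    # user embeddings
items = rng.normal(size=(1000, 32))   # content embeddings
scores = users @ rng.normal(size=32)  # fake political-attitude estimates

d = political_direction(users, scores)
items_debiased = remove_direction(items, d)

# The projection is idempotent: the political direction is fully removed.
assert np.allclose(remove_direction(items_debiased, d), items_debiased)
```

Under this kind of intervention, dot-product recommendation scores no longer vary along the estimated political axis, which is consistent with the trade-off the talk reports: less partisan steering, but also a narrower, more mainstream set of recommendations.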