médialab Sciences Po

Discovering ideological structures in representation learning spaces in recommender systems on social media data

Tim Faverjon, Pedro Ramaciotti

Recommender systems on social platforms attract attention in part because of their potential impact on political phenomena, such as polarization or the fragmentation of online communities. These research topics are also important given the need to understand systemic effects in view of upcoming risk-oriented AI regulation in the EU and the US. A common approach audits recommender systems by examining the outcomes of their recommendations. A different approach is explainability, which seeks to render recommendation mechanisms intelligible to humans, potentially enabling both auditing and actionable design tools. This second approach is particularly challenging for online systems of political opinions because opinions are intrinsically unobservable. In this article we leverage multi-dimensional political opinion estimation for large online populations (along a left-right dimension, but also along other political dimensions) to investigate the latent spaces computed through representation learning by recommender systems. We train a recommender based on ubiquitous collaborative filtering principles using data on content sharing on Twitter by a large population, evaluate its accuracy, and extract the latent space representation it leverages. In parallel, we use multi-dimensional political opinion inference to position users in political spaces representing their opinions. We then show, for the first time, the relation between the latent representations leveraged by a recommender system and the spatial representation of users' opinions. We show that some dimensions learned by the recommender capture users' ideological positions, bridging politics and algorithmics in our social and algorithmic system and opening a path towards political explainability of AI.
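The core idea described above can be sketched in a few lines. This is not the authors' pipeline: it is a minimal illustration, on synthetic data, of extracting a latent user representation from a collaborative-filtering-style factorization of a user-item interaction matrix and checking whether one of its dimensions aligns with an externally given ideology score. All names, the SVD-based factorization, and the synthetic data-generation choices are assumptions for illustration only.

```python
# Hedged sketch (synthetic data, truncated SVD as a stand-in for a learned
# matrix-factorization recommender). Purely illustrative of the approach.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 200, 50, 4

# Stand-in for externally inferred left-right positions of users and a
# latent ideological leaning for each shareable item.
ideology = rng.uniform(-1.0, 1.0, size=n_users)
item_lean = rng.uniform(-1.0, 1.0, size=n_items)

# Users tend to share items whose leaning is close to their own: this bakes
# ideological structure into a binary user-item interaction matrix.
affinity = -np.abs(ideology[:, None] - item_lean[None, :])
interactions = (affinity + rng.normal(0, 0.2, size=affinity.shape) > -0.5).astype(float)

# Truncated SVD of the (column-centered) interaction matrix: a minimal
# proxy for the latent user embeddings a collaborative filter would learn.
U, s, Vt = np.linalg.svd(interactions - interactions.mean(axis=0), full_matrices=False)
user_embeddings = U[:, :k] * s[:k]  # shape (n_users, k)

# Which latent dimension best aligns with the external ideology score?
corrs = [abs(np.corrcoef(user_embeddings[:, d], ideology)[0, 1]) for d in range(k)]
best = int(np.argmax(corrs))
print(f"best-aligned latent dimension: {best}, |corr| = {corrs[best]:.2f}")
```

Because the synthetic interactions are driven by ideological proximity, at least one latent dimension correlates strongly with the ideology score; on real sharing data, finding such an alignment is precisely the empirical question the article addresses.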