Do AI systems learn and leverage political opinions of users in recommendations?
The AIPM project articulates algorithmic and social dynamics, and thus brings both a computer-science and a sociological angle to the question of the political impact of algorithms on social media.
The last two decades have seen the emergence of various hypotheses about disorders of digital public spaces, such as fragmentation (or “bubbles”), polarization, and extremism, and about the role that the algorithms mediating these spaces might play in them by amplifying the visibility and virality of particular content. Conclusive results, however, have proved elusive, as a growing body of research presents a contradictory picture: such disorders may be pervasive and have captured the attention of policymakers and the general public, yet no widely accepted definitions or metrics have emerged to quantify them or the role algorithms might play in them, let alone actionable means to design better algorithms that would minimize identified negative outcomes. Meanwhile, growing evidence suggests that the algorithmic recommendation systems mediating activity on social platforms may be leveraging users' political opinions and other features of public debate associated with social divides.
Method & research questions
Project AIPM builds on ideological social-network embedding methods and political survey research to propose a joint network and opinion-space model of digital public spaces. Using this combined network and spatial opinion analysis, the project tests, through algorithmic explainability, whether algorithms learn and leverage users' political opinions and how they affect information dynamics in public debate, and it opens a path toward actionable tools capable of guiding algorithm design, governance, and policy.
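To give a flavor of what ideological network embedding can look like, here is a minimal sketch, not the project's actual pipeline: users are placed in a latent opinion space via correspondence analysis of a user-by-account follow matrix. The follow matrix below is synthetic; in real studies it would be built from platform data (e.g., which political accounts each user follows).

```python
import numpy as np

# Synthetic user x elite-account follow matrix: 6 users, 4 accounts,
# with block structure mimicking two ideological camps.
F = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Correspondence analysis: normalize to a probability table, remove the
# independence model, and take the SVD of the standardized residuals.
P = F / F.sum()
r = P.sum(axis=1, keepdims=True)   # row (user) masses
c = P.sum(axis=0, keepdims=True)   # column (account) masses
S = (P - r @ c) / np.sqrt(r @ c)   # standardized residuals
U, d, Vt = np.linalg.svd(S, full_matrices=False)

# First principal coordinate = estimated position in the opinion space.
user_positions = (U[:, 0] * d[0]) / np.sqrt(r[:, 0])
print(user_positions.round(2))
```

The sign of the first dimension is arbitrary (an SVD ambiguity), but users from the same camp land on the same side of it, which is all a one-dimensional ideological scaling needs.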
Some of the questions at the heart of project AIPM are:
- What is the role of algorithms in the emergence of political identities and informational circuits online?
- Modern recommender systems use data traces to learn models (representation learning) from which they compute recommendations: do these models encode the political preferences of individuals?
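One common way to ask whether a learned representation encodes an attribute is a linear probe: train a simple classifier to recover the attribute from the embeddings, and compare its accuracy to chance. The sketch below is purely illustrative, with synthetic "recommender" embeddings in which one latent direction is correlated with a binary political label; it is not the project's method.

```python
import numpy as np

rng = np.random.default_rng(42)

n_users, dim = 400, 16
labels = rng.integers(0, 2, n_users)      # synthetic binary political camp
X = rng.normal(size=(n_users, dim))       # synthetic user embeddings
X[:, 0] += 2.0 * (labels - 0.5)           # leak the label into dimension 0

# Train/test split.
split = 300
Xtr, ytr, Xte, yte = X[:split], labels[:split], X[split:], labels[split:]

# Logistic-regression probe trained by plain gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))   # predicted probabilities
    w -= lr * (Xtr.T @ (p - ytr) / split)
    b -= lr * np.mean(p - ytr)

accuracy = np.mean(((Xte @ w + b) > 0) == yte)
print(f"probe accuracy: {accuracy:.2f}")
```

If the probe beats the 0.5 chance level on held-out users, the embedding carries information about the political label; applied to a real recommender's user vectors, the same logic turns the second research question into a measurable quantity.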
This project has received a McCourt Institute research grant.