Digital Platforms’ Governance: missing data & information to monitor, audit & investigate platforms’ misinformation interventions
Shaden Shabayek, Emmanuel Vincent, Héloïse Théro
There is growing concern in society about the spread of misinformation on online platforms and its potential impact on democratic debate and public health. To address this concern, online platforms have been expanding their rules to tackle the spread of misleading information. During the COVID-19 global health pandemic, platforms showed a willingness to ensure access to reliable health information by implementing new policies. Moreover, regulators at the national and European levels are making progress on the design of a legal framework specifically tailored to tackle disinformation[1]. Notably, the large platforms in operation have signed the “EU Code of Practice on Disinformation” (2018). This code lists a number of actions that large platforms have agreed to implement, such as to “reduce revenues of the purveyors of disinformation”, “prioritize relevant, authentic and authoritative information” or “dilute the visibility of disinformation by improving the findability of trustworthy content”. Since then, several new regulatory instruments have been adopted at the European Union level or are awaiting entry into force, such as the Strengthened Code of Practice on Disinformation (June 2022) and the Digital Services Act[2] (hereafter the DSA), which obliges very large platforms to give vetted researchers[3] “access to data that are necessary to monitor and assess compliance with this Regulation” (see Article 31 of the DSA proposal), subject to requirements specified in the act.

Along similar lines, transatlantic initiatives have emerged, such as the Platform Accountability and Transparency Act (PATA), a bipartisan bill introduced by US senators Coons, Portman and Klobuchar (December 2021), which would require platforms to make certain key information available to independent researchers[4]. At the European level, the DSA complements the GDPR, in force since May 2018, which offers further guarantees for the respect of privacy and the ethical use of data for research purposes. In this context, a variety of actors are working out the practicalities of such legal frameworks so as to address the ethical concerns raised by data circulation between platforms and other members of society. In particular, the provisions of Article 40 of the GDPR encourage the drawing up of codes of conduct. The working group on Platform-to-Researcher Data Access of the European Digital Media Observatory (EDMO) recently (May 2022) drafted such a code of conduct so that data circulation between platforms and researchers can be organized in practice. At the national level, regulators such as ARCOM in France are gathering[5] information about the type of data researchers would need to effectively investigate the impact of digital platforms on our informational ecosystem.
From the perspective of researchers, regularly assessing the impact and pertinence of platforms’ misinformation-related interventions, monitoring their implementation, and carefully investigating the phenomenon of misinformation itself are necessary safeguards for democratic societies with growing digital spheres. Since their early days, online platforms have emerged as digital spaces where information and opinions can circulate freely. The task of striking a balance between freedom of expression and access to reliable information on political life or public health is tremendously intricate. In spite of transparency efforts by digital platforms, a number of issues remain: access to the specific data and information that would enable the academic community, NGOs, civil society and data journalists to successfully study online misinformation and the related interventions is still limited. In what follows, we provide illustrations of ways to monitor the most common misinformation-related interventions with currently available data, illustrations which themselves demonstrate the scarcity of pertinent data.
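As a concrete illustration of what such monitoring looks like with currently available data, the sketch below compares the average engagement of a set of flagged accounts before and after a known intervention date, using CrowdTangle’s posts endpoint. It is a minimal sketch under stated assumptions, not our actual pipeline: the list ID, token and intervention date are placeholders, and access to CrowdTangle itself requires an approved dashboard.

```python
# Minimal sketch: monitoring a "reduced distribution" intervention with
# currently available data. The list ID, token and intervention date below
# are hypothetical placeholders, not real values.
import datetime as dt
import requests

CT_API = "https://api.crowdtangle.com/posts"  # CrowdTangle posts endpoint
TOKEN = "YOUR_CROWDTANGLE_TOKEN"              # placeholder credential
LIST_ID = "123456"                            # hypothetical list of accounts
                                              # flagged by fact-checkers

def fetch_posts(start: str, end: str) -> list:
    """Fetch posts published between start and end (ISO dates)."""
    params = {"token": TOKEN, "listIds": LIST_ID,
              "startDate": start, "endDate": end, "count": 100}
    resp = requests.get(CT_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("result", {}).get("posts", [])

def mean_engagement(posts: list) -> float:
    """Average interactions per post (likes + shares + comments)."""
    if not posts:
        return 0.0
    totals = [p.get("statistics", {}).get("actual", {}).get("likeCount", 0)
              + p.get("statistics", {}).get("actual", {}).get("shareCount", 0)
              + p.get("statistics", {}).get("actual", {}).get("commentCount", 0)
              for p in posts]
    return sum(totals) / len(totals)

# Compare engagement in the 30 days before vs. after a known intervention
# (e.g. the date an account was publicly flagged for repeated misinformation).
intervention = dt.date(2021, 3, 1)            # hypothetical date
before = fetch_posts(str(intervention - dt.timedelta(days=30)), str(intervention))
after = fetch_posts(str(intervention), str(intervention + dt.timedelta(days=30)))
print(f"mean engagement before: {mean_engagement(before):.1f}")
print(f"mean engagement after:  {mean_engagement(after):.1f}")
```

Note what such a proxy cannot capture: engagement counts say nothing about impressions or algorithmic reach, so a drop in interactions can only suggest, not establish, that a demotion intervention took effect. This gap is precisely the kind of missing data discussed below.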
We further lay out a list of missing data and information that would enable more effective monitoring, auditing and investigation of platforms’ misinformation-related interventions, along with the breadth of misinformation itself. This list naturally overlaps with items present in the above-mentioned legal acts (e.g. Article 30 of the DSA on additional online advertising transparency). However, our list is meant not only to enumerate missing data with precision but also to propose differential levels of access to these data, ranging from exclusive access for vetted researchers within a legal framework when it comes to sensitive data, to broader access obtained by enriching the fields in currently available APIs. Designing different levels of access to different types of data serves two goals: (1) preserving the privacy of users and addressing platforms’ potential concerns about legal immunity when giving extended access to data or information that might put the functioning of their business model at stake; and (2) providing access to richer data to a wider set of actors when ethical concerns are not at stake, because the task at hand is considerable, and combining the results of a variety of actors using different tools of analysis and perspectives, ranging from vetted researchers to journalists and NGOs, can yield a richer understanding of the functioning of platforms and their impact on our informational ecosystem.
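To make the notion of differential access concrete, the sketch below shows one possible way to encode access tiers over intervention records. It is purely illustrative: no such API exists today, and every field name is a hypothetical example of the kind of data discussed above.

```python
# Illustrative sketch (not an existing API): one way to express differential
# access levels for intervention data. All field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AccessTier:
    name: str
    audience: str
    fields: list = field(default_factory=list)

PUBLIC_TIER = AccessTier(
    name="public",
    audience="journalists, NGOs, civil society",
    fields=[
        "post_id", "publication_date", "engagement_counts",
        "intervention_type",        # e.g. label, demotion, removal
        "intervention_date",
    ],
)

VETTED_TIER = AccessTier(
    name="vetted",
    audience="vetted researchers under a legal framework (e.g. DSA Art. 31)",
    fields=PUBLIC_TIER.fields + [
        "impression_counts",        # reach before/after the intervention
        "recommendation_exposure",  # how often algorithms surfaced the post
        "moderation_rationale",     # which policy triggered the action
    ],
)

def visible_fields(record: dict, tier: AccessTier) -> dict:
    """Return only the fields a given tier is allowed to see."""
    return {k: v for k, v in record.items() if k in tier.fields}
```

The point of such a design is that the public tier could be served by enriching existing APIs, while the vetted tier, which touches reach and moderation internals, would stay behind the access procedures foreseen by Article 31 of the DSA.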