A self-regulatory framework for AI ethics: opportunities and challenges
Jean-Marie John-Mathews
Article/chapter
We propose a self-regulatory tool for AI design that integrates societal metrics such as fairness, interpretability, and privacy. To do so, we create an interface that allows data scientists to visually choose the Machine Learning (ML) algorithm that best fits the AI designers’ ethical preferences. Using a Design Science methodology, we test the artifact with data scientist users and show that the interface is easy to use, improves understanding of the ethical issues of AI, generates debate, makes the resulting algorithms more ethical, and is operational for decision-making. Our first contribution is to build a bottom-up AI regulation tool that integrates not only users’ ethical preferences, but also the singularities of the practical case the algorithm is trained on. The method is independent of the ML use case and learning procedure. Our second contribution is to show that data scientists will freely choose to sacrifice some performance in order to reach more ethical algorithms, provided they are given appropriate regulatory tools. We then set out the conditions under which this technical, self-regulatory approach can fail. This paper shows how the gap between theories and practices in AI Ethics can be bridged using flexible, bottom-up tools.
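To illustrate the kind of trade-off the abstract describes, the sketch below (not the paper's actual artifact; all names and the weighting scheme are assumptions for illustration) scores candidate models by a user-weighted blend of accuracy and a fairness metric (demographic parity gap), so that raising the fairness weight can flip the selection away from the most accurate model:

```python
# Illustrative sketch only, not the paper's tool: rank candidate models by a
# user-weighted combination of accuracy and a fairness metric, mimicking how
# a designer might trade some performance for a more ethical algorithm.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate(0) - rate(1))

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def select_model(candidates, labels, groups, fairness_weight=0.5):
    """Pick the candidate maximising (1 - w) * accuracy - w * fairness gap."""
    def score(name):
        preds = candidates[name]
        return ((1 - fairness_weight) * accuracy(preds, labels)
                - fairness_weight * demographic_parity_gap(preds, groups))
    return max(candidates, key=score)

# Toy data: ground-truth labels, a binary protected attribute, and the
# predictions of two hypothetical candidate models.
labels = [1, 0, 1, 0, 1, 0]
groups = [0, 0, 0, 1, 1, 1]
candidates = {
    "accurate": [1, 0, 1, 0, 1, 0],  # perfect accuracy, skewed positive rates
    "fair":     [1, 0, 0, 0, 1, 0],  # slightly less accurate, equal positive rates
}

print(select_model(candidates, labels, groups, fairness_weight=0.7))  # → fair
print(select_model(candidates, labels, groups, fairness_weight=0.1))  # → accurate
```

With a high fairness weight the fairer model wins despite its lower accuracy; with a low weight the accurate model wins, which is the performance-versus-ethics choice the tool surfaces to its users.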