
Learning to Love Opacity: Decision Trees and the Genealogy of the Algorithmic Black Box

Matthew L. Jones (Columbia University) will present the genesis and the development of one of the foremost kinds of algorithms for supervised learning: decision trees.

Event, Research Seminar

Salle H405, 28 rue des Saints Pères, 75007 Paris

On Thursday, June 6th, 2019, the médialab, in partnership with the Centre Alexandre-Koyré, is hosting Matthew L. Jones (Columbia University) for a talk on the genesis and development of one of the foremost kinds of algorithms for supervised learning: decision trees.

Biography

Matthew L. Jones specializes in the history of science and technology, with a focus on early modern Europe and on recent information technologies. He was a Guggenheim Fellow in 2012-13 and a Mellon New Directions fellow in 2012-15. He is finishing two books: Great Exploitations: Data Mining, Legal Modernization, and the NSA, and Data Mining: The Critique of Artificial Reason, 1963-2005, a study of "big data" and its growth as a new form of technical expertise in business and scientific research. His book Reckoning with Matter: Calculating Machines, Innovation, and Thinking about Thinking from Pascal to Babbage appeared from the University of Chicago Press in 2016. His first book, The Good Life in the Scientific Revolution (University of Chicago Press, 2006), focused on the mathematical innovations of Descartes, Pascal, and Leibniz.

Abstract

A series of researchers, each slightly askew to the dominant practices and epistemic virtues of their fields, came obliquely to trees in the 1970s: a data-driven statistician, a machine learning expert focused on large data sets, social scientists unhappy with multivariate statistics, and a physicist interested mostly in computers who eventually was tenured in a statistics department. In case after case, the creators of different forms of trees deployed “applied” philosophies of science in critiquing contemporary practices, epistemic criteria, and even promotion practices in academic disciplines. Faced with increasing amounts of high-dimensional data, these authors time and again advocated a data-focused positivism. The history of trees does not cleanly divide into a theoretical and an applied stage; an academic and a commercial phase; a statistical and a computational stage; or even an algorithm design and an implementation stage. This history is iterative: the implementation of algorithms on actually existing computers, with their various limitations, drove the development and transformation of the techniques. Before the very recent renaissance and current triumph of neural networks, decision trees were central to the transformation of artificial intelligence and machine learning in recent years: the shift in the central goal toward prediction at the expense of concerns with human intelligibility, and the shift from symbolic interpretation to potent but inscrutable black boxes. Trees exploded in the late 1980s and early 1990s as paragons of interpretable algorithms, but developed in the late 1990s into a key example of powerful yet opaque ensemble models, predictive but almost unknowable. We need to explain, rather than take as given, the shift in values toward prediction, toward an instrumentalism, that is central to the ethos and practice of the contemporary data sciences. Opacity needs its history, just as transparency does.

Practical Information

Thursday June 6th, 2019 – 10am to 12pm.

The seminar is open to all, subject to seating availability. Please register in advance.

Warning: due to renovation works, the venue has changed. The seminar will take place in Room H405, 28 rue des Saints Pères, 75007 Paris.