Max-Planck-Institut für Innovation und Wettbewerb, Munich, Room 313
The increasing use of algorithms in legal and economic decision-making has led to calls for a “right to explanation” to be given to the subjects of automated decision-making. A growing literature in computer science has proposed a vast number of methods to generate such explanations. At the same time, legal and social science scholars have discussed what characteristics explanations should have to make them legally and ethically acceptable. These debates suffer from two shortcomings. First, there is very little connection between these two strands of literature. Second, we do not know what effects such explanations would have on the behavior of decision subjects and on their perception of decision-making algorithms. In this field experiment, we aim to address these gaps by empirically testing how different types of explanations affect decision subjects’ attitudes towards decision-making algorithms. Distilling the various factors that constitute a good explanation of algorithmic decision-making, we collect data on which of them are useful to decision subjects: local versus global explanations, and explanations that are selective, contrastive, and/or presented as conditional control statements rather than as correlations. In the setting of a scholarship awarded by a machine learning algorithm to promising students, our experiment thus investigates which kinds of explanations can lead to increased acceptance of algorithmic decision-making.
Contact: Dr. Marina Chugunova