Tuesday, April 5, 2016 - 12:00pm
Location: 3305 Newell-Simon Hall
Speaker: YAIR ZICK, Ph.D. Student, http://www.cs.cmu.edu/~yairzick/
For More Information, Contact: firstname.lastname@example.org
Algorithmic systems that employ machine learning play an ever-increasing role in making substantive decisions in modern society, from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque: it is difficult to explain why a particular decision was made, raising concerns that harms may be introduced inadvertently. We describe a new research agenda that applies game-theoretic centrality to the algorithmic transparency problem, developing a formal foundation for improving the transparency of decision-making systems that operate over large volumes of personal information about individuals.

First, we describe an axiomatic approach to measuring feature importance in datasets; that is, we derive a function that uniquely satisfies a set of reasonable properties for the measurement of feature influence. Next, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on the outputs of machine learning algorithms. Our causal QII measures carefully account for correlations among inputs and capture input influence on aggregate effects on groups of individuals (e.g., disparate impact based on race). The QII measures also capture the joint and marginal influence of a set of inputs on outputs, using an aggregation method with a strong theoretical justification.

Beyond demonstrating general trends in a system, QII guides the construction of personalized transparency reports that provide insight into an individual's classification outcomes. Our empirical validation demonstrates that the QII measures are a useful transparency mechanism when black-box access to the learning system is available; in particular, they provide better explanations than standard associative measures across the scenarios we consider.

This work is based on two papers:

Amit Datta, Anupam Datta, Ariel D. Procaccia, and Yair Zick, "Influence in Classification via Cooperative Game Theory," in the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015).

Anupam Datta, Shayak Sen, and Yair Zick, "Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems," to appear in the 37th IEEE Symposium on Security and Privacy (Oakland 2016).
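The abstract's core idea can be made concrete: intervene on a set of inputs by re-sampling them independently from the population (which breaks correlations among inputs), measure how often the classifier's output changes, and aggregate each feature's marginal contribution over subsets of the other features with a Shapley-value-style scheme, in the spirit of the cooperative-game aggregation the talk describes. Below is a minimal illustrative sketch; the toy classifier, the tiny population, and all numbers are invented for illustration and are not from the papers.

```python
import itertools
from math import factorial

# Toy stand-in for the black-box classifier; illustrative only,
# not the system from the talk. Approves when income + credit is high.
def classify(x):
    return 1 if x["income"] + x["credit"] >= 100 else 0

# Tiny hypothetical population of individuals.
population = [
    {"income": 80, "credit": 40, "zip": 1},
    {"income": 30, "credit": 20, "zip": 2},
    {"income": 60, "credit": 70, "zip": 1},
    {"income": 20, "credit": 30, "zip": 2},
]

def qii_of_set(individual, features):
    """Flip probability when the features in `features` are re-sampled
    (intervened on) independently from the population, which breaks
    correlations among inputs. Exact enumeration is feasible here."""
    base = classify(individual)
    feats = sorted(features)
    flips = trials = 0
    for sources in itertools.product(population, repeat=len(feats)):
        x = dict(individual)
        for f, src in zip(feats, sources):
            x[f] = src[f]
        flips += classify(x) != base
        trials += 1
    return flips / trials

def shapley_influence(individual, feature):
    """Shapley-value aggregation of `feature`'s marginal contribution
    to the set influence, over all subsets of the other features."""
    rest = [f for f in individual if f != feature]
    n = len(individual)
    total = 0.0
    for k in range(len(rest) + 1):
        for subset in itertools.combinations(rest, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (qii_of_set(individual, set(subset) | {feature})
                               - qii_of_set(individual, set(subset)))
    return total
```

For the first individual, `shapley_influence(population[0], "income")` is large while `shapley_influence(population[0], "zip")` is exactly zero, since the toy classifier ignores zip code entirely; a personalized transparency report of the kind the talk mentions would surface exactly this sort of per-individual feature ranking.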