Tuesday, April 12, 2016 - 2:00pm to 3:00pm
Location: 7101 Gates & Hillman Centers
Speaker: Shayak Sen, Ph.D. Student, http://www.cs.cmu.edu/afs/cs/Web/People/shayaks/
For More Information, Contact: firstname.lastname@example.org
Algorithmic systems that employ machine learning play an ever-increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque: it is difficult to explain why a certain decision was made, raising concerns about the inadvertent introduction of harms.

We develop a formal foundation to improve the transparency of such decision-making systems that operate over large volumes of personal information about individuals. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of machine learning algorithms. Our causal QII measures carefully account for correlations among inputs and capture input influence on aggregate effects on groups of individuals (e.g., disparate impact based on race). The QII measures also capture the joint influence of a set of inputs on outputs, using an aggregation method with a strong theoretical justification.

Beyond demonstrating general trends in a system, QII guides the construction of personalized transparency reports that provide insights into an individual's classification outcomes. Our empirical validation demonstrates that QII measures are a useful transparency mechanism when black-box access to the learning system is available; in particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private with very little added noise.

Joint work with Anupam Datta and Yair Zick.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
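To make the causal intuition behind QII concrete, here is a minimal sketch of the intervention idea for a single input and a single individual. All names, thresholds, and the toy population below are illustrative assumptions, not details from the talk: a "loan" classifier decides on income alone, while zip code happens to be perfectly correlated with income in the population. Replacing one input with an independent draw from its marginal breaks such correlations, so a merely correlated input receives zero influence.

```python
import random

# Hypothetical toy classifier (illustrative, not from the talk):
# approves when income >= 50, ignoring zip_code entirely.
def classifier(income, zip_code):
    return 1 if income >= 50 else 0

# Toy population in which zip_code is perfectly correlated with income
# but plays no causal role in the decision.
population = [(inc, 1 if inc >= 50 else 0) for inc in range(0, 100, 5)]

def unary_influence(clf, individual, population, idx, n=2000, seed=0):
    """Estimate the influence of feature `idx` on one individual's outcome:
    the probability that the outcome changes when that feature alone is
    replaced by an independent draw from its population marginal."""
    rng = random.Random(seed)
    marginal = [p[idx] for p in population]
    base = clf(*individual)
    flips = 0
    for _ in range(n):
        intervened = list(individual)
        intervened[idx] = rng.choice(marginal)  # causal intervention on one input
        flips += clf(*intervened) != base
    return flips / n

alice = (80, 1)  # high income, living in a high-income zip code
print(unary_influence(classifier, alice, population, idx=0))  # income: large
print(unary_influence(classifier, alice, population, idx=1))  # zip code: 0.0
```

An associative measure would flag zip code as strongly related to the outcome here; the interventional estimate correctly assigns it zero influence, which is the distinction the QII measures formalize.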