Doctoral Thesis Proposal - Caspar Oesterheld

Time:
2:30pm

Location:
In Person - Traffic21 Classroom, Gates Hillman 6501

Speaker:
CASPAR OESTERHELD, Ph.D. Student, Computer Science Department, Carnegie Mellon University
https://www.andrew.cmu.edu/user/coesterh/

My doctoral research addresses two fundamental obstacles to beneficial outcomes from strategic interactions between multiple parties: strategic incentives against cooperation (as in the Prisoner's Dilemma) and the multiplicity of solutions (sometimes called the equilibrium selection problem). As AI systems are increasingly involved in consequential decision-making processes on behalf of human principals, understanding how to achieve desirable outcomes in multi-agent AI settings becomes critical. My research leverages unique features of AI systems, including their transparency, reproducibility, and malleability, to develop novel game-theoretic approaches that enable better, more cooperative outcomes.

Three primary research directions form the core of this dissertation. First, the concept of safe Pareto improvements provides a rigorous framework for improving outcomes without resolving equilibrium selection problems. Unlike traditional solution concepts, safe Pareto improvements make qualitative assumptions about pairs of games rather than individual games. This sometimes allows us to prefer playing one game over another, without any judgment about how each of the individual games is played. Second, the concept of program equilibrium explores how the use of mutually transparent decision-making algorithms can allow for cooperation. Third, my research on so-called Newcomb-like decision problems takes inspiration from philosophical branches of decision theory. I investigate how cooperation can be achieved when different parties deploy similar AI systems.
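The program-equilibrium idea can be illustrated with a toy model. The sketch below is a hypothetical illustration (not the formalism of the thesis): each player submits a program, each program gets to read the other program's source text before acting, and a program that cooperates exactly when its opponent's source matches its own makes mutual cooperation stable in a one-shot Prisoner's Dilemma. The bot names and the string stand-ins for "source code" are assumptions made for this sketch.

```python
# Payoffs for the one-shot Prisoner's Dilemma: (row, column) utilities
# for actions C (cooperate) and D (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def make_clique_bot():
    # The string stands in for the program's full source code.
    source = "clique_bot: cooperate iff opponent source matches mine"
    def bot(opponent_source):
        return "C" if opponent_source == source else "D"
    return bot, source

def make_defect_bot():
    source = "defect_bot: always defect"
    def bot(opponent_source):
        return "D"
    return bot, source

def play(player1, player2):
    # Each program sees the other's source before choosing an action.
    bot1, src1 = player1
    bot2, src2 = player2
    return PAYOFFS[(bot1(src2), bot2(src1))]

print(play(make_clique_bot(), make_clique_bot()))  # (3, 3): mutual cooperation
print(play(make_clique_bot(), make_defect_bot()))  # (1, 1): no exploitation
```

Against a copy of itself the clique bot cooperates, but a defector cannot exploit it, so neither player gains by unilaterally swapping programs; this is the basic mechanism by which mutual source transparency supports cooperative equilibria.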

Current and planned work extends these directions through several projects, including: connecting program equilibrium with mediated equilibrium; exploring sequential program/mediated equilibrium-type settings; investigating the relationship between self-locating beliefs and decision theory; and developing theoretical foundations for safe Pareto improvements, as well as analyzing safe Pareto improvements in a new setting. I have also begun implementing some of these theoretical ideas in language models to test their practical applicability.

Thesis Committee

Vincent Conitzer (Chair)
Tuomas Sandholm
Fei Fang
Stuart Russell (University of California, Berkeley)
Ben Levinstein (University of Illinois at Urbana-Champaign)
