Monday, May 4, 2015 - 12:00pm to 1:00pm
Location: Traffic 21 Classroom 6501, Gates & Hillman Centers
Speaker: MICHAEL J. SULLIVAN, Ph.D. Student http://www.cs.cmu.edu/~mjsulliv/
For More Information, Contact: email@example.com
Writing programs with shared-memory concurrency is difficult even under the best of circumstances, in which accesses to memory are sequentially consistent; that is, when threads can be viewed as strictly interleaving their accesses to a shared memory. Unfortunately, sequential consistency can be violated by CPU out-of-order execution and memory subsystems, as well as by many standard compiler optimizations. Traditionally, languages approach this by guaranteeing that data-race-free code will behave in a sequentially consistent manner. Programmers can then use locks and other techniques to synchronize between threads and rule out data races. However, for performance-critical code and library implementations this may not be good enough, requiring languages that target these domains to provide a well-defined low-level mechanism for shared-memory concurrency. C and C++ (since the C11 and C++11 standards) provide a mechanism based around specifying "memory orderings" when accessing concurrently modified locations. These memory orderings impose constraints on the possible behaviors of programs.

In this talk, we propose a quite different approach: to have the programmer explicitly specify constraints on the order of execution of operations and on the visibility of memory writes. The compiler is then responsible for ensuring that these constraints hold. We will discuss this core idea, its realization in C/C++, some potential use cases, and the basics of the compiler implementation. Joint work with Karl Crary.