Computer Science Thesis Proposal

Friday, October 14, 2016 - 3:00pm

Location:

McWilliams Classroom 4303, Gates & Hillman Centers

Speaker:

Mu Li, Ph.D. Student http://www.cs.cmu.edu/~muli

For many important machine learning problems, the rapid growth of data and the ever-increasing model complexity, often manifested in a large number of model parameters, mean that no single machine can solve them fast enough. Distributed optimization and inference are therefore becoming increasingly necessary for solving large-scale machine learning problems in both academia and industry. Obtaining an efficient distributed implementation of an algorithm, however, is far from trivial. Both the intensive computational workload and the volume of data communication demand careful design of distributed computing systems and distributed machine learning algorithms.
In this thesis, we focus on the co-design of distributed computing systems and distributed optimization algorithms specialized for large machine learning problems. We propose two distributed computing frameworks: a parameter server framework, which features efficient data communication, and MXNet, a multi-language library that aims to simplify the development of deep neural network algorithms. In less than two years, we have witnessed wide adoption of the proposed systems. We believe that as we continue to develop these systems, they will enable more people to harness the power of distributed computing to build efficient machine learning applications for large-scale computational problems.

Leveraging the two computing platforms, we examine a number of distributed optimization problems in machine learning. We present new methods to accelerate the training process, such as data partitioning with better locality properties, communication-friendly optimization methods, and more compact statistical models. We implement the new algorithms on the two systems and test them on large-scale real data sets. We successfully demonstrate that careful co-design of computing systems and learning algorithms can greatly accelerate large-scale distributed machine learning.
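As a rough sketch of how the two systems connect (a hypothetical toy example, not taken from the proposal): MXNet's symbolic API defines the network, and its kvstore abstraction decides how gradients are aggregated during training. Choosing the 'dist_sync' kvstore routes gradient push/pull traffic through a parameter server, while 'local' runs the identical code on a single machine. The network shape, synthetic data, and hyperparameters below are illustrative only.

    import numpy as np
    import mxnet as mx

    # Symbolic definition of a small two-layer classifier.
    data = mx.sym.Variable('data')
    fc1  = mx.sym.FullyConnected(data=data, num_hidden=128)
    act1 = mx.sym.Activation(data=fc1, act_type='relu')
    fc2  = mx.sym.FullyConnected(data=act1, num_hidden=10)
    net  = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

    # Synthetic data iterator; in practice this would read a real dataset.
    X = np.random.rand(1000, 784).astype('float32')
    y = np.random.randint(0, 10, (1000,))
    train_iter = mx.io.NDArrayIter(X, y, batch_size=100)

    # kvstore='local' runs on one machine; in a cluster launched with
    # MXNet's distributed tooling, kvstore='dist_sync' would instead
    # aggregate gradients through the parameter server.
    mod = mx.mod.Module(net)
    mod.fit(train_iter, num_epoch=2, kvstore='local')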
Thesis Committee: David G. Andersen (Co-Chair), Alexander J. Smola (Co-Chair), Barnabás Póczos, Ruslan Salakhutdinov, Jeffrey Dean (Google, Inc.)

For More Information, Contact:

deb@cs.cmu.edu

Keywords:

Thesis Proposal