SCS Student Seminar / Computer Science Speaking Skills Talk

Thursday, December 10, 2015 - 12:00pm to 1:00pm

Location:

8102 Gates & Hillman Centers

Speaker:

JIN KYU KIM, Ph.D. Student http://www.pdl.cmu.edu/PEOPLE/jkim.shtml

Event Website:

http://www.cs.cmu.edu/~sss

For More Information, Contact:

deb@cs.cmu.edu

Machine learning (ML) methods are used to analyze data collected from a wide variety of sources. As problem sizes grow, cluster computing has been widely adopted for solving ML problems. Two factors drive this trend: big data, where the computation and storage capacity of a single machine is insufficient to process the data; and big models, where the number of ML parameters to learn grows so large that a single machine cannot finish learning in a reasonable amount of time. In this talk, we focus on the big-model problem. A natural solution is to parallelize parameter updates across a cluster. However, naive parallelization of ML algorithms often hurts the effectiveness of parameter updates because of the dependency structure among model parameters, and because uneven convergence rates make a small subset of model parameters a bottleneck to the completion of ML algorithms.
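To make the dependency issue concrete, here is a minimal sketch (illustrative only, not code from the talk): coordinate-descent Lasso on two nearly identical features, updated naively in parallel. Because both coordinates are computed from the same stale residual, each one independently claims the shared signal and the iterates oscillate instead of converging; updating the coordinates one at a time converges right away.

    # Minimal sketch (not code from the talk) of how parameter dependencies
    # break naive parallelism: coordinate-descent Lasso with two nearly
    # identical features, both updated from the same stale residual.
    import numpy as np

    def soft_threshold(z, lam):
        # Soft-thresholding operator used by coordinate-descent Lasso.
        return np.sign(z) * max(abs(z) - lam, 0.0)

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)      # x2 nearly duplicates x1
    X = np.column_stack([x1, x2])
    X = (X - X.mean(0)) / X.std(0)           # unit-variance columns
    y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=n)
    lam, w = 0.1, np.zeros(2)

    for it in range(6):
        r = y - X @ w                        # same stale residual for BOTH
        w = np.array([soft_threshold(w[j] + X[:, j] @ r / n, lam)
                      for j in range(2)])
        print(it, w)  # flips between about [2.9, 2.9] and [0, 0]

A scheduler that knows the two features are strongly correlated would never place them in the same parallel batch, which is exactly the kind of coordination the talk addresses.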
In this talk, we will present the scheduled model-parallel approach for addressing these challenges, along with STRADS, a distributed framework that facilitates the development and deployment of scheduled model-parallel ML applications. I will first describe the scheduled model-parallel approach through two specific scheduling schemes: 1) dependency checking across model parameters, to avoid updating conflicting parameters concurrently; and 2) parameter prioritization, to give more update chances to parameters that are far from their convergence points. To realize scheduled model-parallelism in a distributed system, we implemented a prototype framework called STRADS. STRADS increases the number of updates executed per second by pipelining iterations and overlapping update computation with the network communication required for parameter synchronization. With scheduled model-parallelism and STRADS, we improve both convergence per update and updates per second; as a result, we substantially improve convergence per second and achieve faster ML execution times. As benchmarks, we implemented several ML algorithms, including matrix factorization (MF), LDA, Lasso, and logistic regression, in scheduled model-parallel form on top of STRADS.
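The two scheduling schemes can be sketched together as a single batch-selection routine. Everything below is a hypothetical illustration under assumed data structures (a pairwise dependency matrix and per-parameter change magnitudes); it is not the actual STRADS API.

    # Illustrative sketch of scheduled model-parallel batch selection:
    # prioritization by recent change, then dependency checking.
    # Names, structures, and the threshold tau are assumptions.
    import numpy as np

    def schedule(corr, delta, batch_size, rng, tau=0.1):
        """Pick a batch of parameter indices to update in parallel.

        corr[i, j] : dependency strength between parameters i and j
                     (e.g., feature correlation in Lasso).
        delta[i]   : magnitude of parameter i's last change, a proxy
                     for its distance from convergence.
        """
        # 1) Prioritization: sample parameters with probability
        #    proportional to their recent change, so slowly converging
        #    ones get more update chances.
        probs = (delta + 1e-8) / (delta + 1e-8).sum()
        candidates = rng.choice(len(delta), size=4 * batch_size,
                                replace=False, p=probs)

        # 2) Dependency checking: greedily keep only candidates that
        #    are weakly coupled with everything already in the batch,
        #    so concurrent updates do not conflict.
        batch = []
        for i in candidates:
            if all(abs(corr[i, j]) < tau for j in batch):
                batch.append(i)
            if len(batch) == batch_size:
                break
        return batch   # may be shorter than batch_size if few are safe

    rng = np.random.default_rng(1)
    corr = np.abs(np.corrcoef(rng.normal(size=(50, 400))))  # 50 "parameters"
    delta = rng.random(50)
    print(schedule(corr, delta, batch_size=8, rng=rng))

In this sketch the scheduler trades a little selection work for safe parallelism; STRADS additionally hides that overhead by pipelining scheduling, update computation, and parameter synchronization, as described above.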
Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Keywords:

Speaking Skills