[talks] Colloquium Speaker: Dimitris Papailiopoulos, Monday, March 7, 12:30pm - reminder

Nicole E. Wagenblast nwagenbl at CS.Princeton.EDU
Sun Mar 6 10:00:00 EST 2016


Please note: The Large Auditorium does not permit food, so lunch will not be served. 

Colloquium Speaker 
Dimitris Papailiopoulos, University of California, Berkeley 
Monday, March 7, 2016 - 12:30pm 
Computer Science Large Auditorium, 104 

Less Talking, More Learning: Avoiding Coordination In Parallel Machine Learning Algorithms 

The recent success of machine learning (ML) in both science and industry has generated increasing demand to support ML algorithms at scale. In this talk, I will discuss strategies for gracefully scaling machine learning on modern parallel computational platforms. A common approach to such scaling is coordination-free parallel algorithms, in which individual processors run independently, without communication, thus maximizing the time they spend computing. However, analyzing the performance of these algorithms can be challenging, as they often introduce race conditions and synchronization problems. 
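The coordination-free pattern the abstract describes can be sketched as lock-free asynchronous SGD in the style of Hogwild!: several threads update a shared model with no locks, relying on sparse examples to make collisions rare. The toy least-squares problem, dimensions, and learning rate below are illustrative assumptions, not details from the talk.

```python
import random
import threading

# Toy sparse least-squares problem: each example touches only 3 of 50
# coordinates, so concurrent lock-free updates rarely collide.
DIM = 50
random.seed(0)
true_w = [random.gauss(0.0, 1.0) for _ in range(DIM)]
data = []
for _ in range(2000):
    idx = random.sample(range(DIM), 3)
    x = {i: random.gauss(0.0, 1.0) for i in idx}   # sparse feature vector
    y = sum(v * true_w[i] for i, v in x.items())   # noiseless label
    data.append((x, y))

w = [0.0] * DIM        # shared model: no lock protects it
LR = 0.05

def worker(examples):
    """Run plain SGD on a slice of the data, writing w without locks."""
    for x, y in examples:
        err = sum(v * w[i] for i, v in x.items()) - y
        for i, v in x.items():
            w[i] -= LR * err * v                   # racy read-modify-write

threads = [threading.Thread(target=worker, args=(data[k::4],))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

loss = sum((sum(v * w[i] for i, v in x.items()) - y) ** 2
           for x, y in data) / len(data)
print(round(loss, 4))
```

Because no thread ever waits on another, compute utilization stays high; the price is exactly the race conditions the abstract mentions, whose effect the analysis then models as noise.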

In this talk, I will introduce a general methodology for analyzing asynchronous parallel algorithms. The key idea is to model the effects of core asynchrony as noise in the algorithmic input. This allows us to understand the performance of several popular asynchronous machine learning approaches, and to determine when asynchrony effects might overwhelm them. To overcome these effects, I will propose a new framework for parallelizing ML algorithms, where all memory conflicts and race conditions can be completely avoided. I will discuss the implementation of these ideas in practice, and demonstrate that they outperform the state-of-the-art across a large number of ML tasks on gigabyte-scale data sets. 
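One way to realize the conflict-avoidance idea is to group sparse updates into connected components of their conflict graph (two updates conflict if they touch a common model coordinate) and give each worker whole components, so no two workers ever write the same coordinate. This is a toy sketch in that spirit, in the style of conflict-avoiding frameworks such as CYCLADES; the union-find helper and example data are illustrative, not the speaker's implementation.

```python
def find(parent, a):
    """Union-find root lookup with path halving."""
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def group_conflict_free(examples, dim):
    """Union examples that share a model coordinate; return the groups.

    Each group can be processed by a single worker with no locks, since
    groups touch disjoint sets of coordinates.
    """
    parent = list(range(len(examples)))
    owner = [-1] * dim                    # last example seen per coordinate
    for e, coords in enumerate(examples):
        for c in coords:
            if owner[c] >= 0:
                parent[find(parent, e)] = find(parent, owner[c])
            owner[c] = e
    groups = {}
    for e in range(len(examples)):
        groups.setdefault(find(parent, e), []).append(e)
    return list(groups.values())

# Examples 0 and 1 share coordinate 1; examples 2, 4 share 6 and 3, 4 share 7.
examples = [{0, 1}, {1, 2}, {5, 6}, {7}, {6, 7}]
groups = group_conflict_free(examples, dim=8)
print(sorted(sorted(g) for g in groups))   # -> [[0, 1], [2, 3, 4]]
```

Once the grouping is computed, the parallel phase is conflict-free by construction: every race of the lock-free approach is eliminated up front, at the cost of a cheap serial partitioning step.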

Dimitris Papailiopoulos is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at UC Berkeley and a member of the AMPLab. His research interests span machine learning, coding theory, and parallel and distributed algorithms, with a current focus on coordination-free parallel machine learning, large-scale data and graph analytics, and the use of codes to speed up distributed computation. Dimitris completed his Ph.D. in electrical and computer engineering at UT Austin in 2014, under the supervision of Alex Dimakis. In 2015, he received the IEEE Signal Processing Society Young Author Best Paper Award. 

