Colloquium Speaker

Tianqi Chen from the University of Washington

Tuesday, March 5 - 12:30pm

Computer Science - Room 105

Host: Ryan Adams

http://www.cs.princeton.edu/events/25820


Learning-based Learning Systems


Data, models, and computing are the three pillars that enable machine learning to solve real-world problems at scale. Progress in these three areas requires not only disruptive algorithmic advances but also systems innovations that continue to squeeze more efficiency out of modern hardware. Learning systems now sit at the center of every intelligent application. However, the ever-growing demand for new applications and specialized hardware places a heavy engineering burden on these systems, most of which rely on heuristics or manual optimization.


In this talk, I will present a new approach that uses machine learning to automate system optimizations. I will describe our approach in the context of the deep learning deployment problem. I will first discuss how to design invariant representations that lead to transferable statistical cost models, and apply these representations to optimize the tensor programs used in deep learning applications. I will then describe the system improvements we made to enable diverse hardware backends. TVM, our end-to-end system, delivers performance across hardware backends that is competitive with state-of-the-art, hand-tuned deep learning frameworks. Finally, I will discuss how to generalize our approach to jointly optimize the full stack of model, system, and hardware, and how to build systems that support the life-long evolution of intelligent applications.
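
To make the core idea concrete, here is a minimal sketch of a learned cost model guiding search over tensor-program variants. Everything in it is a hypothetical stand-in: the two-parameter schedule space, the simulated hardware measurement, and the nearest-neighbor cost model are illustrative only and are not TVM's actual API (the real system uses much richer program features and statistical models). What the sketch does capture is the predict, measure, and update feedback loop that the talk describes.

    import random

    def measure_on_hardware(schedule):
        # Stand-in for compiling a tensor program with this schedule
        # and timing it on a real device.
        tile, unroll = schedule
        return abs(tile - 16) * 0.5 + abs(unroll - 4) * 0.2 + random.random() * 0.1

    class NearestNeighborCostModel:
        # Toy statistical cost model: predicts a schedule's runtime
        # from the closest schedule measured so far.
        def __init__(self):
            self.history = []  # (schedule, measured cost) pairs

        def update(self, schedule, cost):
            self.history.append((schedule, cost))

        def predict(self, schedule):
            if not self.history:
                return 0.0  # no data yet; every candidate looks equal
            def dist(s):
                return sum((a - b) ** 2 for a, b in zip(s, schedule))
            return min(self.history, key=lambda item: dist(item[0]))[1]

    def search(n_rounds=30, batch=8):
        model = NearestNeighborCostModel()
        best, best_cost = None, float("inf")
        for _ in range(n_rounds):
            # Propose candidate schedules: (tiling factor, unroll factor).
            candidates = [(random.choice([4, 8, 16, 32]),
                           random.choice([1, 2, 4, 8]))
                          for _ in range(batch)]
            # The model screens candidates cheaply ...
            chosen = min(candidates, key=model.predict)
            # ... and only the most promising one is measured for real.
            cost = measure_on_hardware(chosen)
            model.update(chosen, cost)  # feed the measurement back
            if cost < best_cost:
                best, best_cost = chosen, cost
        return best, best_cost

    if __name__ == "__main__":
        schedule, cost = search()
        print("best schedule:", schedule, "measured cost:", cost)

Because hardware measurements are expensive while model predictions are cheap, this loop can explore far more of the schedule space than exhaustive benchmarking would allow; transferable representations, as discussed in the talk, are what let such a model carry over to new operators and devices.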

Bio:

Tianqi Chen is a Ph.D. candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Carlos Guestrin at the intersection of machine learning and systems. He has created three widely adopted learning systems: XGBoost, TVM, and (as co-creator) MXNet. He is a recipient of the Google Ph.D. Fellowship in Machine Learning.