The members of his committee are as follows: Ryan Adams (Adviser); Readers: Ryan Adams and Rob Schapire (MSR); Examiners: Barbara Engelhardt, Szymon Rusinkiewicz, and Akshay Krishnamurthy (MSR)
Everyone is invited to attend his talk. The title and abstract of the talk follow.
Title: Towards Flexible Active and Online Learning With Neural Networks
Abstract:
Deep learning has produced breakthrough successes on a wide array of machine learning tasks. Outside of the fully supervised regime, however, many deep learning algorithms are brittle and unable to perform reliably across model architectures, dataset types, and optimization parameters. As a consequence, these algorithms are not easily usable by non-experts in machine learning, limiting their ability to meaningfully impact science and society.
This talk consists of two parts. First, I will give an overview of an in-progress project involving machine learning for 3D printing. I will then move to the main part of the talk, which addresses some nuanced pathologies that arise when deep learning is used for active and passive online learning. I propose a practical active learning approach for neural networks that is robust to environmental variables: Batch Active learning by Diverse Gradient Embeddings (BADGE). I also discuss the deleterious effects on generalization of warm-starting the optimization of neural networks in sequential environments, explain why this is a major problem for deep learning, and provide a simple solution.