Jordan Ash will present his FPO, "Towards Flexible Active and Online Learning With Neural Networks," on Thursday, 9/10/2020, at 2:00pm via Zoom.

Link: https://princeton.zoom.us/j/7443698693?pwd=TXBSektKRVZ4ODhoeWpwVWtOMHVEQT09
Meeting ID: 744 369 8693
Passcode: jordan2020

The members of his committee are as follows: Readers: Ryan Adams (Adviser) and Barbara Engelhardt; Examiners: Szymon Rusinkiewicz, Robert Schapire, and Akshay Krishnamurthy (Microsoft Research).

A copy of his thesis is available upon request. Please email ngotsis@cs.princeton.edu if you would like one.

Everyone is invited to attend his talk. The abstract appears below.

Abstract:
Deep learning has achieved breakthrough successes on a wide array of machine learning
tasks. Outside of the fully-supervised regime, however, many deep learning algorithms
are brittle, failing to perform reliably across model architectures, dataset types,
and optimization parameters. As a consequence, these algorithms are not easily
usable by non-machine-learning experts, limiting their ability to meaningfully impact
science and society.

This thesis addresses some nuanced pathologies around the use of deep learning for
active and passive online learning. We propose a practical active learning approach
for neural networks that is robust to such environmental variables: Batch Active
learning by Diverse Gradient Embeddings (BADGE). We also discuss the deleterious
generalization effects of warm-starting the optimization of neural networks in
sequential environments, and why this is a major problem for deep learning. We
introduce a simple method that remedies this problem and discuss some important
ramifications of its application.
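For readers unfamiliar with BADGE: the published version of this work (Ash et al.,
ICLR 2020) selects a batch by computing a gradient embedding for each unlabeled point
and then applying k-means++ seeding to those embeddings, so that the chosen points are
both high-magnitude (uncertain) and diverse. The NumPy sketch below only illustrates
that selection step under assumed inputs; the function names and array shapes are
illustrative, not code from the thesis.

import numpy as np

def gradient_embeddings(probs, penultimate):
    # Per-example gradient of the cross-entropy loss with respect to the
    # final linear layer, using the model's own prediction as a pseudo-label.
    # probs: (n, k) softmax outputs; penultimate: (n, d) activations.
    # (Hypothetical inputs for illustration.)
    n, k = probs.shape
    pseudo = probs.argmax(axis=1)
    scale = probs.copy()
    scale[np.arange(n), pseudo] -= 1.0  # dL/dlogits = p - onehot(pseudo-label)
    # Outer product of the logit gradient and penultimate features: (n, k*d).
    return (scale[:, :, None] * penultimate[:, None, :]).reshape(n, -1)

def kmeans_pp_select(emb, batch_size, seed=0):
    # k-means++ seeding over the gradient embeddings: each new point is
    # sampled with probability proportional to its squared distance from
    # the points already selected, balancing uncertainty and diversity.
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(emb)))]
    d2 = ((emb - emb[chosen[0]]) ** 2).sum(axis=1)
    while len(chosen) < batch_size:
        idx = int(rng.choice(len(emb), p=d2 / d2.sum()))
        chosen.append(idx)
        d2 = np.minimum(d2, ((emb - emb[idx]) ** 2).sum(axis=1))
    return chosen  # indices of unlabeled points to query for labels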