[talks] Ari Seff will present his general exam on Monday, May 8th at 9am in CS 302
ngotsis at CS.Princeton.EDU
Mon May 1 16:14:37 EDT 2017
Ari Seff will present his general exam on Monday, May 8th at 9am in CS 302.
The members of his committee are: Han Liu (adviser), Thomas Funkhouser, and Barbara Engelhardt.
Everyone is invited to attend his talk, and those faculty wishing to remain for the oral exam that follows are welcome to do so. His abstract and reading list follow below.
Title: Continual Learning in Deep Generative Models
Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions. Recently introduced frameworks, such as generative adversarial networks and variational autoencoders, can map a noise distribution to the data space, producing realistic samples. In addition, these deep generative models are amenable to conditional training, where a conditional input (e.g., class label, data from another modality) guides the sampling process. At test time, manually selected conditional inputs lead to samples with desired attributes on demand.
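The conditioning mechanism described above can be sketched minimally: a conditional input (here a one-hot class label) is concatenated with a noise vector before being mapped to the data space. The dimensions and the single linear map below are illustrative stand-ins only; a real conditional GAN or VAE decoder would be a trained deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
NOISE_DIM, NUM_CLASSES, DATA_DIM = 8, 3, 16

# Stand-in for a trained generator: one linear map from
# [noise; one-hot label] to the data space.
W = rng.normal(size=(NOISE_DIM + NUM_CLASSES, DATA_DIM))

def sample(class_label, n=4):
    """Draw n samples conditioned on class_label."""
    z = rng.normal(size=(n, NOISE_DIM))       # noise input
    y = np.zeros((n, NUM_CLASSES))
    y[:, class_label] = 1.0                   # one-hot conditional input
    return np.concatenate([z, y], axis=1) @ W # map to data space

x = sample(class_label=2)
print(x.shape)  # samples with the desired attribute, on demand
```

At test time, changing only `class_label` steers the sampler toward a different mode of the learned distribution, which is the "on demand" behavior the abstract refers to.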
However, the standard training regime for deep generative models assumes that training data is i.i.d. with respect to the conditional inputs. For example, when class labels serve as the conditional inputs, if we wish to extend the capability of a trained network to a new class, training solely on data from the new class will interfere with previously learned distributions. This is due to the general susceptibility of neural networks to catastrophic forgetting when trained on multiple tasks sequentially. Instead, the standard training regime requires that while the network trains on the new class, it simultaneously retrains on the old ones to circumvent this phenomenon.
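The simultaneous-retraining requirement amounts to interleaving old and new examples in every batch, which is a sketch of why the standard regime does not scale: all previously seen data must remain accessible. The datasets and batch function below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stored datasets: under the standard regime, data
# from old classes must stay available for the whole training run.
old_data = rng.normal(size=(1000, 16))  # previously learned classes
new_data = rng.normal(size=(200, 16))   # newly introduced class

def mixed_batch(batch_size=32, new_fraction=0.5):
    """Interleave old and new examples so gradient updates keep
    reinforcing previously learned distributions while the new
    class is being learned."""
    n_new = int(batch_size * new_fraction)
    n_old = batch_size - n_new
    old_idx = rng.integers(0, len(old_data), size=n_old)
    new_idx = rng.integers(0, len(new_data), size=n_new)
    return np.concatenate([old_data[old_idx], new_data[new_idx]])

batch = mixed_batch()
print(batch.shape)
```

Training on `new_data` alone, by contrast, lets gradients drift all parameters toward the new class, which is the catastrophic-forgetting failure mode described above.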
In this work, we propose a scalable approach to training deep generative models on multiple tasks sequentially, where training data for any previous task (a set of target conditional inputs) is assumed to be inaccessible. To this end, we leverage recent work with discriminative models where plasticity is reduced for network parameters determined to be salient for previously encountered tasks. Experimental evaluation of our approach demonstrates that deep generative models may be extended to settings where the observed data distribution changes over time, enabling continual learning of novel generative tasks.
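The reduced-plasticity idea borrowed from discriminative models (e.g., the elastic weight consolidation work of Kirkpatrick et al. on the reading list) can be sketched as a quadratic anchor on salient parameters. All values below are illustrative; a real importance estimate would come from, for example, the diagonal Fisher information.

```python
import numpy as np

# Parameters after training on previous tasks, and a hypothetical
# per-parameter importance score (large = salient for old tasks).
theta_star = np.array([1.0, -0.5, 2.0])
F = np.array([5.0, 0.01, 3.0])
lam = 1.0  # strength of the consolidation anchor

def penalty(theta):
    """Quadratic penalty anchoring salient parameters near the
    values they held after the previous tasks."""
    return 0.5 * lam * np.sum(F * (theta - theta_star) ** 2)

def total_loss(theta, new_task_loss):
    # New-task objective plus the consolidation term: plasticity
    # is reduced only where importance F is large.
    return new_task_loss + penalty(theta)

# Moving a salient parameter (index 0) by 0.1 is penalized far
# more than moving an unimportant one (index 1) by the same amount.
print(penalty(theta_star + np.array([0.1, 0.0, 0.0])))
print(penalty(theta_star + np.array([0.0, 0.1, 0.0])))
```

Because the penalty replaces direct retraining on old data, previous tasks' data can remain inaccessible, which is the scalability property the abstract claims.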
Reading list:

I. Goodfellow, Y. Bengio, and A. Courville: Deep learning
Y. LeCun, J. S. Denker, and S. A. Solla: Optimal brain damage
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio: Generative adversarial nets
D. P. Kingma and M. Welling: Auto-encoding variational Bayes
J. Kirkpatrick, R. Pascanu, N. C. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell: Overcoming catastrophic forgetting in neural networks
M. McCloskey and N. J. Cohen: Catastrophic interference in connectionist networks: The sequential learning problem
M. Mirza and S. Osindero: Conditional generative adversarial nets
K. Sohn, H. Lee, and X. Yan: Learning structured output representation using deep conditional generative models
F. Zenke, B. Poole, and S. Ganguli: Improved multitask learning through synaptic intelligence
Z. Li and D. Hoiem: Learning without forgetting
A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell: Progressive neural networks