Zoe Ashwood will present her General Exam on Thursday, October 17, 2019 at 10am in PNI 159.

The members of her committee are as follows: Jonathan Pillow (adviser), Sebastian Seung, and Tom Griffiths.

Everyone is invited to attend her talk, and faculty wishing to remain for the oral exam that follows are welcome to do so. Her abstract and reading list are below.

Title:
Characterizing decision-making with Hidden Markov Models and Generalized Linear Models
 
Abstract:
Think about the number of decisions you made this morning, even before leaving the house. You had to decide what to wear, what to eat for breakfast, and whether to bike to work or take the bus.  Why do we make the decisions that we do?  How do things like past experience and internal state (whether we feel stressed or elated, for example) influence our choices? What if the options for the decision are perturbed slightly, and instead of having to choose between our favorite breakfast cereal and toast, the choice is now between the supermarket’s own-brand version of this cereal and toast?  Would we still disregard the toast?
 
In my talk, I will describe our approach to modeling the choice behavior of mice and humans in two laboratory perceptual decision-making tasks.  Whereas existing models of choice behavior (Busse et al., 2011; Akrami et al., 2018) assume that an animal is either engaged in the task or guessing randomly, we develop a discrete state-space model that enables us to (1) characterize the decision-making strategies used by participants and (2) label trials according to the strategy used. Our model is a Hidden Markov Model (HMM) with multinomial Generalized Linear Models (GLMs) parameterizing the emission probabilities, which we refer to as the GLM-HMM. The model has K discrete internal states, each of which corresponds to a different mode of decision-making behavior. The animal or human's choice on a particular trial is governed by state-specific GLMs, which describe the mapping from task covariates (sensory evidence as well as task-irrelevant variables like past choice and bias) to the decision. I will demonstrate that our model explains the choice behavior in these tasks considerably better than existing models, and I will discuss some of the retrieved decision-making states. Beyond providing a useful tool for neuroscientists, we hope that enumerating and characterizing animal and human strategies may prove useful to computer scientists training artificial agents to perform decision-making tasks.
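For readers less familiar with this model class, the short Python sketch below illustrates how state-specific multinomial GLMs can parameterize the emission probabilities of an HMM. It is illustrative only: the variable names, shapes, and covariates are assumptions for the example, not code from the project.

# Minimal sketch of GLM-HMM emission probabilities: each hidden state k has
# its own GLM mapping task covariates x_t to a distribution over choices
# via a softmax. Shapes and covariates are illustrative assumptions.
import numpy as np

def emission_probs(X, W):
    """X: (T, D) design matrix of task covariates per trial.
    W: (K, C, D) per-state GLM weights for K states and C choice classes.
    Returns a (T, K, C) array of P(choice | state, covariates)."""
    logits = np.einsum('kcd,td->tkc', W, X)          # state-specific linear predictors
    logits -= logits.max(axis=-1, keepdims=True)     # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=-1, keepdims=True)   # softmax over choices

# Example: T=5 trials, D=3 covariates, K=2 states, C=2 choices (binary decision)
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=5),             # sensory evidence
                     rng.choice([-1, 1], size=5),    # previous choice
                     np.ones(5)])                    # bias term
W = rng.normal(size=(2, 2, 3))                       # one GLM per hidden state
print(emission_probs(X, W).shape)                    # (5, 2, 2)

In a full GLM-HMM, these emission probabilities would be combined with a state transition matrix and fit (for example, with EM, as in several of the readings below); the sketch only shows how the state-specific GLMs enter the model.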
 
Reading list:
Textbook: Christopher M. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).

Papers:
1. Sean Escola et al., ‘Hidden Markov Models for the Stimulus-Response Relationships of Multistate Neural Systems’, Neural Computation 23, no. 5 (2011): 1071–1132.
2. Radford M. Neal and Geoffrey E. Hinton, ‘A View of the EM Algorithm That Justifies Incremental, Sparse, and Other Variants’, in Learning in Graphical Models (Springer, 1998), 355–368.
3. Ruslan Salakhutdinov, Sam T. Roweis, and Zoubin Ghahramani, ‘Optimization with EM and Expectation-Conjugate-Gradient’, in Proceedings of the 20th International Conference on Machine Learning (ICML-03), 2003, 672–679.
4. Naonori Ueda and Ryohei Nakano, ‘Deterministic Annealing EM Algorithm’, Neural Networks 11, no. 2 (1998): 271–282.
5. Zoubin Ghahramani, ‘An Introduction to Hidden Markov Models and Bayesian Networks’, in Hidden Markov Models: Applications in Computer Vision (World Scientific, 2001), 9–41.
6. Gianluigi Mongillo and Sophie Deneve, ‘Online Learning with Hidden Markov Models’, Neural Computation 20, no. 7 (2008): 1706–1716.
7. Matthew J. Beal, Zoubin Ghahramani, and Carl E. Rasmussen, ‘The Infinite Hidden Markov Model’, in Advances in Neural Information Processing Systems, 2002, 577–584.
8. Scott W. Linderman et al., ‘Hierarchical Recurrent State Space Models Reveal Discrete and Continuous Dynamics of Neural Activity in C. elegans’, bioRxiv, 2019, 621540.
9. Alexander B. Wiltschko et al., ‘Mapping Sub-Second Structure in Mouse Behavior’, Neuron 88, no. 6 (2015): 1121–1135, https://doi.org/10.1016/j.neuron.2015.11.031.
10. Nicholas A. Roy et al., ‘Efficient Inference for Time-Varying Behavior during Learning’, in Advances in Neural Information Processing Systems, 2018, 5695–5705.