
Farhan Damani will present his General Exam, "Inferring cognitive learning rules from behavior with probabilistic models," on Thursday, May 9, 2019 at 1pm in CS 401.

The members of his committee are as follows: Ryan Adams (adviser), Jonathan Pillow, and Tom Griffiths.

Everyone is invited to attend his talk, and those faculty wishing to remain for the oral exam following are welcome to do so. His abstract and reading list follow below.

Title: Inferring cognitive learning rules from behavior with probabilistic models

Abstract: High-throughput behavioral neuroscience presents an exciting opportunity to investigate the dynamics of learning during decision making. We describe a probabilistic model that captures a learner's underlying time-dependent decision-making strategy while learning a forced-choice discrimination task. Through priors that explicitly characterize how learners update their knowledge state, our approach makes it possible to infer an evolving decision-making strategy despite observing only a single sequence of decisions. Importantly, our model is probabilistic, which means we can reason about a learner's competence, learning rate, and preference for simpler models using the marginal likelihood. We apply our method to large-scale behavioral datasets and discover personalized learning rules that explain vast differences in learning dynamics across learners, while also finding shared explanations for simpler models and skill-specific learning rates. (A toy sketch of this class of model appears after the reading list.)

Reading list:

Textbook:
Koller, D., & Friedman, N. (2009). Probabilistic graphical models: Principles and techniques. MIT Press.

1. Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. The Journal of Machine Learning Research, 14(1), 1303-1347.
2. Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., & Blei, D. M. (2017). Automatic differentiation variational inference. The Journal of Machine Learning Research, 18(1), 430-474.
3. Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
4. Archer, E., Park, I. M., Buesing, L., Cunningham, J., & Paninski, L. (2015). Black box variational inference for state space models. arXiv preprint arXiv:1511.07367.
5. Johnson, M., Duvenaud, D. K., Wiltschko, A., Adams, R. P., & Datta, S. R. (2016). Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems (pp. 2946-2954).
6. Krishnan, R. G., Shalit, U., & Sontag, D. (2017). Structured inference networks for nonlinear state space models. In Thirty-First AAAI Conference on Artificial Intelligence.
7. Smith, A. C., Frank, L. M., Wirth, S., Yanike, M., Hu, D., Kubota, Y., ... & Brown, E. N. (2004). Dynamic analysis of learning in behavioral experiments. Journal of Neuroscience, 24(2), 447-461.
8. Bak, J. H., Choi, J. Y., Akrami, A., Witten, I., & Pillow, J. W. (2016). Adaptive optimal training of animal behavior. In Advances in Neural Information Processing Systems (pp. 1947-1955).
9. Paninski, L., Ahmadian, Y., Ferreira, D. G., Koyama, S., Rad, K. R., Vidne, M., ... & Wu, W. (2010). A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1-2), 107-126.
10. Prerau, M. J., Smith, A. C., Eden, U. T., Kubota, Y., Yanike, M., Suzuki, W., ... & Brown, E. N. (2009). Characterizing learning by simultaneous analysis of continuous and binary measures of performance. Journal of Neurophysiology, 102(5), 3060-3072.
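To make the abstract's setup concrete, here is a minimal, hypothetical sketch of the general class of model it describes (in the spirit of the state-space readings, e.g., items 4, 7, and 9): per-trial decision weights drift under a Gaussian random-walk prior, binary choices follow a logistic policy, and a bootstrap particle filter estimates the log marginal likelihood used to compare candidate learning rates. This is not the speaker's actual model; all names and parameter values below are illustrative assumptions.

```python
# Hypothetical state-space model of a learner in a two-alternative
# forced-choice task (illustrative only, not the talk's model):
# weights follow a Gaussian random walk; choices are Bernoulli draws
# from a logistic policy; a bootstrap particle filter estimates the
# log marginal likelihood used to compare "learning rate" settings.
import numpy as np

rng = np.random.default_rng(0)

T, D = 500, 2        # trials, stimulus dimensions (assumed values)
sigma_w = 0.05       # random-walk step size, a "learning rate" knob

# --- Simulate a learner -------------------------------------------
X = rng.normal(size=(T, D))              # per-trial stimuli
w = np.zeros(D)
choices = np.zeros(T, dtype=int)
for t in range(T):
    w = w + sigma_w * rng.normal(size=D)     # weights drift each trial
    p = 1.0 / (1.0 + np.exp(-X[t] @ w))      # logistic choice policy
    choices[t] = rng.random() < p

# --- Log marginal likelihood via a bootstrap particle filter ------
def log_marginal(X, y, sigma_w, n_particles=2000):
    """Estimate log p(y | sigma_w) under the random-walk logistic model."""
    T, D = X.shape
    particles = np.zeros((n_particles, D))
    log_evidence = 0.0
    for t in range(T):
        # Propagate each particle through the random-walk prior.
        particles = particles + sigma_w * rng.normal(size=particles.shape)
        p = 1.0 / (1.0 + np.exp(-(particles @ X[t])))
        like = p if y[t] == 1 else 1.0 - p       # per-particle likelihood
        log_evidence += np.log(like.mean() + 1e-300)
        # Resample particles in proportion to their likelihood.
        idx = rng.choice(n_particles, size=n_particles, p=like / like.sum())
        particles = particles[idx]
    return log_evidence

# Comparing evidence across sigma_w values is one simplistic way to ask
# which learning rate best explains a single sequence of decisions.
for s in (0.01, 0.05, 0.2):
    print(f"sigma_w={s:4.2f}  log evidence ~ {log_marginal(X, choices, s):.1f}")
```

The marginal-likelihood comparison at the end mirrors, in miniature, the abstract's point that a fully probabilistic model lets one reason about learning rates and model complexity from behavior alone.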