[Ml-stat-talks] statistics/machine learning courses?

David Blei blei at CS.Princeton.EDU
Wed Nov 16 17:57:31 EST 2011


hi ml-stat-talks

we (we = the organizers of last year's stat/ml symposium) are
collecting a list of courses on campus that have to do with statistics
and machine learning.

can you drop me a line if you took or taught a course that had
something to do with these subjects?  please send me the course title,
department, number, and when you took/taught it.  i'd like to hear
about all courses, both those that were only taught once and those
that are regularly offered.

and, don't forget, emily fox is speaking tomorrow at 12:30.  not to be
missed for enthusiasts of multivariate statistics, bayesian
nonparametrics, and predicting flu epidemics.  i've pasted her
abstract below my signature.

best
dave

---

Thurs, Nov 17, 12:30 in CS402.

Title: Bayesian Covariance Regression and Autoregression

Abstract:

Although there is a rich literature on methods for allowing the
variance in a univariate regression model to vary with predictors,
time and other factors, relatively little has been done in the
multivariate case.  A number of multivariate heteroscedastic time
series models have been proposed within the econometrics literature,
but these are typically limited by a lack of clear margins,
computational intractability, and the curse of dimensionality.  In
this talk, we first
introduce and explore a new class of time series models for covariance
matrices based on a constructive definition exploiting inverse Wishart
distribution theory.  The construction yields a stationary,
first-order autoregressive (AR) process on the cone of positive
semi-definite matrices.
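
[A rough numpy sketch, for readers who want to make the object concrete.
This is not the inverse Wishart construction from the talk; the matrix A,
the Wishart-style innovations, and all dimensions are illustrative
assumptions, chosen only to show what a first-order autoregressive
recursion on the cone of positive semi-definite matrices can look like.]

# toy illustration only: an AR(1)-style recursion that stays on the cone
# of positive semi-definite matrices.  NOT the inverse-Wishart process
# described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
p, T, nu = 3, 200, 10

A = 0.9 * np.eye(p)                  # contraction: eigenvalues inside the unit circle
Sigma = np.eye(p)                    # Sigma_0, positive definite

path = [Sigma]
for t in range(T):
    G = rng.standard_normal((p, nu))
    W = G @ G.T / nu                 # Wishart-type PSD innovation, E[W] = I
    Sigma = A @ Sigma @ A.T + W      # PSD whenever Sigma and W are PSD
    path.append(Sigma)

# because the spectral radius of A is below 1, E[Sigma_t] converges to the
# solution S of the discrete Lyapunov equation S = A S A' + I, so the
# recursion settles into a stationary regime rather than exploding.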

We then turn our focus to more general predictor spaces and scaling to
high-dimensional datasets.  Our proposed Bayesian nonparametric
covariance regression framework harnesses a latent factor model
representation.  In particular, the predictor-dependent factor
loadings are characterized as a sparse combination of a collection of
unknown dictionary functions (e.g., Gaussian process random functions).
The induced predictor-dependent covariance is then a regularized
quadratic function of these dictionary elements. Our proposed
framework leads to a highly flexible but computationally tractable
formulation with simple conjugate posterior updates that can readily
handle missing data. Theoretical properties are discussed and the
methods are illustrated through an application to the Google Flu
Trends data and the task of word classification based on single-trial
MEG data.
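
[A minimal numpy sketch of the kind of construction this paragraph
describes: predictor-dependent loadings formed as a sparse combination of
Gaussian process dictionary functions, inducing a covariance that is a
quadratic function of those dictionary elements plus a diagonal term.
The 1-d predictor grid, squared-exponential kernel, sparsity pattern, and
all dimensions below are made-up assumptions, not details from the talk.]

import numpy as np

rng = np.random.default_rng(1)
p, k, L, n = 10, 3, 5, 50        # observed dim, factors, dictionary size, grid size

x = np.linspace(0.0, 1.0, n)     # 1-d predictor grid (assumed for illustration)
# squared-exponential GP kernel over the grid, with jitter for stability
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2) + 1e-8 * np.eye(n)
chol = np.linalg.cholesky(K)

# dictionary of L*k GP random functions evaluated on the grid: xi has shape (n, L, k)
xi = np.einsum('nm,mlk->nlk', chol, rng.standard_normal((n, L, k)))

Theta = rng.standard_normal((p, L)) * (rng.random((p, L)) < 0.3)  # sparse weights
Sigma0 = 0.1 * np.eye(p)                                          # diagonal noise term

# predictor-dependent loadings Lambda(x) = Theta * xi(x), shape (n, p, k)
Lam = np.einsum('pl,nlk->npk', Theta, xi)
# induced covariance Sigma(x) = Lambda(x) Lambda(x)' + Sigma_0, shape (n, p, p):
# a quadratic function of the dictionary elements, smooth in the predictor x
Sigma = np.einsum('npk,nqk->npq', Lam, Lam) + Sigma0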

Joint work with David Dunson and Mike West.

