[Ml-stat-talks] Fwd: Monday, 11AM: Geoff Hinton on neural networks

David Blei blei at CS.Princeton.EDU
Sun May 15 21:47:25 EDT 2011


hi ml-stat-talks

david's comparison to bach is fitting.  do not miss geoff hinton's talk
tomorrow.  note the unusual time, 11am.  the talk is in CS 105 (also known
as the "small auditorium").

best
dave

---------- Forwarded message ----------
From: David Mimno <mimno at cs.princeton.edu>
Date: Thu, May 12, 2011 at 2:57 PM
Subject: [Ml-stat-talks] Monday, 11AM: Geoff Hinton on neural networks
To: ml-stat-talks <ml-stat-talks at lists.cs.princeton.edu>



For those not familiar with his work: "Geoffrey Hinton talks about neural
networks" is roughly equivalent to "J.S. Bach talks about fugues".
This is not to be missed. Drag your friends.

Geoff's schedule is filling up, but if you would like to meet with him
before the talk or in the afternoon, please let me know.

Monday, May 16, 11AM (note the change), CS 105

=============================

How to force unsupervised neural networks to discover
the right representation of images

Geoffrey Hinton
University of Toronto

One appealing way to design an object recognition system is to define
objects recursively in terms of their parts and the required spatial
relationships between the parts and the whole. These relationships can
be represented by the coordinate transformation between an intrinsic
frame of reference embedded in the part and an intrinsic frame
embedded in the whole. This transformation is unaffected by the
viewpoint, so this form of knowledge about the shape of an object is
viewpoint-invariant. A natural way for a neural network to implement
this knowledge is to use a matrix of weights to represent each
part-whole relationship and a vector of neural activities to represent
the pose of each part or whole relative to the viewer. The pose of the
whole can then be predicted from the poses of the parts and, if the
predictions agree, the whole is present. This leads to neural networks
that can recognize objects over a wide range of viewpoints using
neural activities that are "equivariant" rather than invariant: as
the viewpoint varies, the neural activities all vary even though the
knowledge is viewpoint-invariant.

The "capsules" that implement the lowest-level parts in the shape
hierarchy need to extract explicit pose parameters from pixel
intensities, and these pose parameters need to have the right form to
allow coordinate transformations to be implemented by matrix
multiplications. These capsules are quite easy to learn from pairs of
transformed images if the neural net has direct, non-visual access to
the transformations, as it would if it controlled them.  (Joint work
with Sida Wang and Alex Krizhevsky)
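
To make the matrix-of-weights idea concrete, here is a minimal numpy
sketch (not from the talk; the 3x3 homogeneous-coordinate pose
representation and all numbers are illustrative assumptions). Each
part-whole relationship is a fixed matrix, each part's viewer-relative
pose is multiplied by that matrix to predict the pose of the whole, and
the whole is declared present when the predictions agree:

    import numpy as np

    def translation(dx, dy):
        """A 2-D pose as a 3x3 homogeneous transform (illustrative choice)."""
        return np.array([[1.0, 0.0, dx],
                         [0.0, 1.0, dy],
                         [0.0, 0.0, 1.0]])

    def predict_whole_poses(part_poses, part_to_whole):
        """Each part-whole relationship is a weight matrix: multiplying a
        part's viewer-relative pose by it predicts the pose of the whole."""
        return [pose @ rel for pose, rel in zip(part_poses, part_to_whole)]

    def whole_present(predictions, tol=1e-6):
        """The whole is present when the predictions from all parts agree."""
        mean = np.mean(predictions, axis=0)
        return all(np.linalg.norm(p - mean) < tol for p in predictions)

    # Two hypothetical parts with fixed part-to-whole relationships.
    relations = [translation(-2.0, 0.0), translation(1.0, 3.0)]
    whole = translation(5.0, 5.0)                          # true pose of the whole
    parts = [whole @ np.linalg.inv(r) for r in relations]  # consistent part poses

    print(whole_present(predict_whole_poses(parts, relations)))  # True

    # Changing the viewpoint multiplies every pose, so every activity
    # changes (equivariance), yet the agreement, and hence the
    # recognition, is unchanged: the knowledge in `relations` is
    # viewpoint-invariant.
    view = translation(-7.0, 2.0)
    moved = [view @ p for p in parts]
    print(whole_present(predict_whole_poses(moved, relations)))  # True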
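
The last sentence describes learning from image pairs whose relating
transformation is known to the network. A rough sketch of how that
might look for translations, in PyTorch (my choice of framework; the
layer sizes, names, and shift-injection point are assumptions, not the
talk's specification): each capsule's recognition units output explicit
x, y coordinates plus a gating probability, the known shift is added to
those coordinates, and the generation units must then reconstruct the
shifted image, which forces the coordinates to behave like real pose
parameters:

    import torch
    import torch.nn as nn

    class TranslationCapsule(nn.Module):
        """Recognition units extract an explicit (x, y) pose and a presence
        probability from pixels; generation units render this capsule's
        contribution to the output image from the shifted pose."""
        def __init__(self, n_pixels, n_hidden=10):
            super().__init__()
            self.recognize = nn.Sequential(nn.Linear(n_pixels, n_hidden),
                                           nn.Sigmoid())
            self.pose = nn.Linear(n_hidden, 2)  # explicit x, y pose parameters
            self.gate = nn.Sequential(nn.Linear(n_hidden, 1), nn.Sigmoid())
            self.generate = nn.Sequential(nn.Linear(2, n_hidden), nn.Sigmoid(),
                                          nn.Linear(n_hidden, n_pixels))

        def forward(self, image, shift):
            h = self.recognize(image)
            # The net has direct, non-visual access to the transformation:
            # the known shift is simply added to the inferred coordinates.
            return self.gate(h) * self.generate(self.pose(h) + shift)

    class TransformingAutoencoder(nn.Module):
        def __init__(self, n_pixels, n_capsules=30):
            super().__init__()
            self.capsules = nn.ModuleList(
                [TranslationCapsule(n_pixels) for _ in range(n_capsules)])

        def forward(self, image, shift):
            return sum(capsule(image, shift) for capsule in self.capsules)

    # Training would minimize the reconstruction error between the output
    # and the actually shifted image; that pressure is what forces the
    # pose outputs to behave like real image coordinates.
    model = TransformingAutoencoder(n_pixels=28 * 28)
    image = torch.rand(64, 28 * 28)                # stand-in batch
    shift = torch.randint(-3, 4, (64, 2)).float()  # known (dx, dy) per image
    reconstruction = model(image, shift)           # compare to shifted image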
_______________________________________________
Ml-stat-talks mailing list
Ml-stat-talks at lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/ml-stat-talks