[Ml-stat-talks] Fwd: [Theory-Read] Nina Balcan @theory lunch 4/25 *UNUSUAL ROOM*

Philippe Rigollet rigollet at Princeton.EDU
Thu Apr 24 09:33:55 EDT 2014


The Theory lunch is going full speed on ML this year, with a talk by Nina Balcan on new paradigms for learning.
Should be fun!
Philippe
--
Philippe Rigollet
www.princeton.edu/~rigollet



Begin forwarded message:

From: Mark Braverman <mbraverm at CS.Princeton.EDU>
Subject: [Theory-Read] Nina Balcan @theory lunch 4/25 *UNUSUAL ROOM*
Date: April 23, 2014 at 11:28:36 PM EDT
To: Theory list <theory-read at lists.cs.princeton.edu>
Cc: <ninamf at cc.gatech.edu>



Hi Everyone,

We're very happy to have Nina Balcan (Georgia Tech) speak on Friday, April 25, at the theory lunch.

IMPORTANT: Note room change this week to Computer Science Building room *302*.

As usual, lunch will be served at ~11:45, and the talk will start shortly after noon.
Title & abstract are below.

See you there!
-Mark

Title:  Foundations For Learning in the Age of Big Data

Abstract:

With the variety of applications of machine learning across science, engineering, and computing in the age of Big Data, re-examining the underlying foundations of the field has become imperative. In this talk, I will describe new models and algorithms for important emerging paradigms, specifically, interactive learning and distributed learning.

For active learning, where the algorithm itself can ask for labels of carefully chosen examples from a large pool of unannotated data, with the goal of minimizing human labeling effort, I will present computationally efficient algorithms with optimal label complexity. I will also discuss learning with more general forms of interaction, as well as unexpected implications of these results for classic supervised learning paradigms.
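
(A minimal sketch, for intuition only: the generic uncertainty-sampling loop below, written in Python with scikit-learn, is not the algorithm from the talk. The synthetic pool, the linear label oracle, and the 20-query budget are all illustrative assumptions; the sketch just shows the interaction pattern the abstract describes, where the learner itself picks which examples a human must label.)

    # Illustrative pool-based active learner via uncertainty sampling.
    # NOT the algorithm from the talk -- only the interaction pattern:
    # the learner chooses which unlabeled pool points get labeled.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic pool: 2-D points labeled by a hidden linear threshold.
    X_pool = rng.normal(size=(1000, 2))
    y_pool = (X_pool @ np.array([1.5, -2.0]) > 0).astype(int)  # oracle

    # Seed with a few labels from each class, then query interactively.
    labeled = (list(np.flatnonzero(y_pool == 1)[:5])
               + list(np.flatnonzero(y_pool == 0)[:5]))
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    model = LogisticRegression()
    for _ in range(20):  # budget of 20 label queries
        model.fit(X_pool[labeled], y_pool[labeled])
        # Ask for the label of the point the model is least sure about.
        probs = model.predict_proba(X_pool[unlabeled])[:, 1]
        idx = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
        labeled.append(idx)   # "human" provides y_pool[idx]
        unlabeled.remove(idx)

    print("pool accuracy after 30 labels:", model.score(X_pool, y_pool))

The theoretical question behind results like those in the talk is how few such queries suffice, compared with the number of labels classic supervised learning would need.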

For distributed learning, I will discuss a model that, for the first time, addresses the core question of the fundamental communication requirements for achieving accurate learning. Broadly, we consider a framework where massive amounts of data are distributed among several locations, and our goal is to learn a low-error predictor with respect to the overall distribution of the data using as little communication, and as few rounds of interaction, as possible. We provide general upper and lower bounds on the amount of communication needed to learn a given class, as well as broadly applicable techniques for achieving communication-efficient learning.
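
(Again a minimal sketch, not the protocol from the talk: a naive one-round baseline in which each of k sites fits a local model and ships only its d+1 parameters to a coordinator, which averages them. The synthetic sites and the parameter-averaging rule are illustrative assumptions; the point is to make the cost concrete, as one round and O(k*d) floats of communication versus shipping all raw data.)

    # Illustrative one-round distributed baseline (parameter averaging).
    # NOT the protocol from the talk; it only makes the communication
    # cost concrete: each site sends d+1 floats instead of its raw data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    k, n_per_site, d = 4, 500, 10
    w_true = rng.normal(size=d)  # hidden linear target

    def make_site_data():
        X = rng.normal(size=(n_per_site, d))
        y = (X @ w_true > 0).astype(int)
        return X, y

    sites = [make_site_data() for _ in range(k)]

    # Each site communicates only its fitted (coef_, intercept_).
    messages = []
    for X, y in sites:
        m = LogisticRegression().fit(X, y)
        messages.append((m.coef_[0], m.intercept_[0]))

    # Coordinator averages the parameters into one global predictor.
    w_avg = np.mean([c for c, _ in messages], axis=0)
    b_avg = np.mean([b for _, b in messages])

    X_test = rng.normal(size=(2000, d))
    y_test = (X_test @ w_true > 0).astype(int)
    y_hat = ((X_test @ w_avg + b_avg) > 0).astype(int)
    print("averaged-model accuracy:", (y_hat == y_test).mean())

The model in the abstract asks what is achievable beyond such naive baselines: how much communication, and how many rounds of interaction, are fundamentally required to learn a given class.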

_______________________________________________
Theory-Read mailing list
Theory-Read at lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/theory-read
