Here is the full list of CS Colloquium talks for next week.
All talks will be recorded.
~~~~~

CS Colloquium Speaker
Speaker: Tej Chajed, Massachusetts Institute of Technology
Date: Monday, April 4, 2022
Time: 12:30pm EDT
Location: CS 105
Host: Amit Levy
Event page: https://www.cs.princeton.edu/events/26180 
This talk will be live-streamed at https://mediacentrallive.princeton.edu/ 

Title: Formal verification of a concurrent file system

Abstract: Bugs in systems software like file systems, databases, and operating systems can have serious consequences, ranging from security vulnerabilities to data loss, and these bugs affect all the applications built on top of them. Systems verification is a promising approach to improving the reliability of our computing infrastructure, since it can eliminate whole classes of bugs through machine-checked proofs that show a system always meets its specification.

In this talk, I’ll present a line of work culminating in a verified, concurrent file system called DaisyNFS. The file system comes with a proof showing that operations appear to execute correctly and atomically (that is, all-or-nothing), both when the computer crashes and when processing concurrent operations. I’ll describe how a combination of design and verification techniques makes it possible to carry out the proof for an efficient implementation.
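
To make the crash-atomicity guarantee concrete, here is a toy, self-contained Python sketch (an illustration only, not DaisyNFS's actual design, code, or proof): writes are first committed to a write-ahead log, and crash recovery replays the log, so a committed operation is never left half-applied.

    # Toy sketch of "all-or-nothing across crashes" via a write-ahead log.
    class ToyDisk:
        def __init__(self):
            self.log = []    # durable write-ahead log; writing it is the commit point
            self.data = {}   # durable on-disk blocks

        def commit(self, writes):
            """Commit an operation with a single durable log write."""
            self.log = list(writes.items())

        def install(self, crash_after=None):
            """Copy logged writes to their home locations; may crash partway."""
            for i, (addr, val) in enumerate(self.log):
                if crash_after is not None and i >= crash_after:
                    return                  # simulated crash mid-installation
                self.data[addr] = val
            self.log = []                   # truncate only after a complete install

        def recover(self):
            """Crash recovery replays any committed-but-uninstalled writes."""
            self.install()

    disk = ToyDisk()
    disk.commit({"inode 7": "updated", "block bitmap": "allocated"})
    disk.install(crash_after=1)   # crash after installing only one of two writes
    disk.recover()                # recovery completes the operation
    assert disk.data == {"inode 7": "updated", "block bitmap": "allocated"}

In this toy model, the single log write is the commit point: a crash before it leaves no trace of the operation, and a crash after it is finished by recovery.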

Bio: Tej Chajed is a final-year PhD student at MIT advised by Frans Kaashoek and Nickolai Zeldovich. His research is on systems verification, ranging from developing new foundations to designing and verifying high-performance systems. Before MIT, he completed his undergraduate degree in Electrical Engineering and Computer Science at UIUC. His work has been supported in part by an NSF graduate research fellowship.
~~~~~

CS Colloquium Speaker
Speaker: Alex Lombardi, Massachusetts Institute of Technology
Date: Tuesday, April 5, 2022
Time: 12:30pm EDT
Location: CS 105
Host: Ran Raz
Event page: https://www.cs.princeton.edu/events/26181 
This talk will be live-streamed at https://mediacentrallive.princeton.edu/  

Title: Foundations of Cryptographic Proof Systems

Abstract:  One of computer science's greatest insights has been in understanding the power and versatility of *proofs*, which were revolutionized in the 1980s to utilize *interaction* as well as other resources such as randomization and computational hardness. Today, they form the backbone of both theoretical and practical cryptography and are simultaneously the source of deep connections to areas such as complexity theory, game theory, and quantum computation.

In this talk, I will discuss general-purpose tools, techniques, and abstractions for two key aspects of cryptographic proof systems that have been poorly understood for decades:

1) Can we remove interaction from interactive proofs? Already in the 1980s, Fiat and Shamir proposed a heuristic *but unproven* methodology for removing interaction from interactive proofs, which is now ubiquitous and essential for practical applications. However, it remained open for over 30 years to prove the security of this transformation in essentially any setting of interest. 

My work on the Fiat-Shamir transform has led to the resolution of many long-standing open problems, including (i) building non-interactive zero-knowledge proof systems based on lattice cryptography, (ii) establishing the existence of highly efficient and succinct non-interactive proof systems, and (iii) demonstrating that foundational protocols from the 1980s fail to compose in parallel. (A toy sketch of the Fiat-Shamir transform itself appears after point 2 below.)

2) Are classical interactive protocols secure against quantum computers? At the heart of analyzing and ruling out quantum attacks on cryptographic protocols is the issue of “rewinding.” The inability to rewind a quantum attack stems from the no-cloning theorem, a fundamental property of quantum information. As a result, very few classical protocols were known to be secure against quantum attacks.

In recent work, I showed how to overcome these difficulties and settle many foundational questions about post-quantum cryptographic proof systems. The main technique is a way to efficiently extract certain pieces of (classical) information from a quantum attacker without disturbing its internal state.
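
To illustrate the transform referenced in point 1, here is a toy, self-contained Python sketch of Fiat-Shamir applied to a Schnorr-style sigma protocol (toy parameters, deliberately insecure, and not drawn from the speaker's work): the verifier's random challenge is replaced by a hash of the transcript, which removes the interaction.

    import hashlib
    import secrets

    p, q = 2267, 103               # toy primes with q | p - 1 (NOT secure)
    g = pow(2, (p - 1) // q, p)    # generator of the order-q subgroup mod p

    def challenge(*transcript):
        """Fiat-Shamir step: derive the challenge by hashing the transcript."""
        data = "|".join(str(x) for x in transcript).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(x):
        """Non-interactive proof of knowledge of x such that y = g^x mod p."""
        y = pow(g, x, p)
        r = secrets.randbelow(q)   # prover's random nonce
        a = pow(g, r, p)           # commitment (first sigma-protocol message)
        c = challenge(g, y, a)     # hash replaces the verifier's random challenge
        z = (r + c * x) % q        # response
        return y, (a, z)

    def verify(y, proof):
        a, z = proof
        c = challenge(g, y, a)     # verifier recomputes the same challenge
        return pow(g, z, p) == (a * pow(y, c, p)) % p

    x = secrets.randbelow(q)       # the prover's secret
    y, proof = prove(x)
    assert verify(y, proof)

Proving that a hash can safely stand in for the verifier's random challenge, in essentially any setting of interest, is the question the abstract describes as having been open for over 30 years.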

Bio: Alex Lombardi is a graduate student at MIT advised by Vinod Vaikuntanathan. He is the recipient of an MIT Presidential fellowship, an NDSEG fellowship, and a Charles M. Vest NAE Grand Challenges fellowship. He is broadly interested in cryptography and theoretical computer science with a focus on cryptographic proof systems and post-quantum security. 
~~~~~

CS Colloquium Speaker
Speaker: Natasha Jaques, Google Brain / University of California, Berkeley
Date: Thursday, April 7, 2022
Time: 12:30pm EDT
Location: CS 105
Host: Karthik Narasimhan
Event page: https://www.cs.princeton.edu/events/26182 
This talk will be live-streamed at https://mediacentrallive.princeton.edu/  

Title: Social Reinforcement Learning

Abstract: Social learning helps humans and animals rapidly adapt to new circumstances and coordinate with others, and it drives the emergence of complex learned behaviors. What if it could do the same for AI? This talk describes how Social Reinforcement Learning in multi-agent and human-AI interactions can address fundamental issues in AI, such as learning and generalization, while improving social abilities like coordination.

I propose a unified method for improving coordination and communication based on causal social influence. I then demonstrate that multi-agent training can be a useful tool for improving learning and generalization. I present PAIRED, in which an adversary learns to construct training environments that maximize regret between a pair of learners, leading to the generation of a complex curriculum of environments. Agents trained with PAIRED generalize more than 20x better to unknown test environments.

Ultimately, the goal of my research is to create intelligent agents that can assist humans with everyday tasks, which means leveraging social learning to interact effectively with humans. I show that learning from human social and affective cues scales more effectively than learning from manual feedback, but that it depends on accurate recognition of such cues. I therefore discuss how to dramatically enhance the accuracy of affect detection models using personalized multi-task learning to account for inter-individual variability. Together, this work argues that Social RL is a valuable approach for developing more general, sophisticated, and cooperative AI that is ultimately better able to serve human needs.
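
As a concrete illustration of the regret signal mentioned above, here is a minimal, self-contained Python sketch (illustrative placeholder code, not the speaker's implementation); it assumes a simple approximation of regret as the gap between an antagonist's best return and the protagonist's average return on the adversary's proposed environment.

    # Minimal sketch of the regret signal that drives PAIRED's adversary.
    from statistics import mean

    def paired_regret(protagonist_returns, antagonist_returns):
        """Approximate regret: best antagonist return minus mean protagonist return.

        The environment-designing adversary is rewarded with this quantity, so it
        is pushed toward environments that are solvable (some agent succeeds) but
        not yet mastered by the protagonist, yielding an emergent curriculum.
        """
        return max(antagonist_returns) - mean(protagonist_returns)

    # Example: on a proposed environment the antagonist reaches the goal while
    # the protagonist mostly fails, so the adversary is rewarded for proposing
    # more environments like this one.
    print(paired_regret(protagonist_returns=[0.1, 0.0, 0.2],
                        antagonist_returns=[1.0, 0.6, 0.9]))   # 0.9

The adversary maximizes this quantity, while the protagonist maximizes its own return and thereby implicitly drives the regret back down.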


Bio: Natasha Jaques holds a joint position as a Senior Research Scientist at Google Brain and Visiting Postdoctoral Scholar at UC Berkeley. Her research focuses on Social Reinforcement Learning in multi-agent and human-AI interactions. Natasha completed her PhD at MIT, where her thesis received the Outstanding PhD Dissertation Award from the Association for the Advancement of Affective Computing. Her work has also received Best Demo at NeurIPS, an honourable mention for Best Paper at ICML, Best of Collection in the IEEE Transactions on Affective Computing, and Best Paper at the NeurIPS workshops on ML for Healthcare and Cooperative AI. She has interned at DeepMind and Google Brain, and was an OpenAI Scholars mentor. Her work has been featured in Science Magazine, Quartz, IEEE Spectrum, MIT Technology Review, Boston Magazine, and on CBC Radio. Natasha earned her Master's degree from the University of British Columbia and undergraduate degrees in Computer Science and Psychology from the University of Regina.