Here is the full list of CS Colloquium talks for next week.
All talks will be recorded.
~~~~~

Speaker: Kostis Kaffes, Stanford University
Date: Monday, March 14, 2022
Time: 12:30pm EDT
Location: Zoom Webinar
Host: Ravi Netravali
Event page: https://www.cs.princeton.edu/events/26171  
Please register here: https://princeton.zoom.us/webinar/register/WN_Nn4iOb-mRqSEJbGsbzg5uQ  

Title: Solving the Cloud Efficiency Crisis with Fast and Accessible Scheduling

Abstract:  Operating system (OS) specialization is necessary because the one-size-fits-all approach to fundamental OS operations such as scheduling is incompatible with today's diverse application landscape. Such specialization can improve application performance and cloud platform efficiency by an order of magnitude or more. Towards this goal, I will first discuss Shinjuku, a specialized OS that supports an order of magnitude higher load and lower tail latency than state-of-the-art systems by enabling better scheduling. Shinjuku leverages hardware support for virtualization to preempt as often as every 5 microseconds, disproving the conventional wisdom that interrupts are incompatible with microsecond timescales. Then, I will present Syrup, a framework that enables everyday application developers to specify custom scheduling policies easily and to deploy them safely across different layers of the stack on existing operating systems like Linux, bringing the benefits of specialized scheduling to everyone. For example, Syrup allowed us to implement, in fewer than 20 lines of code, policies that previously required specialized dataplanes, and to improve the performance of an in-memory database by 8x without any application modification.

Bio: Kostis Kaffes is a final-year Ph.D. candidate in Electrical Engineering at Stanford University, advised by Christos Kozyrakis. He is broadly interested in computer systems, cloud computing, and scheduling. His thesis focuses on end-host, rack-scale, and cluster-scale scheduling for microsecond-scale tail latency, with the goal of improving efficiency in the cloud. Recently, he has been looking for ways to make it easier to implement and deploy custom scheduling policies across different layers of the stack. Kostis's research has been supported by a Facebook Research Award and by scholarships and fellowships from Stanford, the A.G. Leventis Foundation, and the Gerondelis Foundation. Prior to Stanford, he received his undergraduate degree in Electrical and Computer Engineering from the National Technical University of Athens in Greece.
~~~~~

Speaker: Gilbert Bernstein, University of California, Berkeley and MIT 
Date: Tuesday, March 15, 2022
Time: 12:30pm EDT
Location: CS 105
Host: Felix Heide
This talk will be live-streamed at https://mediacentrallive.princeton.edu/   

Title: High-Performance Languages for Visual Computing

Abstract:  Computer Graphics and Visual Computing problems challenge us with a near-inexhaustible demand for more resolution and scale in order to simulate the climate, reconstruct 3D environments, train neural networks, and produce games & movies.  Building such applications requires integrating disciplinary expertise (e.g., physics, numerical methods, geometry, and custom hardware) into a single system.  However, abstraction barriers are regularly discarded in the name of higher performance, leading to code that must be written and maintained by super-experts: programmers who simultaneously possess deep knowledge of all relevant disciplines.  Programming languages, especially Domain-Specific Languages (DSLs), are perhaps the most promising approach to recovering a separation of concerns in such high-performance systems.

In this talk, I will first describe my work on DSLs to enable parallel portability of physical simulation and optimization programs, including the use of relational algebra and automatic differentiation to structure these problem domains.  Then I will discuss more recent work on “horizontal DSLs” designed to dig into specific sub-problems: maximizing utilization of novel hardware accelerators, formally verifying optimizations of tensor programs, and extending automatic differentiation to correctly handle discontinuities in inverse rendering and simulation problems.

Bio: Gilbert Bernstein is a Postdoctoral Scholar at the University of California, Berkeley and MIT CSAIL, working with Professor Jonathan Ragan-Kelley.  His research lies in Computer Graphics and Programming Languages, especially the design of high-performance domain-specific languages for numeric computing applications such as physical simulation, optimization, and inverse problems.  His work spans user interfaces, differentiable programming, parallel portability, and new hardware design languages.  His work has been published at SIGGRAPH, POPL, PLDI, and OOPSLA, and has been incorporated into products at Adobe, Autodesk, and Disney.  He holds a Ph.D. in Computer Science from Stanford University, where he was advised by Pat Hanrahan.
~~~~~

Speaker: Adam Yala, Massachusetts Institute of Technology
Date: Wednesday, March 16, 2022
Time: 12:30pm EDT
Location: CS 105
Host: Karthik Narasimhan
This talk will be live-streamed at https://mediacentrallive.princeton.edu/    

Title: Seeing into the future: Machine learning methods for personalized screening

Abstract: AI has the potential to transform patient care: improving outcomes, reducing costs, and eliminating health disparities. However, developing equitable clinical AI models and translating them to hospitals remains difficult. From a computational perspective, these tools must deliver consistent performance across diverse populations while learning from biased and scarce data.  In this talk, I will discuss approaches that address these challenges in three areas: 1) cancer risk assessment from imaging, 2) personalized screening policy design, and 3) private data sharing through neural obfuscation. I have demonstrated that these clinical models offer significant improvements over the current standard of care across globally diverse patient populations.

Bio:  Adam Yala is a PhD student in Electrical Engineering and Computer Science at MIT.  He is a member of the MIT Jameel Clinic for AI and Health and of CSAIL. His research focuses on developing machine learning methods for personalized medicine and translating them to clinical care. Adam's tools have been deployed at multiple health systems around the world, and his research has been featured in the Washington Post, the New York Times, the Boston Globe, and Wired.
~~~~~

Speaker: Pang Wei Koh, Stanford University
Date: Thursday, March 17, 2022
Time: 12:30pm EDT
Location: CS 105
Host: Sanjeev Arora
This talk will be live-streamed at https://mediacentrallive.princeton.edu/   

Title: Reliable machine learning in the wild

Abstract:  Machine learning systems are widely deployed today, but they are unreliable. They can fail, sometimes with catastrophic consequences, on subpopulations of the data, such as particular demographic groups, or when deployed in environments different from those they were trained on. In this talk, I will describe our work towards building reliable machine learning systems that are robust to these failures. First, I will show how we can use influence functions to understand the predictions and failures of existing models through the lens of their training data. Second, I will discuss the use of distributionally robust optimization to train models that perform well across all subpopulations. Third, I will describe WILDS – a benchmark of in-the-wild distribution shifts spanning applications such as pathology, conservation, remote sensing, and drug discovery – and show how current state-of-the-art methods, which perform well on synthetic distribution shifts, still fail to be robust on these real-world shifts. Finally, I will describe our work on building more reliable COVID-19 models, using anonymized cellphone mobility data, to inform public health policy; this is a challenging application, as the underlying environment is constantly changing and there is substantial heterogeneity across demographic subpopulations.

Bio: Pang Wei Koh is a PhD student at Stanford, advised by Percy Liang. He studies the theory and practice of building reliable machine learning systems. His research has been published in Nature and Cell, featured in media outlets such as The New York Times and The Washington Post, and recognized by best paper awards at ICML and KDD, a Meta Research PhD fellowship, and the Kennedy Prize for best honors thesis at Stanford. Prior to his PhD, he was the 3rd employee and Director of Partnerships at Coursera.