CS Colloquium Series: week of March 21-25

Here is the full list of CS Colloquium talks for next week. All talks will be recorded.

~~~~~

Speaker: Xinyun Chen, University of California, Berkeley
Date: Monday, March 21, 2022
Time: 12:30pm EST
Location: CS 105
Host: Jia Deng
Event page: https://www.cs.princeton.edu/events/26175
This talk will be live-streamed at https://mediacentrallive.princeton.edu/

Title: Learning-Based Program Synthesis: Learning for Program Synthesis and Program Synthesis for Learning

Abstract: With the advancement of modern technologies, programming becomes ubiquitous, not only among professional software developers but also for general computer users. However, gaining programming expertise is time-consuming and challenging. Program synthesis therefore has many applications: the computer automatically synthesizes programs from specifications such as natural language descriptions and input-output examples. In this talk, I will present my work on learning-based program synthesis, where I have developed deep learning techniques for various program synthesis problems. Despite the remarkable success of deep neural networks in many domains, including natural language processing and computer vision, existing deep neural networks are still insufficient for challenging symbolic reasoning and generalization problems. My learning-based program synthesis research is twofold: (1) learning to synthesize programs from potentially ambiguous and complex specifications; and (2) neural-symbolic learning for language understanding. I will first talk about program synthesis applications, where my work demonstrates the applicability of learning-based program synthesizers for production usage. I will then present my work on neural-symbolic frameworks that integrate symbolic components into neural networks, achieving better reasoning and generalization capabilities.
In closing, I will discuss the challenges and opportunities of further improving the complexity and generalizability of learning-based program synthesis in future work.

Bio: Xinyun Chen is a Ph.D. candidate at UC Berkeley, working with Prof. Dawn Song. Her research lies at the intersection of deep learning, programming languages, and security. Her recent research focuses on learning-based program synthesis and adversarial machine learning. She received a Facebook Fellowship in 2020 and was named a Rising Star in Machine Learning in 2021. Her work on SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and she was part of the AlphaCode team during her internship at DeepMind.

~~~~~

Speaker: Sherry Tongshuang Wu, University of Washington
Date: Tuesday, March 22, 2022
Time: 12:30pm EST
Location: CS 105
Hosts: Andrés Monroy-Hernández & Adam Finkelstein
Event page: https://www.cs.princeton.edu/events/26176
This talk will be live-streamed at https://mediacentrallive.princeton.edu/

Title: Interactive AI Model Debugging and Correction

Abstract: Research in Artificial Intelligence (AI) has advanced at an incredible pace, to the point where it is making its way into our everyday lives, both explicitly and behind the scenes. However, beneath their impressive progress, many AI models hide deficiencies that amplify social biases or even cause fatal accidents. How do we identify, improve, and cope with imperfect models while still benefiting from their use? I will discuss my work empowering humans to interact with AI models in order to debug and correct them. I will describe both (1) how I help experts run scalable and testable analyses on models in development, and (2) how I help end users collaborate with deployed AI in a transparent and controllable way.
In my final remarks, I will discuss my future research perspectives on building human-centered AI through data-centric approaches.

Bio: Sherry Tongshuang Wu is a final-year Ph.D. candidate in Computer Science & Engineering at the University of Washington, advised by Jeffrey Heer and Dan Weld. She received her B.Eng. in CSE from the Hong Kong University of Science and Technology. Her research lies at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP), and aims to empower humans to debug and correct AI models interactively, both when the model is under active development and after it is deployed for end users. Sherry has authored 19 papers in top-tier NLP, HCI, and Visualization conferences and journals such as ACL, CHI, TOCHI, and TVCG, including a best paper award and an honorable mention. You can find out more about her at https://homes.cs.washington.edu/~wtshuang/.
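As a side note for readers unfamiliar with the specification style mentioned in the first abstract (synthesis from input-output examples), here is a minimal illustrative sketch of an enumerative synthesizer over a toy DSL. The primitives, search strategy, and function names below are my own assumptions for illustration and do not reflect the speaker's learning-based systems:

```python
# Toy enumerative program synthesis from input-output examples.
# The DSL (four unary integer primitives) is a hypothetical example.
from itertools import product

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the first primitive sequence consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            # Keep the candidate only if it matches every example.
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Specification by input-output examples: f(x) = (x + 1) * 2.
print(synthesize([(1, 4), (2, 6), (5, 12)]))  # → ('inc', 'double')
```

Learning-based synthesizers replace this brute-force enumeration with neural models that guide or directly predict the program, which is what makes ambiguous specifications such as natural language tractable.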

Here is the full list of CS Colloquium talks for next week. All talks will be recorded.

~~~~~

Speaker: Rowan Zellers, University of Washington
Date: Monday, March 28, 2022
Time: 12:30pm EST
Location: CS 105
Host: Danqi Chen
Event page: https://www.cs.princeton.edu/events/26178
This talk will be live-streamed at https://mediacentrallive.princeton.edu/

Title: Grounding Language by Seeing, Hearing, and Interacting

Abstract: As humans, our understanding of language is grounded in a rich mental model of "how the world works" that we learn through perception and interaction. We use this understanding to reason beyond what we literally observe or read, imagining how situations might unfold in the world. Machines today struggle at this kind of reasoning, which limits how they can communicate with humans. In my talk, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding, introducing a suite of approaches for constructing benchmarks that use machines in the loop to filter out spurious biases. Next, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation, using this knowledge to ground language. From an English-language description of an event, PIGLeT can anticipate how the world state might change, outperforming text-only models that are orders of magnitude larger. Finally, I will introduce MERLOT, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. Through training objectives inspired by the developmental psychology idea of multimodal reentry, MERLOT learns to jointly reason over language, vision, and sound. Together, these directions suggest a path forward for building machines that learn language rooted in the world.
Bio: Rowan Zellers is a final-year Ph.D. candidate in Computer Science & Engineering at the University of Washington, advised by Yejin Choi and Ali Farhadi. His research focuses on enabling machines to understand language, vision, sound, and the world beyond these modalities. He has been recognized with an NSF Graduate Fellowship and a NeurIPS 2021 Outstanding Paper Award. His work has appeared in several media outlets, including Wired, the Washington Post, and the New York Times. He graduated from Harvey Mudd College with a B.S. in Computer Science & Mathematics and has interned at the Allen Institute for AI.

~~~~~

Speaker: Sai Swaminathan, Carnegie Mellon University
Date: Tuesday, March 29, 2022
Time: 12:30pm EST
Location: CS 105
Hosts: Andrés Monroy-Hernández & Adam Finkelstein
Event page: https://www.cs.princeton.edu/events/26184
This talk will be live-streamed for the Princeton University community at https://mediacentrallive.princeton.edu/ *Note, this live-stream is only available to Princeton netID holders.

Title: Computational Infrastructure Materials for the Networked & Interactive Built Environment

Abstract: From roads to roofs, homes to high-rises, my inspiration is the promise of building cyber-physical infrastructure for human interaction and enabling smart-city applications. Unfortunately, there are several challenges to achieving this vision, due to the end of Moore's law, the end of Dennard scaling, and our limited views on how computing systems are manufactured. To date, device manufacturing has focused primarily on miniaturization, packing the most functionality into the smallest form factor, even though our physical infrastructure is much larger in scale. We need to think creatively and design devices in new form factors (made in structural forms like walls, tables, facades, etc.)
and materials of various kinds (those with extreme mechanical strength) that make up our built environments. Several challenges remain at the nexus of device power, form factor, and scale in designing our cyber-physical infrastructure. This talk will introduce "computational infrastructure materials" that enable us to build energy-efficient sensing, actuation, and communication into networked physical infrastructure (e.g., buildings, sidewalks). Specifically, I will talk about how to enable our infrastructure materials (e.g., concrete, wood, composites) to do computation: (1) as they bear large forces (~4000 lbs); (2) to enable battery-free sensing and activity recognition at long distances (~70km); (3) to actuate large structures in response to user interaction; and (4) to enable battery-free wireless communication. Taken together, these capabilities in infrastructure materials enable a range of applications in the built environment, such as digital buildings and accessibility, working ultimately toward sustainable and resilient cyber-physical infrastructure for human interaction. I will conclude by discussing open problems and challenges in this emerging research area.

Bio: Sai Swaminathan is a Ph.D. candidate at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He is advised by Scott Hudson in the DevLab. He works at the intersection of Human-Computer Interaction, Ubiquitous Computing, and Computational Materials. He has published award-winning work at top-tier HCI venues, including ACM CHI, IMWUT (UbiComp), UIST, and CSCW. His work has also been featured in news outlets such as New Scientist, Makezine, and Hackster.io. He has worked at research institutions including the Manufacturing Science group at Oak Ridge National Lab (ORNL), Microsoft Research, INRIA, and Xerox Research.
You can find out more about him at www.saiganesh.net

~~~~~

Speaker: Feras Saad, Massachusetts Institute of Technology
Date: Thursday, March 31, 2022
Time: 12:30pm EST
Location: CS 105
Host: Ryan Adams
Event page: https://www.cs.princeton.edu/events/26183
This talk will be live-streamed at https://mediacentrallive.princeton.edu/

Title: Scalable Structure Learning and Inference via Probabilistic Programming

Abstract: Probabilistic programming supports probabilistic modeling, learning, and inference by representing sophisticated probabilistic models as computer programs in new programming languages. This talk presents efficient probabilistic-programming-based techniques that address two fundamental challenges in scaling and automating structure learning and inference over complex data. First, I will describe scalable structure learning methods that make it possible to automatically synthesize probabilistic programs in an online setting by performing Bayesian inference over hierarchies of flexibly structured symbolic program representations, for discovering models of time series, tabular, and relational data. Second, I will present fast compilers and symbolic analyses that compute exact answers to a broad range of inference queries about these learned programs, which lets us extract interpretable patterns and make accurate predictions in real time. I will demonstrate how these techniques deliver state-of-the-art performance in runtime, accuracy, robustness, and programmability, drawing on several real-world applications: adapting to extreme novelty in economic time series, online forecasting of flu rates given sparse multivariate observations, discovering stochastic motion models of zebrafish hunting, and verifying the fairness of machine learning classifiers.
Bio: Feras Saad is a Ph.D. candidate in Computer Science at MIT, working at the intersection of programming languages, probabilistic machine learning, and computational statistics. His research is accompanied by a collection of popular open-source probabilistic programming systems used by collaborators at Intel, Takeda, Liberty Mutual, IBM, and the Bill & Melinda Gates Foundation for practical applications of structure learning and probabilistic inference. Feras' M.Eng. thesis on probabilistic programming and data science was recognized with the 1st Place Computer Science Thesis Award at MIT.
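For readers unfamiliar with the exact inference queries mentioned in the abstract above, here is a minimal sketch of what "computing an exact answer" means, using a toy discrete model (the classic rain/sprinkler/wet-grass network, with probabilities chosen for illustration). This brute-force enumeration is only a conceptual stand-in; it does not reflect the symbolic compilation techniques in the speaker's systems:

```python
# Exact posterior inference by enumerating every possible world of a
# small discrete model, then conditioning on the observed evidence.
# Model structure and all probabilities are illustrative assumptions.
from itertools import product

P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}            # P(sprinkler | rain)
P_WET = {(True, True): 0.99, (True, False): 0.8,  # P(wet | rain, sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def posterior_rain_given_wet():
    """Compute P(rain | wet grass) exactly by summing over all worlds."""
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        p = ((P_RAIN if rain else 1 - P_RAIN)
             * (P_SPRINKLER[rain] if sprinkler else 1 - P_SPRINKLER[rain])
             * P_WET[(rain, sprinkler)])  # weight by evidence: wet = True
        den += p
        if rain:
            num += p
    return num / den

print(round(posterior_rain_given_wet(), 4))  # ≈ 0.3577
```

Enumeration is exponential in the number of variables; the appeal of the compiler-based approach described in the talk is answering such queries exactly without paying that cost naively.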
Emily C. Lawrence