TODAY 3/13/2023 - ECE Seminar Sabrina Neuman, Harvard University
ECE SEMINAR
Speaker: Sabrina Neuman, Harvard University
Title: Designing Computing Systems for Robotics and Physically Embodied Deployments
Day: Monday, March 13, 2023
Time: 4:30 PM
Location: B205 Engineering Quadrangle
Host: David Wentzlaff

Abstract: Emerging applications that interact heavily with the physical world (e.g., robotics, medical devices, the internet of things, augmented and virtual reality, and machine learning on edge devices) present critical challenges for modern computer architecture, including hard real-time constraints, strict power budgets, diverse deployment scenarios, and a critical need for safety, security, and reliability. Hardware acceleration can provide high-performance and energy-efficient computation, but design requirements are shaped by the physical characteristics of the target electrical, biological, or mechanical deployment; external operating conditions; application performance demands; and the constraints of the size, weight, area, and power allocated to onboard computing, leading to a combinatorial explosion of the computing system design space. To address this challenge, I identify common computational patterns shaped by the physical characteristics of the deployment scenario (e.g., geometric constraints, timescales, physics, biometrics), and distill this real-world information into systematic design flows that span the software-hardware system stack, from applications down to circuits. An example of this approach is robomorphic computing: a systematic design methodology that transforms robot morphology into customized accelerator hardware morphology by leveraging physical robot features, such as limb topology and joint type, to determine parallelism and matrix sparsity patterns in streamlined linear algebra functional units in the accelerator. Using robomorphic computing, we designed an accelerator for a critical bottleneck in robot motion planning and implemented the design on an FPGA for a manipulator arm, demonstrating significant speedups over state-of-the-art CPU and GPU solutions. Taking a broader view, in order to design generalized computing systems for robotics and other physically embodied applications, the traditional computing system stack must be expanded to enable co-design with physical real-world information, and new methodologies are needed to implement designs with minimal user intervention. In this talk, I will discuss my recent work in designing computing systems for robotics and outline a future of systematic co-design of computing systems with the real world.

Bio: Sabrina M. Neuman is a postdoctoral NSF Computing Innovation Fellow at Harvard University. Her research interests are in computer architecture design informed by explicit application-level and domain-specific insights. She is particularly focused on robotics applications because of their heavy computational demands and potential to improve the well-being of individuals in society. She received her S.B., M.Eng., and Ph.D. from MIT. She is a 2021 EECS Rising Star, and her work on robotics acceleration has received Honorable Mention in IEEE Micro Top Picks 2022 and IEEE Micro Top Picks 2023.
CITP Seminar
Speaker: Michael P. Kim, UC Berkeley
Title: Foundations of Responsible Machine Learning
Day: Monday, March 20, 2023
Time: 4:30 PM
Location: CS 105
Event webpage: https://citp.princeton.edu/event/citp-seminar-michael-p-kim/
Attendance restricted to Princeton University faculty, staff, and students.

Abstract: Algorithms make predictions about people constantly. The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially against individuals from marginalized groups. This talk will provide an overview of research building a theory of "responsible" machine learning. It will highlight a notion of fairness in prediction, called Multicalibration (ICML'18), which requires predictions to be well-calibrated, not simply overall, but on every group that can be meaningfully identified from data. This "multi-group" approach strengthens the guarantees of group fairness definitions without incurring the costs (statistical and computational) associated with individual-level protections. Additionally, a new paradigm for learning, Outcome Indistinguishability (STOC'21), will be presented, which provides a broad framework for learning predictors satisfying formal guarantees of responsibility. Finally, the talk will discuss the threat of Undetectable Backdoors (FOCS'22), which represent a serious challenge for building trust in machine learning models.

Bio: Michael P. Kim is a postdoctoral research fellow at the Miller Institute for Basic Research in Science at UC Berkeley, hosted by Shafi Goldwasser. Before this, Kim completed his Ph.D. in computer science at Stanford University, advised by Omer Reingold. Kim's research addresses basic questions about the appropriate use of machine learning algorithms that make predictions about people. More generally, Kim is interested in how the computational lens (i.e., algorithms and complexity theory) can provide insights into emerging societal and scientific challenges.